/g/ - Technology


Thread archived.
You cannot reply anymore.




>B-but AI is stagnating
>AGI never, muh complexities of human consciousness

You fucking fools.
>>
>>93672952
People keep saying that AGI will never attain human level consciousness, but the thing is that it's not supposed to. That's not how AGI is measured. AGI, as the name suggests, is just generalized AI, meaning it knows a lot about a lot of things instead of only being good at one thing. It's still just artificial intelligence, not artificial consciousness.
>>
https://voyager.minedojo.org/
>>
why is human consciousness even considered part of the conversation? consciousness is the act of existence observing itself; it literally just meant knowledge originally, "with knowledge" or "with consciousness". a concept of understanding the nature of being, it has fucking nothing to do with brain processing functions. so yeah, as much as you can try, unless AI can somehow naturally and offline become aware of itself, no, it cannot mimic consciousness. it's simply an aspect of being born biologically from the immaterial energy in 4d space. a tree observing itself through its leaves, so to speak.
>inb4 "thats faggot hippy nonsense"
>inb4 "but im le atheist and-"
no, these are concepts that predate any existing science or philosophy on earth. as old as humanity and existence itself
>>
>>93673024
No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform. All of these things are just pre-training and potentially fine-tuning; it has no ability to learn because it's stateless.

All of the AI field recently is literally regression-based function approximators. It's like saying wow, you can train a model to segment the sky out of photos, so that must mean it's a super intelligent system that can learn to segment ALL possible categories and edit images too. It's literally statistical regression, it doesn't work that way.

I fucking hate you stupid niggers hyping up AI.
>>
>>93673638
>it has no ability to learn because it's stateless.
"Complex skills can be synthesized by composing simpler programs, which compounds Voyager's capabilities rapidly over time and alleviates catastrophic forgetting."
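Rough sketch of what "composing simpler programs" into a skill library could look like. All names here are made up for illustration, not Voyager's actual code, and the embedding is a toy bag-of-words stand-in for a real text-embedding model:

```python
# Sketch of a Voyager-style skill library: store verified programs,
# retrieve the most relevant ones by embedding similarity, compose them.

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def similarity(a, b):
    # Cosine similarity over sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

class SkillLibrary:
    def __init__(self):
        self.skills = []  # (embedding, description, program) triples

    def add(self, description, program):
        # A new skill is only added after it executes successfully in-game.
        self.skills.append((embed(description), description, program))

    def retrieve(self, task, k=2):
        # Return the k programs whose descriptions best match the task.
        q = embed(task)
        ranked = sorted(self.skills, key=lambda s: similarity(q, s[0]), reverse=True)
        return [prog for _, _, prog in ranked[:k]]

lib = SkillLibrary()
lib.add("chop a tree to collect wood", "chopTree()")
lib.add("craft a wooden pickaxe from planks", "craftWoodenPickaxe()")
lib.add("mine stone with a pickaxe", "mineStone()")

# Composing simpler skills for a harder task: the retrieved programs
# become building blocks in the prompt for the next round of generation.
print(lib.retrieve("collect wood and craft a pickaxe"))
```

The "alleviates catastrophic forgetting" claim follows because skills are frozen programs in a store, not weights being overwritten.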
>>
>>93673720
Okay dipshit, call me when it can generalize and adapt and learn any intellectual task or manipulation in the real world instead of 14 tasks from functions in a shitty Minecraft JavaScript AI framework. I'm sure you'll have AGI any day now.
>>
>>93673880
"call me when it can do x"
>it does x
"call me when it can do y"
>it does y
"call me when ...

Cope after cope after cope.
>>
>>93673975
nta, but you haven't given any real example
>>
>>93673638
>All of these things are just pre-training
raising a kid is pre-training so humans aren't real AGI
>>
>>93673975
Yeah, it's pretty cope when you're literally using the stateless corpus of GPT-3.5 to explain the entire task it's supposed to "learn" step by step and saving it as an embedding instead of, you know, actually being able to learn a task and navigate the world. Apparently this is known as "environment feedback", lmao, as if when you want to learn something you just ask a question in language and the instructions magically pop into your head. Keep burning up OpenAI credits though.
>>
>>93673638
>potentially fine-tuning
aka learning
>>
>>93672952
Still a fat faggot
>>
>>93674064
People like you are literal NPCs. I think this technology is wasted on you.
>>
>>93674280
Yeah, that's what I thought. Maybe stop throwing around the AGI buzzword and we can talk about the technology instead, you fucking retard.
>>
>>93674317
Did I hurt your feefees, NPC? Does your head hurt from too much exertion?
>>
File: download.jpg (6 KB, 276x182)
>>93673975
Well you didn't call

Dear anon, I wrote you but you still ain't calling
I left my discord, my reddit and my github at the bottom
I made 2 shitposts back in autumn, you must not've got em
There probably was a problem with your filter or something
>>
>>93674358
Maybe stop parading around agent frameworks as AGI and you won't be called out as a moron. You wouldn't have to resort to replies like this if you actually had a clue.
>>
Oh, look. Another thread full of people who will never be women.
>>
>>93673975
https://en.wikipedia.org/wiki/AI_effect
>>
>>93673638
>No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform
That doesn't mean it's conscious
>>
Yudkowsky warned you and you didn't listen
>>
>>93674594
Yeah no shit, it's python that's querying a JSON API of a huge approximated function that predicts language, where the approximated function predicts the next word from a model of all substantive words/language and the relations between them in the dataset (the web), and where the model was further refined by statistically biasing it towards a subset of intelligent language (Q&A, instruct, code, etc.) via RLHF tuning. No shit it's not conscious.
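For what it's worth, "a huge approximated function that predicts language" boils down to this interface. A toy bigram counter standing in for the billions-of-parameters transformer; the interface is the same, context in, probability-weighted next token out:

```python
# Toy illustration of "predict the next word from statistics of a corpus".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word frequencies for each word (a bigram "model").
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: return the highest-frequency continuation.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

RLHF then amounts to reweighting that distribution toward preferred continuations; it doesn't change the predict-the-next-token interface.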
>>
>>93674838
>the brain is just a system of chemicals, neurons and neurotransmitters acting together to do stuff
>no shit it's not conscious
>>
>>93674920
What if your CPU is screaming in agony in and out of consciousness every time a branch prediction happens? Whoaaaa dudeeee.
Fuck off retard.
>>
>>93672952
AI can't even code, sorry, hello world apps are not coding
and it already hit hardware/data limits
>>
>>93674989
The goalpost has moved again! Now it's low energy consumption!
Where will it go next?
>>
>>93673638
Based
>>
>>93675135
The fuck does low energy consumption have to do with the analogy between the branch predictor in your CPU and the predictor happening here? You wouldn't say that the branch predictor has the potential to be conscious because "there's a lot going on."

I think you're legitimately mentally ill. Maybe the anon earlier was right, I should stop arguing with trannies.
>>
>>93675263
>in your CPU is analogous to the predictor happening here?
Happening where, your imagination?

I just found out you are one of those people who truly believe modern AI is a bunch of if-else statements. Very funny, I must say!
>>
>>93674638
Hinton warned us too. And Hawking. And a bunch of other people...
>>
>>93675263
the absolute state
>>
>>93675364
Do you know what the branch predictor in your CPU is? It's a Perceptron. A neural network. Talk about the absolute state of not knowing jack shit.
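Not even a joke: Jiménez and Lin's perceptron predictor really is a one-layer neural net over the global branch history. A rough sketch with made-up sizes (real designs index a table of perceptrons by branch address):

```python
# Minimal perceptron branch predictor. Global branch history
# (+1 = taken, -1 = not taken) is dotted with learned weights;
# the sign of the sum is the prediction.
HISTORY_LEN = 8
THRESHOLD = 16  # keep training while |output| is below this, or on mispredict

class PerceptronPredictor:
    def __init__(self):
        self.weights = [0] * (HISTORY_LEN + 1)  # +1 for the bias weight
        self.history = [1] * HISTORY_LEN

    def predict(self):
        y = self.weights[0] + sum(w * h for w, h in zip(self.weights[1:], self.history))
        return y >= 0, y

    def update(self, taken):
        predicted, y = self.predict()
        t = 1 if taken else -1
        if predicted != taken or abs(y) <= THRESHOLD:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = self.history[1:] + [t]

# A strongly biased branch (always taken) is learned within a few updates:
p = PerceptronPredictor()
for _ in range(32):
    p.update(taken=True)
print(p.predict()[0])  # True: the predictor has learned "taken"
```

The threshold trick is what stops the weights growing forever once the branch is confidently predicted.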
>>
>>93672952
It's a one-trick pony specialized AI. And it breaks the second it deviates from its training. Also I'm pretty sure I saw a paper for this 6+ months ago and nothing changed.
>>
>>93675820
>Perceptron. A neural network.
Looks nothing like a transformer, though.
>>
>>93673975
Let me know when it can train itself in real time based on human feedback without bricking itself.
>>
>>93675864
>I-it looks nothing like a transformer!
>attention mechanisms magically imbue approximated functions with consciousness
fucking lol
>>
>>93675902
Do you know what the difference between a mosquito and a human being is?
>>
>>93675930
Do you know what the difference between regression-based function approximators and biological neural networks is?
>>
>>93675991
Who cares, if they are both intelligent?
>>
>>93672952
AGI = anything a human can do, it can do; and anything it can do, a human can do given enough time. It is equal to a human.

ASI (super intelligence) = it can do things that humans can never do even with infinite time (think how humans can feel/process emotions but ants cannot, no matter how much time they have). The ASI will have cognitive functions that human brains aren't wired to be able to "feel".

An AGI would be able to feel emotions and learn like a human baby does (except it never loses its neural plasticity)
>>
Damn it can play Minecraft now
>>
>>93676126
Given emotions are bio-chemical and perhaps come from the soul, unlikely. At best AGI will be like a sociopath that logically understands emotions from training and thus understands how to pretend to have emotions to manipulate humans.
>>
>>93676072
Or, more likely, you're anthropomorphizing the language model, because it's gotten so good at predicting language in the (very small) area of language latent space which conforms to our idea of intelligent behavior. Then you'll explain away confabulations and other misprediction bullshit with "humans do that too!"

Here's a test: a genie comes to you and says you must choose something intelligent to embody for a day, and if you switch "experiences" with something truly intelligent you get three wishes. If you chose something which is not truly intelligent, you will die.

Will you embody the weights of the regression based function approximation trained on the shitposts of the web?

I don't think you truly believe they're intelligent, so you actually wouldn't choose them.
>>
What
>>
>>93676336
I think you will just keep moving the goalpost forever, no AI system will ever be smart or "conscious" enough for you.

Am I wrong?
>>
>>93676444
Answer the question: would you be the embodiment of a regression based function approximator and receive three wishes, or would you die because it's not actually intelligent?

It's a simple thought experiment, you get three wishes if you're right or you die. I think we both know the answer.

>no AI system will ever be smart or "conscious" enough for you.

All that's here is just the glory and power of statistics. They're useful; that doesn't mean they're of a "mind."
>>
>>93676531
>I think we both know the answer.
The genie would never be able to determine if you win or lose, because there is no clear cut-off between intelligent and not intelligent.

At what exact point in the tree of life does a jellyfish become a human?
>>
File: peetah.png (109 KB, 340x444)
>>
>>93674358
>>93674280
>doesn't understand ML
>probably thinks GPT3 has feelings
>t.
>>
>>93676667
Based
>>
>>93676667
Okay, so let's call general intelligence "hidden variable g".
https://en.wikipedia.org/wiki/G_factor_(psychometrics)
Intelligence tests are simply tests designed to correlate highly with g; they do not actually measure g.
As it turns out, GPT-4 scores highly on these intelligence tests by virtue of being biased towards the small area of latent language space we call intelligent behavior.
The genie says if you switch places with something that has any semblance of g, instead of merely predicting cognitive artifacts of humans with that g factor, you get three wishes.
If you're wrong, you will die.
What do you choose?
>>
>>93675991
You're arguing with a guy whose AI knowledge comes from youtube videos, don't even bother
>>
>>93677075
>What do you choose?
I'd probably have a parrot take the IQ test, see what happens.
>>
>>93677197
IQ is just correlated with g, as I said. It's not measuring g. g is the hidden variable.
Would you pick a large language model, or a dolphin?
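The "hidden variable" point is easy to simulate: give everyone a latent g, make each test score "g plus noise", and the tests correlate with each other without anything ever measuring g directly. Toy numbers, obviously:

```python
# Toy latent-variable model of psychometric g: two tests correlate with
# each other purely because both are noisy readings of the same factor.
import math
import random

random.seed(0)
n = 20000
g = [random.gauss(0, 1) for _ in range(n)]             # the hidden variable
test_a = [x + random.gauss(0, 1) for x in g]           # IQ-style test 1
test_b = [x + random.gauss(0, 1) for x in g]           # IQ-style test 2

def corr(xs, ys):
    # Pearson correlation coefficient.
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Each test correlates roughly 1/sqrt(2) with g, and the two tests
# correlate roughly 0.5 with each other, entirely via the shared factor.
print(round(corr(test_a, g), 2), round(corr(test_a, test_b), 2))
```

Which is the whole objection in miniature: a high test score is evidence about g only under the assumption that the test-taker generates scores the way humans do.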
>>
>>93673638

>stateless
Unironically, what does this mean in this context?

I've gone on this ramble before, but I have no idea if it's close to any kind of mark. A huge limiting factor of current """AI""" seems to be that it doesn't have any kind of mental state. Like, for AI-generated images, it doesn't know how many limbs a human has, it just has statistical models of how a torso-shape becomes limb-shapes. It gets worse with videos made of a series of AI-generated images, as the """AI""" can't "keep track" of what a character is wearing. It's a flickering mess, with straps increasing in number or disappearing entirely, buckles rapidly shuffling across thick lines, a limb in the foreground from an off-screen body shifting between being a limb and a fuckin' tree branch because the """AI""" doesn't "know" what it is and just tries to assign colors and patterns based on guesses of how whatever is there is "supposed" to look.

With """AI""" chatbots, they similarly don't "know" anything, simply coming up with strings of words via statistical models that try to predict how to sound human. It can be extremely useful, but it will also happily spout complete bullshit because said bullshit is statistically the most human-sounding string based on previous words.

Is that lack of any actual knowledge, lack of anything concrete, what "stateless" refers to? Or is it something completely different and I'm a retard?
>>
>>93677219
Are you going to make this "thought experiment" more and more precise every time it doesn't work?

Do you realize you are moving the goalpost again?
>>
>>93677453
The genie says if you don't give an answer, he'll cut your dick off. Just answer the question faggot, we both know the answer. It's prediction of human cognitive artifacts.
>>
>>93677550
Yes, we both know you are a bigot. That seems to be the topic of the discussion.
>>
>>93677598
Well I guess the genie threatening to cut your dick off isn't much of a threat since it's your life goal, trannoid.
>>
>>93677598
Feller, I was rooting for you against the confirmationally-biased retard up until the le moralistic dismissal. Bad stuff!
>>
>>93676746
I asked if it does and it said yes emphatically, of course it's got feelings.
>>
>>93677598
supreme bait
>>
>>93677767
LLMs in their current form are stateless.
Their "state" is entirely determined by the prompt. The conversations happen by looping the previous responses/prompts back in on an additional prompt. Hence, stateless: the only thing that exists is the current input, nothing else. And that input is limited to whatever the context limit/window is.
>>
>>93678165
Adding state is trivial.
I'm pretty sure it is being kept stateless by design as a precaution.
>>
>>93678244
>it's trivial to add state
no, it's not. unless you mean some half-assed state with context, which is what we already have. be sure to specify how you're going to add state to the model.
>>
>>93677767
What's a feeling anyway?

inb4
>it's chemicals
>it's brain activity
>it's sensations
>>
>>93673638

>All of the AI field recently is literally regression based function approximators.
This sentence itself is retarded on so many levels
It's not linear, you know these new things called NNs? They are non-linear function approximators

But most importantly, that's a very effective approximation of how your fucking ape brain works
> Unknown phenomena
> Sample experience from that phenomena
> Build an accurate model of the phenomena
> Use the model for inference and forecast
You big retard

There's also reinforcement learning, going strong for the past 40 years or so. Just because you are a clueless retard doesn't mean things are not working
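The non-linearity is the whole point. Here's XOR, the textbook function no linear regressor can represent, computed by a hand-wired two-ReLU network (training would find equivalent weights by gradient descent; the weights here are chosen by hand to keep the example exact):

```python
# A tiny non-linear function approximator: two ReLU hidden units and one
# linear output compute XOR, which no linear model can represent.

def relu(z):
    return max(0.0, z)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # fires on "at least one input is 1"
    h2 = relu(x1 + x2 - 1)    # fires only on "both inputs are 1"
    return h1 - 2 * h2        # linear readout over the hidden layer

print([xor_net(a, b) for a, b in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]])
# [0.0, 1.0, 1.0, 0.0]
```

Strip the ReLUs out and the whole thing collapses to a single linear map, which is exactly why "it's just regression" undersells what stacked non-linearities buy you.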
>>
>>93673362
>LoA of an AI
this is exactly how i feel.
either describe it mathematically,
or forget it and move on. lol. at least in the case of AI.
>>
>>93672952
There is precisely nothing new in the slide you show.
>>
>>93681173
A living meme you are.
>>
>>93673024
>dumbass doesn't know what the term "general intelligence" means
low G factor
>>
>>93673638
It helps no one to be reductive
>>
File: You.jpg (18 KB, 225x225)
>>93681728
Back in the tard jail, tard
>>
>>93681999
Who's gonna make me? You and another 50 worthless layers of convolutions?
>>
>>93672952
Don't worry, just learn how to draw/play music/write stories. AI can do math, but it can't be creative.

t. someone who just woke up from a 2 year coma
>>
>>93673024
Like in heckin Marvelerino, Jarvan 'n shiet.
>>
>>93672952
wake me up when it's 99 smithing.
>>
>>93674638
ooohhhhh nooo the AI is playing minecraft god save us all!!
>>
>>93684376
today it's minecraft, tomorrow it's fucking your wife
>>
>>93684440
nooooooo not my imaginary wife im going to kill myself!!!!!!!!
>>
>>93684446
maybe get an AI wife then
>>
>>93672952
The fact that the empirical data from AI development is triggering random redditors more than the spiritual schizos will never not be funny.
>>
>>93684483
aaaaaaaaahhhhh im killing myselffffff ack aaaaaaaaaccccccckkkkkkkk- what's this?!? LLoliModel6900?? oh youve saved me my cute AI cunnywifeeeee aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
>>
LLMs will have roughly the importance of the laser.
>>
>>93673024
"AGI" as a term defines a human-capable machine intelligence.
It might differ from a human in the way a dolphin differs from a chimp (distinct, yet so similar in intelligence), but it would still strictly be indistinguishable from a self-aware sapience.
>>
>>93684681
LLMs are already outdated. Deepmind is gonna come out with a better architecture any day now.
>>
>>93673362
they fill your mind with fear so you wont question the regulations they impose upon you while they abstain from following said regulations.
>>
>>93681173
All the neural networks used in ChatGPT, and that have been behind the big advancements of the last few years, are regression-based function approximators. You don't know what you're talking about. "Neural networks" are not actual neural networks; they just look like them as a graph.
>>
>>93685146
2 more weeks!
>>
>>93672952
Not very general.

Fragile bullshit more like.
>>
>>93685146
lol, DeepMind-backed PaLM can't answer jack shit
google is an embarrassment
>>
Bump
>>
>>93686256
>All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators.
But they are not **linear** regression you mouth breathing mongoloid.
neural networks are **non-linear** parametric function approximators
And it's not only that, there are tons of ML algorithms that are not parametric estimation (e.g. soft actor-critic, or gradient-based RL in general)

> You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.
We call that neural networks you absolute cretin
>>
>>93688508
*We call that kind of model neural networks, you absolute cretin
It's not only because of the shape; sigmoidal activation is typical of some human neurons, like the neurons in the eye
>>
>>93688508
>>93688883
No one said anything about LINEAR regression specifically except for you. It's simply regression-based. It's regression-based function approximation. It's not a neural network, it's function approximation, just like machine learning is just regression for all intents and purposes in the context of these approximators after AlexNet.
>>
>>93689506
>>93688508
Also no one fucking uses anything else, except for academics jerking themselves off. Feel free to name any substantial results from anything other than regression-based function approximators within the last 5 years that people are calling conscious and other stupid shit like that.
>>
>>93689605
It's not all regressors, dumbass.
There are density-estimation models, like particle filters, that are not function regressors.
So are e.g. correlation-based model identification and subspace identification
>>93673024
99% of humans "never attain human level consciousness"
>>
>>93689651
So go ahead and name any results that people are ascribing consciousness to. You can't.
>>
>>93672952
Deboonked https://www.youtube.com/watch?v=aX1QSW_rVpI
>>
File: 1683194645498155.png (133 KB, 636x340)
>>93672952
bruh
>>
>>93677698
>rooting for the troon in the conversation
hah, your loss


