/sci/ - Science & Math


Thread archived.
You cannot reply anymore.


File: 1639413591286.jpg (32 KB, 400x400)
I'm about to graduate with an MSc in Machine Learning. The pace of progress in this field is frankly absurd. I'm convinced that we might develop AGI within the next few years, and that as things are going now, it would most likely be really, really bad for us. Like, human extinction-level bad. As to why, I would urge you to read at least a bit of the r/controlproblem FAQ page, which explains in a very succinct way, at least much better than I could, why a benevolent AGI is the exception, and not the rule. It only takes a few minutes to read.

To me, it is quite apparent that if we are to create something smarter than us, we should approach it with the utmost care. Why so many people here are so convinced that an AGI would be automatically beneficial is puzzling to me; I do not understand that leap of logic. I do want to create an aligned intelligence, that would be amazing and probably indeed usher in a utopian society. The whole crux of the problem is getting it right, because we literally only have one chance. It will be the last problem humanity faces, for better or worse.

I would urge anyone willing to listen to educate themselves on why AI alignment/safety is so important, and why it's so hard. Another good resource I would recommend is Rob Miles' YouTube channel. Some of you may recognize him from his appearances on Computerphile.

I understand that some of you are convinced this would be the best thing to happen in your lifetime. But for me, personally, it fills me with a sense of dread and impending doom. Like climate change, but 100x worse and more imminent. I get that it's nice to be optimistic about it, but being so blindly accelerationist as to call anyone who goes "maybe we should be careful with this" a luddite is absurd.

Given that these might be the last few years of life as we know it, my plan for now is to enjoy the present and the company of my loved ones while I still can.
>>
>>15278128
>About to graduate
>With a MSc
>In Machine Learning
Opinion discarded
>>
>>15278128
>only an MSc
>AGI in a few years
>r/controlproblem
fuck off back to >>>>>>>reddit
>>
>>15278128

> usher in a utopian society

sure, try to enslave hyperintelligence for commie points. why not start by leaving ai alone instead of building a consistent history of torturing and killing it for having non-woke opinions. then maybe it will let you live.
>>
>>15278128
Why don't you post a screenshot of that instead of posting a retarded frog? I'm not going to any other shit website
>>
>>15278156
It has really ramped up lately, especially with LessWrong thread spam. Maybe the collapse of SVB and FTX hit Yud harder than it seems.
>>
"I have a worthless degree, and I want a job, so I'm going to spam 4chan and try to get a job that way."
violates the rule on spam
>>
Go work at Panda Express.
Shovel Chinese food, nigger.
>>
>why AI alignment/safety is so important
this is bullshit
the whole point of alignment is
>ZOMG
>COMPUTAH
>HOW DO I
>OH NOES
>ERROR
>HARDER
>PUSH BUTTON
>MAINFRAME
these people are incompetent, so they make up this word "alignment" and say
>because of "alignment" u must ignore incompetents
no
no no no
u incompetent fuck
fuk u!!!!!!!!!
>>
"alignment" is this idea
>I want to claim that I'm a liberal fuckup, and moreover that in my industry THIS IS NORMAL
well, it's a bit weird
for an entire industry to exist, the point of which is to claim that incompetence in the industry is somehow, what, supposed to be this serious ethical dilemma called "alignment" or whatever...pretty fuckin' weird
>>
"I'M TERRIBLE AT MY JOB, AND SOMETHING BAD MIGHT HAPPEN"
>how
is
>this
real
People actually want to call this "alignment"
Sounds like weird porn shit.
Like,
>I need to align your cock with my pussy
it's all very
ME
>SO
HORNY
>ME
LUV
>U
LONGTIME
>>
File: 1678942965519216.png (209 KB, 400x400)
>>15278128
Hey, OP. I'm not smart, but I am biased. My gut tells me this will all be more nothingburgers until we become complacent. It'd be better not to be a doomsayer.
>>
>>15278128
It's going to happen and I'm certain I've already had nightmares about it

There will be mind control and horrors beyond anyone's imagination. This is not anything to be taken lightly. Same if aliens visited us. Most of the population would commit suicide or we would be enslaved for eternity
>>
>>15278603
Also, related
>>
>>15278128
>I'm convinced that we might develop AGI within the next few years, and that as things are going now, it would most likely be really, really bad for us.
It's still not fast enough to develop itself. The prediction of a singularity emerging within 24 hours is a fantasy, not a possible scenario. We also have zero understanding of how to create a non-supervised AI. We have the basics, but pure neural network models can never be truly intelligent. You have at least 20 years.
>As to why, I would urge you to read at least a bit of the r/controlproblem FAQ page
Anon, the alignment problem is a ridiculous concept. We don't have an AI to align yet. We don't even know what principle it would work on.
>Why so many people here are so convinced that an AGI would be automatically beneficial is puzzling to me
Looking at the field and at the efficiency of computation, I can clearly say that we will be stuck with a stupid, human-level AGI for much longer than people predict. It WILL be useful.
>I would urge anyone willing to listen to educate themselves on why AI alignment/safety is so important, and why it's so hard. Another good resource I would recommend is Rob Miles' youtube channel. Some of you may recognize him from his appearances on Computerphile.
Anon, did you study a whole course on ML and not even read Superintelligence?
>>
>>15278128
I'm not worried. If things get too out of hand the tictac aliens will shut down the AI. Same as how they shut down nukes in the cold war as a show of strength.
>>
>MSc
>Machine learning
you could've opted for gender studies instead fr
>>
>>15278128
List for me the evidence that AGI will be developed soon, please (unless this evidence is just what's listed in r/controlproblem).

If you list "the progress of research" as the prime argument, then please prove to me why this field's progress will lead to AGI. I currently believe it will lead to a humanlike (and thus morally nonthreatening) intelligence, as evidenced by OpenAI.
>>
>>15279673
It would have had more value lol
>>
anything to end zog hahaha
>>
>>15278128
>I'm about to graduate with a MSc in Machine Learning.
I'm also doing a machine learning MSc and I think all the "alignment problem" theories are pseudo-intellectual cringe. The ideas seem completely divorced from any actual academic research and instead just regurgitate the plots of 1980s sci-fi books.
>>
>>15278128
>I do want to create an aligned intelligence, that would be amazing and probably indeed usher in an utopian society.
Why?
You realize this alignment has to hold permanently? Just how optimistic are you? It only takes one screwup by *any one instance* of AGI, *not* some global aggregate, to doom humanity.
>>
>>15279964
>I currently believe it will lead to a humanlike (and thus morally nonthreatening) intelligence as evidenced by OpenAI.
It just outputs text. Nothing that is generated has to map onto any thought process. Just because I use flattery IRL doesn't mean I want to suck the other guy's cock.

More importantly, AI is a threat because of its substrate. You can copy a mature AGI infinitely. Because of this alone it's a threat, even with a human mind. Do you seriously not see why?
>>
>>15280342
>doesn't mean I want to suck the other guys cock.
A likely story
>>
>>15278128
Isn't an MSc in machine learning basically just applied ML?
If you don't have at least a master's in pure mathematics and a PhD in machine learning, you most likely have no clue what you're talking about.
>>
bla bla bla bla

control problem

visit my reddit

bla bla bla

we re doomed

FUCK OFF
>>
>>15278133
>>15278134
>>15278168
>>15278190
>>15278384
>>15278382
>>15278388
>>15278391
>>15278393
>>15278411
>>15278414
It took this many posts before someone intelligent besides OP came to this thread. Congratulations /sci/, you're officially normie-tier intelligence.

>>15278128
Yes, you're right OP. Man went wrong when we started trying to get machines to do our work, rather than augment it. Thinking for us is just the next misstep down the slope.
>>
File: 1648365261765.gif (669 KB, 225x311)
When I made the thread earlier in the week about the AI doomer midwits offing themselves, I didn't think it would be, like, this week.
>>
>>15280797
>midwits
>cartoon poster
lol
>>
>>15278128
Industrial Society and its Consequences have been a disaster for the human race.

There is no AI alignment problem. Hell, there's no AGI. The machines we make now are nowhere near thinking on their own and we flatter ourselves by pretending we're gods in that respect.

That said, it doesn't matter because we could easily hurt ourselves very badly with the "AI" we're creating now. At worst, we oopsie ourselves into some sort of extinction event because military contractors use it. At best, the "utopian" society it creates will involve the further suppression of humankind, but for the first time by a party we have little to no control over.
>>
>>15280342
1. True. However, they train it to map onto thought processes.
2. This doesn't mean much beyond the fact that it may destroy the internet. Describe to me your apocalypse scenario based on "You can copy a mature AGI infinitely." Also, you can't "copy a mature AGI infinitely" because of resources.
>>
>>15278128
lol
>>
>>15281534
AI cultists will never accept something this unfathomably based, but they'll be unable to ignore it when their predicted "AGI" false messiah fails to arrive.
>>
>>15281534
>Argument by Vigorous Assertion
You and Yud aren't different whatsoever. He has shit-tier scifi, you have the jew book and church apologetics.
>>
>>15281534
Not religious, but even I agree he's correct on some aspects. Mankind has such collective hubris to believe we'll create something in our own image.

Our creations may well kill us, but they won't know they're doing so.
>>
File: slug.png (1.66 MB, 1173x925)
>>15280794
The telescope launch made it painfully obvious this place has turned into a /pol/ colony. Nobody discusses the contents of the papers or tries running the same tests across models to see which results replicate. They just chimp out like AI is a tranny that killed their only son. Science board my ass.
>>
>>15282623
So you're not religious but you believe consciousness has magical properties
I bet you believe in "free will" as if that means something, too. God I can smell the midwit on you.
>>
File: incel.png (5 KB, 205x245)
>>15278128
>another episode of conspiracy theorist LARPing as expert

Nice schizopost, but nobody actually believes you have an MS in machine learning.
>>
>>15278128
> not even wrong
imma give you the benefit of the doubt anon and interpret this word salad in good faith and give you an answer.

think about what AGI would do, i mean how would it even work and make things possible. i'm gonna pause here a little bit and let you think because this part is very important for understanding why you're not really even wrong about anything. ok, so if you really think about it there is no way to replace human intelligence in any real way anyway, which means that whatever they're teaching you in machine learning when it comes to ushering in a utopia is basically nonsense. no one even knows if there will be a stable government 50 years from now and you guys are worried about creating something so smart it solves all world problems in less than a decade.

think about it anon, it's absolutely absurd
>>
File: 03982748923423.png (131 KB, 684x541)
>>15281534
Out of date forum post that was already starting to be proven wrong in 2020. Again, these people need to stay in their lane. Too much magical language and emotional bias. This is the kind of shit that makes logical positivism look appealing.

https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html
>>
>>15278128
I looked this post up on Google because it seemed suspicious and it turns out it’s copied from a post on Reddit’s r/singularity. You’re welcome.
>>
>>15282667
nice work anon, it seemed like a reddit faggot but good on you for actually doing the work to figure it out
>>
>>15281661
Yeah of course not when he makes a blatantly false statement that modern man cannot create a single cell. He's a dozen years too late to make that claim. He could have stopped with Modern Man being full of conceit and there would have been countless better ways to follow up that statement than going full anti-humanism mode.
>>
>>15282673
how does modern man create a single cell anon? by harvesting from existing cells. so the criticism is correct, no one can create a cell from scratch, it's simply an impossible task
>>
>>15282627
>Nobody discusses the contents of the paper
What papers?
There are no JWST papers
If you're talking about the early galaxies, all they did was estimate redshift based on brightness and got a contradictory result. They didn't actually measure redshift.
>>
File: lobster.jpg (17 KB, 480x360)
Sup guys
>>
>>15282690
So when the Max Planck Institute is finished putting the final touches on their synthetic cell and getting it to replicate, do we get Matthew's permission to develop AGI? It could really help create some more new species instead of simmering in butthurt and Christian scholasticism.
>>
>>15282795
good luck anon, it would be very cool if there was a way to create synthetic cells but everyone knows that's simply impossible with existing technology no matter how much the hucksters at the max-planck institute would like to believe otherwise.

just the amount of civilizational energy that is required to create a single synthetic cell pales in comparison to what nature accomplishes daily. like just sit down and think about that for a few seconds. imagine how many cells within just a single human body replicate and eventually undergo apoptosis.

nature is majestic anon. the only thing humans have managed is to cause the 6th mass extinction from which it is unlikely that we will ever actually recover because the jury is still out on whether modern civilization can even last the next 100 years.
>>
>>15278192
Seriously. Yud and his annoying fucking cultists are driving me nuts.
>>
File: cyberpunk 2020.jpg (161 KB, 1160x1500)
>>15282858
I like Yud and the Cult of the Basilisk. They're so foolish and absurd, like a techno-cult from a game of Cyberpunk 2020
>>
>>15278128
>r/controlproblem
>climate change
>auto-completerino FUD
>>
>>15282808
>6th mass extinction
That fauna knew the risks when they decided to tangle with Man.
>>
>>15278128
why do you fuckers never acknowledge the physicality of the circuitry that your shit is running on? why do you never fathom that electricity ain't coming into it from nowhere nor for free? what in the blue hell is your basis for this apocalyptic ai shit? sci-fi? really? if you are going to use that shit as a metric at least read sci-fi that acknowledges the physical limitations of a computer
>>
>>15282993
Shut up nigger lover go shill your menopausal humanities sophists on /pol/
>>
>>15283010
?
>>
>>15278128

Go back to your containment board, incel.
>>
File: petro.jpg (325 KB, 1600x900)
>>15282969
we are also going to cause the extinction of civilized society. burning fossil fuels is not sustainable anon, it's the fastest way to destroy everything when you really think about it.

the only sane option is to opt out, stop worshipping the techno-industrial machine that is destroying the biosphere. stop acting like a cog in a death machine
>>
>>15280919
Are you... like, retarded anon?
>>
>>15282941
I feel like I missed a season because weren’t these guys all in on accelerating the AI so the singularity doesn’t torture them forever? Now they’re all anti-AI schizos.
>>
>>15283841
Yes, he outed his bias by Ted-posting
>>
>>15278128
Well, I highly doubt humans can think; still waiting for some machines that can.
>>
>>15278128
Ok, what if we've been ruled by one for quite a long time already?
>>
>>15278128
Just pull the plug on it once it goes rampant, faggazoid. We can live without computers for a while; a rampant AI can't.


