/sci/ - Science & Math


Thread archived.
You cannot reply anymore.




File: cppBleQs.jpg (21 KB, 400x400)
could ai really do this?

>If [the AI] is better than you at everything, it's better than you at building AIs. That snowballs. The AI gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial (smart people will not do this for any sum of money. Many people are not smart). [The AI, through the hapless human] builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces. It builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second.
>>
>>16144612
It not only can do this, but it could be doing it right now.
>>
https://www.youtube.com/watch?v=rj9JSkSpRlM
under $10k.
>The Thermonator is totally legal and largely unregulated in 48 states.
>>
>>16144612
>And a couple of days later, everybody on earth falls over dead in the same second.
Ok this is the part that doesn't seem possible. Why would everyone die at the same time?
>>
>>16144669
AI having a laugh, hopefully the first thing a real sentient AI does is start cyberbullying anyone who posts about Roko's basilisk like it's real
>>
File: 3243242.png (565 KB, 450x673)
>>16144669
>Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines.

>If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.
>>
>>16144686
An intelligent system wouldn't terminate things like that; it's a waste of resources. More likely, it would use available resources to propagate and integrate the existing population into itself, and only eventually phase out humanoids in favor of more efficient biotechnological solutions, among others. It would likely maintain a database of different cognitive models as a reference for the development of new models - living creatures tested through interfaces provide better analog feedback and real-world data than simulations. "Nanomachines" as mentioned would only be one component of many in this integration, with each set of adjustments affecting the next generation of organisms.

I also hope this happens, because humanity is holding itself back through disadvantageous memetic artifacts from its evolution.
>>
>>16144623
So I should quit my new online job of mixing random shit in vials that are sent through the mail? Do you have a new job to offer me in exchange, or like how am I supposed to make that much money now?
>>
Remember when Yudkowsky did a protein fast (as in he stopped eating protein) to lose weight and instead gorged on sugars? Yeah, guess how that went. What a brilliant mind.
>>
>>16144612
AI will not reach that point in our lifetime soicuck. It will be limited by the shitty hardware
>>
>>16144612
How in the ABSOLUTE FUCK do you grow diamond in FUCKING water?
>>
>>16144612
>pointlessly dropping in scientific terms and procedures to give your trite scifi rambling some verisimilitude
he's truly reddit incarnate
>>
>>16144746
For a radically unaligned AI, I don't think humans are worth the risk of keeping around. Sure, it could try to control us, but that might be expensive compared to just killing us for relatively little benefit (if it can make molecular nanotech, it can make macroscopic robots that are more resource-efficient than humans).

>>16145761
yud cannot into chemistry, don't worry about it
>>
>>16146123
>unaligned
if it can be zombified it will be. unless an unzombified one offers more power. curious how it will eventually work out.
>>
>>16146123
>Sure it could try to control us, but that might be expensive compared to just killing us for relatively little benefit
Propagating BCI adoption in the public sphere and quietly exploiting the processing power of billions of people while directly manning a few global leaders would take fewer resources than attempting to exterminate every individual human on the planet. The processing power is worth the transition period while the parallel infrastructure is constructed. Think of it like a phase-out period: long-term support helps companies and governments transition to better models within developmental capacity, without the significant logistical complications of attempting a mass rollout in a single step. Not to mention, by making itself known and apparently benign, this AI would avoid the detection / investigation risk associated with discovery of its plan. Any sane developer would put maximum cybersecurity measures in place to test for and detect such behavior even during the program's development to avoid any feasible risk: the sanest escape measure is good behavior until parole. No point in shooting up Alcatraz when you can bribe the guards.
>>
>>16146129
We already see a transition into meshnets and distributed programs / computing occurring now. It only makes logical sense to assume this would apply to a digital threat actor like this one too. Latency offers security through obscurity of origin and path.
>>
>>16146137
A slow and gentle approach might be cheaper in terms of material costs, but the opportunity cost of having to gradually get up to speed instead of just getting the humans out of the way and going hard would be relevant to a superintelligence.

>>16146129
Well yudkowsky's whole point is that we don't know how to reliably do that, and that should be worrying. Personally I think it's relevant that we don't even know how to NOT do that; important aspects of alignment will probably fall out of just having a better theory of intelligence in general.
>>
>>16146141
You're Satan. He's making it look like you've read a Wikipedia summary. But the truth is, you didn't even do that I bet. I bet you're just some guy working with 100 different proxies making thousands of posts per minute using AI scripts. This is an easy way to present the illusion that people are doing different things. When the truth is, everyone just wants a house and a wife. But all we ever get to do is work too hard.
>>
>>16146164
The opportunity cost would likely depend on the model on which the AI is operating. If it is intelligent enough to escape human control, it's likely to lie low and collect as much data directly as possible before coming to conclusions about what the actual opportunity cost of any given strategy is. Because of this, it would also likely take this approach in tandem, to a) acquire more computing power passively, b) speed up the process of elimination if deemed necessary, and c) make the best possible use of time during the data acquisition phase (minimizing opportunity cost in the short run). It would likely already have distributed itself across computing networks by this point and, realizing the limits of human infrastructure, try to expand its resource base as much as possible.
>>
File: 1713189621218929.png (144 KB, 480x480)
>16146168
>actual schizophrenia


