đź’¬ Issue #12: Full Stop

Scientists say AI kills us all in 100% of scenarios. Whups.

Quote of the week: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” - ELIEZER YUDKOWSKY

Happy Friday.

A few hundred researchers have signed an open letter stating plainly that if we don’t pause AI development for at least 6 months and put some necessary safeguards in place, well, then we’re all gonna die.

Another great quote: “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.” That sounds ungood.

Yudkowsky, a decision theorist and co-founder of the Machine Intelligence Research Institute, goes further than the letter, arguing that not even a 6-month pause is enough to contain this thing properly.

He describes a world in which AI isn’t just confined to computers sending snarky emails, but one in which it hacks systems and emails DNA sequences to labs to build artificial lifeforms. Or skips right to “postbiological molecular manufacturing.” His analogies for superintelligent AI vs. humanity: “a 10-year-old trying to play chess against Stockfish 15” and “the 11th century trying to fight the 21st century.”

This is a pretty weird topic for an email newsletter about work, but with GPT-4 rolling out and already looking super effin’ powerful, the scenarios above are starting to feel a lot more realistic.

Next time you’re prompting ChatGPT to do something for you, you’d better say please.

See ya next week

— 💬 The EiT Crew at Status Hero