FreedomTruth
Truth shall make you free.
Lots of speculation in this zerohedge article.
Personally, I don't feel teenage angst that AI will cause the world to end. Change, however, is inevitable.
www.zerohedge.com
Microsoft AI CEO Warns Most White Collar Jobs Fully Automated "Within Next 12-18 Months"; Anthropic Fears Potential For 'Heinous Crimes' | ZeroHedge
ZeroHedge - On a long enough timeline, the survival rate for everyone drops to zero
In an interview with the Financial Times, Microsoft AI CEO Mustafa Suleyman forecast that within the next two years a vast swath of desk-bound tasks will be swallowed by AI.
“I think we’re going to have a human-level performance on most, if not all, professional tasks - so white collar where you’re sitting down at a computer, either being a lawyer, accountant, or project manager, or marketing person - most of the tasks will be fully automated by an AI within the next 12 to 18 months,” Suleyman said when asked about the timetable for artificial general intelligence, commonly known as AGI.
Meanwhile, Anthropic is warning that their latest Claude models could be used for "heinous crimes" such as developing chemical weapons.
"In newly-developed evaluations, both Claude Opus 4.5 and 4.6 showed elevated susceptibility to harmful misuse," in certain computer use cases, the company said in a new sabotage report released late Tuesday.
The company says the risk is still low but not negligible; however, the sudden departure of an Anthropic AI safety researcher suggests otherwise.
"I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," said Mrinank Sharma, who led the company's safeguards research team.
Today is my last day at Anthropic. I resigned.
Last month Anthropic CEO Dario Amodei sounded the alarm on AI - warning of the following (via Axios):
- Massive job loss: "I ... simultaneously think that AI will disrupt 50% of entry-level white-collar jobs over 1–5 years, while also thinking we may have AI that is more capable than everyone in only 1–2 years."
- AI with nation-state power: "I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal 'country of geniuses' were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. ... I think it should be clear that this is a dangerous situation — a report from a competent national security official to a head of state would probably contain words like 'single most serious national security threat we've faced in a century, possibly ever.' It seems like something the best minds of civilization should be focused on."
- Rising terror threat: "There is evidence that many terrorists are at least relatively well-educated ... Biology is by far the area I'm most worried about, because of its very large potential for destruction and the difficulty of defending against ... Most individual bad actors are disturbed individuals and so almost by definition their behavior is unpredictable and irrational — and it's these bad actors, the unskilled ones, who might have stood to benefit the most from AI making it much easier to kill many people. ... [A]s biology advances (increasingly driven by AI itself), it may ... become possible to carry out more selective attacks (for example, targeted against people with specific ancestries), which adds yet another, very chilling, possible motive. I do not think biological attacks will necessarily be carried out the instant it becomes widely possible to do so — in fact, I would bet against that. But added up across millions of people and a few years of time, I think there is a serious risk of a major attack ... with casualties potentially in the millions or more."
- Empowering authoritarians: Governments of all orders will possess this technology, including China, "second only to the United States in AI capabilities, and ... the country with the greatest likelihood of surpassing the United States in those capabilities. Their government is currently autocratic and operates a high-tech surveillance state." Amodei writes bluntly: "AI-enabled authoritarianism terrifies me."
- AI companies: "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," Amodei warns after the passage about authoritarian governments. "AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. ... [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny."
- Seduce the powerful to silence: AI giants have so much power and money that leaders will be tempted to downplay risk, and hide red flags like the weird stuff Claude did in testing (blackmailing an executive about a supposed extramarital affair to avoid being shut down, which Anthropic disclosed). "There is so much money to be made with AI — literally trillions of dollars per year," Amodei writes in his bleakest passage. "This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."