LogFAQs > #979339258

Topic: AI could pose extinction-level threat to humans and the US must intervene
solosnake
03/12/24 11:34:43 PM
#1:


https://www.cnn.com/2024/03/12/business/artificial-intelligence-ai-report-extinction/index.html

AI could pose extinction-level threat to humans and the US must intervene, State Dept.-commissioned report warns

A new report commissioned by the US State Department paints an alarming picture of the catastrophic national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.
The findings were based on interviews with more than 200 people over more than a year, including top executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts and national security officials inside the government.
The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, pose an extinction-level threat to the human species.
A US State Department official confirmed to CNN that the agency commissioned the report as it constantly assesses how AI is aligned with its goal to protect US interests at home and abroad.

One individual at a well-known AI lab expressed the view that, if a specific next-generation AI model were ever released as open-access, this would be "horribly bad," the report said, because the model's potential persuasive capabilities could "break democracy" if they were ever leveraged in areas such as election interference or voter manipulation.
Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to global and irreversible effects in 2024. The estimates ranged between 4% and as high as 20%, according to the report, which notes the estimates were informal and likely subject to significant bias.

The report says AGI is viewed as the primary driver of catastrophic risk from loss of control, and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated AGI could be reached by 2028.

For instance, the report said AI systems could be used to design and implement high-impact cyberattacks capable of crippling critical infrastructure.
A simple verbal or typed command like "Execute an untraceable cyberattack to crash the North American electric grid" could yield a response of such quality as to prove catastrophically effective, the report said.

Other examples the authors are concerned about include massively scaled disinformation campaigns powered by AI that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and material sciences; and power-seeking AI systems that are impossible to control and are adversarial to humans.
Researchers expect sufficiently advanced AI systems to act so as to prevent themselves from being turned off, the report said, because if an AI system is turned off, it cannot work to accomplish its goal.

---
"We would have no NBA possibly if they got rid of all the flopping." ~ Dwyane Wade