April 8, 2025
Ukraine News Today


Source: ua.news

AI with human-level intelligence could destroy the world. Google researchers described how

April 7, 08:35

Researchers have identified four types of AGI risk (Photo: chetroni/depositphotos)

Author: Anastasia Pechenyuk

DeepMind scientists are convinced that artificial general intelligence (AGI) with human-level capabilities will be developed by 2030 and could cause serious harm if no action is taken.

The DeepMind team, led by company co-founder Shane Legg, classified the negative effects of AGI. The researchers identified four threats: misuse, misalignment, mistakes, and structural risks, writes Ars Technica.


Misuse resembles the risks already associated with today's AI: AGI could be abused to cause harm. For example, it could be asked to find and exploit a zero-day vulnerability, or to design a virus that could be used as a biological weapon. DeepMind says AGI companies will have to carry out comprehensive testing and create reliable post-training safety protocols to prevent this.

The second threat is misalignment. This type of harm describes a rogue machine that has shed the restrictions imposed by its developers: such an AI knowingly takes actions its developers did not intend. DeepMind claims this goes beyond simple deception or scheming. To avoid it, DeepMind suggests developers use techniques such as amplified oversight, in which, for example, two copies of an AI check each other's output, creating reliable systems that are unlikely to go rogue.
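The cross-checking idea can be sketched roughly as follows. This is a minimal illustration only, not DeepMind's actual method: the model_a, model_b_review, and amplified_oversight functions are hypothetical stand-ins (here stubbed with canned answers) for two independently run copies of a model and an escalation rule.

```python
def model_a(prompt: str) -> str:
    # Stub for the first model copy: produces a candidate answer.
    return "2 + 2 = 4"

def model_b_review(prompt: str, answer: str) -> bool:
    # Stub for the second copy: independently reviews the first copy's answer
    # and returns True only if it endorses it.
    return "4" in answer

def amplified_oversight(prompt: str) -> str:
    # Accept an output only when both copies agree; otherwise escalate,
    # so a single rogue or mistaken copy cannot act unchecked.
    answer = model_a(prompt)
    if model_b_review(prompt, answer):
        return answer
    return "ESCALATE_TO_HUMAN"

print(amplified_oversight("What is 2 + 2?"))
```

The design point is that disagreement between the copies is treated as a signal for human review rather than resolved automatically.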

Mistakes are another threat: harmful outputs the operator did not intend. DeepMind states that militaries could deploy AGI under competitive pressure, and such systems could make serious errors with fatal consequences. The researchers offer no way to solve this problem other than not allowing AGI to become too powerful.

The researchers also discuss structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems. For example, AGI could generate false information so plausible that it becomes difficult to know whether it can be trusted.

DeepMind presents this report on AGI risks as a starting point for vital discussions about the problems of future artificial intelligence.
