"AI with human-level intelligence could destroy the world. Google researchers described how", writes ua.news
April 7, 08:35
Researchers have identified the four threats of AGI (photo: chetroni/depositphotos)
DeepMind scientists are convinced that artificial general intelligence (AGI) with human-level capabilities will be developed by 2030 and could cause serious harm if no action is taken.
The DeepMind team, led by company co-founder Shane Legg, has classified the potential negative effects of AGI. The researchers identified four categories of threat: misuse, misalignment, mistakes, and structural risks, writes Ars Technica.
Misuse is similar to the risks posed by current AI: AGI could be abused to cause harm. For example, it could be asked to find and exploit a zero-day vulnerability or to create a designer virus for use as a biological weapon. DeepMind says AGI companies will have to carry out extensive testing and create robust post-training safety protocols to prevent this.
The second threat is misalignment. This type of harm is described as a rogue machine that has shaken off the limits imposed by its designers: an AI that takes actions it knows its developers did not intend. DeepMind says this goes beyond simple deception or scheming. To avoid it, DeepMind suggests developers use techniques such as amplified oversight, in which, for example, two copies of an AI check each other's output, creating more robust systems that are less likely to go rogue.
Mistakes are another threat: harmful results that the operator did not intend. DeepMind notes that militaries could deploy AGI under competitive pressure, and such systems could make serious errors with fatal consequences. The researchers offer no way to solve this problem beyond not allowing AGI to become too powerful.
The researchers also describe structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems. For example, AGI could create false information so plausible that it becomes hard to know whether it can be trusted.
DeepMind presents this report on AGI risks as a starting point for vital discussions about the safety problems of future artificial intelligence.