5 risks of Artificial Intelligence
Artificial Intelligence reaches into more and more areas of our lives, but its misuse can endanger society and individual citizens. That is why you should know about these 5 AI risks we must avoid.
Artificial Intelligence can carry certain risks that make it dangerous, as influential figures such as Stephen Hawking and Elon Musk, CEO of Tesla and SpaceX, have warned. To them we can add Bill Gates, co-founder of Microsoft, who says that we must be careful with AI but that everything will be fine if it is used correctly, so that its benefits outweigh its harms.
Risks of Artificial Intelligence
Artificial Intelligence is based on building machines capable of thinking and acting intelligently, using tools such as Google's algorithms. But like any new technology, it can be used to do good or, on the contrary, to commit crimes and cause problems for people.
Thus, Elon Musk emphasized the dangers AI could pose in the coming years: “The pace of progress in artificial intelligence (I am not referring to narrow artificial intelligence) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast it is growing, at a pace close to exponential. The risk of something really dangerous happening is within five years. Ten years at most.”
Therefore, evaluating Artificial Intelligence is very important, as is knowing its good and bad sides. These are the 5 risks of AI that can make it dangerous and that you should know about:
Autonomous weapons: Weapons programmed to kill pose a serious risk to the future of AI. Autonomous weapons could even replace nuclear weapons. They are dangerous not only because they can become completely autonomous and act without supervision, but also because of the people who might get their hands on them.
Manipulating society: Social media can be a great source of information about anyone, and besides being used for marketing and person-specific ads, it can be exploited in many other ways. The Cambridge Analytica scandal, involving the US presidential election and the Brexit referendum in the United Kingdom, showed the enormous power that holding data confers to manipulate people. AI, which can combine algorithms and personal data, can therefore be extremely dangerous.
Invasion of privacy leading to social oppression: It is possible to ‘follow’ a user's tracks on the Internet and collect a great deal of information, invading their privacy. In China, for example, information such as footage from facial recognition cameras and data on how people behave, whether they smoke, or whether they play a lot of video games feeds the social credit system. This invasion of privacy can thus turn into social oppression.
Divergence between our goals and the machine's: If our goals do not match those of the machine, the actions we ask it to take can end in disaster. For example, we might order an AI to take us somewhere as quickly as possible without specifying that it must respect the traffic rules so as not to endanger human lives.
Discrimination: Since AI systems can collect, analyze, and track your information, they can also use it against you. For example, an insurance company might deny you coverage because of the number of times cameras have captured images of you talking on the phone, or a job applicant might lose an opportunity because of a weak network of contacts on the Internet.
Artificial Intelligence can therefore carry these risks and become a dangerous technology, but only if we misuse it. We must prevent it from being used for destructive purposes and develop its positive side, continuing research into diseases and other good causes.