Is Artificial Intelligence dangerous?
Updated: Feb 15
Some renowned individuals, such as the legendary physicist Stephen Hawking and Tesla and SpaceX leader Elon Musk, have suggested AI could be very dangerous; Musk once compared the dangers of AI to those posed by North Korea's dictator. Bill Gates also believes there is reason to be cautious, but that the good can outweigh the bad if AI is managed properly.
What is Artificial Intelligence?
Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.
From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning and problem solving.
How can artificial intelligence be dangerous?
Artificial intelligence was a vague term only a few years ago: few people knew what it really meant or what it actually did. But 2018 saw a boost of artificial intelligence in almost everything, from speakers to air conditioners and smartphones to cars. With AI and machine learning (ML) working hand in hand, many tasks can now be automated with little human intervention.
Today we use AI everywhere: in common home appliances such as robot vacuum cleaners that clean on their own and return to their chargers, driverless cars that take you to your destination at the push of a button, smartphones that automatically detect a scene and coax the best image out of a tiny sensor, and smart speakers and humanoid robots that can converse with you like another human being. While we look forward to this technology getting better and more reliable, a few areas of AI could prove equally dangerous to mankind in the future.
While AI is constantly getting better as it learns along the way, dangers could be lurking for us in the future. Fictional movies have long shown AI-based systems taking over the world and killer robots putting an end to mankind. Those days could arrive sooner than we think unless we limit the power of AI before it is too late. If we are not careful and vigilant, AI could be manipulated by bad actors, and who knows what could be in store for us. Let's look at a few AI-based platforms and the dangers that could lurk within each of them; if their creators do not attend to these risks carefully, they could spell disaster.
Where is it used?
1. Autonomous cars:
Autonomous cars are perhaps the best-known example. Using sensors, cameras and radar, the car can drive a given route by sensing and interpreting its surroundings. A handful of companies (such as Google and Uber) are experimenting with AI-driven cars, and many of these vehicles are already running on the streets in a few countries.
2. Face recognition:
Recently, the London police once again placed cameras around the city's streets to test live, real-time facial recognition on its citizens. The trial was announced in advance, and citizens were informed about it. Its aim was to check how a facial recognition algorithm could be used in real time to search for missing persons, wanted criminals and terrorists.
HOW IT WORKS
Facial recognition is based on face descriptors. The system calculates the similarity between the input face descriptor and all face descriptors previously stored in a gallery. The goal is to find the face(s) from the gallery that are most similar to the input face.
- No raw images are stored in the gallery; only descriptors are kept and processed
- The gallery contains face descriptors and their corresponding names
- Similarity between faces is returned as a value between 0 (no similarity) and 1 (maximum similarity)
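The gallery search described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual system: the descriptor vectors, names, and helper functions are hypothetical, and cosine similarity (rescaled into the article's 0-to-1 range) stands in for whatever proprietary metric a real system uses.

```python
import numpy as np

def similarity(a, b):
    """Similarity between two face descriptors, mapped to [0, 1]."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Cosine similarity lies in [-1, 1]; rescale to [0, 1] as in the text.
    return float((np.dot(a, b) + 1.0) / 2.0)

def best_matches(probe, gallery, top_k=1):
    """Return the top_k (name, score) pairs most similar to the probe."""
    scored = [(name, similarity(probe, desc)) for name, desc in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical gallery: names mapped to stored descriptors (no images kept).
gallery = {
    "person_a": np.array([0.9, 0.1, 0.0]),
    "person_b": np.array([0.0, 0.2, 0.9]),
}
print(best_matches(np.array([0.8, 0.2, 0.1]), gallery))
```

An identical descriptor scores 1.0 against itself, and an exactly opposite one scores 0.0, matching the scale described above.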
3. Deepfakes:
Deepfakes (a blend of "deep learning" and "fake") are a branch of synthetic media in which a person in an existing image or video is replaced with someone else's likeness using artificial neural networks. They often combine and superimpose existing media onto source media using machine learning techniques known as auto-encoders and generative adversarial networks (GANs).
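The auto-encoder idea behind many deepfake pipelines, compressing an input to a small latent code and reconstructing it, can be sketched with a toy linear model in NumPy. This is a conceptual illustration only: real deepfake systems use deep convolutional networks trained on faces, and the toy data here is random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples lying near a 2-D subspace of an 8-D space.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 8))

d, k = 8, 2                           # input dim, bottleneck (latent) dim
W_enc = 0.1 * rng.normal(size=(d, k))
W_dec = 0.1 * rng.normal(size=(k, d))

lr, losses = 0.01, []
for _ in range(300):
    Z = X @ W_enc                     # encode: compress to k dimensions
    E = Z @ W_dec - X                 # decode, then reconstruction error
    losses.append(float(np.mean(E ** 2)))
    # Plain gradient descent on the mean squared reconstruction error.
    grad_dec = Z.T @ E / len(X)
    grad_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

A face-swap system trains one shared encoder with two decoders (one per person); feeding person A's latent code through person B's decoder produces the swapped face.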
To date, voice recordings could be doctored and voices mimicked by voice artists, and photos could be photoshopped or morphed with almost flawless results as the technology improved. With evolving AI, however, videos can now be manipulated too: recently, a few videos surfaced in which AI had been used to create fakes. Google was also in the news last year for helping the Pentagon with AI-based drones for military operations. The pilot project, known as Project Maven, used AI to detect and identify objects in drone footage, and it set off alarms among Google's own employees when they learned of the involvement; they questioned the ethical use of machine learning, fearing the technology could be used to kill innocent people. While Google denied that the technology was used for combat operations, it reportedly abandoned the project in the end.