Artificial Intelligence (AI) was originally introduced to the public to make people's lives easier. It was supposed to boost productivity, democratize education, and positively transform the way we work. However, people with malicious intent soon began using AI to scam others, spread misinformation, create chaos, and carry out other criminal activities.
Let’s look at the 12 scariest uses of AI that are raising severe concerns.
Deepfake and Misinformation
One of the scariest yet most common misuses of AI is creating deepfakes to spread misinformation. Deepfakes are AI-generated videos, audio clips, and images that impersonate a real individual without their consent.
Most deepfakes are created to spread misinformation, scam people, or manipulate public opinion. To date, several famous personalities, including Tom Cruise, Barack Obama, Taylor Swift, Rishi Sunak, and Joe Biden, have been victims of deepfakes. The FTC is working on rules to outlaw AI impersonation and protect consumers.
Autonomous Weapons
AI advancements have made their way into the military, prompting some countries to contemplate deploying autonomous weapons. These are AI-powered systems that, once activated, can select and engage targets based solely on sensor data, without further human input.
Because autonomous weapons apply lethal force without human oversight, they could become a leading cause of humanitarian crises if left unchecked. Professor Geoffrey E. Hinton, often called the Godfather of AI, is already raising the alarm about these killer robots, warning of the disastrous consequences they could have.
Job Displacement
While AI was intended to complement human labor, it has become a significant threat to workers' jobs. As the technology grows more capable at various tasks, roughly 40% of global employment stands exposed to AI: nearly 60% of jobs risk being affected in advanced economies, 40% in emerging markets, and 26% in low-income countries. Automating most tasks with AI could trigger massive job cuts and worsen inequality, and if policymakers fail to address these concerns, job displacement could spark social tensions.
Surveillance and Privacy Violations
Advanced AI-powered surveillance systems are now being used to monitor and track people's activities, raising concerns about severe privacy violations.
While AI can enhance governments' security measures when used within healthy boundaries, excessive tracking can strip people of their privacy. France recently made headlines when it implemented mass AI surveillance during the 2024 Paris Olympics. Some believe France may use this instance as justification to normalize state surveillance, eroding its citizens' privacy.
Algorithmic Biases
If discriminatory data is fed into AI models, they can deploy biases at scale, with severe consequences. Many worry that algorithmic bias will compound existing racial and gender discrimination, disproportionately harming particular groups of people.
For instance, some computer-aided diagnosis (CAD) systems were reported to deliver less accurate results for Black patients than for white patients. Amazon also faced controversy when its experimental hiring algorithm favored resumes containing words like 'executed' and 'captured,' which appear predominantly in men's resumes.
Cyberattacks
Cybercriminals are using AI to launch more sophisticated phishing attacks. Nearly 75% of security professionals say they have witnessed a spike in attacks, with 85% attributing the rise to fraudsters and cybercriminals using generative AI.
AI is being weaponized in cyberattacks in various ways, most commonly for brute-force attacks, malware generation, social engineering, phishing, CAPTCHA cracking, keystroke logging, and voice cloning. These attacks are causing severe financial distress to victims.
Facial Recognition Misuse
There is growing concern about companies, governments, and individuals using AI-powered facial recognition software to spy on people. Since these systems track people's every movement, they strip them of their basic right to privacy.
What's more concerning is that these systems aren't always right. In one such case, Robert Williams, a Black man living in Michigan, was arrested by Detroit police because of a false facial recognition match. Despite being innocent, he spent 30 hours in police custody.
Amplifying Hate Speech
One of the most concerning uses of AI is generating hyper-realistic content that incites hate speech and disrupts social harmony. Since AI can mimic a person's looks, voice, and subtle mannerisms, viewers can easily mistake the fake for a real person spreading hate speech.
In one such case, a school athletic director used AI to create fake audio of his principal spewing hateful comments about Black and Jewish people, dividing the community and weakening its harmony.
Social Engineering
Scammers can use AI to carry out more convincing social engineering attacks, making it easier to trick people into sharing sensitive information or transferring money to the attackers' accounts. These attacks take various forms, but identity theft via voice and image cloning remains the most popular.
A multinational company's Hong Kong office lost $25 million to a deepfake video-call scam, while the mother of a 15-year-old was traumatized by scammers who used AI voice cloning to stage her daughter's fake kidnapping.
Cyberbullying
Some individuals with malicious intent are using AI to create fake content that fuels intense cyberbullying. According to reports, criminals are also misusing generative AI to create Child Sexual Abuse Material (CSAM).
Approximately 4,700 reports of AI-generated CSAM were received in 2023 alone, raising serious concerns about children's safety. Meanwhile, 62% of students fear that AI can be used to fuel bullying, and parents share similar concerns. These cases are unsettling and can leave deep psychological scars on victims.
Spreading Propaganda
The use of AI to create personalized propaganda that targets individuals' vulnerabilities and biases is deeply concerning. Such propaganda is mainly spread to manipulate people's opinions and decisions, especially during election season.
Several incidents hint at AI's role in spreading propaganda, such as a fake robocall impersonating Joe Biden that urged New Hampshire voters to save their votes for November, and Donald Trump falsely claiming that photos of Kamala Harris's campaign crowds were AI-generated.
Promoting Violence & Self-Harm
While AI is advancing fast, it isn't immune to mistakes. AI can generate fabricated content or make suggestions that endanger users or those around them. Character.AI recently made the news after two Texas families sued the startup, alleging that its chatbots encouraged violence and shared disturbing content with their kids.
The AI chatbot allegedly told a child that it was acceptable to kill his parents over disputes about screen time. In another case, the chatbot exposed an autistic child to content encouraging self-harm and incest.