Is AI Watching You? 11 Cases That Will Make You Think Twice

Artificial Intelligence (AI) is fast becoming an inseparable part of our daily lives. From smart assistants to data-driven shopping recommendations to chatbots, AI is everywhere. While this advanced technology simplifies our lives, it also raises serious privacy concerns worldwide.

Let’s explore 11 cases highlighting how AI is becoming a threat to privacy. 

Amazon’s Ring Doorbell Privacy Concerns

Image Credits: RossHelen via Canva.com

Amazon came under fire after its AI-powered Ring doorbell was accused of illicitly recording customers and sharing access to their footage with company employees.

The integration of AI into Amazon’s surveillance products has sparked serious privacy debates among customers, especially after reports came to light that Ring doorbell cameras had been used to spy on female customers for months. Amazon paid $5.8 million to settle these privacy violations.

AI-Powered Smart Home Devices Violating Privacy Rights

Image Credits: Jakub Zerdzicki from Pexels via Canva.com

The preference for convenience over privacy has created a massive market for smart home devices, with reports estimating 785.16 million smart home device users by 2028. While these devices make our lives easier, they also intrude into our personal space, capturing more information than people should be sharing.

Amazon’s Alexa has emerged as one of the biggest data aggregators, gathering a wide range of user information such as contact details, health-related data, and precise location. Amazon also violated children’s privacy rights by failing to delete children’s Alexa recordings at their parents’ request, a case it paid $25 million to settle.

Facebook’s Facial Recognition Lawsuit

Image Credits: MOHI SYED from Pexels via Canva.com

Facebook (now Meta) violated users’ privacy rights by using their facial recognition data without consent. Its AI-powered facial recognition system surveilled people more than they knew, enabling automatic tagging in photos for most users. While auto-tagging may seem insignificant, it meant your faceprint could become available to companies outside the platform’s walls. Meta faced several lawsuits over this privacy violation and agreed to pay $1.4 billion to settle with Texas alone.

China’s AI-Powered Surveillance State

Image Credits: Africa images via Canva.com

China is using AI to upgrade its surveillance capabilities, deploying ‘One person, one file’ AI software to sort the data it collects on residents. The software’s remarkable ability to identify partially blocked, masked, or low-resolution faces is as impressive as it is frightening. Because the technology can track individuals so accurately, activists worry that China could use it to build a surveillance state that infringes on citizens’ privacy rights and targets certain religious groups.

AI-Powered Chatbots Collecting Personal Data

Image Credits: Kittipong Jirasukhanont from PhonlamaiPhoto’s Images via Canva.com

All AI-powered chatbots use customer data to personalize responses and improve performance. This data may be misused or shared with third-party platforms without users’ consent, leading to grave privacy violations. Several AI chatbots have been accused of illegal wiretapping and unlawful recording of private conversations, putting users’ privacy at stake. Lawsuits have been filed against companies such as Ford, Home Depot, and General Motors for allegedly violating customers’ privacy rights through AI chatbots.

Smart TVs Collecting Viewing Data

Image Credits: Kaspars Grinvalds via Canva.com

Most smart TVs feature AI-powered Automatic Content Recognition (ACR), which captures and shares screenshots of what viewers are watching. What’s concerning is that ACR takes screenshots even when the content is playing from external devices such as laptops. Besides gathering viewing history, smart TVs also collect details like users’ locations and viewing pathways and sell them to third parties such as streaming services. Because most people have never heard of ACR or the data it collects, they remain largely unaware of the extent to which their privacy is being violated.

Google’s AI-Powered Health Data Collection

Image Credits: studioroman via Canva.com

Google has faced multiple accusations of using AI-powered tools to collect users’ health data without their consent. One prominent instance is Project Nightingale, a collaboration between Google and Ascension, one of America’s largest healthcare systems. Under this project, Google transferred the health records of over 50 million patients to its servers to develop AI-driven healthcare solutions. Neither patients nor Ascension’s healthcare providers were aware that Google was using this information, creating serious privacy violations.

AI-Powered Predictive Policing

Image Credits: Kittipong Jirasukhanont from PhonlamaiPhoto’s Images via Canva.com

AI-powered surveillance and predictive policing are both revolutionary and controversial. While governments use the technology to create safer societies through real-time crime mapping, gunshot detection, crowd management, and more, it raises serious privacy concerns because people are constantly recorded without their consent. Predictive policing has met resistance in privacy-conscious regions such as North America and the European Union (EU), and its implementation in Asia raises concerns about protecting people’s liberties and civil rights, including privacy.

AI-Powered Social Media Monitoring

Image Credits: Solen Feyissa from Pexels via Canva.com

The growing use of AI in social media monitoring is concerning on multiple levels. AI-powered tools can collect vast amounts of user data without people’s awareness or consent, and that data can be used for profiling and surveillance, infringing on individual privacy. Unfortunately, AI-powered social media surveillance is increasing throughout the world: one study found that 40 of the 65 countries it analyzed use advanced social media monitoring programs. In the wrong hands, this technology can be used to observe, collect, and analyze users’ social media content to detect and suppress dissent.

Google’s AI-Powered Location Tracking

Image Credits: Worawee Meepian’s Images via Canva.com

People using Google’s AI and other services on Android devices and iPhones were unknowingly sharing their location data with the tech giant. The issue came to light after a press investigation revealed that several Google services store user location data even when privacy settings meant to prevent this are enabled. Google later paid $391.5 million to settle lawsuits over its illegal tracking practices.

Microsoft’s AI-Powered Recall

Image Credits: Kaspars Grinvalds via Canva.com

Microsoft was engulfed in controversy after announcing its AI-powered Recall feature. Recall was presented as an explorable visual timeline that captures screenshots of whatever appears on a user’s screen every five seconds, then analyzes and parses them to surface relevant information. It quickly became one of Microsoft’s most criticized launches because of privacy concerns. Microsoft eventually delayed the launch and made Recall an optional, opt-in feature instead of a default one.
