
Privacy Concerns Rise as Meta Faces Scrutiny Over Workers Viewing Sensitive AI Glasses Footage

In partnership with

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.

That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype

In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
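As a concrete illustration of the first recommendation above, here is a minimal sketch of building schema.org `TechArticle` JSON-LD that a documentation page could embed in a `<script type="application/ld+json">` tag. The function name, field choices, and URLs are illustrative assumptions, not taken from Mintlify or any specific docs platform:

```python
import json

def build_jsonld(title, description, url):
    """Build a minimal schema.org TechArticle JSON-LD payload.

    The resulting dict can be serialized and embedded in a page's
    <head> so crawling agents can parse the page's metadata without
    scraping the rendered HTML.
    """
    return {
        "@context": "https://schema.org",
        "@type": "TechArticle",
        "headline": title,
        "description": description,
        "url": url,
    }

# Hypothetical page metadata, serialized for embedding in a template.
payload = build_jsonld(
    "Getting started with the Widgets API",    # placeholder title
    "Authentication, endpoints, and examples.",
    "https://docs.example.com/widgets/start",  # placeholder URL
)
print(json.dumps(payload, indent=2))
```

Keeping the payload as a plain dict and serializing it at render time makes it easy to validate the required fields in tests before the markup ships.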

Technology giant Meta Platforms is facing growing scrutiny after reports revealed that outsourced workers may have viewed highly sensitive footage captured by the company’s AI-powered smart glasses. The issue has raised serious questions about user privacy and data protection, prompting the United Kingdom’s data regulator to seek answers from the company.

The controversy centers on the Ray-Ban Meta Smart Glasses, a wearable device designed to combine artificial intelligence with everyday eyewear. The glasses allow users to record videos, capture photos, and interact with AI using voice commands. While the product promises convenience and hands-free assistance, recent investigations suggest that the data collected by the device may sometimes be reviewed by human contractors.

According to an investigation conducted by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, workers employed by an outsourcing company in Kenya were able to view video footage and images recorded by the glasses. Some of this material reportedly included extremely private and intimate moments, such as individuals using the bathroom or engaging in sexual activity.

One worker involved in reviewing the content was quoted in the investigation as saying, “We see everything — from living rooms to naked bodies.” The remark highlights the extent of the access that data annotators may have when reviewing user-generated footage for AI training purposes.

These workers were reportedly employed by Sama, a Nairobi-based outsourcing firm that specializes in data annotation services. Data annotators play a key role in developing artificial intelligence systems. Their job typically involves labeling images, reviewing videos, and evaluating transcripts of conversations between users and AI systems to help improve the accuracy and reliability of the technology.

Meta confirmed that in certain cases contractors may review content shared with its AI systems. The company stated that this practice is used to improve the performance of its AI-powered products, including the smart glasses. According to Meta, such reviews are conducted in line with its privacy policies and terms of service.

In a statement, the company emphasized that it takes user privacy seriously and has implemented safeguards to protect personal data. “When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people’s experience with the glasses,” the company said. Meta added that the data is first filtered to protect users’ privacy.

The filtering process may include techniques such as blurring faces in images or removing identifying details before the content is reviewed. However, sources quoted in the Swedish investigation claimed that these protections sometimes fail, allowing reviewers to see faces and other identifiable information.

Another concern raised in the report is that many users may not realize their recordings could be reviewed by human workers. While Meta’s terms of service mention that interactions with its AI systems may be reviewed either automatically or manually, critics argue that these disclosures are often buried within lengthy legal documents that users rarely read.

The issue has drawn the attention of the United Kingdom’s data protection authority, the Information Commissioner's Office (ICO). The regulator stated that the claims are “concerning” and confirmed that it would contact Meta to request more information about how the company handles user data.

The ICO stressed that devices that process personal data must provide transparency and allow users to remain in control of their information. According to the regulator, companies must clearly explain what data is collected and how it will be used.

Beyond the privacy implications for users, the investigation also sheds light on the challenging nature of data annotation work. Workers described strict security measures at their workplace, including surveillance cameras and bans on mobile phones. Despite these precautions, employees reported regularly encountering disturbing or deeply personal content while performing their tasks.

In one reported incident, a worker said that a pair of smart glasses had been accidentally left recording in a bedroom. The footage later showed a woman, believed to be the user’s wife, undressing, illustrating how easily private moments can be captured unintentionally by wearable devices.

Meta’s smart glasses include a small LED light that activates when the camera is recording, intended to alert people nearby. The company advises users to be mindful when recording and to avoid capturing footage in private settings. However, critics argue that these safeguards may not always be sufficient to prevent misuse.

The controversy comes at a time when AI-powered wearable devices are becoming increasingly popular. These technologies can perform a range of functions, from translating text in real time to answering questions about objects in the user’s surroundings. For individuals who are blind or partially sighted, such tools can offer significant benefits by providing information about the environment.

Nevertheless, the rapid spread of these devices has sparked concerns about surveillance, consent, and digital privacy. Previous reports have highlighted cases in which individuals were unknowingly recorded by smart glasses and later found themselves featured in videos shared online.

As regulators begin to examine how companies manage the vast amounts of data generated by AI-powered devices, the debate surrounding privacy and accountability in emerging technologies is likely to intensify. The outcome of the inquiries into Meta’s practices could influence how wearable AI devices are regulated in the future.

For now, the incident serves as a reminder that while artificial intelligence promises innovation and convenience, it also raises complex ethical questions about how personal data is collected, processed, and protected in an increasingly connected world.