What Happened
A joint investigation by Svenska Dagbladet and Göteborgs-Posten has exposed the hidden human infrastructure behind Meta's AI smart glasses. Workers at Sama, a subcontractor in Nairobi, Kenya, are manually reviewing video footage captured by Meta Ray-Ban glasses worn by consumers around the world.
The content these workers see is deeply private: bathroom visits, people undressing, sex scenes, visible bank cards, and conversations about crimes. "We see everything," one worker told journalists. "From living rooms to naked bodies. Meta has that type of content in its databases."
Meta's terms of service do mention that "human review" may occur, but the disclosure is buried in dense legal documents. When Swedish journalists visited ten retailers selling the glasses, sales staff consistently claimed users had "full control" of their data. Technical analysis revealed that the glasses continuously communicate with Meta servers in Sweden and Denmark, and that the AI features cannot function offline.
Why This Matters for Enterprise AI
If Meta, with its vast resources and legal teams, cannot prevent intimate consumer data from being viewed by workers in third countries, what happens to your business data when you use cloud AI services?
The investigation exposes three uncomfortable truths about modern AI:
AI requires human labor. The "machine learning" behind AI glasses is actually thousands of human workers drawing boxes around objects, labeling pixels, and watching videos. This human element means your data isn't just being processed by algorithms; it may also be seen by people.
Terms of service are not safeguards. Meta's privacy policy technically allows human review, but users have no practical way to know what content will be reviewed, by whom, or for how long. The "transparency" is illusory.
GDPR compliance is unclear. Data protection lawyers quoted in the investigation say Meta's practices raise serious questions about legal basis for processing, transparency, and whether explicit consent is actually being obtained. Swedish authorities have not yet reviewed the glasses.
The Case for Sovereign AI
At Laava, we've been advocating for sovereign AI since our founding. Not because we're paranoid, but because we understand how AI systems actually work.
When you send business data to a cloud AI provider, you're trusting:
- That their terms of service actually protect you
- That their subcontractors are properly vetted
- That data won't be used for training without consent
- That human reviewers won't see your confidential documents
- That anonymization filters actually work (Meta's don't always)
The Meta investigation shows that even tech giants with billions in resources cannot guarantee this chain of trust. The solution isn't to avoid AI. It's to own your AI infrastructure.
What Sovereign AI Looks Like in Practice
Open-source models on your infrastructure. Models like Llama 3 and Mistral can be deployed in your own cloud environment or on-premise. No data leaves your network. No terms of service to parse. No subcontractors in other countries.
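To make this concrete, here is a minimal sketch of what "no data leaves your network" looks like in code: a request aimed at an inference server running on your own infrastructure rather than a vendor's cloud. The endpoint URL, port, and model name are illustrative placeholders; adjust them to whatever local runtime (Ollama, vLLM, a llama.cpp server, and so on) you actually deploy.

```python
import json

# Placeholder for a self-hosted, OpenAI-compatible inference endpoint.
# The hostname resolves inside your own network; nothing is sent to a
# third-party AI provider.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Assemble the JSON payload for the local inference server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

payload = build_request("Summarize this contract clause.")
print(json.dumps(payload, indent=2))
```

The payload format is the familiar chat-completions shape, which most self-hosted runtimes accept, so switching from a cloud API to a local model is often a one-line change to the endpoint URL.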
Enterprise zero-retention APIs. When cloud AI is necessary, use enterprise tiers with contractual zero-retention guarantees. Azure OpenAI and AWS Bedrock offer configurations where data is processed but never stored or used for training.
PII redaction before processing. Don't trust the AI provider to handle sensitive data correctly. Implement Named Entity Recognition (NER) pipelines that strip personal and confidential information before it ever reaches an external service.
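As a sketch of the principle, the snippet below strips obviously sensitive patterns before text leaves your network. This regex-based version is deliberately simplified: a production pipeline would use a trained NER model (for example spaCy or Microsoft Presidio) to also catch names, addresses, and other entities that regexes miss.

```python
import re

# Illustrative patterns only; a real pipeline pairs these with an NER model.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card-number shape
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a typed placeholder before the
    text is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

clean = redact("Contact anna@example.com, card 4111 1111 1111 1111.")
```

Running the redaction as a separate step in your own environment means the external provider only ever sees placeholders, regardless of how its own filters behave.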
EU data residency. For European organizations, process data in European data centers only. This isn't just about GDPR compliance; it's about controlling the jurisdictional exposure of your data.
What You Should Do Now
Audit your AI supply chain. Every AI tool your organization uses should be documented: What data does it process? Where is that data stored? Who has access? What are the retention policies? If you can't answer these questions, you have a governance gap.
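One lightweight way to close that gap is to turn the questions above into a structured inventory record. The field names below are illustrative, not a standard; adapt them to your own governance framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool in use, answering the audit questions."""
    name: str
    data_processed: str = "unknown"    # what data does it process?
    storage_location: str = "unknown"  # where is that data stored?
    access: list[str] = field(default_factory=list)  # who has access?
    retention_policy: str = "unknown"  # what are the retention policies?

    def has_governance_gap(self) -> bool:
        """Any unanswered question counts as a governance gap."""
        return ("unknown" in (self.data_processed, self.storage_location,
                              self.retention_policy)
                or not self.access)

# Hypothetical tool: storage is known, but access and retention are not.
tool = AIToolRecord("TranscriptionBot",
                    data_processed="meeting audio",
                    storage_location="us-east-1")
```

A record like this makes the audit repeatable: any tool whose record still reports a gap gets escalated instead of silently staying in production.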
Question "privacy by design" claims. Sales staff at the retailers selling Meta's glasses confidently claimed user data wasn't shared. They were wrong. Don't accept vendor assurances at face value. Ask for architectural documentation.
Consider hybrid architectures. Not all AI workloads need the most powerful cloud models. Often, a smaller open-source model running locally handles 80% of use cases, with cloud APIs used only for edge cases that truly require them.
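The routing decision at the heart of such a hybrid setup can be sketched in a few lines. The rule below is a placeholder policy, not a recommendation: sensitive data always stays local, a rare edge case (here, long-context work beyond the local model's capacity) goes to a cloud API, and everything else defaults to the local model.

```python
def route(prompt: str, contains_pii: bool, needs_long_context: bool) -> str:
    """Decide whether a request runs on the local model or a cloud API."""
    if contains_pii:
        return "local"   # confidential data never leaves the network
    if needs_long_context:
        return "cloud"   # rare edge case exceeding local model capacity
    return "local"       # default: the local model handles it

decision = route("Draft a reply to this email.",
                 contains_pii=False, needs_long_context=False)
```

Note that the sensitivity check wins even over capability: a request that both contains PII and needs long context still stays local, because the policy treats data exposure as the harder constraint.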
The Meta glasses investigation is a wake-up call. AI is powerful, but the infrastructure behind it is often less transparent than users realize. For businesses processing sensitive data, sovereign AI isn't a luxury. It's a requirement.
If you're evaluating your AI data governance strategy, we offer a free 90-minute Roadmap Session to assess your current setup and identify sovereign alternatives. No obligation, just honest technical guidance.