When Meta opened its Ray-Ban smart glasses up for pre-order, it made one thing clear: your privacy would be secure. “Ray-Ban Meta smart glasses are built with privacy at their core,” read a statement released in September 2023. The marketing was unambiguous on that point, and perhaps as a result, you might have seen people wearing them around town, in a Super Bowl ad, or even in a court proceeding about child safety on Meta’s own platforms. ICE agents were even reportedly wearing them in the field.

What you might not have seen is, well, yourself caught in the crosshairs of the glasses’ camera. Now a new investigation and a federal lawsuit that quickly followed allege the company is far less transparent than those thick lenses, claiming it quietly routes users’ footage to human workers overseas, not just to its AI models. Those workers have seen everything from people undressing to sensitive financial documents, all captured from users who opted into data sharing for AI training purposes.

“In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording,” said one worker, describing the footage from the glasses.

In late February, Swedish publications Svenska Dagbladet and Göteborgs-Posten published an investigation into Meta’s AI training pipeline, finding that Meta contractors in Kenya help train the artificial intelligence powering the glasses, a lineup that includes the Ray-Ban Meta Wayfarer (Gen 2), the Ray-Ban Display, and the Oakley Meta HSTN models. What they saw was startling.

“We see everything, from living rooms to naked bodies,” the workers were quoted as saying in the investigation. “Meta has that type of content in its databases.”

Any user who opts into sharing data for AI training purposes effectively allows all parts of their life to be recorded and then reviewed, either by the AI the footage is meant to train or by the humans behind it. That includes footage of people in bathrooms, undressing, watching porn, and, in at least one documented case, a pair of glasses left on a bedside table that captured a partner who had never consented to being recorded.

Meta’s subcontractors, data annotators who teach the AI to interpret images by manually labeling content, also reported viewing users’ credit card numbers and financial documents. When the investigation was published, Meta responded through a spokesperson, saying: “When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people’s experience with the glasses, as stated in our privacy policy. This data is first filtered to protect people’s privacy.”

A class action begins

The report triggered legal action. On March 4, plaintiffs Gina Bartone and Mateo Canu filed a class action lawsuit against Meta Platforms Inc. and eyewear maker Luxottica of America, accusing the companies of violating federal and state laws by failing to disclose that videos captured by the glasses are transmitted to Meta’s servers and then to the Kenyan subcontractor for manual labeling. Citing new privacy bills and regulations prompted by the rise of AI and the surveillance economy, the suit states that “Meta knows this,” referring to the public’s growing concern for its privacy and safety, and that “against this backdrop,” Meta released the glasses with a “reassuring promise: the Glasses were ‘designed for privacy, controlled by you.’”

Brian Hall, a privacy and AI attorney at Stubbs Alderton & Markiles, says the revelations were as predictable as they were alarming. “That’s horrifying. It’s kind of exactly what we all imagined would happen,” Hall told Fortune. “I’m old enough to remember 10 or 12 years ago when Google had their glasses, and that was a concern about people going into restrooms with them on. We’re kind of right back there now.”

(When Google unveiled its prototype Google Glass in 2013, it ignited a fierce public backlash over surveillance, consent, and the death of anonymity. Bars, restaurants, casinos, and strip clubs banned the device outright, and wearers were mockingly dubbed “Glassholes.”)

Hall says the legal liability remains murky, partly because Meta’s own Terms of Service state that data annotators “will review your interaction with AI, including the content of your conversations with or messages to AI,” and specify that this review “can be automated or manual.” “If we went and did a close reading of their privacy policy, there’s not going to be anything explicitly that says they don’t do that,” Hall said. “In terms of their legal liability, I don’t know, but it’s certainly a PR liability. This is some of the most sensitive information and imagery that there is out there.”

Hall says his biggest concern isn’t actually the glasses wearers themselves; it’s everyone else caught in the frame. “The bystanders, the people who are being filmed and identified, they’re the ones that are at risk,” he said. “Sadly, our privacy laws are not designed to protect those people. They’re designed to protect the people who are wearing the glasses and their ability to manage their own data.”

In reference to reports of a man using the glasses in a U.K. court to help “coach” him through testimony, Hall said the risk compounds significantly as Meta reportedly considers adding facial recognition to the glasses. “It really is moving from a world where today you might be able to see somebody on the street, in a courtroom, in a bar, and you might be able to do some investigation on Facebook and Instagram and find them. But this is instant. It’s automatic, zero effort. You could be sitting in a courtroom identifying witnesses.”

Hall says existing law is simply not built for what Meta’s glasses make possible. “I don’t know that the existing laws are really sufficient to protect us from the risks of the kind of things that Meta and other social media companies are doing right now,” he said. “It’s sort of getting shoehorned into the privacy laws, but those are rarely enforced as it is, and this is completely upending the whole framework that those were built upon.”

“I’m not seeing that people are meaningfully addressing it in any way,” he said, adding that current regulations are piecemeal and fail to confront privacy head-on. In his view, privacy is the core issue; once it is addressed, “everything else is just kind of window dressing.”

Meta did not respond to requests for comment.