In 2020, a photo of a woman sitting on a toilet, her shorts pulled halfway down her thighs, was shared on Facebook by someone whose job it was to look at that photo and, by labeling the objects in it, help train an artificial intelligence system for a robot vacuum.
Bizarre? Yes. Unique? No.
In December, MIT Technology Review investigated the data collection and sharing practices of iRobot, the developer of the popular autonomous Roomba vacuums. In its reporting, MIT Technology Review discovered 15 images captured by development versions of Roomba vacuums. Those images were eventually shared with third-party contractors in Venezuela who were tasked with "annotation"—the act of labeling photos with identifying information. This work of, say, tagging a cabinet as a cabinet, or a TV as a TV, or a shelf as a shelf, would help the robot vacuums "learn" about their surroundings when inside people's homes.
In response to MIT Technology Review's reporting, iRobot stressed that none of the images found by the outlet came from customers. Instead, the images were "from iRobot development robots used by paid data collectors and employees in 2020." That meant that the images were from people who agreed to be part of a testing or "beta" program for non-public versions of the Roomba vacuums, and that everyone who participated had signed an agreement as to how iRobot would use their data.
According to the company's CEO in a post on LinkedIn: "Participants are informed and acknowledge how the data will be collected."
But after MIT Technology Review published its investigation, people who had previously participated in iRobot's testing programs reached out. Several of them said they felt misled.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Eileen Guo, the investigative reporter behind the piece, about how all of this happened, and about how, she said, this story illuminates a broader problem in data privacy today.
"What this story is ultimately about is that conversations about privacy, protection, and what that actually means, are so lopsided because we just don't know what it is that we're consenting to."
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, or whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)