Google Triggers Criminal Investigation After Dad Takes Photos Of His Toddler, Naked, For Doctor
Google triggered a criminal investigation and locked a man out of his accounts after flagging photos he sent to his son’s pediatrician as potential child pornography, according to The New York Times.
The tech company initially flagged the images when they were automatically uploaded to Google servers from the father’s phone, according to the NYT. After a nearly year-long investigation of everything in his Google account, including search history, location history, messages and photos, police determined he hadn’t committed a crime.
The father, referred to only as Mark by the NYT, had taken photos of his young son, naked, at the request of a doctor over concerns about his infected penis, according to the NYT. Google quickly locked him out of his account after scanning the photos; he lost emails, contacts and personal photos, and had to get a new phone number after losing access to his Google Fi account.
“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark, a software engineer who had previously worked on a tech company’s tool for flagging problematic content, told the NYT. “But I haven’t done anything wrong.”
A similar story played out for a father referred to only as Cassio, whose account Google suspended after its software flagged photographs he had taken of his son at the request of a pediatrician, according to the NYT. Those photos also triggered a scan after being automatically uploaded to Google servers, and Google locked Cassio out of his accounts in the middle of buying a house.
Google only scans users’ photos when they take an affirmative action, a Google spokesman told the NYT, but that action can include the automatic uploading of user photos to Google Photos or other Google servers. After artificial intelligence scanners detect a red flag on a user’s photos, a human moderator looks over the photos to determine whether they qualify as child sexual abuse material.
“This is precisely the nightmare that we are all concerned about,” Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”
Google did not respond to the Daily Caller News Foundation’s request for comment.