A father from San Francisco found himself under police investigation after Google flagged him as a child abuser for sending photos of his toddler's groin to their doctor over the internet.
In February 2021, when most clinics were closed due to the pandemic, Mark (not his real name) and his wife consulted with their toddler's doctor about a groin issue via a digital platform. After Mark sent a photo of the infection from his smartphone, which was automatically backed up to his Google cloud account, the doctor was able to prescribe the proper antibiotics and the child recovered.
However, a few days later, Mark received a Google notification informing him that his account had been suspended for allegedly violating Google policy by sending "harmful content" over the internet. He lost access to his emails, contacts, and images and was locked out of Google's mobile services.
According to the New York Times, Google reported the dad to the National Center for Missing and Exploited Children (NCMEC) for having child sexual abuse material (CSAM). The San Francisco Police Department investigated him for the allegations in December 2021.
Dad did not commit a crime but permanently lost access to Google
Following the investigation, the police determined that the incident "did not meet the elements of a crime" and that no abuse had occurred. Kate Klonick, a law professor, told the Times that the father and his wife could have lost custody of their child had the police not done a diligent investigation.
However, the police could not help Mark regain his Google account, as the company told him it had been permanently deleted. Google's artificial intelligence systems had also flagged what appeared to be a video of a child in bed with a naked woman, but Mark could no longer remember what it was.
The father surmised it was a personal, private video of his wife and son. Essentially, Google's intrusive AI system had infringed on a family's private moment. Mark considered suing the company but ultimately decided it was not worth the time and money, per Daily Mail.
Christa Muldoon, a spokesperson for Google, said that the company follows federal law on CSAM, using a "combination of hash matching technology and artificial intelligence" to identify and remove such content from the internet. The company also has a "team of child safety experts" who review flagged content and consult with pediatricians in cases involving medical advice.
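For readers curious how hash matching works in principle, here is a minimal sketch in Python. The hash set, file path, and helper names are hypothetical; Google's actual pipeline reportedly uses perceptual hashes that survive resizing and re-encoding, plus machine-learning classifiers for never-before-seen images, which is how a brand-new medical photo like Mark's can be flagged even though it matches nothing in any database.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known flagged images. In a real system this
# would be a large database maintained by organizations such as NCMEC, and
# the hashes would be perceptual (robust to resizing and re-encoding);
# a cryptographic hash like SHA-256 only matches byte-identical files.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: Path) -> bool:
    """Flag the file if its hash matches a known entry.

    Files that do NOT match any known hash -- such as a newly taken
    photo -- would instead be evaluated by a machine-learning classifier,
    which is where false positives like Mark's can arise.
    """
    return file_hash(path) in KNOWN_HASHES

if __name__ == "__main__":
    sample = Path("example.jpg")  # hypothetical file path
    if sample.exists():
        print(is_flagged(sample))
```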
Mark's case is similar to that of Cassio (not his real name) in Texas, who also sent a photo of his toddler's genitals to the doctor. He was in the middle of buying a home and about to send documents digitally when Google banned him from its services.
The police also investigated Cassio, and he was cleared. Google, however, stood by its decision not to reinstate his account, despite his having used its services for more than a decade.
CSAM scanning is an invasion of privacy
Jon Callas of the Electronic Frontier Foundation has strongly criticized Google's CSAM practices. He warned that there would be future cases like Mark's and Cassio's.
"They're going to scan family albums and then [we're] going to get into trouble," Callas said.
Callas has been advocating for better laws protecting children on the internet, but not at the expense of end users flagged by false positives or families caught up in the technology's intrusive reach.
Meanwhile, Google stated in its Transparency Report that its systems flagged 621,583 cases of CSAM in 2021, leading police to investigate 4,260 potential new child abuse cases. In connection with these cases, 140,868 accounts were permanently disabled.