More and more companies have access to our photos, and the consequences are not always foreseeable. A man in the US was accused by Google of abusing his son, even after the police had long since cleared him.
It started small. When Mark noticed a swelling in his infant son’s groin area, he took some photos to document its development and show them to doctors. Two days later, a nightmare began that not only cost him ten years of data on Google, but also made him a suspect in a police investigation.
A warning notification on his Android smartphone changed everything, Mark told the “New York Times”. His Google account had been shut down because he had shared “harmful content”. This was “a serious violation of Google’s policies and may be illegal,” the notification informed him. His access to Google – his e-mails, his contacts, his pictures stored in the cloud, his calendar and even his mobile phone plan running via the Google Fi service – was suddenly gone. Because he could no longer receive security codes for other accounts on his smartphone, he lost access to those as well.
Under suspicion
In a complaint form, Mark, who lives in San Francisco, tried to explain to Google what had happened. When he discovered the infection on his son’s genitals, he took photos to document it. After his wife made an appointment with a doctor, the doctor’s assistant asked for the pictures to be sent in advance. His wife texted the pictures from Mark’s phone to herself and forwarded them from there. The doctors did indeed identify an infection, and the boy was treated with antibiotics.
Although Mark had been a Google user for over ten years, he heard nothing from the company for days. Then his appeal was rejected without further justification. What he didn’t know at the time: Google had flagged another video as potentially problematic – and had reported him to the police for possible child abuse.
Automatic check
The reason for the procedure is understandable in principle and does not apply only to Google. Internet companies now store vast amounts of their users’ data on their servers – and they are also liable if that data includes illegal material such as child abuse footage. Google is not alone in checking images when they are uploaded to its servers; Facebook likewise compares media against databases of known abuse recordings. Since 2018, however, Google has gone one step further: using artificial intelligence, the company detects potentially problematic images even if they were not previously known. As happened in this case.
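The database comparison described above can be sketched in a few lines of Python. This is a simplified illustration only: the hash values and function names here are hypothetical, and real systems (such as Microsoft’s PhotoDNA) use perceptual hashes that survive resizing and re-encoding, whereas a plain cryptographic hash only matches byte-identical copies.

```python
import hashlib

# Hypothetical database of hashes of known illegal material
# (the value below is just the SHA-256 digest of the bytes b"test").
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_material(data: bytes) -> bool:
    """Flag an upload if its hash appears in the database of known material."""
    return file_hash(data) in KNOWN_HASHES

# A byte-identical copy is flagged; any other file is not.
print(is_known_material(b"test"))         # True
print(is_known_material(b"holiday.jpg"))  # False
```

This matching step is what catches already-known footage; the AI-based detection of previously unknown images that tripped up Mark is a separate, far more error-prone classification step.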
Mark was just beginning to understand why Google had banned him. “Oh god, Google thinks it’s child pornography,” he realized. At first he wasn’t worried. Because he himself had dealt with the detection of problematic videos as a software developer, he knew that the automatically marked data would eventually end up with a human reviewer – and hoped that they would recognize the misunderstanding.
Up to that point, “the system was working as it should,” Fallon McNulty of the CyberTipline, a US child-protection reporting hotline, told The Times. She and her team examine the images and videos reported by Google and other companies. The US companies report more than 80,000 cases – every day. Most of the footage, McNulty says, is already known. Even so, more than 4,200 potential new cases were reported to the authorities in 2021. Mark’s son was one of them.
Mail from the police
But he didn’t find out until December. Ten months after being banned from Google, he received a letter from the San Francisco Police Department. It informed him of the investigation against him and contained search warrants for his online accounts and all his activities: from internet searches to location logs to messages, photos and videos. The unmistakable message: this was a child abuse investigation. Only when Mark called the investigator did he get the redeeming news: the case had already been closed. “I have assessed, after reviewing the facts, that no crime was committed,” the investigator confirmed. Because Mark had changed his email address and phone number, the police had been unable to reach him earlier.
Mark was relieved. Armed with the new findings, he asked Google to reinstate his account – without success. Instead, he received a notification that the account would be deleted entirely after a one-year period. Mark even consulted an attorney about recovering his data through a lawsuit. “But I decided it wasn’t worth $7,000,” he said resignedly. The data is now gone.
Not an isolated case
According to expert Kate Klonick, a law professor at St. John’s University, Mark got off lightly. Due to the automatic reporting to the law enforcement authorities, there is always a risk of losing custody. “You can imagine how this can escalate,” she warns The Times.
According to Google, the report was ultimately filed not because of the genital photo, but because of a video in which Mark’s wife lay naked next to their child. “It was an innocent moment of closeness,” Mark explains. “If only we had slept in pajamas.” Google told the newspaper that it still stood by its decision.
Mark now has one last hope of getting his data back after all: the police files contain copies of his cloud data that Google handed over. A police spokesman told The Times that the department was very keen to help him.
Source: New York Times