Google’s careful watch
In the tally of privacy grievances, Google’s scanning of emails for child pornography should rank very low, if it appears at all.
The controversy bubbled up after law enforcement last month arrested a Houston man for sending images of child pornography from his Gmail account. Google’s automated system detected the content, and the company then forwarded the information to the National Center for Missing and Exploited Children (NCMEC). The arrest sparked a backlash from privacy experts, including an article in The Post titled, “How closely is Google really reading your email?”
In fact, Google — along with Microsoft, Facebook and Twitter, all of which use similar technology — isn’t “reading” your messages. These companies use digital fingerprint technology, pioneered by Microsoft, to automatically identify known child pornography photos that pass through their systems. When there’s a match, a tip is sent to the NCMEC as required by law. No humans sift through attachments, nor do they adjudicate whether certain images qualify as child pornography.
Yet, critics say Google is assuming the role of a “policeman.” While required by law to report images once they’re found, Google wasn’t compelled to install a system that searched for the pictures. Scanning emails for keywords to sell ads, it’s argued, is to be expected — but reporting to authorities the private contents of emails is a step down a slippery slope.
According to Google’s terms of service, the company “may review content to determine whether it is illegal or violates our policies.” The blanket statement about illegal activity could allow Google to scan email for activities such as piracy or intellectual property theft. This language opens up the possibility of cases such as one involving Microsoft last year, in which the company searched a blogger’s Hotmail account for alleged trade secret theft.
But that’s very unlikely in practice. The child-pornography detection technology was designed solely to match known images against a specific database. It can’t be used to search text for general criminal activity; such searches would be far more complicated and would probably yield a much higher error rate. Google, for its part, views child sexual abuse imagery as unique, morally and legally, and says this is the only situation in which the company hands over user account information to a third party.
Exploitation of children is a heinous crime, and Google is being a “proactive neighborhood watch,” says Meg Ambrose, a Georgetown law professor. That’s an apt comparison.
The technology is narrow and extremely accurate — and a powerful tool in helping eliminate content that has no place in our society.
— Washington Post