A telling clue to the radicalization of the couple who killed 14 people in San Bernardino last week was a Facebook post in which one of the shooters, Tashfeen Malik, declared her fealty to Islamic State. Her choice of Facebook is a testament to the appeal that social media networks and their audiences have among Islamic extremists, who use them to distribute propaganda as well as communicate with one another.
That’s obviously a problem, even if social media networks aren’t the only tools that Islamic militant groups use to attract recruits.
The tech companies that operate those networks say they're doing what they can in response, removing material that violates their terms of service whenever users call it to their attention. They also work with counterterrorism officials and have been "pretty good about telling us what they see," FBI Director James Comey said earlier this year.
Yet Facebook, which quickly deleted Malik’s declaration of loyalty to Islamic State, apparently missed a number of previous posts that alarmed members of her family back in Pakistan.
That seems to be par for the course when it comes to weeding out extremist propaganda online: Security analysts say the Internet is awash in terrorist recruiting and training materials that don’t get taken down.
Last Sunday, Hillary Clinton called for “an urgent dialogue” between “government and the high-tech community” on ways to stop terrorists from continuing to use the Internet “to celebrate beheadings, recruit future terrorists and call for attacks.”
And Sens. Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.) introduced a bill Tuesday that would require online companies to report any terrorist activity they learn about on their networks.
As appealing as that may be, however, there are drawbacks to the proposal (which no one in law enforcement has been clamoring for). The bill doesn’t define terrorist activity, and tech workers aren’t trained to identify it or the people who should be scrutinized; after all, extremists are hardly the only ones tweeting about Islamic State videos.
So, if a company tries to police its network, chances are good that it will report far too much to avoid overlooking something important. That would only pile more hay onto the stack that investigators have to pick through, rather than helping to uncover more needles.
Meanwhile, because the bill would require tech companies to report only what they’re aware of, it would create a perverse incentive for them not to monitor their networks at all.
Feinstein has said that online companies already have to report the child pornography they find, so why not report terror-related posts as well? But unlike with child pornography, there is no central database of images, videos and texts that could help companies identify terrorism-related activity online.
A better response would be to beef up law enforcement’s ability to spot such material, while doing more to rebut the propaganda by offering a better, more truthful vision of Islam and the world.
Tech companies can be important partners in those efforts, but the Feinstein-Burr bill isn’t the right way to build that partnership.
— Los Angeles Times