New tool to fight IS propaganda online
June 27, 2016

The "Islamic State" group (IS) has made extraordinary use of social media to recruit followers and inspire attacks. Facebook and Twitter say they are working hard to shut down extremists' profiles and remove offensive content. But IS still has a huge online reach. Now, the nonprofit Counter Extremism Project says it has developed new software capable of automatically flagging images generated by terrorist groups.
DW: The technology that you and your colleagues have created is aimed at helping internet companies to instantly detect extremist content online and remove it from their platforms. How exactly does this tool work?
Mark Wallace: It builds on something called "photoDNA." My colleague, Dr. Hany Farid, who chairs the computer science department at Dartmouth College, pioneered "photoDNA" to fight child pornography and its spread online. He extracted a unique signature - DNA, if you will - from any unique photograph. He has been working with the National Center for Missing and Exploited Children (NCMEC). They identified a set of child pornography images, hashed and extracted the signature of every one of these unique images, and created a software program that can quickly and very accurately search all social media feeds to identify every location where those photographs exist on social media platforms.
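To illustrate the general idea Wallace describes: the actual photoDNA algorithm is proprietary, but a compact "signature" derived from an image's visual content, rather than from its raw bytes, can be sketched with a simple difference hash. The Python example below is a rough stand-in, not the Counter Extremism Project's code; the function name and parameters are illustrative assumptions.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual signature for the image at `path`."""
    # Normalize: grayscale and shrink so the signature ignores resolution and color.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    # Each bit records whether a pixel is brighter than its right-hand neighbor,
    # so the signature reflects the image's structure rather than its exact bytes.
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits
```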
Extracting a photo's DNA
Dr. Farid and I have been working together now since last year. We have tried to build on this technology and expand it into the area of video and audio, so that we can extract a unique signature from any video, audio file or photograph, even when it has been altered. We are then able to search for that particular item on every platform and instantaneously identify the locations where it is found. No longer will an [IS] propagandist or executioner be able to quickly put a video on social media platforms to inspire other terrorist actions. We will be able to immediately identify its location on the internet, flag it and have the social media companies remove it.
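Continuing the sketch above: once signatures of known items have been computed, screening an upload amounts to comparing its signature against the stored ones. The matcher below uses a small Hamming-distance tolerance so that slightly altered copies (re-encoded, resized, watermarked) can still register as matches. The names `is_flagged` and the placeholder hash values are hypothetical; this is an assumption about the general approach, not the project's actual implementation.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def is_flagged(upload_hash: int, known_hashes: set, threshold: int = 5) -> bool:
    """Flag an upload if its signature is within `threshold` bits of any known signature."""
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)

# Example with placeholder 64-bit signatures standing in for a vetted database.
known_hashes = {0x9F3B12C4A8E0D571, 0x14E6A0FF328B9D02}
upload_hash = 0x9F3B12C4A8E0D575  # a near-duplicate: differs by a single bit
if is_flagged(upload_hash, known_hashes):
    print("Match found: flag the location for the platform to review and remove.")
```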
But first you need to establish certain criteria for how pictures and other materials are to be defined as terrorist propaganda, and then you build a database?
That is correct. We believe that, certainly, there could be debate about some content, but let's be realistic: The worst of the worst content - the violent acts, the brutal execution and torture videos, the most extreme [IS] propaganda - that is what we can all agree on. All the people who work at social media companies around the world can agree that this most egregious content has no place. Let's start with that set of images, let's remove this content, and then let's progressively discuss other forms of content.
Are you not afraid that once a database of such images is created, some countries - including repressive regimes - could demand the removal of legitimate political content under the guise of fighting terrorism?
We are going to license that technology in a very limited fashion so that it can only be used in the child pornography context and in the extremism context. We will not allow the technology or its code to be used by anyone who doesn't agree to the terms of that license.
You will be offering the software for free?
When we license it directly, it will be licensed at no charge to social media companies.
Google and Twitter, in particular, have expressed doubts about the efficacy of such a project. They generally seem to be quite skeptical. What is their motive?
I don't know. We have talked to President Obama's team, and they have been very enthusiastic. We have been in ongoing discussions with a variety of social media companies. Let's be serious: I can't imagine any social media company would want its platform to be used by [IS] or other groups to spread horrible, hateful, violent videos or audio or pictures in order to cause terrorist attacks.
'No limitation on technology'
As we have seen, though, some of the platforms resist any limitation on their technology. And I think that they mistake our tool for a limitation on their technology, when in fact it is designed to protect their technology from being misused by the worst of the worst terrorists. I hope that they'll realize that and understand that they really don't want their platforms used to materially support terrorist groups, and neither do we.
You have briefed senior officials at the White House and the US Department of Homeland Security on the issue. Do you hope that they are going to support your project to help get it started?
We don't really need their help. We are very lucky because it is not an expensive undertaking. It is not an industry. It is an NGO, an academic undertaking. We want to be collaborative with everyone, but we don't see a need or place for government other than hopefully to cheer us on.
I think President Obama said it very clearly when he pointed out that online propaganda causing terrorist attacks is pervasive and easily accessible. It just so happens that we have been working on this for a year now, we have a solution, and we think it squarely addresses the problem identified by the president.