Some of the web’s biggest destinations for
watching videos have quietly started using automation to remove extremist
content from their sites, according to two people familiar with the process.
The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said.
The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for "hashes," a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.
Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
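The article does not specify which hashing scheme the companies use. As a rough illustration of the matching step it describes, here is a minimal Python sketch assuming an exact-match fingerprint (a SHA-256 digest); the names `fingerprint`, `flag_as_banned` and `screen_upload` are hypothetical, not any company's actual API.

```python
import hashlib

# Illustrative store of fingerprints for videos already judged
# unacceptable. In practice this would be a curated database,
# possibly shared across companies.
banned_hashes: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    # A SHA-256 digest stands in for the "hash" described above.
    # It matches only byte-identical files; real systems are reported
    # to use fingerprints that survive re-encoding or trimming.
    return hashlib.sha256(video_bytes).hexdigest()

def flag_as_banned(video_bytes: bytes) -> None:
    # Once human review deems a video unacceptable, record its hash.
    banned_hashes.add(fingerprint(video_bytes))

def screen_upload(video_bytes: bytes) -> bool:
    # True if an upload matches content already identified as banned.
    # A never-before-seen video yields an unseen hash and passes,
    # which is exactly the limitation noted above.
    return fingerprint(video_bytes) in banned_hashes
```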
The companies would not confirm that they are using the method or talk about how it might be employed, but numerous people familiar with the technology said that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence.
The two sources would not discuss how much human work goes into reviewing
videos identified as matches or near-matches by the technology. They also would
not say how videos in the databases were initially identified as extremist.
Use of the new technology is likely to be refined over time as internet companies continue to discuss the issue internally and with competitors and other interested parties.
In late April, amid pressure from U.S. President Barack Obama and
other U.S. and European leaders concerned about online radicalization, internet
companies including Alphabet Inc's YouTube, Twitter Inc, Facebook Inc and
CloudFlare held a call to discuss options, including a content-blocking system
put forward by the private Counter Extremism Project, according to one person
on the call and three who were briefed on what was discussed.
The discussions underscored the central but difficult role some of the world's most influential companies now play in addressing issues such as terrorism, free speech and the lines between government and corporate authority.
None of the companies at this point has embraced the anti-extremist group's system, and they have typically been wary of outside intervention in how their sites should be policed.
“It’s a little bit different than copyright or
child pornography, where things are very clearly illegal,” said Seamus Hughes,
deputy director of George Washington University’s Program on Extremism.
Extremist content exists on a spectrum, Hughes said, and different web companies draw the line in different places.
Most have relied until now mainly on users to flag content that violates their terms of service, and many still do. Flagged material is then individually reviewed by human editors who delete postings found to be in violation.
The companies now using automation are not publicly discussing it, two sources
said, in part out of concern that terrorists might learn how to manipulate
their systems or that repressive regimes might insist the technology be used to
censor opponents.
“There's no upside in these companies talking about it,” said Matthew Prince,
chief executive of content distribution company CloudFlare. “Why would they
brag about censorship?”
The two people familiar with the still-evolving industry practice confirmed it
to Reuters after the Counter Extremism Project publicly described its
content-blocking system for the first time last week and urged the big internet
companies to adopt it.
WARY OF OUTSIDE SOLUTION
The April call was led by Facebook's head of global policy management, Monika Bickert, sources with knowledge of the call said. On it, Facebook presented options for discussion, according to one participant, including the one proposed by the non-profit Counter Extremism Project.
The anti-extremism group was founded by, among others, Frances Townsend, who advised former president George W. Bush on homeland security, and Mark Wallace, who was deputy campaign manager for the Bush 2004 re-election campaign.
Three sources with knowledge of the April call said that companies expressed wariness of letting an outside group decide what defined unacceptable content.
Other alternatives raised on the call included establishing a new industry-controlled nonprofit or expanding an existing one. All the options discussed involved hashing technology.
The model for an industry-funded organization might be the nonprofit National Center for Missing and Exploited Children, which identifies known child pornography images using a system known as PhotoDNA. The system is licensed for free by Microsoft Corp.
Microsoft announced in May it was providing funding and technical support to Dartmouth College computer scientist Hany Farid, who works with the Counter Extremism Project and helped develop PhotoDNA, "to develop a technology to help stakeholders identify copies of patently terrorist content."
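PhotoDNA's actual algorithm is proprietary and is not described here. Purely as an illustration of the general idea, the sketch below uses an "average hash," one of the simplest perceptual-hash techniques, to match near-duplicate images by Hamming distance even after resizing or recompression; the threshold and function names are assumptions, not PhotoDNA's.

```python
from PIL import Image  # Pillow: pip install Pillow

def average_hash(path: str) -> int:
    # Shrink to 8x8 grayscale, then set one bit per pixel according
    # to whether it is brighter than the mean. Small edits flip only
    # a few bits, so near-duplicates stay close in Hamming distance.
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits between two 64-bit hashes.
    return bin(a ^ b).count("1")

def matches_known(candidate: int, known: set[int], threshold: int = 5) -> bool:
    # Flag an image whose hash is within `threshold` bits of any hash
    # in a database of previously identified images. The threshold is
    # an assumed value for illustration.
    return any(hamming_distance(candidate, h) <= threshold for h in known)
```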
Facebook’s Bickert agreed with some of the concerns voiced during the call about the Counter Extremism Project's proposal, two people familiar with the events said. She declined to comment publicly on the call or on Facebook's efforts, except to note in a statement that Facebook is “exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism.”
In recent weeks, one source said, Facebook has sent out a survey to other companies soliciting their opinions on different options for industry collaboration on the issue.
William Fitzgerald, a spokesman for Alphabet's Google unit, which owns YouTube,
also declined to comment on the call or about the company's automated efforts
to police content.
A Twitter spokesman said the company was still evaluating the Counter Extremism
Project's proposal and had "not yet taken a position."
A former Google employee said people there had long debated what else the company should do with its Content ID system besides thwarting copyright violations and sharing revenue with creators. Google's content-matching system is older and far more sophisticated than Facebook's, according to people familiar with both.
Lisa Monaco, senior adviser to the U.S. president on counterterrorism, said in a statement that the White House welcomed initiatives that seek to help companies “better respond to the threat posed by terrorists’ activities online.”