Facebook, Google and Twitter are stepping up efforts to
combat online propaganda and recruiting by Islamic militants, but the Internet
companies are doing it quietly to avoid the perception that they are helping
the authorities police the Web.
On Friday, Facebook Inc said it took down a
profile that the company believed belonged to San Bernardino shooter Tashfeen Malik, who
with her husband is accused of killing 14 people in a mass shooting that the
FBI is investigating as an "act of terrorism."
Just a day earlier,
the French prime minister and European Commission officials met separately with
Facebook, Google, Twitter Inc and other companies to demand faster action on
what the commission called "online terrorism incitement and hate
speech."
The Internet companies described their policies as
straightforward: they ban certain types of content in accordance with their own
terms of service, and require court orders to remove or block anything beyond
that.
Anyone can report, or flag, content for review and possible removal. But
the truth is far more subtle and complicated.
According to former employees,
Facebook, Google and Twitter all worry that if they are public about their true
level of cooperation with Western law enforcement agencies, they will face
endless demands for similar action from countries around the world.
They also
fret about being perceived by consumers as tools of the government.
Worse, if the companies spell out exactly how their screening works, they run
the risk that technologically savvy militants will learn more about how to beat
their systems.
"If they knew what magic sauce went into pushing content
into the newsfeed, spammers or whomever would take advantage of that,"
said a security expert who had worked at both Facebook and Twitter, who asked
not to be identified because of the sensitivity of the issue.
One of the most
significant yet least understood aspects of the propaganda issue is the range
of ways in which social media companies deal with government officials.
Facebook, Google and Twitter say they do not treat government complaints
differently from citizen complaints, unless the government obtains a court
order.
The trio are among a growing number of Internet companies that publish regular transparency
reports summarizing the number of formal requests from officials about content
on their sites.
But there are workarounds, according to former employees,
activists and government officials.
A key one is for officials or their allies
to complain that a threat, hate speech or celebration of violence violates the
company's terms of service, rather than any law.
Such content can be taken down
within hours or minutes, and without the paper trail that would go with a court
order.
"It is commonplace for federal authorities to directly contact
Twitter and ask for assistance, rather than going through formal
channels," said an activist who has helped get numerous accounts disabled.
In the San Bernardino
case, Facebook said it took down Malik's profile, established under an alias,
for violating its community standards, which prohibit praise or promotion of
"acts of terror."
A Facebook spokesman said there was pro-Islamic State
content on the page but declined to elaborate.
Activists mobilize
Some well-organized
online activists have also had success getting social media sites to remove
content.
A French-speaking activist using the Twitter alias NageAnon said he
helped get rid of thousands of YouTube videos by circulating links to clear-cut
policy violations and enlisting other volunteers to report them.
"The
more it gets reported, the more it will get reviewed quickly and treated as an
urgent case," he said in a Twitter message to Reuters.
A person familiar
with YouTube's operations said that company officials tend to quickly review
videos that generate a high number of complaints relative to the number of
views.
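As a rough illustration of that heuristic, the sketch below ranks flagged videos by their complaint-to-view ratio so that the most disproportionately flagged ones are reviewed first. The numbers, field names and threshold-free ranking are invented for the example; this is not a description of YouTube's actual review system.

from dataclasses import dataclass

@dataclass
class FlaggedVideo:
    video_id: str
    views: int
    complaints: int

def review_priority(video: FlaggedVideo) -> float:
    """Complaints per view; higher scores get reviewed sooner."""
    if video.views == 0:
        return float(video.complaints)  # brand-new upload: treat every flag as urgent
    return video.complaints / video.views

# A popular video with a handful of flags sits behind a small video
# drawing many flags relative to its tiny audience.
queue = [
    FlaggedVideo("a1", views=1_000_000, complaints=50),
    FlaggedVideo("b2", views=2_000, complaints=40),
]
for video in sorted(queue, key=review_priority, reverse=True):
    print(video.video_id, review_priority(video))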
Relying on numbers can lead to other kinds of problems.
Facebook
suspended or restricted the accounts of many pro-Western Ukrainians after they
were accused of hate speech by multiple Russian-speaking users in what appeared
to be a coordinated campaign, said former Facebook security staffer Nick
Bilogorskiy, a Ukrainian immigrant who helped some of those accounts win
appeals.
He said the complaints have leveled off.
A similar campaign attributed
to Vietnamese officials at least temporarily blocked content by government
critics, activists said.
Facebook declined to discuss these cases.
What law
enforcement, politicians and some activists would really like is for Internet
companies to stop banned content from being shared in the first place.
But that
would pose a tremendous technological challenge, as well as an enormous policy
shift, former executives said.
Some child pornography can be blocked because
the technology companies have access to a database that identifies previously
known images.
A similar type of system is in place for copyrighted music.
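The sketch below illustrates the general idea of checking uploads against a database of previously identified content. For simplicity it uses an exact SHA-256 hash and placeholder data; real systems rely on perceptual hashes (such as PhotoDNA for imagery) or audio fingerprinting that survive resizing and re-encoding, and nothing here reflects any company's actual implementation.

import hashlib

def image_hash(data: bytes) -> str:
    # Exact content hash; production systems use perceptual hashes that
    # tolerate re-encoding, which this simplification does not.
    return hashlib.sha256(data).hexdigest()

# Hypothetical database seeded with hashes of previously identified images.
known_banned_hashes = {image_hash(b"<bytes of a previously identified image>")}

def should_block(upload: bytes) -> bool:
    # Block only uploads matching something already in the database;
    # never-before-seen footage passes through, which is the gap described below.
    return image_hash(upload) in known_banned_hashes

print(should_block(b"<bytes of a previously identified image>"))  # True
print(should_block(b"<bytes of newly filmed footage>"))           # False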
There
is no database for videos of violent acts, and the same footage that might
violate a social network's terms of service if uploaded by an anonymous
militant might pass if it were part of a news broadcast.
Nicole Wong, who
previously served as the White House's deputy chief technology officer, said
tech companies would be reluctant to create a database of jihadist videos,
even if it could be kept current enough to be relevant, for fear that
repressive governments would demand such set-ups to pre-screen any content they
do not like.
"Technology companies are rightfully cautious because they
are global players, and if they build it for one purpose they don't get to say
it can't be used for anything else," said Wong, a former Twitter and
Google legal executive.
"If you build it, they will come - it will also be
used in China
to stop dissidents."
Trusted flagger
There have been some formal policy
changes. Twitter revised its abuse policy to ban indirect threats of violence,
in addition to direct threats, and has dramatically improved its speed for
handling abuse requests, a spokesman said.
"Across the board we respond to
requests more quickly, and it's safe to say government requests are in that
bunch," the spokesman said.
Facebook said that this year it banned any content
praising terrorists.
Google's YouTube has expanded a little-known "Trusted
Flagger" program, allowing groups ranging from a British anti-terror
police unit to the Simon
Wiesenthal Center,
a human rights organization, to flag large numbers of videos as problematic and
get immediate action.
A Google spokeswoman declined to say how many trusted
flaggers there were, but said the vast majority were individuals chosen based
on their past accuracy in identifying content that violated YouTube's policies.
No U.S. government agencies
were part of the program, though some non-profit U.S. entities have joined in the
past year, she said.
"There's no Wizard of Oz syndrome. We send stuff in
and we get an answer," said Rabbi Abraham Cooper, head of the Wiesenthal Center's Digital Terrorism and Hate
project.