Twitter intends to notify users exposed to Russian propaganda

Updated at 2018-01-17 21:49:39 +0000




WASHINGTON (Reuters) - Twitter may notify users whether they were exposed to content generated by a suspected Russian propaganda service, a company executive told U.S. lawmakers on Wednesday.

The social media company is “working to identify and inform individually” its users who saw tweets during the 2016 U.S. presidential election produced by accounts tied to the Kremlin-linked Internet Research Agency, Carlos Monje, Twitter’s director of public policy, told the U.S. Senate Commerce, Science and Transportation Committee.

A Twitter spokeswoman did not immediately respond to a request for comment about plans to notify its users.

Facebook Inc in December created a portal where its users could learn whether they had liked or followed accounts created by the Internet Research Agency. Alphabet Inc has said the way its services operate makes it difficult to provide a similar notice, a position Democratic Senator Richard Blumenthal criticized during the hearing.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, which is investigating Russian meddling in the 2016 election, applauded Twitter’s announcement.

U.S. intelligence agencies have concluded that Russia interfered in the election, using a variety of cyber-enabled means to sow political discord and help President Donald Trump win. Russia has repeatedly denied the allegations.

The three social media companies faced a wide array of questions related to how they police different content on their services, including extremist recruitment, gun sales, automated spam accounts, intentionally fake news stories and Russian propaganda.

Monje said Twitter had improved its ability to detect and remove “maliciously automated” accounts, and now challenged up to 4 million suspect accounts per week, up from 2 million per week last year.

Facebook’s head of global policy, Monika Bickert, said the company was deploying a mix of technology and human review to “disrupt false news and help (users) connect with authentic news.”

Most attempts to spread disinformation on Facebook were financially motivated, Bickert said.

The companies repeatedly touted increasing success in using algorithms and artificial intelligence to catch content not suitable for their services.

Juniper Downs, YouTube’s director of public policy, said algorithms quickly catch and remove 98 percent of videos flagged for extremism. But the company still deploys some 10,000 human reviewers to monitor videos, Downs said.

Reporting by Dustin Volz; Editing by Nick Zieminski
