
Senate Grills Tech Giants Over Russian Fake News

Twitter, Google and Facebook Downplay Impact But Promise Stronger Controls
Sean Edgett, Twitter's acting general counsel, appears before a Senate Judiciary Committee subcommittee.

Technology lawyers for Twitter, Google and Facebook vowed before a Senate committee Tuesday to implement tighter controls on their platforms after finding Russia's disinformation and propaganda efforts on social media reached far more people in the U.S. than previously thought.


A Senate Judiciary Committee subcommittee is investigating how Russia used the vast reach of U.S.-made technology platforms to influence the 2016 presidential election, which resulted in Donald Trump's victory. The companies are also scheduled to testify before two other committees this week.

Twitter, Google and Facebook, which largely dominate social media and video distribution, have for months been internally investigating advertisements and postings that may have come from Russia and been intended to influence U.S. voters.

U.S. intelligence agencies concluded in January that Russia used a multiprong campaign designed to shake faith in electoral integrity and that "Putin and the Russian government aspired to help President-elect Trump's election chances." That included using social media to plant controversial narratives or false information on emotive issues such as gun control, immigration and race, as well as launching hacking and data leak operations (see Deep Dive: US Intelligence Report Into Russian Hacking).

While acknowledging that their platforms were used for manipulative activities, the social media firms' lawyers sought to portray the questionable content as being a small fraction of the overall material displayed on their services. But all three revealed many more Russian postings than they had previously disclosed.

It's unclear what action Congress plans to take to prevent a recurrence of what transpired leading up to the 2016 presidential election. But existing rules already offer powerful mechanisms to address foreign interference in elections, Corynne McSherry, legal director of the Electronic Frontier Foundation, tells Information Security Media Group via email.

She warns, however, that any revisions to those rules must be carefully drafted to protect voters. "Above all, our right to participate and voice our opinions must not be compromised on the way to preventing foreign intervention [in] our elections," McSherry says.

Disturbing Advertisements

Facebook says that over a two-year period from 2015 to 2017, as many as 126 million people may have been exposed to content created by the Internet Research Agency, a content mill located in St. Petersburg, Russia.

The agency placed 80,000 posts and bought 3,000 ads on Facebook and its photo-sharing service, Instagram, between June 2015 and August 2017. Facebook received $100,000 for the ads, according to written testimony from Colin Stretch, Facebook's general counsel.

"Many of the ads and posts we've seen so far are deeply disturbing - seemingly intended to amplify societal divisions and pit groups of people against each other," Stretch says. "They would be controversial even if they came from authentic accounts in the United States. But coming from foreign actors using fake accounts, they are simply unacceptable."

Two Russian-created advertisements shown during the Oct. 31 hearing. (Source: The Wall Street Journal)

Prior to the presidential election, Twitter spotted some accounts tweeting false information about voting and efforts to amplify the reach of those posts through automated retweeting, Twitter's acting general counsel, Sean Edgett, told the committee.

Since that time, Twitter has identified 36,746 Russia-linked accounts that posted 1.4 million election-related tweets. Edgett contended, however, that those tweets comprised just 0.74 percent of the election-related tweets at the time. Of the Russia-linked accounts, 2,752 were linked to the Internet Research Agency.

"While Russian election-related malicious activity on our platform appears to have been small in comparison to overall activity, we find any such activity unacceptable," Edgett says.

Twitter also received advertising revenue from Russian sources for ads that didn't comply with its prohibition on inflammatory or low-quality content. On Oct. 26, the company announced it would no longer accept advertising from RT, formerly known as Russia Today, the publishing outlet that closely hews to Russia's foreign policy. RT has spent $1.9 million on advertising with Twitter, Edgett says.

Suspicious Ad Money

Google says it reviewed advertisements placed from June 2015 through Nov. 8, 2016, which was Election Day. Two accounts that spent $4,700 were linked to suspected government-backed entities, although Google senior counsel Richard Salgado didn't mention Russia.

Google found 1,100 videos uploaded to its YouTube video service "by individuals who we suspect are associated with this effort and that contained political content," Salgado says. The videos amounted to 43 hours of content, and most had fewer than 5,000 views.

"While this is a relatively small amount of content people watch over a billion hours of YouTube content a day, and 400 hours of content are uploaded every minute - we understand that any misuse of our platforms for this purpose is a serious challenge to the integrity of our democracy," Salgado told the committee.

Tighter Controls

Facing mounting political pressure over concerns that their platforms could again be used for voter manipulation, the companies outlined their future strategies.

Google's Salgado says his company will release a transparency report in 2018 covering election-related ads, as well as a library of advertising content that researchers can study. Users will also be able to click on a political ad and immediately see the name of the advertiser.

Twitter says it will limit the visibility of abusive or low-quality tweets and halt malicious automated content. It also promises more diligent account monitoring, including detection of new accounts created by actors who've had other accounts banned.

"One of our key initiatives has been to shorten the amount of time that suspicious accounts remain visible on our platform while pending verification - from 35 days to two weeks - with unverified accounts being suspended after that time," Edgett says.

Facebook, whose founder Mark Zuckerberg previously dismissed as a "crazy idea" any suggestion that his platform's circulation of bogus news stories could have swayed the election, now says it is maintaining a calendar of elections to help predict and identify potential threats.

Political advertisers will have to provide more documentation to prove their identities, Facebook's Stretch says. Users will also get a clearer idea of who is paying for what.

"Their accounts and their ads will be marked as political, and they will have to show details, including who paid for the ads," Stretch says.


About the Author

Jeremy Kirk

Executive Editor, Security and Technology, ISMG

Kirk was executive editor for security and technology at Information Security Media Group. Reporting from Sydney, Australia, he created "The Ransomware Files" podcast, which tells the harrowing stories of IT pros who have fought back against ransomware.



