
This is Facebook's self-defense plan for the 2018 midterm elections

March 29, 2018, 19:09

Facebook has a four-part plan to protect its platform from malicious attacks during the 2018 US midterm elections, company executives said today. In a conference call with reporters, representatives from Facebook’s security, product, and advertising teams laid out their strategy for preventing the kinds of problems that plagued it during the 2016 campaign. While most bad actors are motivated by profits, executives said, state-sponsored attackers continue in their efforts to manipulate public opinion using posts on Facebook.
Here’s Facebook’s plan to shore up its security over the next several months.
1. Fighting foreign interference. Executives pointed to the FBI’s creation of a task force to monitor social media as an important step toward identifying election threats in real time. Alex Stamos, Facebook’s chief information security officer, said the company is also working with outside experts, whom he did not name, to identify threats. Every election is different, he said, and the company is tailoring its approach to fighting bad actors based on the specific risks inside each country.
“When you tease apart the overall digital misinformation problem, you find multiple types of bad content and many bad actors with different motivations,” Stamos said. “It is important to match the right approach to these various challenges. And that requires not just careful analysis of what has happened. We also have to have the most up-to-date intelligence to understand completely new types of misinformation.”
2. Removing fake accounts. Deep-pocketed attackers create thousands of fake Facebook accounts and use them to push divisive narratives inside the countries they target. Last year, researchers at the University of Oxford found that one group in Poland had created 40,000 fake accounts across various social media services and used them in an effort to sway elections.
Samidh Chakrabarti, a product manager who works on election security, said Facebook now deletes “millions” of fake accounts every day. “We’ve been able to do this thanks to advances in machine learning, which have allowed us to find suspicious behaviors — without assessing the content itself,” Chakrabarti said. He said the company recently deployed a new tool that looks for pages of foreign origin that are “distributing inauthentic civic content.” The tool alerted Facebook to a group of Macedonians who were attempting to influence the recent US Senate election in Alabama, he said, and the company subsequently removed them from the platform.
3. Letting people view every ad on the platform. During previous elections, advertisers were able to create so-called “dark posts”: ads that had no permalink and were visible only to Facebook users in the News Feed. Facebook has since banned the practice, and this summer it will roll out a tool globally that lets users view every ad on the platform. Advertisers will have to disclose which candidate or campaign they represent.
“Beyond the ad creative itself, we’ll also show how much money was spent on each ad, the number of impressions it received, and the demographic information about the audience reached,” said Rob Leathern, director of product management for ads. “And we will display those ads for four years after they ran. So researchers, journalists, watchdog organizations, or individuals who are just curious will be able to see all of these ads in one place.”
4. Reducing the spread of false news. Facebook relies heavily on third-party fact checkers to assess the accuracy of viral stories. Until recently, though, fact checkers could only assess article links, which gave bad actors an incentive to repackage fake news as images and videos. Fact checkers complained to Facebook about this loophole, and the company said that starting this week they can review photos and videos as well as article links.
Tessa Lyons, a product manager for the News Feed, said that articles labeled false by fact checkers were distributed 80 percent less on average. She said Facebook has also started to go after fake news at the domain level, taking disciplinary steps against pages that repeatedly push hoaxes. “We reduce their distribution and remove their ability to advertise and monetize — stopping them from reaching, growing, or profiting from their audience,” she said.
Facebook’s plan appears to be comprehensive — and it will also be costly. Executives said that the total number of employees working on security and integrity-related issues will double this year, to 20,000.
The question is whether the company can anticipate new threats as they emerge. Facebook was fighting spammers and other bad actors before 2016; the problem was that they changed their tactics in ways that the company didn’t anticipate. Facebook, to its credit, is aware of this. The danger to the company is that it gets “tunnel vision” in addressing only the problems it sees today, Stamos said. “We don’t only want to be fighting the last war,” he said.
