Ahead of U.S. election, Facebook gives users some control over how they see political ads

By Katie Paul

SAN FRANCISCO (Reuters) – Facebook Inc said on Thursday it was making some changes to its approach to political ads, including allowing users to turn off certain ad-targeting tools, but the updates stop far short of critics’ demands and what rival companies have pledged to do.

The world’s biggest social network has vowed to curb political manipulation of its platform, after failing to counter alleged Russian interference and the misuse of user data by defunct political consulting firm Cambridge Analytica in 2016.

But ahead of the U.S. presidential election in November 2020, Facebook is struggling to quell criticism of its relatively hands-off ads policies. In particular it has come under fire after it exempted politicians’ ads from fact-checking standards applied to other content on its network.

Facebook said that in addition to rolling out a tool enabling individual users to choose to see fewer political and social issue ads on Facebook and its photo-sharing app Instagram, it will also make more ad audience data publicly available.

In contrast, Twitter Inc banned political ads in October, while Alphabet Inc’s Google said it would stop letting advertisers target election ads using data such as public voter records and general political affiliations.

Other online platforms like Spotify, Pinterest and TikTok have also issued bans.

In a blog post, Facebook’s director of product management Rob Leathern said the company considered imposing limits like Google’s, but decided against them as internal data indicated most ads run by U.S. presidential candidates are broadly targeted, at audiences larger than 250,000 people.

“We have based (our policies) on the principle that people should be able to hear from those who wish to lead them, warts and all,” Leathern wrote.

The expanded ad audience data features will be rolled out in the first quarter of this year and Facebook plans to deploy the political ads control starting in the United States early this summer, eventually expanding this preference to more locations.

CUSTOM AUDIENCES

Another change will be to allow users to choose to stop seeing ads based on an advertiser’s “Custom Audience” and that will apply to all types of advertising, not only political ads.

The “Custom Audiences” feature lets advertisers upload lists of personal data they maintain, like email addresses and phone numbers. Facebook then matches that information to user accounts and shows the advertiser’s content to those people.

However, Facebook will not give users a blanket option to turn off the feature, meaning they will have to opt out of seeing ads for each advertiser one by one, a spokesman told Reuters.

The change will also not affect ad targeting via Facebook’s Lookalike Audiences tool, which uses the same uploads of personal data to direct ads at people with similar characteristics to those on the lists, the spokesman said.

Leathern said in the post the company would make new information publicly available about the audience size of political ads in the company’s Ad Library, showing approximately how many people the advertisers aimed to reach.

The changes follow a New York Times report this week of an internal memo by senior Facebook executive Andrew Bosworth, who told employees the company had a duty not to tilt the scales against U.S. President Donald Trump’s re-election campaign.

Bosworth, a close confidant of Chief Executive Mark Zuckerberg who subsequently made his post public, wrote that he believed Facebook was responsible for Trump’s election in 2016, but not because of misinformation or Trump’s work with Cambridge Analytica.

Rather, he said, the Trump campaign used Facebook’s advertising tools most effectively.

(Reporting by Katie Paul; Editing by Edwina Gibbs)

Facebook removes 3.2 billion fake accounts, millions of child abuse posts

(Reuters) – Facebook Inc (FB.O) removed 3.2 billion fake accounts between April and September this year, along with millions of posts depicting child abuse and suicide, according to its latest content moderation report released on Wednesday.

That more than doubles the number of fake accounts taken down during the same period last year, when 1.55 billion accounts were removed, according to the report.

The world’s biggest social network also disclosed for the first time how many posts it removed from popular photo-sharing app Instagram, which disinformation researchers have identified as a growing area of concern for fake news.

Proactive detection of violating content was lower across all categories on Instagram than on Facebook’s flagship app, where the company initially implemented many of its detection tools, the company said in its fourth content moderation report.

For example, the company said it proactively detected content affiliated with terrorist organizations 98.5% of the time on Facebook and 92.2% of the time on Instagram.

It removed more than 11.6 million pieces of content depicting child nudity and sexual exploitation of children on Facebook and 754,000 pieces on Instagram during the third quarter.

Facebook also added data on actions it took around content involving self-harm for the first time in the report. It said it had removed about 2.5 million posts in the third quarter that depicted or encouraged suicide or self-injury.

The company also removed about 4.4 million pieces involving drug sales during the quarter, it said in a blog post.

(Reporting by Akanksha Rana in Bengaluru and Katie Paul in San Francisco; Editing by Maju Samuel and Lisa Shumaker)

U.S. social media firms say they are removing violent content faster

By David Shepardson

WASHINGTON (Reuters) – Major U.S. social media firms told a Senate panel on Wednesday they are doing more to remove violent or extremist content from their platforms in the wake of several high-profile incidents, focusing on technological tools that let them act faster.

Critics say too many violent videos or posts that back extremist groups supporting terrorism are not immediately removed from social media websites.

Senator Richard Blumenthal, a Democrat, said social media firms need to do more to prevent violent content.

Facebook’s head of global policy management, Monika Bickert, told the Senate Commerce Committee its software detection systems have “reduced the average time it takes for our AI to find a violation on Facebook Live to 12 seconds, a 90% reduction in our average detection time from a few months ago.”

In May, Facebook Inc said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Bickert said Facebook asked law enforcement agencies to help it access “videos that could be helpful training tools” to improve its machine learning to detect violent videos.

Earlier this month, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill after police in Texas said they were “reasonably confident” the man who shot and killed 22 people at a Walmart in El Paso, Texas, posted a manifesto on the site before the attack.

Facebook banned links to violent content that appeared on 8chan.

Twitter Inc public policy director Nick Pickles said the website suspended more than 1.5 million accounts for terrorism promotion violations between August 2015 and the end of 2018, adding that “more than 90% of these accounts are suspended through our proactive measures.”

Senator Rick Scott asked Twitter why the site allows Venezuelan President Nicolas Maduro to have an account given what he said were a series of brazen human rights violations. “If we remove that person’s account it will not change facts on the ground,” said Pickles, who added that Maduro’s account has not broken Twitter’s rules.

Alphabet Inc unit Google’s global director of information policy, Derek Slater, said the answer is “a combination of technology and people. Technology can get better and better at identifying patterns. People can help deal with the right nuances.”

Of 9 million videos removed in a three-month period this year by YouTube, 87% were flagged by artificial intelligence.

(Reporting by David Shepardson; Editing by Nick Zieminski)

U.S. House passes bill to penalize websites for sex trafficking


By Dustin Volz

WASHINGTON (Reuters) – The U.S. House of Representatives on Tuesday overwhelmingly passed legislation to make it easier to penalize operators of websites that facilitate online sex trafficking, chipping away at a bedrock legal shield for the technology industry.

The bill’s passage marks one of the most concrete actions in recent years from the U.S. Congress to tighten regulation of internet firms, which have drawn heavy scrutiny from lawmakers in both parties over the past year due to an array of concerns regarding the size and influence of their platforms.

The House passed the measure 388-25. It still needs to pass the U.S. Senate, where similar legislation has already gained substantial support, and then be signed by President Donald Trump before it can become law.

Speaker Paul Ryan, in a statement before the vote, said the bill would help “put an end to modern-day slavery here in the United States.”

The White House issued a statement generally supportive of the bill, but said the administration “remains concerned” about certain provisions that it hopes can be resolved in the final legislation.

Several major internet companies, including Alphabet Inc’s Google and Facebook Inc, had been reluctant to support any congressional effort to dent what is known as Section 230 of the Communications Decency Act, a decades-old law that protects them from liability for the activities of their users.

But facing political pressure, the internet industry slowly warmed to a proposal that gained traction in the Senate last year, and eventually endorsed it after it gained sizeable bipartisan support.

Republican Senator Rob Portman, a chief architect of the Senate proposal, said in a statement he supported the House’s similar version and called on the Senate to quickly pass it.

The legislation is a result of years of law-enforcement lobbying for a crackdown on the online classified site backpage.com, which is used for sex advertising.

It would make it easier for states and sex-trafficking victims to sue social media networks, advertisers and others that fail to keep exploitative material off their platforms.

Some critics warned that the House measure would weaken Section 230 in a way that would only serve to further help established internet giants, who possess larger resources to police their content, and not adequately address the problem.

“This bill will only prop up the entrenched players who are rapidly losing the public’s trust,” Democratic Senator Ron Wyden, an original author of Section 230, said. “The failure to understand the technological side effects of this bill – specifically that it will become harder to expose sex-traffickers, while hamstringing innovation – will be something that this Congress will regret.”

(Reporting by Dustin Volz; editing by Sandra Maler and Lisa Shumaker)