Facebook suspends Russian Instagram accounts targeting U.S. voters

By Jack Stubbs and Christopher Bing

LONDON/WASHINGTON (Reuters) – Facebook Inc. said on Monday it had suspended a network of Instagram accounts operated from Russia that targeted Americans with divisive political messages ahead of next year’s U.S. presidential election, with the operators posing as people inside the United States.

Facebook said it also had suspended three separate networks operated from Iran. The Russian network “showed some links” to Russia’s Internet Research Agency (IRA), Facebook said, an organization Washington has said was used by Moscow to meddle in the 2016 U.S. election.

“We see this operation targeting largely U.S. public debate and engaging in the sort of political issues that are challenging and sometimes divisive in the U.S. right now,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.

“Whenever you do that, a piece of what you engage on are topics that are going to matter for the election. But I can’t say exactly what their goal was.”

Facebook also announced new steps to fight foreign interference and misinformation ahead of the November 2020 election, including labeling state-controlled media outlets and adding greater protections for elected officials and candidates who may be vulnerable targets for hacking.

U.S. security officials have warned that Russia, Iran and other countries could attempt to sway the result of next year’s presidential vote. Officials say they are on high alert for signs of foreign influence campaigns on social media.

Moscow and Tehran have repeatedly denied the allegations.

Gleicher said the IRA-linked network used 50 Instagram accounts and one Facebook account to gather 246,000 followers, about 60% of which were in the United States.

The earliest accounts dated to January this year and the operation appeared to be “fairly immature in its development,” he said.

“They were pretty focused on audience-building, which is the thing you do first as you’re sort of trying to set up an operation.”

Ben Nimmo, a researcher with social media analysis company Graphika, which was commissioned by Facebook, said the flagged accounts shared material that could appeal to Republican and Democratic voters alike.

Most of the messages plagiarized material authored by leading conservative and progressive pundits. This included recycling comments initially shared on Twitter that criticized U.S. congresswoman Alexandria Ocasio-Cortez, Democratic presidential candidate Joe Biden and President Donald Trump.

“What’s interesting in this set is so much of what they were doing is copying and pasting genuine material from actual Americans,” Nimmo told Reuters. “This may be indicative of an effort to hide linguistic deficiencies, which have made them easier to detect in the past.”

U.S. prosecutors say the IRA’s operations were orchestrated with help from Concord Management and Consulting LLC, a firm controlled by Russian catering tycoon Evgeny Prigozhin. Attorneys for Concord have denied any wrongdoing.

Gleicher said the separate Iranian network his team identified used more than 100 fake and hacked accounts on Facebook and Instagram to target users in the United States and in some French-speaking parts of North Africa. Some accounts also repurposed Iranian state media stories to target users in Latin American countries including Venezuela, Brazil, Argentina, Bolivia, Peru, Ecuador and Mexico.

The activity was connected to an Iranian campaign first identified in August last year, which Reuters showed was aimed at directing internet users to a sprawling web of pseudo-news websites that repackaged propaganda from Iranian state media.

The accounts “typically posted about local political news and geopolitics including topics like public figures in the U.S., politics in the U.S. and Israel, support of Palestine and conflict in Yemen,” Facebook said.

(Reporting by Jack Stubbs; Additional reporting by Elizabeth Culliford in San Francisco; Editing by Chris Reese, Tom Brown and David Gregorio)

Martin Luther King’s daughter tells Facebook disinformation helped kill civil rights leader

SAN FRANCISCO (Reuters) – Disinformation campaigns helped lead to the assassination of Martin Luther King, the daughter of the U.S. civil rights champion said on Thursday, after the head of Facebook said social media companies should not fact-check political advertisements.

The comments come as Facebook Inc is under fire for its approach to political advertisements and speech, which Chief Executive Mark Zuckerberg defended on Thursday in a major speech that twice referenced King, known by his initials MLK.

King’s daughter, Bernice, tweeted that she had heard the speech. “I’d like to help Facebook better understand the challenges #MLK faced from disinformation campaigns launched by politicians. These campaigns created an atmosphere for his assassination,” she wrote from the handle @BerniceKing.

King died of an assassin’s bullet in Memphis, Tennessee, on April 4, 1968.

Zuckerberg argued that his company should give voice to minority views and said that court protection for free speech stemmed in part from a case involving a partially inaccurate advertisement by King supporters. The U.S. Supreme Court protected the supporters from a lawsuit.

“People should decide what is credible, not tech companies,” Zuckerberg said.

“We very much appreciate Ms. King’s offer to meet with us. Her perspective is invaluable and one we deeply respect. We look forward to continuing this important dialogue with her in Menlo Park next week,” a Facebook spokesperson said.

(Reporting by Peter Henderson; Editing by Lisa Shumaker)

Facebook’s Zuckerberg hits pause on China, defends political ads policy

By David Shepardson and Katie Paul

WASHINGTON (Reuters) – Facebook Inc <FB.O> Chief Executive Mark Zuckerberg on Thursday defended the social media company’s political advertising policies and said it was unable to overcome China’s strict censorship, attempting to position his company as a defender of free speech.

“I wanted our services in China because I believe in connecting the whole world, and I thought maybe we could help create a more open society,” Zuckerberg said, addressing students at Georgetown University.

“I worked hard on this for a long time, but we could never come to agreement on what it would take for us to operate there,” he said. “They never let us in.”

He did not address what conditions or assurances he would need to enter the Chinese market.

Facebook tried for years to break into China, one of the last great obstacles to Zuckerberg’s vision of connecting the world’s entire population on the company’s apps.

Zuckerberg met with Chinese President Xi Jinping in Beijing and welcomed the country’s top internet regulator to Facebook’s campus. He also learned Mandarin and posted a photo of himself running through Tiananmen Square, which drew a sharp reaction from critics of the country’s restrictive policies.

The company briefly won a license to open an “innovation hub” in Hangzhou last year, but it was later revoked.

Zuckerberg effectively closed that door in March, when he announced his plan to pivot Facebook toward more private forms of communication and pledged not to build data centers in countries “that have a track record of violating human rights like privacy or freedom of expression.”

He repeated his concern about data centers on Thursday, this time specifically naming China.

Zuckerberg also defended the company’s political advertising policies on similar grounds, saying Facebook had at one time considered banning all political ads but decided against it, erring on the side of greater expression.

Facebook has been under fire over its advertising policies, particularly from U.S. Senator Elizabeth Warren, a leading contender for the Democratic presidential nomination.

The company exempts politicians’ ads from fact-checking standards applied to other content on the social network. Zuckerberg said political advertising does not contribute much to the company’s revenues, but that he believed it would be inappropriate for a tech company to censor public figures.

Reuters reported in October 2018, citing sources, that Facebook executives briefly debated banning all political ads, which produce less than 5% of the company’s revenue.

The company rejected that because product managers were loath to leave advertising dollars on the table and policy staffers argued that blocking political ads would favor incumbents and wealthy campaigners who can better afford television and print ads, the sources said.

Facebook has been under scrutiny in recent years for its lax approach to fake news reports and disinformation campaigns, which many believe affected the outcome of the 2016 U.S. presidential election, won by Donald Trump.

Trump has disputed claims that Russia has attempted to interfere in U.S. elections. Russian President Vladimir Putin has denied it.

Warren’s Democratic presidential campaign recently challenged Facebook’s policy that exempts politicians’ ads from fact-checking, running ads on the social media platform containing the false claim that Zuckerberg endorsed Trump’s re-election bid.

(Reporting by David Shepardson; Writing by Katie Paul; Editing by Lisa Shumaker)

Mass shooting rumor in Facebook Group shows private chats are not risk-free

By Bryan Pietsch

WASHINGTON (Reuters) – Ahead of the annual Blueberry Festival in Marshall County, Indiana, in early September, a woman broadcast a warning to her neighbors on Facebook.

“I just heard there’s supposed to be a mass shooting tonight at the fireworks,” the woman, whose name is being withheld to protect her privacy, said in a post in a private Facebook Group with over 5,000 members. “Probably just a rumor or kids trying to scare people, but everyone keep their eyes open,” she said in the post, which was later deleted.

There was no shooting at the Blueberry Festival that night, and the local police said there was no threat.

But the post sparked fear in the community, with some group members canceling their plans to attend, and showed the power of rumors in Facebook Groups, which are often private or closed to outsiders. Groups allow community members to quickly spread information, and possibly misinformation, to users who trust the word of their neighbors.

These groups and other private features, rather than public feeds, are “the future” of social media, Facebook Inc <FB.O> Chief Executive Mark Zuckerberg said in April, revealing their importance to Facebook’s business model.

The threat of misinformation spreading rapidly in Groups shows a potential vulnerability in a key part of the company’s growth strategy. It could push Facebook to invest in expensive human content monitoring at the risk of limiting the ability to post in real time, a central benefit of Groups and Facebook in general that has attracted millions of users to the platform.

When asked whether Facebook accepts responsibility for situations like the one in Indiana, a company spokeswoman said it is committed to maintaining groups as a safe place, and that it encourages people to contact law enforcement if they see a potential threat.

Facebook Groups can also serve as a tool for connecting social communities around the world, such as ethnic groups, university alumni and hobbyists.

Facebook’s WhatsApp messaging platform faced similar but more serious problems in 2018, after false messages about child abductors led to mass beatings of more than a dozen people in India, some of whom died. WhatsApp later limited message forwarding and began labeling forwarded messages to quell the risk of fake news.

FIREWORKS FEAR

The Blueberry Festival post caused chaos in the group, named “Local News Now 2…(Marshall and all surrounding Counties).”

In another post, which garnered over 100 comments of confusion and worry, a different member urged the woman to report the threat to the police. “This isn’t something to joke about or take lightly,” she wrote.

The author of the original post did not respond to repeated requests for comment.

Facebook’s policy is to remove language that “incites or facilitates serious violence,” the company spokeswoman said, adding that it did not remove the post because it did not violate Facebook’s policies, as there “was no threat, praise or support of violence.”

Cheryl Siddall, the founder of the Indiana group, said she would welcome tools from Facebook to give her greater “control” over what people post in the group, such as alerts to page moderators if posts contain certain words or phrases.

But Siddall said, “I’m sorry, but that’s a full-time job to sit and monitor everything that’s going on in the page.”

A Facebook spokeswoman said page administrators have the ability to remove a post if it violates the group’s own rules, and that administrators can pre-approve individual posts as well as turn on post approvals for individual group members.

In a post to its blog, Facebook urged administrators to write “great group rules” to “set the tone for your group and help prevent member conflict,” as well as “provide a feeling of safety for group members.”

David Bacon, chief of police for the Plymouth Police Department in Marshall County, said the threat was investigated and traced back to an exaggerated rumor from children. Nonetheless, he said the post to the Facebook group is “what caused the whole problem.”

“One post grows and people see it, and they take it as the gospel, when in actuality you can throw anything you want out there,” Bacon said.

(Reporting by Bryan Pietsch; Editing by Chris Sanders)

FBI director warns Facebook could become platform of ‘child pornographers’

WASHINGTON (Reuters) – FBI Director Christopher Wray said on Friday that Facebook’s proposed move to encrypt its popular messaging program would turn the platform into a “dream come true for predators and child pornographers.”

Wray, addressing a crowd of law enforcement and child protection officials at the Department of Justice in Washington, said that Facebook’s plan would produce “a lawless space created not by the American people or their representatives but by the owners of one big company.”

Facebook intends to extend encryption to wide swathes of communications on its platform.

His speech, which came ahead of an address on the same topic by Attorney General William Barr, ratchets up the pressure on Facebook as the U.S. and allied governments renew their push to weaken the digital protections around the messages billions of people exchange each day.

Wray’s speech is part of a renewed push by the American, Australian and British governments to force tech companies to help them circumvent the encryption that helps keep digital communications secure.

Debates over encryption have been rumbling for more than 25 years, but officials’ anxiety has increased as major tech companies move toward automatically encrypting the messages on their platforms and the data held on their phones.

In the past, officials have cited the threat of terrorism to buttress their campaigns against encryption. But as the Islamic State and other extremist groups fade from the headlines, governments are trying a different tack, invoking the threat of child abuse to argue for “lawful access” to these devices.

Facebook’s privacy-focused move, announced by founder Mark Zuckerberg earlier this year, is causing particular consternation because the platform is the source of millions of tips to authorities about child abuse images every year.

Zuckerberg, speaking on the company’s weekly internal Q&A livestream, defended the decision, saying he was “optimistic” that Facebook would be able to identify predators even in encrypted systems using the same tools it used to fight election interference.

“We’re going to lose the ability to find those kids who need to be rescued,” Wray said. “We’re going to lose the ability to find the bad guys.”

However, many of those outside law enforcement have applauded Facebook’s push for privacy and security. Academics, experts and privacy groups have long worried that circumventing the protections around private communications would open dangerous vulnerabilities that could make the entire internet less safe and leave billions of users exposed to abusive surveillance.

Wray steered clear of making any specific proposal, saying that “companies themselves are best placed” to offer a way for law enforcement to get around encryption.

(Reporting by Raphael Satter; Editing by Steve Orlofsky)

U.S., allies urge Facebook not to encrypt messages as they fight child abuse, terrorism

By Joseph Menn, Christopher Bing and Katie Paul

WASHINGTON (Reuters) – The United States and allies are seizing on Facebook Inc’s plan to apply end-to-end encryption across its messaging services to press for major changes to a practice long opposed by law enforcement, saying it hinders the fight against child abuse and terrorism.

The United States, the United Kingdom and Australia plan to sign a special data agreement on Thursday that would fast-track requests from law enforcement to technology companies for information about the communications of terrorists and child predators, according to documents reviewed by Reuters.

Law enforcement could get information in weeks or even days instead of the current wait of six months to two years, one document said.

The agreement will be announced alongside an open letter to Facebook and its Chief Executive Mark Zuckerberg, calling on the company to suspend plans related to developing end-to-end encryption technology across its messaging services.

The latest tug-of-war between governments and tech companies over user data could also impact Apple Inc, Alphabet Inc’s Google and Microsoft Corp, as well as smaller encrypted chat apps like Signal.

Washington has called for more regulation and launched antitrust investigations against many tech companies, criticizing them over privacy lapses, election-related activity and dominance in online advertising.

In the digital age, child predators have increasingly used messaging applications, including Facebook’s Messenger, to groom their victims and exchange explicit images and videos. The number of known child sexual abuse images has soared from thousands to tens of millions in just the past few years.

Speaking at an event in Washington on Wednesday, Associate Attorney General Sujit Raman said the National Center for Missing and Exploited Children received more than 18 million tips about online child sex abuse last year, over 90% of them from Facebook.

He estimated that up to 75% of those tips would “go dark” if social media companies like Facebook were to go through with encryption plans.

Facebook said in a statement that it strongly opposes “government efforts to build backdoors,” which it said would undermine privacy and security.

Antigone Davis, Facebook’s global head of safety, told Reuters the company was looking at ways to prevent inappropriate behavior and stop predators from connecting with children.

This approach “offers us an opportunity to prevent harms in a way that simply going after content doesn’t,” she said.

In practice, the bilateral agreement would empower the UK government to request data relevant to its own ongoing criminal investigations directly from the U.S. tech companies that store it remotely, rather than asking for it via U.S. law enforcement officials.

The effort represents a two-pronged approach by the United States and its allies to pressure private technology companies while making information sharing about criminal investigations faster.

A representative for the U.S. Department of Justice declined to comment.

Susan Landau, a professor of cybersecurity and policy at the Fletcher School of Law and Diplomacy at Tufts University, said disputes over encryption have flared on-and-off since the mid-1990s.

She said government officials concerned with fighting child abuse would be better served by making sure investigators had more funding and training.

“They seem to ignore the low-hanging fruit in favor of going after the thing they’ve been going after for the past 25 years,” she said.

The letter addressed to Zuckerberg and Facebook comes from U.S. Attorney General William Barr, UK Secretary of State for the Home Department Priti Patel and Australian Minister of Home Affairs Peter Dutton.

“Our understanding is that much of this activity, which is critical to protecting children and fighting terrorism, will no longer be possible if Facebook implements its proposals as planned,” the letter reads.

“Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.”

WhatsApp’s global head Will Cathcart wrote in a public internet forum https://news.ycombinator.com/item?id=21100588 on Saturday that the company “will always oppose government attempts to build backdoors because they would weaken the security of everyone who uses WhatsApp including governments themselves.”

That app, which is already encrypted, is also owned by Facebook.

(Reporting by Joseph Menn and Katie Paul in San Francisco and Christopher Bing in Washington; Editing by Lisa Shumaker)

U.S. social media firms to testify on violent, extremist online content

By David Shepardson

WASHINGTON (Reuters) – Alphabet Inc’s Google, Facebook Inc and Twitter Inc will testify next week before a U.S. Senate panel on efforts by social media firms to remove violent content from online platforms, the panel said in a statement on Wednesday.

The Sept. 18 hearing of the Senate Commerce Committee follows growing concern in Congress about the use of social media by people committing mass shootings and other violent acts. Last week, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill.

The hearing “will examine the proliferation of extremism online and explore the effectiveness of industry efforts to remove violent content from online platforms. Witnesses will discuss how technology companies are working with law enforcement when violent or threatening content is identified and the processes for removal of such content,” the committee said.

Facebook’s head of global policy management Monika Bickert, Twitter public policy director Nick Pickles and Google’s global director of information policy Derek Slater are due to testify.

Facebook and Google both confirmed they will participate but declined to comment further. Twitter did not immediately comment.

In May, Facebook said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Facebook said it was introducing a “one-strike” policy for use of Facebook Live, a service which lets users broadcast live video. Those who broke the company’s most serious rules anywhere on its site would have their access to make live broadcasts temporarily restricted.

Facebook has come under intense scrutiny in recent years over hate speech, privacy lapses and its dominant market position in social media. The company is trying to address those concerns while averting more strenuous action from regulators.

(Reporting by David Shepardson, Editing by Rosalba O’Brien and Tom Brown)

Twitter, Facebook accuse China of using fake accounts to undermine Hong Kong protests

By Katie Paul and Elizabeth Culliford

(Reuters) – Twitter Inc and Facebook Inc said on Monday they had dismantled a state-backed information operation originating in mainland China that sought to undermine protests in Hong Kong.

Twitter said it had suspended 936 accounts and that the operation appeared to be a coordinated state-backed effort originating in China. It said these accounts were just the most active portion of the campaign, and that a “larger, spammy network” of approximately 200,000 accounts had been proactively suspended before they were substantially active.

Facebook said it had removed accounts and pages from a small network after a tip from Twitter. It said that its investigation found links to individuals associated with the Chinese government.

Social media companies are under pressure to stem illicit political influence campaigns online ahead of the U.S. election in November 2020. A 22-month U.S. investigation concluded Russia interfered in a “sweeping and systematic fashion” in the 2016 U.S. election to help Donald Trump win the presidency.

The Chinese embassy in Washington and the U.S. State Department were not immediately available to comment.

The Hong Kong protests, which have presented one of the biggest challenges for Chinese President Xi Jinping since he came to power in 2012, began in June as opposition to a now-suspended bill that would allow suspects to be extradited to mainland China for trial in Communist Party-controlled courts. They have since swelled into wider calls for democracy.

Twitter said in a blog post that the accounts sought to undermine the legitimacy and political positions of the protest movement in Hong Kong.

Examples of posts provided by Twitter included a tweet from a user with photos of protesters storming Hong Kong’s Legislative Council building, which asked: “Are these people who smashed the Legco crazy or taking benefits from the bad guys? It’s a complete violent behavior, we don’t want you radical people in Hong Kong. Just get out of here!”

In examples provided by Facebook, one post called the protesters “Hong Kong cockroaches” and claimed that they “refused to show their faces.”

In a separate statement, Twitter said it was updating its advertising policy and would not accept advertising from state-controlled news media entities going forward.

Alphabet Inc’s YouTube video service told Reuters in June that state-owned media companies maintained the same privileges as any other user, including the ability to run ads in accordance with its rules. YouTube did not immediately respond to a request for comment on Monday on whether it had detected inauthentic content related to protests in Hong Kong.

(Reporting by Katie Paul in Aspen, Colorado, and Elizabeth Culliford in San Francisco; Additional reporting by Sayanti Chakraborty in Bengaluru; Editing by Lisa Shumaker)

Instagram adds tool for users to flag false information

SAN FRANCISCO (Reuters) – Instagram is adding an option for users to report posts they think are false, the company announced on Thursday, as the Facebook-owned photo-sharing site tries to stem misinformation and other abuses on its platform.

Posting false information is not banned on any of the services in Facebook’s suite of social media apps, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.

Facebook started using image-detection on Instagram in May to find content debunked on its flagship app and also expanded its third-party fact-checking program to the app.

Results rated as false are removed from places where users seek out new content, like Instagram’s Explore tab and hashtag search results.

Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is only being rolled out in the United States.

“This is an initial step as we work toward a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook company spokeswoman.

Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over alleged Russian attempts to spread misinformation around the 2016 U.S. presidential election.

But an independent report commissioned by the Senate Select Committee on Intelligence found that Instagram was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.

Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, wrote researchers at New Knowledge, which conducted the analysis.

“Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.

Instagram has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.

Last month, UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on the company to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.

(Reporting by Elizabeth Culliford and Katie Paul; Editing by Cynthia Osterman)

U.S. lawmakers challenge Facebook over Libra cryptocurrency plan

By Pete Schroeder and Anna Irrera

WASHINGTON (Reuters) – U.S. lawmakers quizzed Facebook on Wednesday over its planned cryptocurrency, after a bruising first bout a day earlier when senators from both parties condemned the project, saying the company had not shown it could be trusted.

The social media company is fighting to get Washington on its side after it shocked regulators and lawmakers with its announcement on June 18 that it was hoping to launch a new digital coin called Libra in 2020.

It has faced criticism from policymakers and financial watchdogs at home and abroad who fear widespread adoption of the digital currency by Facebook’s 2.38 billion users could upend the financial system.

“I have serious concerns with Facebook’s plans to create a digital currency and digital wallet,” Maxine Waters, chairwoman of the Democratic-controlled House Financial Services Committee, said in her opening remarks.

“If Facebook’s plan comes into fruition, the company and its partners will yield immense economic power that could destabilize currencies.”

Lawmakers are questioning David Marcus, the Facebook executive overseeing the project, who was grilled by the Senate Banking Committee on Tuesday over the possible risks posed by Libra to data privacy, consumer protection and money laundering controls.

Wednesday’s hearing before the House panel was proving to be even more tense.

The panel has already circulated draft legislation that could kill the project by banning Facebook and other tech firms from entering the financial services space.

Democratic Representative Carolyn Maloney pushed Marcus to commit to a Libra pilot program with one million users overseen by U.S. financial regulators, including the Federal Reserve.

“I don’t think you should launch Libra at all,” Maloney said. “At the very least you should agree to do this small pilot program.”

Marcus, who was president of PayPal from 2012 to 2014, did not commit to a pilot but tried to assuage lawmakers by pledging not to begin issuing Libra until regulatory concerns had been addressed.

“We will take the time to get this right,” Marcus said.

He said the company had unveiled the project at an early stage in order to get feedback from all stakeholders.

Representatives on both sides of the aisle asked how the company will ensure sufficient consumer protection and prevent the cryptocurrency from being used for illegal activities such as money laundering or terrorist financing.

“I’m concerned a 2020 launch date represents deep insensitivities about how Libra could impact U.S. financial security, the global financial system, the privacy of people across the globe, criminal activity and international human rights,” said Republican Representative Ann Wagner.

Facebook has been on the defensive amid a backlash over mishandling user data and not doing enough to prevent Russian interference in the 2016 U.S. presidential election.

(Reporting by Pete Schroeder and Anna Irrera; editing by Cynthia Osterman, Bernadette Baum and Susan Thomas)