Facebook removes seven million posts for sharing false information on coronavirus

(Reuters) – Facebook Inc. said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.

Facebook released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms.

The company said it would invite external experts to independently audit the metrics used in the report, beginning in 2021.

The world’s biggest social media company removed about 22.5 million posts containing hate speech on its flagship app in the second quarter, up from 9.6 million in the first quarter. It also deleted 8.7 million posts connected to extremist organizations, compared with 6.3 million in the prior period.

Facebook said it relied more heavily on automation technology for reviewing content during the months of April, May and June as it had fewer reviewers at its offices due to the COVID-19 pandemic.

That resulted in the company taking action on fewer pieces of content related to suicide and self-injury, child nudity and sexual exploitation on its platforms, Facebook said in a blog post.

The company said it was expanding its hate speech policy to include “content depicting blackface, or stereotypes about Jewish people controlling the world.”

Some U.S. politicians and public figures have caused controversies by donning blackface, a practice that dates back to 19th century minstrel shows that caricatured slaves. It has long been used to demean African-Americans.

(Reporting by Katie Paul in San Francisco and Munsif Vengattil in Bengaluru; Additional Reporting by Bart Meijer; Editing by Shinjini Ganguli and Anil D’Silva)

Instagram adds tool for users to flag false information

SAN FRANCISCO (Reuters) – Instagram is adding an option for users to report posts they think are false, the company announced on Thursday, as the Facebook-owned photo-sharing site tries to stem misinformation and other abuses on its platform.

Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.

Facebook started using image-detection on Instagram in May to find content debunked on its flagship app and also expanded its third-party fact-checking program to the app.

Results rated as false are removed from places where users seek out new content, like Instagram’s Explore tab and hashtag search results.

Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is only being rolled out in the United States.

“This is an initial step as we work toward a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook spokeswoman.

Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over alleged Russian attempts to spread misinformation around the 2016 U.S. presidential election.

But an independent report commissioned by the Senate Select Committee on Intelligence found that it was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.

Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, wrote researchers at New Knowledge, which conducted the analysis.

“Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.

It has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.

Last month, UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on the company to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.

(Reporting by Elizabeth Culliford and Katie Paul; Editing by Cynthia Osterman)

Factbox: ‘Fake News’ laws around the world

Commuters walk past an advertisement discouraging the dissemination of fake news at a train station in Kuala Lumpur, Malaysia March 28, 2018. REUTERS/Stringer

SINGAPORE (Reuters) – Singapore’s parliament on Monday began considering a law on “fake news” that an internet watchdog has called the world’s most far-reaching, stoking fears the government could use additional powers to choke freedom of speech and chill dissent.

Governments and companies worldwide are increasingly worried about the spread of false information online and its impact on everything from share prices to elections and social unrest.

Human rights activists fear laws to curb so-called “fake news” could be abused to silence opposition.

Here are details of such laws around the world:

SINGAPORE

Singapore’s new law would require social media sites like Facebook to carry warnings on posts the government deems false and remove comments against the “public interest”.

Singapore, which ranks 151st among the 180 countries rated by the World Press Freedom Index, defines “public interest” as threats to its security, foreign relations, electoral integrity and public perception of the government and state institutions.

Violations could draw fines of up to S$1 million ($737,500) and jail terms of up to 10 years.

RUSSIA

Last month, President Vladimir Putin signed into law tough new fines for Russians who spread what the authorities regard as fake news or who show “blatant disrespect” for the state online.

Critics have warned the law could aid state censorship, but lawmakers say it is needed to combat false news and abusive online comment.

Authorities may block websites that do not meet requests to remove inaccurate information. Individuals can be fined up to 400,000 roubles ($6,109.44) for circulating false information online that leads to a “mass violation of public order”.

FRANCE

France passed two anti-fake news laws last year to rein in false information during election campaigns, following allegations of Russian meddling in the 2017 presidential vote.

President Emmanuel Macron vowed to overhaul media laws to fight “fake news” on social media, despite criticism that the move was a risk to civil liberties.

GERMANY

Germany passed a law last year for social media companies, such as Facebook and Twitter, to quickly remove hate speech.

Called NetzDG for short, the law is the most ambitious effort by a Western democracy to control what appears on social media. It enforces Germany’s tough curbs on hate speech online, including bans on pro-Nazi ideology, by giving sites a 24-hour deadline to remove banned content or face fines of up to 50 million euros.

Since it was adopted, however, German officials have said too much online content was being blocked, and are weighing changes.

MALAYSIA

Malaysia’s ousted former government was among the first to adopt a law against fake news, which critics say was used to curb free speech ahead of last year’s general election, which it lost. The measure was seen as a tool to fend off criticism over graft and mismanagement of funds by then prime minister Najib Razak, who now faces charges linked to a multibillion-dollar scandal at state fund 1Malaysia Development Berhad.

The new government’s bid to deliver on an election promise to repeal the law was blocked by the opposition-led Senate, however.

EUROPEAN UNION

The European Union and authorities worldwide will have to regulate big technology and social media companies to protect citizens, European Commission deputy head Frans Timmermans said last month.

EU heads of state will urge governments to share information on threats via a new warning system, launched by the bloc’s executive. They will also call for online platforms to do more to remove misleading or illegal content.

Union-level efforts have been limited by different election rules in each member nation and qualms over how vigorously regulators can tackle misleading content online.

(Reporting by Fathin Ungku; Editing by Clarence Fernandez; and Joe Brock)

Facebook, Google to tackle spread of fake news, advisors want more


By Foo Yun Chee

BRUSSELS (Reuters) – Facebook, Google, and other tech firms have agreed on a code of conduct to do more to tackle the spread of fake news, due to concerns it can influence elections, the European Commission said on Wednesday.

Intended to stave off more heavy-handed legislation, the voluntary code covers closer scrutiny of advertising on accounts and websites where fake news appears, and working with fact checkers to filter it out, the Commission said.

But a group of media advisors criticized the companies, also including Twitter and lobby groups for the advertising industry, for failing to present more concrete measures.

With EU parliamentary elections scheduled for May, Brussels is anxious to address the threat of foreign interference during campaigning. Belgium, Denmark, Estonia, Finland, Greece, Poland, Portugal, and Ukraine are also all due to hold national elections next year.

Russia has faced allegations – which it denies – of disseminating false information to influence the U.S. presidential election and Britain’s referendum on European Union membership in 2016, as well as Germany’s national election last year.

The Commission told the firms in April to draft a code of practice or face regulatory action over what it said was their failure to do enough to remove misleading or illegal content.

European Digital Commissioner Mariya Gabriel said on Wednesday that Facebook, Google, Twitter, Mozilla, and advertising groups – which she did not name – had responded with several measures.

“The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and …we welcome this,” she said in a statement.

The steps also include rejecting payment from sites that spread fake news, helping users understand why they have been targeted by specific ads, and distinguishing ads from editorial content.

But the advisory group criticized the code, saying the companies had not offered measurable objectives to monitor its implementation.

“The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” the group said, giving no further details.

Its members include the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics.

(Reporting by Foo Yun Chee; editing by Philip Blenkinsop and John Stonestreet)

Boy Who Claims He Went To Heaven Recants

A boy who claimed that he went to heaven after a 2004 car accident has recanted his story and now says that he only claimed he went to heaven for the attention.

Alex Malarkey was the subject of the book “The Boy Who Came Back From Heaven.” He was paralyzed in the accident, and doctors said he would likely never come out of a coma. When he woke up two months later, he told those around him that angels had taken him through the gates of heaven to meet Jesus.

Now, he says that was entirely false.

“I said I went to heaven because I thought it would get me attention. When I made the claims that I did, I had never read the Bible,” he explained. “People have profited from lies, and continue to. They should read the Bible, which is enough. The Bible is the only source of truth.”

Alex now is speaking out about the true path to salvation.

“It is only through repentance of your sins and a belief in Jesus as the Son of God, who died for your sins (even though he committed none of his own) so that you can be forgiven may you learn of Heaven outside of what is written in the Bible… not by reading a work of man,” he stated.

The boy’s mother told the Christian Post that Alex has not made any money from the book telling the story and that he never wanted the book published.