Don’t let impact of coronavirus breed hate, urges EU human rights agency

Reuters
By Megan Rowling

BARCELONA (Thomson Reuters Foundation) – The global outbreak of coronavirus will impinge on people’s freedom and other human rights but steps must be taken to stop “unacceptable behaviour” including discrimination and racial attacks, said the head of a European human rights watchdog.

Michael O’Flaherty, director of the European Union Agency for Fundamental Rights, said he was shocked when a waiter told him the best way to tackle coronavirus would be to stop migrants coming into his country because they have poor hygiene.

He noted other media reports of people being beaten up in the street for looking Chinese and others stopped at airports based on similar prejudices around ethnicity.

“That’s the sort of really worrying fake news-based spreading of hate and distrust which further undermines the ability to be welcoming, inclusive, respectful societies,” the Irish human rights lawyer told the Thomson Reuters Foundation.

The U.N.’s children’s agency UNICEF on Wednesday also said that fear of the virus was contributing to “unacceptable” discrimination against vulnerable people, including refugees and migrants, and it would “push back against stigmatisation”.

O’Flaherty said it was necessary for public health responses to limit human rights to stem the coronavirus spread, but warned governments should not “use a sledgehammer to break a nut”.

“It’s about doing just enough to achieve your purpose and not an exaggerated response,” he added.

His Vienna-based agency, which advises EU and national decision makers on human rights issues, is working with teams of researchers across the European Union to prepare a report this month on ways to protect rights in a time of global turmoil.

“I am not saying everything that’s being done there is wrong … but there is an impact: a reduction in the enjoyment of human rights,” he said.

The World Health Organization described the coronavirus outbreak as a pandemic on Wednesday, with its chief urging the global community to redouble efforts to contain the outbreak.

As well as curtailing people’s freedom of movement, O’Flaherty said there was bound to be a ripple effect from the virus that has so far infected more than 119,000 people and killed nearly 4,300, according to a Reuters tally.

This would range from poor children being deprived of their main daily meal as schools closed, to gig economy workers being laid off with little access to social welfare, he noted.

In response, some governments – from Britain to Ireland and Spain – have introduced measures to boost social security payments or help small businesses stay afloat.

Governments may also need to tackle price inflation and profiteering from in-demand medicines and equipment like face masks, as well as ensure equal access to any treatments or vaccines that may be developed in future, he added.

Extra care was required to protect marginalised social groups from the virus, such as the homeless and refugees living in crowded conditions without decent shelter or healthcare.

O’Flaherty urged states to cooperate and learn from each other’s experiences while urging the private sector to do its bit, for example, by clamping down on hate speech and fake news on social media or attempts to market false cures.

If dealt with in the right way, the coronavirus epidemic could help Europeans understand the importance of safeguarding human rights for everyone, especially in tough times, he said.

“My feeling is that if we engage this crisis smartly, it can be an opportunity to help promote the sense, across our societies, that human rights is about all of us,” he said.

(Reporting by Megan Rowling @meganrowling; editing by Belinda Goldsmith; Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org/climate)

Fake news makes disease outbreaks worse, study finds

By Kate Kelland

LONDON (Reuters) – The rise of “fake news” – including misinformation and inaccurate advice on social media – could make disease outbreaks such as the COVID-19 coronavirus epidemic currently spreading in China worse, according to research published on Friday.

In an analysis of how the spread of misinformation affects the spread of disease, scientists at Britain’s University of East Anglia (UEA) said any successful efforts to stop people sharing fake news could help save lives.

“When it comes to COVID-19, there has been a lot of speculation, misinformation and fake news circulating on the internet – about how the virus originated, what causes it and how it is spread,” said Paul Hunter, a UEA professor of medicine who co-led the study.

“Misinformation means that bad advice can circulate very quickly – and it can change human behavior to take greater risks,” he added.

In their research, Hunter’s team focused on three other infectious diseases – flu, monkeypox and norovirus – but said their findings could also be useful for dealing with the COVID-19 coronavirus outbreak.

“Fake news is manufactured with no respect for accuracy, and is often based on conspiracy theories,” Hunter said.

For the studies – published on Friday in separate peer-reviewed journals – the researchers created theoretical simulations of outbreaks of norovirus, flu and monkeypox.

Their models took into account studies of real behavior, how different diseases are spread, incubation periods and recovery times, and the speed and frequency of social media posting and real-life information sharing.

They also took into account how lower trust in authorities is linked to tendency to believe conspiracies, how people interact in “information bubbles” online, and the fact that “worryingly, people are more likely to share bad advice on social media than good advice from trusted sources,” Hunter said.

The researchers found that a 10% reduction in the amount of harmful advice being circulated has a mitigating impact on the severity of an outbreak, while making 20% of a population unable to share harmful advice has the same positive effect.
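The papers themselves are not reproduced here, but the class of model the study describes — an outbreak simulation in which people exposed to circulating bad advice take greater risks — can be sketched in a few lines. The following is a toy illustration only, not the UEA team’s actual model; every parameter is invented for demonstration.

```python
# Toy sketch (NOT the UEA model): an SIR-style outbreak simulation in which
# agents who follow "bad advice" have a higher per-contact infection
# probability. All parameters below are illustrative assumptions.
import random

def run_outbreak(pop=10_000, bad_advice_frac=0.30, days=120,
                 base_p=0.08, bad_p=0.20, contacts=2,
                 recovery_days=5, seed=1):
    rng = random.Random(seed)
    # Which agents follow bad advice (fixed trait), and S/I/R disease state
    bad = [rng.random() < bad_advice_frac for _ in range(pop)]
    state = ["S"] * pop
    state[0] = "I"          # seed the outbreak with one infected agent
    sick_days = [0] * pop
    total_infected = 1
    for _ in range(days):
        # Agents infected at the start of the day make random contacts
        infected = [i for i in range(pop) if state[i] == "I"]
        for i in infected:
            for _ in range(contacts):
                j = rng.randrange(pop)
                if state[j] == "S":
                    p = bad_p if bad[j] else base_p
                    if rng.random() < p:
                        state[j] = "I"
                        total_infected += 1
        # Progress illness; recover after a fixed infectious period
        for i in range(pop):
            if state[i] == "I":
                sick_days[i] += 1
                if sick_days[i] >= recovery_days:
                    state[i] = "R"
    return total_infected

if __name__ == "__main__":
    baseline = run_outbreak(bad_advice_frac=0.30)
    reduced = run_outbreak(bad_advice_frac=0.27)  # ~10% less bad advice
    print(baseline, reduced)
```

With these toy parameters the epidemic sits near its spreading threshold, so reducing the share of agents who follow bad advice tends to shrink the final outbreak size, mirroring the qualitative finding reported above.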

(Reporting by Kate Kelland; Editing by Frances Kerry)

Facebook and eBay pledge to better tackle fake reviews

LONDON (Reuters) – Facebook and eBay have promised to better identify, probe and respond to fake and misleading reviews, Britain’s Competition and Markets Authority (CMA) said on Wednesday after pressing the online platforms to tackle the issue.

Customer reviews have become an integral part of online shopping on several websites and apps but the regulator has expressed concerns that some comments may not be genuine.

Facebook has removed 188 groups and disabled 24 user accounts whilst eBay has permanently banned 140 users since the summer, according to the CMA.

The CMA has also found examples via photo-posting app Instagram which owner Facebook has promised to investigate.

“Millions of people base their shopping decisions on reviews, and if these are misleading or untrue, then shoppers could end up being misled into buying something that isn’t right for them – leaving businesses who play by the rules missing out,” said CMA Chief Executive Andrea Coscelli.

The CMA said neither company was intentionally allowing such content and both had committed to tackle the problem.

“We maintain zero tolerance for fake or misleading reviews and will continue to take action against any seller that breaches our user polices,” said a spokeswoman at eBay.

Facebook said it was working to stop such fraudulent activity, including exploring the use of automated technology to help remove content before it was seen.

“While we have invested heavily to prevent this kind of activity across our services, we know there is more work to do and are working with the CMA to address this issue.”

(Reporting by Costas Pitas, Editing by Paul Sandle)

Factbox: How social media sites handle political ads

By Elizabeth Culliford

(Reuters) – Online platforms including Facebook and Alphabet Inc.’s Google face growing pressure to stop carrying political ads that contain false or misleading claims ahead of the U.S. presidential election.

In the United States, the Communications Act prevents broadcast stations from rejecting or censoring ads from candidates for federal office once they have accepted advertising for that political race, although this does not apply to cable networks like CNN, or to social media sites, where leading presidential candidates are spending millions to target voters in the run-up to the November 2020 election.

The following is how social media platforms have decided to handle false or misleading claims in political ads:

FACEBOOK

Facebook exempts politicians from its third-party fact-checking program, allowing them to run ads with false claims.

The policy has been attacked by regulators and lawmakers who say it could spread misinformation and cause voter suppression. Critics, including Democratic presidential candidate Elizabeth Warren, have also run intentionally false Facebook ads to highlight the issue.

Facebook’s chief executive Mark Zuckerberg has defended the company’s stance, arguing that it does not want to stifle political speech, but he also said the company was considering ways to refine the policy.

Facebook does fact-check content from political groups. The company also says it fact-checks politicians if they share previously debunked content and does not allow this content in ads.

TWITTER

Twitter Inc has banned political ads. On Friday it said the ban will cover ads that reference a political candidate, party, election or legislation, among other limits.

The company also said it will not allow ads that advocate for a specific outcome on political or social causes.

“We believe political message reach should be earned, not bought,” said Twitter CEO Jack Dorsey in a statement last month.

Some lawmakers praised the ban, but critics said Twitter’s decision would benefit incumbents and hurt less well-known candidates.

Officials from the Trump campaign, which is out-spending its Democratic rivals on Facebook and Google ads, called the ban “dumb” but also said it would have little effect on the president’s strategy.

The overall political ad spend for the 2018 U.S. midterm elections on Twitter was less than $3 million, Twitter’s Chief Financial Officer Ned Segal said.

“Twitter from an advertising perspective is not a player at all. Facebook and Google are the giants in political ads,” said Steve Passwaiter, vice president of the Campaign Media Analysis Group at Kantar Media.

GOOGLE

Google and its video-streaming service YouTube prohibit certain kinds of misrepresentation in ads, such as misinformation about public voting procedures or incorrect claims that a public figure has died.

However, Google does not have a wholesale ban on politicians running false or misleading ads.

In October, when former Vice President Joe Biden’s campaign asked the company to take down a Trump campaign ad that it said contained false claims, a Google spokeswoman told Reuters it did not violate the site’s policies.

YouTube has started adding links and information from Wikipedia to give users more information around sensitive content such as conspiracy theory videos, but a spokeswoman said this program does not relate to ads.

SNAP

Snap Inc allows political advertising unless the ads are misleading, deceptive or violate the terms of service on its disappearing message app Snapchat.

The company, which recently joined Facebook, Twitter and Google in launching a public database of its political ads, defines political ads as including election-related, advocacy and issue ads.

Snap does not ban “attack” ads in general, but its policy does prohibit attacks relating to a candidate’s personal life.

TIKTOK

The Chinese-owned video app popular with U.S. teenagers does not permit political advertising on the platform.

In an October blog post, TikTok said that the company wants to make sure the platform continues to feel “light-hearted and irreverent.”

“The nature of paid political ads is not something we believe fits the TikTok platform experience,” wrote Blake Chandlee, TikTok’s vice president of global business solutions.

The app, which is owned by Beijing-based tech giant ByteDance, has recently come under scrutiny from U.S. lawmakers over concerns that the company may be censoring politically sensitive content, and over questions about how it stores personal data.

REDDIT

Social network Reddit allows ads related to political issues and it allows ads from political candidates at the federal level, but not for state or local elections.

It also does not allow ads about political issues, elections or candidates outside of the United States.

The company says all political ads must abide by its policies that forbid “deceptive, untrue or misleading advertising” and that prohibit “content that depicts intolerant or overly contentious political or cultural topics or views.”

LINKEDIN

LinkedIn, which is owned by Microsoft Corp, banned political ads last year. It defines political ads as including “ads advocating for or against a particular candidate or ballot proposition or otherwise intended to influence an election outcome.”

Search engine Bing, which is also owned by Microsoft, does not allow ads with political or election-related content.

PINTEREST

Photo-sharing site Pinterest Inc also banned political campaign ads last year.

This includes advertising for political candidates, political action committees (PACs), legislation, or political issues with the intent to influence an election, according to the site’s ads policy.

“We want to create a positive, welcoming environment for our Pinners and political campaign ads are divisive by nature,” said Pinterest spokeswoman Jamie Favazza, who told Reuters the decision was also part of the company’s strategy to address misinformation.

TWITCH

A spokeswoman for Twitch told Reuters the live-streaming gaming network does not allow political advertising.

The site does not strictly ban all issue-based advertising but the company considers whether an ad could be seen as “political” when it is reviewed, the spokeswoman said.

Twitch, which is owned by Amazon.com Inc, is primarily a video gaming platform but also has channels focused on sports, music and politics. In recent months, political candidates such as U.S. President Donald Trump and Senator Bernie Sanders have joined the platform ahead of the 2020 election.

(Reporting by Elizabeth Culliford; additional reporting by Sheila Dang; Editing by Robert Birsel and Bill Berkrot)

Thailand unveils ‘anti-fake news’ center to police the internet

By Patpicha Tanakasempipat

BANGKOK (Reuters) – Thailand unveiled an “anti-fake news” center on Friday, the Southeast Asian country’s latest effort to exert government control over a sweeping range of online content.

The move came as Thailand is counting on the digital economy to drive growth amid domestic political tensions, following a March election that installed the country’s junta leader, in power since 2014, as a civilian prime minister.

Thailand has recently pressed more cybercrime charges for what it says is misinformation affecting national security. Such content is mostly opinion critical of the government, the military or the royal family.

Minister of Digital Economy and Society Puttipong Punnakanta broadly defined “fake news” as any viral online content that misleads people or damages the country’s image. He made no distinction between non-malicious false information and deliberate disinformation.

“The center is not intended to be a tool to support the government or any individual,” Puttipong said on Friday before giving reporters a tour.

The center is set up like a war room, with monitors in the middle of the room showing charts tracking the latest “fake news” and trending Twitter hashtags.

It is staffed by around 30 officers at a time, who review online content – gathered through “social listening” tools – on a sweeping range of topics, from natural disasters and the economy to health products and illicit goods.

The officers will also target news about government policies and content that broadly affects “peace and order, good morals, and national security,” according to Puttipong.

If they suspect something is false, they will flag it to relevant authorities to issue corrections through the center’s social media platforms and website and through the press.

Rights groups and media freedom advocates were concerned the government could use the center as a tool for censorship and propaganda.

“In the Thai context, the term ‘fake news’ is being weaponized to censor dissidents and restrict our online freedom,” said Emilie Pradichit, director of the Thailand-based Manushya Foundation, which advocates for online rights.

Pradichit said the move could be used to codify censorship, adding the center would allow the government to be the “sole arbiter of truth”.

Transparency reports from internet companies such as Facebook and Google show Thai government requests to take down content or turn over information have ramped up since the military seized power in 2014.

A law prohibiting criticism of the monarchy has often been the basis for such requests to Facebook. In Google’s case, government criticism was the main reason cited for removal of content.

(Reporting by Patpicha Tanakasempipat; Editing by Kay Johnson and Frances Kerry)

Instagram adds tool for users to flag false information

SAN FRANCISCO (Reuters) – Instagram is adding an option for users to report posts they think are false, the company announced on Thursday, as the Facebook-owned photo-sharing site tries to stem misinformation and other abuses on its platform.

Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.

Facebook started using image-detection on Instagram in May to find content debunked on its flagship app and also expanded its third-party fact-checking program to the app.

Results rated as false are removed from places where users seek out new content, like Instagram’s Explore tab and hashtag search results.

Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is only being rolled out in the United States.

“This is an initial step as we work toward a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook company spokeswoman.

Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over alleged Russian attempts to spread misinformation around the 2016 U.S. presidential election.

But an independent report commissioned by the Senate Select Committee on Intelligence found that it was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.

Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, wrote researchers at New Knowledge, which conducted the analysis.

“Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.

It has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.

Last month, UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on the company to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.

(Reporting by Elizabeth Culliford and Katie Paul; Editing by Cynthia Osterman)

Let’s take the fake out of our news!

The Fake News Highway - Image by John Iglar

By Kami Klein

There was a time when the news wasn’t so confusing. Before the internet, most families had their morning newspaper delivered conveniently to their door. To keep readers and stay competitive, newspapers battled over the facts and dug deeper to reach the truth through investigative journalism. Stories were presented not with opinions but with legitimate proof. Of course, just as internet news does today, a powerful headline didn’t hurt.

Once the workday was winding down, the evening news was delivered by well-respected television journalists such as Walter Cronkite and Tom Brokaw, who presented the unbiased facts, trusting in their viewers’ ability to ponder and come to their own conclusions. The news itself was taken quite seriously. The worst thing that could happen to a reporter was to be accused, let alone proven guilty, of dishonorable reporting. The goal was to be respected in the journalism field, not to rack up Facebook followers or tweet responses in a day, or to stay true to one’s personal beliefs. Becoming a journalist was a calling… not a path to fame.

Suddenly we have the internet highway, where everyone can have an opinion. Competition requires every outlet to be the fastest news source, which leaves little time for investigation or vetting, and stories present only a portion of the facts, in many cases served to the public with a generous amount of opinion gravy poured on top. Conservative or liberal, it is rare to find an unbiased news source. Add to this the hot topic of “fake news,” and it is a wonder any of us really knows what is going on.

Every day, in social media across the world, fake news is often more prevalent in our feeds than stories that are actually true, or at least close to it. These articles spread on the misconception that if something is on the internet, it must be true, or that because a story lines up with the reader’s personal beliefs, it must be correct. The share button gets a hit, and the lie continues on its journey. Where we once could hold the reporter or journalist accountable for their information, the responsibility is now ours. In an age where anyone can post a news story, how do we take the fake out of our news?

There are several kinds of fake news on the internet. The following information comes from a story by MastersinCommunication.org called “The Truth About Fake News”. It is important that we be able to identify and beware of the following:

 

  • Propaganda – News stories designed to disparage a candidate, promote a political cause, and mislead voters
  • Sloppy Journalism – Stories containing inaccurate information produced by writers and editors who have not properly vetted a story. Retractions, even when issued, do little to fix the problem, since the story has already spread and the damage is done.
  • Sensationalized Headlines – Often a story may be accurate but comes with a misleading or outrageous headline. Readers may not read past it, but take everything they need to know from this skewed title.
  • Clickbait – These stories are deliberately crafted to drive traffic to a website. Advertising dollars are at stake, and gullible readers fall for it by the millions.
  • Satire – Parody websites like The Onion and The Daily Mash produce satirical stories that are believed by uninformed readers. The stories are written as satire and not meant to be taken literally, but unless readers check the website, they may never know.
  • Average Joe Reporting – Sometimes a person will post an eyewitness report that goes viral, but it may or may not be true. The classic example is a 2015 tweet by Eric Tucker in Austin, Texas. Posting a picture of a row of charter buses, Tucker surmised and tweeted that Trump protesters were being bused in to rally against the President-elect. The tweet was picked up by multiple media outlets, and by Mr. Trump himself, going viral in a matter of hours. The only problem is, it wasn’t true.

 The 2020 elections are upon us, and fake news will be used as a weapon. False news can destroy lives and ultimately do great harm to our country.

How do we beat these fakes and stop them? Here are some tools available to anyone who does not want to be duped by those attempting to manipulate for power, to sow discord, or for money. If we can all take responsibility for what we share, we are one step closer to legitimate news.

HERE ARE QUICK TIPS FOR CHECKING LEGITIMACY OF A NEWS STORY

 

  1. Pay attention to the domain and URL – fake sites often choose an address very close to that of a trusted news source. Endings like .com.co should raise your eyebrows and tip you off that you need to dig around more to see whether the site can be trusted. This is true even when the site looks professional and has semi-recognizable logos. For example, abcnews.com is a legitimate news source, but abcnews.com.co is not, despite its similar appearance.
  2. Read the “About Us” section – most sites offer plenty of information about the news outlet, the company that runs it, its leadership, and its mission and ethics statement. The language used here should be straightforward; if it is melodramatic and seems overblown, be skeptical. This is also where satire sites disclose that what you are reading is meant only as entertainment. The laugh is on us when we take what they say as the truth; they are counting on you NOT to check.
  3. Headlines can be misleading – headlines are meant to get the reader’s attention, but they are also supposed to accurately reflect what the story is about. In fake stories, headlines are often written in exaggerated language with the intention of misleading, then attached to stories that tell only half the truth or that contradict the headline outright.
  4. Fact-checking can be your friend – not only is fact-checking smart, it is just as important to see whether a particular news story leans toward conservative or liberal points of view. Mediabiasfactcheck.com is one of my go-to places. It also provides a highly recommended list of fact-checking sites, along with a useful list of news websites that have been deemed non-biased.

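Tip 1 above — scrutinizing the domain — is simple enough to automate. Below is a minimal, hypothetical sketch of that check; the trusted-domain set and suspect suffixes are illustrative placeholders, not a real or maintained blocklist.

```python
# Sketch of tip 1: flag URLs whose domains imitate known outlets by tacking
# an extra suffix onto a trusted name (e.g. abcnews.com.co). The TRUSTED
# set and SUSPECT_SUFFIXES tuple are illustrative examples only.
from urllib.parse import urlparse

TRUSTED = {"abcnews.com", "reuters.com", "bbc.co.uk"}
SUSPECT_SUFFIXES = (".com.co", ".com.de")

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    if host in TRUSTED:
        return False          # exact match with a known outlet
    if host.endswith(SUSPECT_SUFFIXES):
        return True           # known copycat suffix pattern
    # A trusted name with extra characters bolted on is also a red flag
    return any(host.startswith(t) and host != t for t in TRUSTED)

print(looks_suspicious("https://abcnews.com.co/story"))  # True
print(looks_suspicious("https://abcnews.com/story"))     # False
```

A real checker would of course need a maintained list of trusted outlets and copycat patterns; the point here is only that the "look closely at the domain" advice is mechanical enough to script.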
  While Facebook and Twitter are being held accountable for much of what is on our social media today, they will only succeed with our help. Together, we can take the fake out of the news and make responsible choices for our future!  

 

Exclusive: Echo chambers – Fake news fact-checks hobbled by low reach, study shows

FILE PHOTO: A general view of Facebook's elections operation centre in Dublin, Ireland May 2, 2019. REUTERS/Lorraine O'Sullivan/File Photo

By Alissa de Carbonnel

BRUSSELS (Reuters) – The European Union has called on Facebook and other platforms to invest more in fact-checking, but a new study shows those efforts may rarely reach the communities worst affected by fake news.

The analysis by big-data firm Alto Data Analytics over a three-month period ahead of this year’s EU elections casts doubt on the effectiveness of fact-checking even though demand for it is growing.

Facebook has been under fire since Russia used it to influence the election that brought Donald Trump to power. The company quadrupled the number of fact-checking groups it works with worldwide over the last year and its subsidiary WhatsApp launched its first fact-checking service.

The EU, which has expanded its own fact-checking team, urged online platforms to take greater action or risk regulation.

Fact-checkers are often journalists who set up non-profits or work at mainstream media outlets to scour the web for viral falsehoods. Their rebuttals in the form of articles, blog posts and Tweets seek to explain how statements fail to hold up to scrutiny, images are doctored or videos are taken out of context.

But there is little independent research on their success in debunking fake news or preventing people from sharing it.

“The biggest problem is that we have very little data … on the efficacy of various fact-checking initiatives,” said Nahema Marchal, a researcher at the Oxford Internet Institute.

“We know from a research perspective that fact-checking isn’t always as efficient as we might think,” she said.

Alto looked at more than two dozen fact-checking groups in five EU nations and found they had a minimal online presence – making up between 0.1% and 0.3% of the total number of retweets, replies, and mentions analyzed on Twitter from December to March.

The Alto study points to a problem fact-checkers have long suspected: they are often preaching to the choir.

It found that online communities most likely to be exposed to junk news in Germany, France, Spain, Italy and Poland had little overlap with those sharing fact-checks.

PATCHWORK

The European Parliament election yielded a patchwork of results. The far-right made gains but so did liberal and green parties, leaving pro-European groups in control of the assembly.

The EU found no large-scale, cross-border attempts to sway voters but warned of hard-to-detect home-grown operations.

Alto analyzed abnormal, hyperactive users making dozens of posts per day to deduce which political communities were most tainted by suspect posts in each country.

Less than 1% of users – mostly sympathetic to populist and far-right parties – generated around 10% of the total posts related to politics.
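As a rough illustration of this kind of concentration analysis (not Alto’s actual methodology), one can compute the share of all posts produced by the most active fraction of users from per-user post counts. The data below is synthetic, constructed to echo the 1%-of-users, 10%-of-posts figure reported above.

```python
# Illustrative sketch (NOT Alto's methodology): measure what share of all
# posts the most hyperactive fraction of users produces. Data is synthetic.

def share_from_top(posts_per_user: dict, top_frac: float) -> float:
    """Fraction of total posts produced by the top `top_frac` of users."""
    counts = sorted(posts_per_user.values(), reverse=True)
    k = max(1, int(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

# 990 ordinary users posting once, 10 hyperactive users posting 11 times
posts = {f"user{i}": 1 for i in range(990)}
posts.update({f"power{i}": 11 for i in range(10)})

print(round(share_from_top(posts, 0.01), 2))  # → 0.1
```

Here the top 1% of users account for 10% of all posts, matching the pattern Alto reported; real analyses would additionally filter by posting cadence (dozens of posts per day) and political alignment.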

They flooded networks with anti-immigration, anti-Islam and anti-establishment messages, Alto found in results that echoed separate studies by campaign group Avaaz and the Oxford Internet Institute on the run-up to the European election.

Fact-checkers, seeking to counter these messages, had little penetration in those same communities.

In Poland – where junk news made up 21% of traffic compared to an average of 4% circulating on Twitter in seven major European languages over one month before the vote, according to the Oxford study – content issued by fact-checkers was mainly shared among those opposed to the ruling Law and Justice party.

The most successful posts by six Polish fact-checkers scrutinized campaign finance, the murder of a prominent opposition politician and child abuse by the Catholic church.

Italy, where an anti-establishment government has been in power for a year, and Spain, where far-right newcomer Vox is challenging center parties, also saw content from fact-checkers unevenly spread across political communities.

More than half of the retweets, mentions or replies to posts shared by seven Italian fact-checking groups – mostly related to immigration – came from users sympathetic to the center-left Democratic Party (PD).

Only two of the seven groups had any relatively sizeable footprint among supporters of Deputy Prime Minister Matteo Salvini’s far-right League party, which surged to become the third-biggest in the new EU legislature.

Italian fact-checker Open.Online, for example, had 4,594 retweets, mentions or replies among PD sympathizers compared to 387 among League ones.

French fact-checking groups, who are mostly embedded in mainstream media, fared better. Their content, which largely sought to debunk falsehoods about President Emmanuel Macron, was the most evenly distributed across different online communities.

In Germany, only 2.2% of Twitter users mapped in the study retweeted, replied or mentioned the content distributed by six fact-checking groups.

Alto’s research faces constraints. The focus on publicly available Twitter data may not accurately reflect the whole online conversation across various platforms, the period of study stops short of the May elections, and there are areas of dispute over what constitutes disinformation.

It also lacks data from Facebook, which is not alone among internet platforms but whose dominance puts it in the spotlight.

FREE SPEECH

Facebook says once a post is flagged by fact-checkers, it is downgraded in users’ news feeds to limit its reach, and users who try to share it receive a warning. Repeat offenders will see distribution of their entire page restricted, resulting in a loss of advertising revenue.

“It should be seen less, shared less,” Richard Allen, Facebook’s vice president for global policy, told reporters visiting a “war room” in Dublin set up to safeguard the EU vote.

Facebook cites free speech concerns over deleting content. It will remove posts seeking to suppress voter turnout by advertising the wrong date for an election, for example, but says in many other cases it is difficult to differentiate between blatantly false information and partisan comment. 

“We don’t feel we should be removing contested claims even when we believe they may be false,” Allen said. “There are a lot of concepts being tested because we don’t know what is going to work.”

The rapid spread of fake news on social media has raised the profile of fact-checking groups, and it is forcing them to rethink how they work.

If they once focused on holding politicians to account, fact-checkers are now seeking to influence a wider audience.

Clara Jiménez, co-founder of Maldita.es, a Spanish fact-checking group partnered with Facebook, mimics the methods used by those spreading false news. That means going viral with memes and videos.

Maldita.es focuses largely on WhatsApp and asks people to send fact-checks back to those in their networks who first spread the fake news.

“You need to try to reach real people,” said Jiménez, who also aims to promote better media literacy. “One of the things we have been asked several times is whether people can get pregnant from a mosquito bite. If people believe that, we have a bigger issue.”

(Additional reporting by Thomas Escritt in Berlin and Conor Humphries in Dublin; Writing by Alissa de Carbonnel; Editing by Giles Elgood)

Factbox: ‘Fake News’ laws around the world

Commuters walk past an advertisement discouraging the dissemination of fake news at a train station in Kuala Lumpur, Malaysia March 28, 2018. REUTERS/Stringer

SINGAPORE (Reuters) – Singapore’s parliament on Monday began considering a law on “fake news” that an internet watchdog has called the world’s most far-reaching, stoking fears the government could use additional powers to choke freedom of speech and chill dissent.

Governments and companies worldwide are increasingly worried about the spread of false information online and its impact on everything from share prices to elections and social unrest.

Human rights activists fear laws to curb so-called “fake news” could be abused to silence opposition.

Here are details of such laws around the world:

SINGAPORE

Singapore’s new law would require social media sites like Facebook to carry warnings on posts the government deems false and remove comments against the “public interest”.

Singapore, which ranks 151 among 180 countries rated by the World Press Freedom Index, defines “public interests” as threats to its security, foreign relations, electoral integrity and public perception of the government and state institutions.

Violations could attract fines of up to S$1 million ($737,500) and 10 years in prison.

RUSSIA

Last month, President Vladimir Putin signed into law tough new fines for Russians who spread what the authorities regard as fake news or who show “blatant disrespect” for the state online.

Critics have warned the law could aid state censorship, but lawmakers say it is needed to combat false news and abusive online comment.

Authorities may block websites that do not comply with requests to remove inaccurate information. Individuals can be fined up to 400,000 roubles ($6,109) for circulating false information online that leads to a “mass violation of public order”.

FRANCE

France passed two anti-fake news laws last year to rein in false information during election campaigns, following allegations of Russian meddling in the 2017 presidential vote.

President Emmanuel Macron vowed to overhaul media laws to fight “fake news” on social media, despite criticism that the move was a risk to civil liberties.

GERMANY

Germany passed a law last year for social media companies, such as Facebook and Twitter, to quickly remove hate speech.

Called NetzDG for short, the law is the most ambitious effort by a Western democracy to control what appears on social media. It enforces Germany’s tough curbs on hate speech online, including bans on pro-Nazi ideology, by giving sites a 24-hour deadline to remove banned content or face fines of up to 50 million euros.

Since it was adopted, however, German officials have said too much online content was being blocked, and are weighing changes.

MALAYSIA

Malaysia’s ousted former government was among the first to adopt a law against fake news; critics say the measure was used to curb free speech ahead of last year’s general election, which the government lost. It was seen as a tool to fend off criticism over graft and mismanagement of funds by then prime minister Najib Razak, who now faces charges linked to a multibillion-dollar scandal at state fund 1Malaysia Development Berhad.

The new government’s bid to deliver on an election promise to repeal the law was blocked by the opposition-led Senate, however.

EUROPEAN UNION

The European Union and authorities worldwide will have to regulate big technology and social media companies to protect citizens, European Commission deputy head Frans Timmermans said last month.

EU heads of state will urge governments to share information on threats via a new warning system, launched by the bloc’s executive. They will also call for online platforms to do more to remove misleading or illegal content.

Union-level efforts have been limited by different election rules in each member nation and qualms over how vigorously regulators can tackle misleading content online.

(Reporting by Fathin Ungku; Editing by Clarence Fernandez; and Joe Brock)

NewsGuard’s ‘real news’ seal of approval helps spark change in fake news era

Facebook CEO Mark Zuckerberg is surrounded by members of the media as he arrives to testify before a Senate Judiciary and Commerce Committees joint hearing regarding the company’s use and protection of user data, on Capitol Hill in Washington, U.S., April 10, 2018. REUTERS/Leah Millis TPX IMAGES OF THE DAY

By Kenneth Li

NEW YORK (Reuters) – More than 500 news websites have made changes to their standards or disclosures after getting feedback from NewsGuard, a startup that created a credibility ratings system for news on the internet, the company told Reuters this week.

The latest major news organization to work with the company is Britain’s Daily Mail, according to NewsGuard, which upgraded what it calls its “nutrition label” rating on the paper’s site to “green” on Thursday, indicating it “generally maintains basic standards of accuracy and accountability.”

A representative of the Daily Mail did not respond to several requests for comment.

NewsGuard markets itself as an independent arbiter of credible news. It was launched last year by co-chief executives Steven Brill, a veteran U.S. journalist who founded Brill’s Content and the American Lawyer, and Gordon Crovitz, a former publisher of News Corp’s Wall Street Journal.

NewsGuard joins a handful of other groups such as the Trust Project and the Journalism Trust Initiative which aim to help readers discern which sites are credible when many readers have trouble distinguishing fact from fiction.

After facing anger over the rapid spread of false news in the past year or so, Facebook Inc and other tech companies also say they have recruited more human fact checkers to identify and sift out some types of inaccurate articles.

These efforts were prompted at least in part by the 2016 U.S. presidential election when Facebook and other social media sites were used to disseminate many false news stories.

NewsGuard has been criticized by Breitbart News, a politically conservative site, which described it as “the establishment media’s latest effort to blacklist alternative media sites.”

NewsGuard works like this: once a user downloads its software from the web, red or green shield-shaped labels appear in the browser window when the user visits a news website. The software is free and works with the four leading browsers: Google’s Chrome, Microsoft Corp’s Edge, Mozilla’s Firefox and Apple Inc’s Safari.

‘CALL EVERYONE FOR COMMENT’

NewsGuard’s investors include the French advertising company Publicis Groupe SA and the non-profit Knight Foundation. Thomas Glocer, the former chief executive of Thomson Reuters, owns a smaller stake, according to NewsGuard’s website. News sites do not pay the company for its service.

The startup said it employs 35 journalists who have reviewed and published labels on about 2,200 sites based on nine journalistic criteria such as whether the site presents information responsibly, has a habit of correcting errors or discloses its ownership and who is in charge of the content.

News sites can, if they choose, field questions from NewsGuard journalists about their performance on the nine criteria.

“We call everyone for comment, which algorithms don’t do,” Brill said in an interview, contrasting NewsGuard’s verification process with the computer code used by Alphabet Inc’s Google and Facebook to bring news stories to the attention of users.

Some news organizations have clarified their ownership, financial backers and identity of their editorial staff after interacting with the company, NewsGuard said.

GateHouse Media, which publishes more than 140 local newspapers such as the Austin American-Statesman and Akron Beacon Journal, made changes to how it identifies sponsored content that may appear to be objective reporting but is actually advertising, after being contacted by NewsGuard.  

“We made our standards and practices more prominent and consistent across our 460 digital news brands across the country,” said Jeff Moriarty, GateHouse’s senior vice president of digital.

Reuters News, which earned a green rating on all nine of NewsGuard’s criteria, added the names and titles of its editorial leaders to the Reuters.com website after being contacted by NewsGuard, a Reuters spokesperson said.

NewsGuard upgraded the Daily Mail’s website rating on Thursday to green after giving it a red label in August, when it stated that the site “repeatedly publishes false information and has been forced to pay damages in numerous high-profile cases.”

The Daily Mail objected to that description, and started discussions with NewsGuard in January after the red label became visible for mobile users of Microsoft’s Edge browser, NewsGuard said.

NewsGuard has made public many details of its exchange with the Daily Mail on its website.

“We’re not in the business of trying to give people red marks,” Brill said. “The most common side effect of what we do is for news organizations to improve their journalistic practices.”

(Reporting by Kenneth Li; editing by Bill Rigby)