UK polling shows a quarter of young Brits open to editing out the ‘offensive’ parts of the Bible


Important Takeaways:

  • Young Brits Open to Banning the Bible ‘Unless the Offending Parts Can Be Edited Out’
  • Close to a quarter of young British people said recently they would be open to banning the Bible if they believed its pages contained “hate speech.”
  • Whitestone Insights, a polling group, surveyed 2,088 adults in the United Kingdom, asking them if they agreed with the following statement:
    • Unless the offending parts can be edited out, books containing what some perceive as hate speech should be banned from general sale, including if necessary religious texts such as the Bible.
  • Respondents between the ages of 18 and 34 were the most likely to agree with the statement, at 23%, followed by 17% of those ages 35 to 54 and 13% of those over the age of 55, according to Christian Today.
  • Lois McLatchie of the Alliance Defending Freedom UK voiced her concerns over the new survey and pointed to the prosecution of Finnish politician Päivi Räsänen as a cautionary tale.
  • “We may no longer be a majority Christian population here in Britain,” said McLatchie. “That’s even more reason to protect freedom of speech and belief for all.”
  • In addition to Räsänen’s case, there have been instances of unabashed censorship in the U.K. — including a woman who has been arrested twice for praying silently outside an abortion clinic.
  • “Censoring one type of belief because it fails to fit with the dominant orthodoxy of our day is no better than imposing the illiberal blasphemy laws of the Middle Ages,” McLatchie said. “We need a robust defense of religious freedom from those who craft our legislation and we need to educate the ‘be kind’ generation on the truly hateful consequences of censorship before this type of thinking creeps further into reality.”


Biblical views now on trial as Finnish politician fights for free speech

John 16:2 “Indeed, the hour is coming when whoever kills you will think he is offering service to God”

Important Takeaways:

  • Finnish politician begins trial for tweet while demonstrators protest for the right to free speech
  • Prosecution alleges that right to religious freedom exists only within certain boundaries; defense responds that open dialogue is key to a democracy
  • The former Finnish Minister of the Interior has pleaded “not guilty” to the three criminal charges she faces after sharing her deeply held beliefs
  • As the defendants arrived this morning, crowds gathered outside the courthouse to show their support for the politician and her Bishop, who are both accused of “hate speech” for expressing their faith-based views.
  • Police investigations against Räsänen started in June 2019. As an active member of the Finnish Lutheran church, she had addressed the leadership of her church on Twitter and questioned its official sponsorship of the LGBT event ‘Pride 2019,’ accompanied by an image of Bible verses from the New Testament book of Romans. Following this tweet, further investigations against Räsänen were launched, going back to a church pamphlet Räsänen wrote almost 20 years ago. In the last two years, Räsänen has attended several lengthy police interrogations about her Christian beliefs – including being frequently asked by the police to explain her understanding of the Bible.


Facebook removes seven million posts for sharing false information on coronavirus

(Reuters) – Facebook Inc. said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.

Facebook released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms.

The company said it would invite external experts to independently audit the metrics used in the report, beginning in 2021.

The world’s biggest social media company removed about 22.5 million posts containing hate speech on its flagship app in the second quarter, up from 9.6 million in the first quarter. It also deleted 8.7 million posts connected to extremist organizations, compared with 6.3 million in the prior period.

Facebook said it relied more heavily on automation technology for reviewing content during the months of April, May and June as it had fewer reviewers at its offices due to the COVID-19 pandemic.

That resulted in the company taking action on fewer pieces of content related to suicide and self-injury, child nudity and sexual exploitation on its platforms, Facebook said in a blog post.

The company said it was expanding its hate speech policy to include “content depicting blackface, or stereotypes about Jewish people controlling the world.”

Some U.S. politicians and public figures have caused controversies by donning blackface, a practice that dates back to 19th century minstrel shows that caricatured slaves. It has long been used to demean African-Americans.

(Reporting by Katie Paul in San Francisco and Munsif Vengattil in Bengaluru; Additional Reporting by Bart Meijer; Editing by Shinjini Ganguli and Anil D’Silva)

Facebook names first members of oversight board that can overrule Zuckerberg

By Elizabeth Culliford

(Reuters) – Facebook Inc’s new content oversight board will include a former prime minister, a Nobel Peace Prize laureate and several constitutional law experts and rights advocates among its first 20 members, the company announced on Wednesday.

The independent board, which some have dubbed Facebook’s “Supreme Court,” will be able to overturn decisions by the company and Chief Executive Mark Zuckerberg on whether individual pieces of content should be allowed on Facebook and Instagram.

Facebook has long faced criticism for high-profile content moderation issues. They range from temporarily removing a famous Vietnam-era war photo of a naked girl fleeing a napalm attack, to failing to combat hate speech in Myanmar against the Rohingya and other Muslims.

The oversight board will focus on a small slice of challenging content issues, including hate speech, harassment and people’s safety.

Facebook said the board’s members have lived in 27 countries and speak at least 29 languages, though a quarter of the group and two of the four co-chairs are from the United States, where the company is headquartered.

The co-chairs, who selected the other members jointly with Facebook, are former U.S. federal circuit judge and religious freedom expert Michael McConnell, constitutional law expert Jamal Greene, Colombian attorney Catalina Botero-Marino and former Danish Prime Minister Helle Thorning-Schmidt.

Among the initial cohort are: former European Court of Human Rights judge András Sajó, Internet Sans Frontières Executive Director Julie Owono, Yemeni activist and Nobel Peace Prize laureate Tawakkol Karman, former editor-in-chief of the Guardian Alan Rusbridger, and Pakistani digital rights advocate Nighat Dad.

Nick Clegg, Facebook’s head of global affairs, told Reuters in a Skype interview the board’s composition was important but that its credibility would be earned over time.

“I don’t expect people to say, ‘Oh hallelujah, these are great people, this is going to be a great success’ – there’s no reason anyone should believe that this is going to be a great success until it really starts hearing difficult cases in the months and indeed years to come,” he said.

The board will start work immediately and Clegg said it would begin hearing cases this summer.

The board, which will grow to about 40 members and which Facebook has pledged $130 million to fund for at least six years, will make public, binding decisions on controversial cases where users have exhausted Facebook’s usual appeals process.

The company can also refer significant decisions to the board, including on ads or on Facebook groups. The board can make policy recommendations to Facebook based on case decisions, to which the company will publicly respond.

Initially, the board will focus on cases where content was removed and Facebook expects it to take on only “dozens” of cases to start, a small percentage of the thousands it expects will be brought to the board.

“We are not the internet police, don’t think of us as sort of a fast-action group that’s going to swoop in and deal with rapidly moving problems,” co-chair McConnell said on a conference call.

The board’s case decisions must be made and implemented within 90 days, though Facebook can ask for a 30-day review for exceptional cases.

“We’re not working for Facebook, we’re trying to pressure Facebook to improve its policies and its processes to better respect human rights. That’s the job,” board member and internet governance researcher Nicolas Suzor told Reuters. “I’m not so naive that I think that that’s going to be a very easy job.”

He said board members had differing views on freedom of expression and when it can legitimately be curtailed.

John Samples, vice president of the libertarian Cato Institute, has praised Facebook’s decision not to remove a doctored video of U.S. House Speaker Nancy Pelosi. Sajó has cautioned against allowing the “offended” to have too much influence in the debate around online expression.

Some free speech and internet governance experts told Reuters they thought the board’s first members were a diverse, impressive group, though some were concerned it was too heavy on U.S. members. Facebook said one reason for that was that some of its hardest decisions or appeals in recent years had begun in America.

“I don’t feel like they made any daring choices,” said Jillian C. York, the Electronic Frontier Foundation’s director of international freedom of expression.

Jes Kaliebe Petersen, CEO of Myanmar tech-focused civil society organization Phandeeyar, said he hoped the board would apply more “depth” to moderation issues, compared with Facebook’s universal set of community standards.

David Kaye, U.N. special rapporteur on freedom of opinion and expression, said the board’s efficacy would be shown when it started hearing cases.

“The big question,” he said, “will be, are they taking questions that might result in decisions, or judgments as this is a court, that go against Facebook’s business interests?”

(Reporting by Elizabeth Culliford in Birmingham, England; Editing by Tom Brown and Matthew Lewis)

YouTube to remove hateful, supremacist content

FILE PHOTO: Silhouettes of mobile device users are seen next to a screen projection of Youtube logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo

By Paresh Dave

SAN FRANCISCO (Reuters) – YouTube said on Wednesday it would remove videos that deny the Holocaust and other “well-documented violent events,” a major reversal in policy as it fights criticism that it provides a platform to hate speech and harassment.

The streaming service, owned by Alphabet Inc’s Google, also said it would remove videos that glorify Nazi ideology or that promote groups that claim superiority to others to justify several forms of discrimination.

In addition, video creators who repeatedly brush up against YouTube’s hate speech policies, even without violating them, will now have their accounts shut down, a spokesman said.

In a blog post, YouTube acknowledged the new policies could hurt researchers who seek out these videos “to understand hate in order to combat it.” The policies also could frustrate free speech advocates who say hate speech should not be censored.

Jonathan Greenblatt, chief executive of the Anti-Defamation League, which researches anti-Semitism, said it had provided input to YouTube on the policy change.

“While this is an important step forward, this move alone is insufficient and must be followed by many more changes from YouTube and other tech companies to adequately counter the scourge of online hate and extremism,” Greenblatt said in a statement.

(Reporting by Paresh Dave; Additional reporting by Sayanti Chakraborty in Bengaluru; Editing by James Emmanuel and Bernadette Baum)

Facebook, Apple remove most of U.S. conspiracy theorist’s content

FILE PHOTO: Alex Jones from Infowars.com speaks during a rally in support of Republican presidential candidate Donald Trump near the Republican National Convention in Cleveland, Ohio, U.S., July 18, 2016. REUTERS/Lucas Jackson/File Photo

By Rich McKay

ATLANTA (Reuters) – Facebook Inc announced on Monday that it had removed four pages belonging to U.S. conspiracy theorist Alex Jones for “repeatedly posting content over the past several days” that breaks its community standards.

The company said it removed the pages “for glorifying violence, which violates our graphic violence policy and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies.”

“Facebook bans Infowars. Permanently. Infowars was widely credited with playing a key role in getting Trump elected. This is a co-ordinated move ahead of the mid-terms to help Democrats. This is political censorship. This is culture war,” Infowars editor-at-large Paul Joseph Watson tweeted https://twitter.com/PrisonPlanet/status/1026433061469257733.

Neither Jones nor a representative for Infowars was available for comment.

Since founding Infowars in 1999, Jones has built a vast audience. Among the theories he has promoted is that the Sept. 11, 2001, attacks on New York and Washington were staged by the government.

Facebook had earlier suspended the radio and Internet host’s personal profile from its site for 30 days in late July for what the company said was bullying and hate speech.

Most of Jones’s podcasts from his right-wing media platform Infowars have been removed from Apple Inc’s iTunes and podcast apps, the media news website BuzzFeed quoted a company spokesman as saying on Sunday.

Apple told BuzzFeed that it had removed the entire library for five of Jones’s six Infowars podcasts including the shows “War Room” and the daily “The Alex Jones Show.”

Only one program provided by Infowars, “RealNews with David Knight,” remained on Apple’s platforms on Sunday, according to news media accounts.

The moves by Apple and Facebook are the most sweeping of a recent crackdown on Jones’s programs by online sites that have suspended or removed some of his conspiracy-driven content. An Apple spokeswoman said in a statement that the company “does not tolerate hate speech” and publishes guidelines that developers and publishers must follow.

“Podcasts that violate these guidelines are removed from our directory making them no longer searchable or available for download or streaming,” Apple said in a statement. “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Also, Spotify, a music and podcast streaming company, said on Monday that it had now removed all of Jones’s Infowars programs from its platform. Last week it had removed only some specific programs.

“We take reports of hate content seriously and review any podcast episode or song that is flagged by our community,” a representative said Monday.

“Due to repeated violations of Spotify’s prohibited content policies, The Alex Jones Show has lost access to the Spotify platform,” the representative said.

Jones has also promoted a theory that the 2012 Sandy Hook school massacre was faked by left-wing forces to promote gun control. The shooting left 26 children and adults dead at a Connecticut elementary school.

He is being sued in Texas by two Sandy Hook parents seeking at least $1 million, who claim they have been the subject of harassment driven by his programs.

(Reporting by Rich McKay; Additional reporting by Ishita Chigilli Palli and Arjun Panchadar in Bengaluru and Stephen Nellis in San Francisco; Editing by Emelia Sithole-Matarise, Mark Potter, Susan Thomas, Bernard Orr and Jonathan Oatis)

Facebook says posts with graphic violence rose in early 2018

FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Facebook logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo

By David Ingram

MENLO PARK, Calif. (Reuters) – The number of posts on Facebook showing graphic violence rose in the first three months of the year from a quarter earlier, possibly driven by the war in Syria, the social network said on Tuesday, in its first public release of such data.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 late last year.

The company removed or put a warning screen for graphic violence in front of 3.4 million pieces of content in the first quarter, nearly triple the 1.2 million a quarter earlier, according to the report.

Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook’s vice president of data analytics.

“Whenever a war starts, there’s a big spike in graphic violence,” Schultz told reporters at Facebook’s headquarters.

Syria’s civil war erupted in 2011. It continued this year with fighting between rebels and Syrian President Bashar al-Assad’s army. This month, Israel attacked Iran’s military infrastructure in Syria.

Facebook, the world’s largest social media firm, has never previously released detailed data about the kinds of posts it takes down for violating its rules.

Facebook only recently developed the metrics as a way to measure its progress, and would probably change them over time, said Guy Rosen, its vice president of product management.

“These kinds of metrics can help our teams understand what’s actually happening to 2-plus billion people,” he said.

The company has a policy of removing content that glorifies the suffering of others. In general it leaves up graphic violence with a warning screen if it was posted for another purpose.

Facebook also prohibits hate speech and said it took action against 2.5 million pieces of content in the first quarter, up 56 percent from a quarter earlier. It said the rise was due to improvements in detection.

The company said in the first quarter it took action on 837 million pieces of content for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism. It said it disabled 583 million fake accounts.

(Reporting by David Ingram; Editing by Clarence Fernandez)

CEO Zuckerberg says Facebook could have done more to prevent misuse

FILE PHOTO: Facebook CEO Mark Zuckerberg speaks on stage during the Facebook F8 conference in San Francisco, California, U.S., April 12, 2016. REUTERS/Stephen Lam/File Photo

By Dustin Volz and David Shepardson

WASHINGTON (Reuters) – Facebook Inc Chief Executive Mark Zuckerberg told Congress on Monday that the social media network should have done more to prevent itself and its members’ data from being misused, and he offered a broad apology to lawmakers.

His conciliatory tone precedes two days of Congressional hearings where Zuckerberg is set to answer questions about Facebook user data being improperly appropriated by a political consultancy and about the role the network played in the 2016 U.S. election.

“We didn’t take a broad enough view of our responsibility, and that was a big mistake,” he said in remarks released by the U.S. House Energy and Commerce Committee on Monday. “It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”

Zuckerberg, surrounded by tight security and wearing a dark suit and a purple tie rather than his trademark hoodie, was meeting with lawmakers on Capitol Hill on Monday ahead of his scheduled appearance before two Congressional committees on Tuesday and Wednesday.

Zuckerberg did not respond to questions as he entered and left a meeting with Senator Bill Nelson, the top Democrat on the Senate Commerce Committee. He is expected to meet Senator John Thune, the Commerce Committee’s Republican chairman, later in the day, among others.

Top of the agenda in the forthcoming hearings will be Facebook’s admission that the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica.

But lawmakers are also expected to press him on a range of issues, including the 2016 election.

“It’s clear now that we didn’t do enough to prevent these tools from being used for harm…” his testimony continued. “That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.”

Facebook, which has 2.1 billion monthly active users worldwide, said on Sunday that, starting Monday, it would begin telling users whose data may have been shared with Cambridge Analytica. The company’s data practices are under investigation by the U.S. Federal Trade Commission.

London-based Cambridge Analytica, which counts U.S. President Donald Trump’s 2016 campaign among its past clients, has disputed Facebook’s estimate of the number of affected users.

Zuckerberg also said that Facebook’s major investments in security “will significantly impact our profitability going forward.” Facebook shares were up 2 percent in midday trading.

ONLINE INFORMATION WARFARE

Facebook has about 15,000 people working on security and content review, rising to more than 20,000 by the end of 2018, Zuckerberg’s testimony said. “Protecting our community is more important than maximizing our profits,” he said.

As with other Silicon Valley companies, Facebook has been resistant to new laws governing its business. But on Friday it backed proposed legislation requiring social media sites to disclose the identities of buyers of online political campaign ads, and it introduced a new verification process for people buying “issue” ads, which do not endorse any candidate but have been used to exploit divisive subjects such as gun laws or police shootings.

The steps are designed to deter online information warfare and election meddling that U.S. authorities have accused Russia of pursuing, Zuckerberg said on Friday. Moscow has denied the allegations.

Zuckerberg’s testimony said the company was “too slow to spot and respond to Russian interference, and we’re working hard to get better.”

He vowed to make improvements, adding it would take time, but said he was “committed to getting it right.”

A Facebook official confirmed that the company had hired a team from the law firm WilmerHale and outside consultants to help prepare Zuckerberg for his testimony and for the questions lawmakers are likely to ask.

(Reporting by David Shepardson and Dustin Volz; Editing by Bill Rigby)

Social media companies accelerate removals of online hate speech

A man reads tweets on his phone in front of a displayed Twitter logo in Bordeaux, southwestern France, March 10, 2016. REUTERS/Regis

By Julia Fioretti

BRUSSELS (Reuters) – Social media companies Facebook, Twitter and Google’s YouTube have accelerated removals of online hate speech in the face of a potential European Union crackdown.

The EU has gone as far as to threaten social media companies with new legislation unless they increase efforts to fight the proliferation of extremist content and hate speech on their platforms.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe. Instagram and Google+ will also sign up to the code, the European Commission said.

The companies managed to review complaints within a day in 81 percent of cases during monitoring of a six-week period towards the end of last year, EU figures released on Friday show, compared with 51 percent in May 2017 when the Commission last examined compliance with the code of conduct.

On average, the companies removed 70 percent of the content flagged to them, up from 59.2 percent in May last year.

EU Justice Commissioner Vera Jourova has said that she does not want to see a 100 percent removal rate because that could impinge on free speech.

She has also said she is not in favor of legislating as Germany has done. A law providing for fines of up to 50 million euros ($61.4 million) for social media companies that do not remove hate speech quickly enough went into force in Germany this year.

Jourova said the results unveiled on Friday made it less likely that she would push for legislation on the removal of illegal hate speech.

‘NO FREE PASS’

“The fact that our collaborative approach on illegal hate speech brings good results does not mean I want to give a free pass to the tech giants,” she told a news conference.

Facebook reviewed complaints in less than 24 hours in 89.3 percent of cases, YouTube in 62.7 percent of cases and Twitter in 80.2 percent of cases.

“These latest results and the success of the code of conduct are further evidence that the Commission’s current self-regulatory approach is effective and the correct path forward,” said Stephen Turner, Twitter’s head of public policy.

Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground for hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Pressure from several European governments has prompted social media companies to step up efforts to tackle extremist online content, including through the use of artificial intelligence.

YouTube said it was training machine learning models to flag hateful content at scale.

“Over the last two years we’ve consistently improved our review and action times for this type of content on YouTube, showing that our policies and processes are effective, and getting better over time,” said Nicklas Lundblad, Google’s vice president of public policy in EMEA.

“We’ve learned valuable lessons from the process, but there is still more we can do.”

The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.

(Reporting by Julia Fioretti; Additional reporting by Foo Yun Chee; Editing by Grant McCool and David Goodman)