Britain to United States: We want a trade deal and a digital tax

LONDON (Reuters) – Britain wants a trade deal with the United States but will impose a digital service tax on the revenue of companies such as Google, Facebook and Amazon, business minister Andrea Leadsom said on Thursday.

“The United States and the United Kingdom are committed to entering into a trade deal with each other and we have a very strong relationship that goes back centuries so some of the disagreements that we might have over particular issues don’t in any way damage the excellent and strong and deep relationship between the U.S. and the UK,” Leadsom told Talk Radio.

“There are always tough negotiations and tough talk but I think where the tech tax is concerned it’s absolutely vital that these huge multinationals who are making incredible amounts of income and profit should be taxed and what we want to do is to work internationally with the rest of the world to come up with a proper regime that ensures that they’re paying their fair share.”

Under the British plan, tech companies that generate at least 500 million pounds ($657 million) a year in global revenue will pay a levy of 2% of the money they make from UK users from April 2020.
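As reported, the plan reduces to a simple threshold-and-rate rule. A minimal sketch in Python (the function name and constants are illustrative only; the actual tax includes allowances and apportionment rules this ignores):

```python
# Sketch of the reported UK digital services tax rule: firms with at least
# 500 million pounds in global revenue pay a 2% levy on revenue from UK users.
GLOBAL_REVENUE_THRESHOLD_GBP = 500_000_000
DST_RATE = 0.02

def uk_digital_services_tax(global_revenue_gbp: float, uk_user_revenue_gbp: float) -> float:
    """Return the levy owed in pounds, or 0.0 if the firm is below the global threshold."""
    if global_revenue_gbp < GLOBAL_REVENUE_THRESHOLD_GBP:
        return 0.0
    return uk_user_revenue_gbp * DST_RATE
```

On this simplified reading, a firm with 600 million pounds in global revenue and 100 million pounds of UK-user revenue would owe 2 million pounds.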

(Reporting by Elizabeth Howcroft; writing by Guy Faulconbridge; editing by Kate Holton)

Harvey Weinstein jury selection: bias, big data and ‘likes’

By Tom Hals

(Reuters) – When lawyers in the Harvey Weinstein rape trial question potential jurors on Thursday, they may already know who has used the #MeToo hashtag on Twitter or criticized victims of sexual harassment in a Facebook discussion.

The intersection of big data capabilities and prevalence of social media has transformed the business of jury research in the United States, which once meant gleaning information about potential jurors from car bumper stickers or the appearance of a home.

Now, consultants scour Facebook, Twitter, Reddit and other social media platforms for hard-to-find comments or “likes” in discussion groups or even selfies of a juror wearing a potentially biased t-shirt.

“This is a whole new generation of information than we had in the past,” said Jeffrey Frederick, the director of Jury Research Services at the National Legal Research Group Inc.

The techniques seem tailor-made for the Weinstein trial, which has become a focal point for #MeToo, the social media movement that has exposed sexual misconduct by powerful men in business, politics and entertainment.

Weinstein, 67, has pleaded not guilty to charges of sexually assaulting two women. The once-powerful movie producer faces life in prison if convicted on the most serious charge, predatory sexual assault.

On Thursday, the legal teams will begin questioning potential jurors, a process known as voir dire. More than 100 people passed an initial screening and the identities of many of those people have been known publicly for days, allowing for extensive background research.

Mark Geragos, a defense lawyer, said it is almost malpractice to ignore jurors’ online activity, particularly in high-profile cases.

When Geragos was representing Scott Peterson, who was later found guilty of the 2002 murder of his pregnant wife Laci, it came to light that a woman told an internet chatroom she had duped both legal teams to get on the California jury.

“You just never know if someone is telling the truth,” said Geragos.

Weinstein’s lawyer, Donna Rotunno, told Reuters recently that her team was considering hiring a firm to investigate jurors’ social media use to weed out bias.

The Manhattan District Attorney’s office does not use jury consultants, and office spokesman Danny Frost declined to say whether prosecutors were reviewing potential jurors’ social media.

Frederick’s firm, which has not been involved in the Weinstein case, creates huge databases of online activity relevant to a case, drilling down into interactions that do not appear in a user’s social media timeline. His firm combs through Facebook news articles about a particular case or topic, cataloging every comment, reply and share, as well as emojis and “likes,” in the hope that some were posted by a potential juror.

“The social media aspect can be enormously helpful in looking at people’s political motives,” said defense attorney Michael Bachner. He said Weinstein’s team will probably want to know about a potential juror’s ties to women’s causes, with “#MeToo being the obvious one.”

Consultants only use public information and focus on those with extremist views, said Roy Futterman of consulting firm DOAR.

“You’re looking for the worst juror,” he said.

Julieanne Himelstein, a former federal prosecutor, said the best vetting tool remains a lawyer’s questioning of a potential juror in the courtroom.

“That trumps all the sophisticated intelligence gathering anyone can do,” said Himelstein.

But trial veterans said that potential jurors are reluctant to admit unpopular viewpoints during voir dire, such as skepticism about workplace sexual harassment.

During questioning in a trial involving a drug company, consultant Christina Marinakis recalled a potential juror who said he did not have negative feelings toward pharmaceutical companies.

“We found he had a blog where he was just going off on capitalism and Corporate America and pharmaceutical companies especially,” said Marinakis, the director of jury research for Litigation Insights. The juror was dismissed.

Marinakis said the blog was written under a username, and only came to light by digging through the juror’s social media for references to pseudonyms.

Lawyers can reject an unlimited number of potential jurors if they show bias. Each side can typically use “peremptory” challenges to eliminate up to three potential jurors they believe will be unsympathetic, without providing a reason.

In a Canadian civil trial, jury consulting firm Vijilent discovered that a potential juror who appeared to be a stay-at-home mom with no history of social activism, in fact had been arrested three times for civil disobedience while promoting the causes of indigenous people.

“Unless you got into her social media, you wouldn’t have known that information,” said Vijilent founder Rosanna Garcia.

(Reporting by Tom Hals; additional reporting by Brendan Pierson and Gabriella Borter in New York; Editing by Noeleen Walder and Rosalba O’Brien)

Facebook and eBay pledge to better tackle fake reviews

LONDON (Reuters) – Facebook and eBay have promised to better identify, probe and respond to fake and misleading reviews, Britain’s Competition and Markets Authority (CMA) said on Wednesday after pressing the online platforms to tackle the issue.

Customer reviews have become an integral part of online shopping on several websites and apps but the regulator has expressed concerns that some comments may not be genuine.

Facebook has removed 188 groups and disabled 24 user accounts whilst eBay has permanently banned 140 users since the summer, according to the CMA.

The CMA has also found examples on photo-sharing app Instagram, which owner Facebook has promised to investigate.

“Millions of people base their shopping decisions on reviews, and if these are misleading or untrue, then shoppers could end up being misled into buying something that isn’t right for them – leaving businesses who play by the rules missing out,” said CMA Chief Executive Andrea Coscelli.

The CMA said neither company was intentionally allowing such content and both had committed to tackle the problem.

“We maintain zero tolerance for fake or misleading reviews and will continue to take action against any seller that breaches our user policies,” said a spokeswoman at eBay.

Facebook said it was working to stop such fraudulent activity, including exploring the use of automated technology to help remove content before it was seen.

“While we have invested heavily to prevent this kind of activity across our services, we know there is more work to do and are working with the CMA to address this issue.”

(Reporting by Costas Pitas, Editing by Paul Sandle)

Facebook to pilot new fact-checking program with community reviewers

(Reuters) – Facebook Inc said on Tuesday it would ask community reviewers to fact check content in a pilot program in the United States, as the social media platform looks to detect misinformation faster.

The company will work with data services provider Appen to source community reviewers.

The social media giant said the community reviewers, who will be hired as contractors, will review content flagged as potentially false by machine learning before it is sent to Facebook’s third-party fact-checking partners. It added that data company YouGov had conducted an independent study of community reviewers and Facebook users.

Facebook is under pressure to police misinformation on its platform in the United States ahead of the November 2020 presidential election.

The company recently came under fire for its policy of exempting ads run by politicians from fact checking, drawing ire from Democratic presidential candidates Joe Biden and Elizabeth Warren.

(Reporting by Neha Malara in Bengaluru; Editing by Shinjini Ganguli)

Facebook, Instagram experience outage on Thanksgiving Day

(Reuters) – Facebook Inc’s family of apps including Instagram experienced a major outage on Thanksgiving Day, prompting a flurry of tweets on the social media platform.

“We’re aware that some people are currently having trouble accessing Facebook’s family of apps, including Instagram. We’re working to get things back to normal as quickly as possible. #InstagramDown,” Instagram said in a tweet.

According to outage monitoring website DownDetector, about 8,000 Facebook users were affected in various parts of the world including the United States and Britain.

Several users reported being unable to post pictures and videos to their main feeds, and said an error message reading “Facebook Will Be Back Soon” appeared when they tried to log in.

Facebook could not immediately be reached for comment.

(Reporting by Mekhla Raina in Bengaluru; editing by Diane Craft)

Facebook suspends Russian Instagram accounts targeting U.S. voters

By Jack Stubbs and Christopher Bing

LONDON/WASHINGTON (Reuters) – Facebook Inc. said on Monday it has suspended a network of Instagram accounts operated from Russia that targeted Americans with divisive political messages ahead of next year’s U.S. presidential election, with operators posing as people within the United States.

Facebook said it also had suspended three separate networks operated from Iran. The Russian network “showed some links” to Russia’s Internet Research Agency (IRA), Facebook said, an organization Washington has said was used by Moscow to meddle in the 2016 U.S. election.

“We see this operation targeting largely U.S. public debate and engaging in the sort of political issues that are challenging and sometimes divisive in the U.S. right now,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.

“Whenever you do that, a piece of what you engage on are topics that are going to matter for the election. But I can’t say exactly what their goal was.”

Facebook also announced new steps to fight foreign interference and misinformation ahead of the November 2020 election, including labeling state-controlled media outlets and adding greater protections for elected officials and candidates who may be vulnerable targets for hacking.

U.S. security officials have warned that Russia, Iran and other countries could attempt to sway the result of next year’s presidential vote. Officials say they are on high alert for signs of foreign influence campaigns on social media.

Moscow and Tehran have repeatedly denied the allegations.

Gleicher said the IRA-linked network used 50 Instagram accounts and one Facebook account to gather 246,000 followers, about 60% of which were in the United States.

The earliest accounts dated to January this year and the operation appeared to be “fairly immature in its development,” he said.

“They were pretty focused on audience-building, which is the thing you do first as you’re sort of trying to set up an operation.”

Ben Nimmo, a researcher with Graphika, a social media analysis company commissioned by Facebook, said the flagged accounts shared material that could appeal to Republican and Democratic voters alike.

Most of the messages plagiarized material authored by leading conservative and progressive pundits. This included recycling comments initially shared on Twitter that criticized U.S. congresswoman Alexandria Ocasio-Cortez, Democratic presidential candidate Joe Biden and current President Donald Trump.

“What’s interesting in this set is so much of what they were doing is copying and pasting genuine material from actual Americans,” Nimmo told Reuters. “This may be indicative of an effort to hide linguistic deficiencies, which have made them easier to detect in the past.”

U.S. prosecutors say Concord Management and Consulting LLC, a firm controlled by Russian catering tycoon Evgeny Prigozhin, helped orchestrate the IRA’s operations. Attorneys for Concord have denied any wrongdoing.

Gleicher said the separate Iranian network his team identified used more than 100 fake and hacked accounts on Facebook and Instagram to target U.S. users and some French-speaking parts of North Africa. Some accounts also repurposed Iranian state media stories to target users in Latin American countries including Venezuela, Brazil, Argentina, Bolivia, Peru, Ecuador and Mexico.

The activity was connected to an Iranian campaign first identified in August last year, which Reuters showed aimed to direct internet users to a sprawling web of pseudo-news websites which repackaged propaganda from Iranian state media.

The accounts “typically posted about local political news and geopolitics including topics like public figures in the U.S., politics in the U.S. and Israel, support of Palestine and conflict in Yemen,” Facebook said.

(Reporting by Jack Stubbs; Additional reporting by Elizabeth Culliford in San Francisco; Editing by Chris Reese, Tom Brown and David Gregorio)

Martin Luther King’s daughter tells Facebook disinformation helped kill civil rights leader

SAN FRANCISCO (Reuters) – Disinformation campaigns helped lead to the assassination of Martin Luther King, the daughter of the U.S. civil rights champion said on Thursday after the head of Facebook said social media should not fact-check political advertisements.

The comments come as Facebook Inc is under fire for its approach to political advertisements and speech, which Chief Executive Mark Zuckerberg defended on Thursday in a major speech that twice referenced King, known by his initials MLK.

King’s daughter, Bernice, tweeted that she had heard the speech. “I’d like to help Facebook better understand the challenges #MLK faced from disinformation campaigns launched by politicians. These campaigns created an atmosphere for his assassination,” she wrote from the handle @BerniceKing.

King died of an assassin’s bullet in Memphis, Tennessee, on April 4, 1968.

Zuckerberg argued that his company should give voice to minority views and said that court protection for free speech stemmed in part from a case involving a partially inaccurate advertisement by King supporters. The U.S. Supreme Court protected the supporters from a lawsuit.

“People should decide what is credible, not tech companies,” Zuckerberg said.

“We very much appreciate Ms. King’s offer to meet with us. Her perspective is invaluable and one we deeply respect. We look forward to continuing this important dialogue with her in Menlo Park next week,” a Facebook spokesperson said.

(Reporting by Peter Henderson; Editing by Lisa Shumaker)

Facebook’s Zuckerberg hits pause on China, defends political ads policy

By David Shepardson and Katie Paul

WASHINGTON (Reuters) – Facebook Inc <FB.O> Chief Executive Mark Zuckerberg on Thursday defended the social media company’s political advertising policies and said it was unable to overcome China’s strict censorship, attempting to position his company as a defender of free speech.

“I wanted our services in China because I believe in connecting the whole world, and I thought maybe we could help create a more open society,” Zuckerberg said, addressing students at Georgetown University.

“I worked hard on this for a long time, but we could never come to agreement on what it would take for us to operate there,” he said. “They never let us in.”

He did not address what conditions or assurances he would need to enter the Chinese market.

Facebook tried for years to break into China, one of the last great obstacles to Zuckerberg’s vision of connecting the world’s entire population on the company’s apps.

Zuckerberg met with Chinese President Xi Jinping in Beijing and welcomed the country’s top internet regulator to Facebook’s campus. He also learned Mandarin and posted a photo of himself running through Tiananmen Square, which drew a sharp reaction from critics of the country’s restrictive policies.

The company briefly won a license to open an “innovation hub” in Hangzhou last year, but it was later revoked.

Zuckerberg effectively closed that door in March, when he announced his plan to pivot Facebook toward more private forms of communication and pledged not to build data centers in countries “that have a track record of violating human rights like privacy or freedom of expression.”

He repeated his concern about data centers on Thursday, this time specifically naming China.

Zuckerberg also defended the company’s political advertising policies on similar grounds, saying Facebook had at one time considered banning all political ads but decided against it, erring on the side of greater expression.

Facebook has been under fire over its advertising policies, particularly from U.S. Senator Elizabeth Warren, a leading contender for the Democratic presidential nomination.

The company exempts politicians’ ads from fact-checking standards applied to other content on the social network. Zuckerberg said political advertising does not contribute much to the company’s revenues, but that he believed it would be inappropriate for a tech company to censor public figures.

Reuters reported in October 2018, citing sources, that Facebook executives briefly debated banning all political ads, which produce less than 5% of the company’s revenue.

The company rejected that because product managers were loath to leave advertising dollars on the table and policy staffers argued that blocking political ads would favor incumbents and wealthy campaigners who can better afford television and print ads, the sources said.

Facebook has been under scrutiny in recent years for its lax approach to fake news reports and disinformation campaigns, which many believe affected the outcome of the 2016 U.S. presidential election, won by Donald Trump.

Trump has disputed claims that Russia has attempted to interfere in U.S. elections. Russian President Vladimir Putin has denied it.

Warren’s Democratic presidential campaign recently challenged Facebook’s policy that exempts politicians’ ads from fact-checking, running ads on the social media platform containing the false claim that Zuckerberg endorsed Trump’s re-election bid.

(Reporting by David Shepardson; Writing by Katie Paul; Editing by Lisa Shumaker)

Mass shooting rumor in Facebook Group shows private chats are not risk-free

By Bryan Pietsch

WASHINGTON (Reuters) – Ahead of the annual Blueberry Festival in Marshall County, Indiana, in early September, a woman broadcast a warning to her neighbors on Facebook.

“I just heard there’s supposed to be a mass shooting tonight at the fireworks,” the woman, whose name is withheld to protect her privacy, said in a post in a private Facebook Group with over 5,000 members. “Probably just a rumor or kids trying to scare people, but everyone keep their eyes open,” she said in the post, which was later deleted.

There was no shooting at the Blueberry Festival that night, and the local police said there was no threat.

But the post sparked fear in the community, with some group members canceling their plans to attend, and shows the power of rumors in Facebook Groups, which are often private or closed to outsiders. Groups allow community members to quickly spread information, and possibly misinformation, to users who trust the word of their neighbors.

These groups and other private features, rather than public feeds, are “the future” of social media, Facebook Inc <FB.O> Chief Executive Mark Zuckerberg said in April, revealing their importance to Facebook’s business model.

The threat of misinformation spreading rapidly in Groups shows a potential vulnerability in a key part of the company’s growth strategy. It could push Facebook to invest in expensive human content monitoring at the risk of limiting the ability to post in real time, a central benefit of Groups and Facebook in general that has attracted millions of users to the platform.

When asked if Facebook takes accountability for situations like the one in Indiana, a company spokeswoman said it is committed to maintaining groups as a safe place, and that it encourages people to contact law enforcement if they see a potential threat.

Facebook Groups can also serve as a tool for connecting social communities around the world, such as ethnic groups, university alumni and hobbyists.

Facebook’s WhatsApp messaging platform faced similar but more serious problems in 2018, after false messages about child abductors led to mass beatings of more than a dozen people in India, some of whom died. WhatsApp later limited message forwards and began labeling forwarded messages to quell the risk of fake news.

FIREWORKS FEAR

The Blueberry Festival post caused chaos in the group, named “Local News Now 2…(Marshall and all surrounding Counties).”

In another post, which garnered over 100 comments of confusion and worry, a different member urged the woman to report the threat to the police. “This isn’t something to joke about or take lightly,” she wrote.

The author of the original post did not respond to repeated requests for comment.

Facebook’s policy is to remove language that “incites or facilitates serious violence,” the company spokeswoman said. She added that the post was not removed because it did not violate Facebook’s policies, as there “was no threat, praise or support of violence.”

Cheryl Siddall, the founder of the Indiana group, said she would welcome tools from Facebook to give her greater “control” over what people post in the group, such as alerts to page moderators if posts contain certain words or phrases.

But Siddall said, “I’m sorry, but that’s a full-time job to sit and monitor everything that’s going on in the page.”

A Facebook spokeswoman said group administrators can remove a post if it violates the group’s rules, can pre-approve individual posts, and can turn on post approvals for individual group members.

In a post to its blog, Facebook urged administrators to write “great group rules” to “set the tone for your group and help prevent member conflict,” as well as “provide a feeling of safety for group members.”

David Bacon, chief of police for the Plymouth Police Department in Marshall County, said the threat was investigated and traced back to an exaggerated rumor from children. Nonetheless, he said the post to the Facebook group is “what caused the whole problem.”

“One post grows and people see it, and they take it as the gospel, when in actuality you can throw anything you want out there,” Bacon said.

(Reporting by Bryan Pietsch; Editing by Chris Sanders)

FBI director warns Facebook could become platform of ‘child pornographers’

WASHINGTON (Reuters) – FBI Director Christopher Wray said on Friday that Facebook’s proposed move to encrypt its popular messaging program would turn the platform into a “dream come true for predators and child pornographers.”

Wray, addressing a crowd of law enforcement and child protection officials at the Department of Justice in Washington, said that Facebook’s plan would produce “a lawless space created not by the American people or their representatives but by the owners of one big company.”

Facebook intends to encrypt wide swathes of communications on its platform.

His speech, which came ahead of an address on the same topic by Attorney General William Barr, ratchets up the pressure on Facebook as the U.S. and allied governments renew their push to weaken the digital protections around the messages billions of people exchange each day.

Wray’s speech is part of a renewed push by the American, Australian and British governments to force tech companies to help them circumvent the encryption that helps keep digital communications secure.

Debates over encryption have been rumbling for more than 25 years, but officials’ anxiety has increased as major tech companies move toward automatically encrypting the messages on their platforms and the data held on their phones.

In the past, officials have cited the threat of terrorism to buttress their campaigns against encryption. But as the Islamic State and other extremist groups fade from the headlines, governments are trying a different tack, invoking the threat of child abuse to argue for “lawful access” to these devices.

Facebook’s privacy-focused move, announced by founder Mark Zuckerberg earlier this year, is causing particular consternation because the platform is the source of millions of tips to authorities about child abuse images every year.

Zuckerberg, speaking on the company’s weekly internal Q&A Livestream, defended the decision, saying he was “optimistic” that Facebook would be able to identify predators even in encrypted systems using the same tools it used to fight election interference.

“We’re going to lose the ability to find those kids who need to be rescued,” Wray said. “We’re going to lose the ability to find the bad guys.”

However, many outside law enforcement have applauded Facebook’s push for privacy and security. Academics, experts and privacy groups have long worried that circumventing the protections around private communications would open dangerous vulnerabilities that could make the entire internet less safe, leaving billions of users exposed to abusive surveillance.

Wray steered clear of making any specific proposal, saying that “companies themselves are best placed” to offer a way for law enforcement to get around encryption.

(Reporting by Raphael Satter; Editing by Steve Orlofsky)