Deepfake used to attack activist couple shows new disinformation frontier

By Raphael Satter

WASHINGTON (Reuters) – Oliver Taylor, a student at England’s University of Birmingham, is a twenty-something with brown eyes, light stubble, and a slightly stiff smile.

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction.

His university says it has no record of him. He has no obvious online footprint beyond an account on the question-and-answer site Quora, where he was active for two days in March. Two newspapers that published his work say they have tried and failed to confirm his identity. And experts in deceptive imagery used state-of-the-art forensic analysis programs to determine that Taylor’s profile photo is a hyper-realistic forgery – a “deepfake.”

Reuters could not determine who is behind Taylor. Calls to the U.K. phone number he supplied to editors drew an automated error message, and he didn’t respond to messages left at the Gmail address he used for correspondence.

Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were taken aback by the allegation, which they deny. But they were also baffled as to why a university student would single them out. Masri said he pulled up Taylor’s profile photo. He couldn’t put his finger on it, he said, but something about the young man’s face “seemed off.”

Six experts interviewed by Reuters say the image has the characteristics of a deepfake.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.

Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

“I’m 100 percent sure,” he said.

‘A VENTRILOQUIST’S DUMMY’

The Taylor persona is a rare in-the-wild example of a phenomenon that has emerged as a key anxiety of the digital age: the marriage of deepfakes and disinformation.

The threat is drawing increasing concern in Washington and Silicon Valley. Last year House Intelligence Committee chairman Adam Schiff warned that computer-generated video could “turn a world leader into a ventriloquist’s dummy.” Last month Facebook announced the conclusion of its Deepfake Detection Challenge – a competition intended to help researchers automatically identify falsified footage. Last week online publication The Daily Beast revealed a network of deepfake journalists – part of a larger group of bogus personas seeding propaganda online.

Deepfakes like Taylor are dangerous because they can help build “a totally untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra specializes in detecting such images.

Brahmy said investigators chasing the origin of such photos are left “searching for a needle in a haystack – except the needle doesn’t exist.”

Taylor appears to have had no online presence until he started writing articles in late December. The University of Birmingham said in a statement it could not find “any record of this individual using these details.” Editors at the Jerusalem Post and The Algemeiner say they published Taylor after he pitched them stories cold over email. He didn’t ask for payment, they said, and they didn’t take aggressive steps to vet his identity.

“We’re not a counterintelligence operation,” Algemeiner Editor-in-chief Dovid Efune said, although he noted that the paper had since introduced new safeguards.

After Reuters began asking about Taylor, The Algemeiner and the Times of Israel deleted his work. The Jerusalem Post removed Taylor’s article after Reuters published this story. Taylor emailed the Times of Israel and Algemeiner protesting the deletions, but Times of Israel Opinion Editor Miriam Herschlag said she rebuffed him after he failed to prove his identity. Efune said he didn’t respond to Taylor’s messages.

The Israeli news outlet Arutz Sheva has kept Taylor’s articles online, although it removed the “terrorist sympathizers” reference following a complaint from Masri and Barnard. Editor Yoni Kempinski said only that “in many cases” news outlets “use pseudonyms to byline opinion articles.” Kempinski declined to elaborate or say whether he considered Taylor a pseudonym.

Oliver Taylor’s articles drew minimal engagement on social media, but the Times of Israel’s Herschlag said they were still dangerous – not only because they could distort the public discourse but also because they risked making people in her position less willing to take chances on unknown writers.

“Absolutely we need to screen out impostors and up our defenses,” she said. “But I don’t want to set up these barriers that prevent new voices from being heard.”

(Reporting by Raphael Satter; editing by Chris Sanders and Edward Tobin)

Domestic online interference mars global elections: report

By Elizabeth Culliford

SAN FRANCISCO (Reuters) – Governments and other domestic actors engaged in online interference in efforts to influence 26 of the 30 national elections studied by a democracy watchdog over the past year, according to a report released on Monday.

Freedom House, which is partly funded by the U.S. government, said that internet-based election interference has become “an essential strategy” for those seeking to disrupt democracy.

Disinformation and propaganda were the most popular tools used, the group said in its annual report. Domestic state and partisan actors used online networks to spread conspiracy theories and misleading memes, often working in tandem with government-friendly media personalities and business figures, it said.

“Many governments are finding that on social media, propaganda works better than censorship,” said Mike Abramowitz, president of Freedom House.

“Authoritarians and populists around the globe are exploiting both human nature and computer algorithms to conquer the ballot box, running roughshod over rules designed to ensure free and fair elections.”

Some of those seeking to manipulate elections had evolved tactics to beat technology companies’ efforts to combat false and misleading news, the report said.

In the Philippines, for example, it said candidates paid social media “micro-influencers” to promote their campaigns on Facebook Inc, Twitter Inc and Instagram, where they peppered political endorsements among popular culture content.

Online disinformation was prevalent in the United States around major political events, such as the November 2018 midterm elections and the confirmation hearing for Supreme Court nominee Brett Kavanaugh, the report said.

Freedom House also found a rise in the number of governments enlisting bots and fake accounts to surreptitiously shape online opinions and harass opponents, with such behavior found in 38 of the 65 countries covered in the report.

Social media was also being increasingly used for mass surveillance, with authorities in at least 40 countries instituting advanced social media monitoring programs.

China was ranked as the world’s worst abuser of internet freedom for a fourth consecutive year, after it enhanced information controls in the face of anti-government protests in Hong Kong and ahead of the 30th anniversary of the Tiananmen Square crackdown.

For instance, Beijing blocked individual accounts on WeChat for “deviant” behavior, which encouraged self-censorship, the report said.

The Philippine and Chinese embassies in Washington, D.C., did not immediately respond to requests for comment outside of normal business hours.

(Reporting by Elizabeth Culliford; editing by Richard Pullin)

U.S. disrupted Russian trolls on day of November election: report

FILE PHOTO: Voters fill out their ballots for the midterm election at a polling place in Madison, Wisconsin, U.S. November 6, 2018. REUTERS/Nick Oxford

WASHINGTON (Reuters) – The U.S. military disrupted the internet access of a Russian troll farm accused of trying to influence American voters on Nov. 6, 2018, the day of the congressional elections, The Washington Post reported on Tuesday.

The U.S. Cyber Command strike targeted the Internet Research Agency in the Russian port city of St. Petersburg, the Post reported, citing unidentified U.S. officials.

The group is a Kremlin-backed outfit whose employees had posed as Americans and spread disinformation online in an attempt to influence the 2016 election as well, according to U.S. officials.

“They basically took the IRA (Internet Research Agency) offline,” the Post quoted one person familiar with the matter as saying. “They shut ’em down.”

The Pentagon’s cyber warfare unit, which works closely with the National Security Agency, had no comment on the report. Cyber Command’s offensive operations are highly classified and rarely made public.

The Internet Research Agency was one of three entities and 13 Russian individuals indicted by Special Counsel Robert Mueller’s office in February 2018 in an alleged criminal and espionage conspiracy to tamper with the U.S. presidential race in a bid to boost Donald Trump and disadvantage his Democratic opponent, Hillary Clinton.

Prosecutors said the agency is controlled by Russian businessman Evgeny Prigozhin, who U.S. officials have said has extensive ties to Russia’s military and political establishment.

Prigozhin, also personally charged by Mueller, has been dubbed “Putin’s cook” by Russian media because his catering business has organized banquets for Russian President Vladimir Putin.

Since those indictments, the breadth of the troll farm’s activities has come to light. A report by private experts released to the Senate Intelligence Committee said the Internet Research Agency has tried to manipulate U.S. politics for years and continues to do so today.

The report, by an Oxford University team working with analytical firm Graphika, said Russian trolls urged African-Americans to boycott the 2016 election or to follow wrong voting procedures, while also encouraging right-wing voters to be more confrontational.

Since Donald Trump was elected president, the report said, Russian trolls have put out messages urging Mexican-American and other Hispanic voters to mistrust U.S. institutions.

(Reporting by Doina Chiacu and Mark Hosenball; editing by James Dalgleish and Bernadette Baum)

Facebook removes fake accounts tied to Iran that lured over 1 million followers

FILE PHOTO: A woman looks at the Facebook logo on an iPad in this photo illustration taken June 3, 2018. REUTERS/Regis Duvignau/Illustration/File Photo

By Christopher Bing and Munsif Vengattil

WASHINGTON (Reuters) – Facebook Inc said on Friday it had deleted accounts originating in Iran that attracted more than 1 million U.S. and British followers, its latest effort to combat disinformation activity on its platform.

Social media companies are struggling to stop attempts by people inside and outside the United States to spread false information on their platforms with goals ranging from destabilizing elections by stoking hardline positions to supporting propaganda campaigns.

The fake Facebook accounts originating in Iran mostly targeted American liberals, according to the Atlantic Council’s Digital Forensic Research Lab, a think tank that works with Facebook to study propaganda online.

Facebook said it removed 82 pages, groups and accounts on Facebook and Instagram that represented themselves as being American or British citizens, then posted on “politically charged” topics such as race relations, opposition to U.S. President Donald Trump and immigration, Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said in a blog post.

In total, the removed accounts attracted more than 1 million followers. The Iran-linked posts were amplified through less than $100 in advertising on Facebook and Instagram, Facebook said.

While the accounts originated in Iran, it was unclear if they were linked to the Tehran government, according to Facebook, which shared the information with researchers, other technology companies and the British and U.S. governments.

The Iranian U.N. mission did not immediately respond to a request for comment.

The action follows takedowns in August by Facebook, Twitter Inc and Alphabet Inc of hundreds of accounts linked to Iranian propaganda.

The latest operation was more sophisticated in some instances, making it difficult to identify, Gleicher said during a conference call with reporters on Friday.

Although most of the accounts and pages had existed only since earlier this year, they attracted more followers than the accounts removed in August, some of which dated back to 2013. The previously suspended Iranian accounts and pages garnered roughly 983,000 followers before being removed.

“It looks like the intention was to embed in highly active and engaged communities by posting inflammatory content, and then insert messaging on Saudi and Israel which amplified the Iranian government’s narrative,” said Ben Nimmo, an information defense fellow with the Digital Forensic Research Lab.

“Most of the posts concerned divisive issues in the U.S., and posted a liberal or progressive viewpoint, especially on race relations and police violence,” Nimmo said.

Social media companies have increasingly targeted foreign interference on their platforms following criticism that they did not do enough to detect, halt and disclose Russian efforts to use their platforms to influence the outcome of the 2016 U.S. presidential race.

Iran and Russia have denied allegations that they have used social media platforms to launch disinformation campaigns.

(Reporting by Chris Bing in Washington and Munsif Vengattil in Bengaluru, additional reporting by Jack Stubbs in London and Michelle Nichols in New York; Editing by Steve Orlofsky, Bernadette Baum and Susan Thomas)