Hacking against corporations surges as workers take computers home

By Joseph Menn

SAN FRANCISCO (Reuters) – Hacking activity against corporations in the United States and other countries more than doubled by some measures last month as digital thieves took advantage of security weakened by pandemic work-from-home policies, researchers said.

Corporate security teams have a harder time protecting data when it is dispersed on home computers with widely varying setups and on company machines connecting remotely, experts said.

Even those remote workers using virtual private networks (VPNs), which establish secure tunnels for digital traffic, are adding to the problem, officials and researchers said.

Software and security company VMware Carbon Black said this week that ransomware attacks it monitored jumped 148% in March from the previous month, as governments worldwide curbed movement to slow the novel coronavirus, which has killed more than 130,000 people.

“There is a digitally historic event occurring in the background of this pandemic, and that is there is a cybercrime pandemic that is occurring,” said VMware cybersecurity strategist Tom Kellermann.

“It’s just easier, frankly, to hack a remote user than it is someone sitting inside their corporate environment. VPNs are not bullet-proof, they’re not the be-all, end-all.”

Using data from U.S.-based Team Cymru, which has sensors with access to millions of networks, researchers at Finland’s Arctic Security found that the number of networks experiencing malicious activity more than doubled in March in the United States and many European countries compared with January, soon after the virus was first reported in China.

The biggest jump in volume came as computers responded to scans when they should not have. Such scans often look for vulnerable software that would enable deeper attacks.
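The kind of probe described above can be illustrated with a minimal sketch (an illustrative example, not code from the article): a simple TCP connect scan. Every port that completes the handshake marks a reachable service an attacker could then fingerprint for vulnerable software; a machine behind a corporate firewall would typically have such probes dropped before they ever got an answer.

```python
import socket

def connect_scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        # A machine that "responds when it should not" completes this
        # handshake; a firewall enforcing office policy silently drops it.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports

# Example: connect_scan("127.0.0.1", [22, 80, 443]) lists which of the
# three ports have a listening service on the local machine.
```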

The researchers plan to release their country-by-country findings next week.

Rules for safe communication, such as barring connections to disreputable web addresses, tend to be enforced less when users take computers home, said analyst Lari Huttunen at Arctic.

That means previously safe networks can become exposed. In many cases, corporate firewalls and security policies had protected machines that had been infected by viruses or targeted malware, he said. Outside of the office, that protection can fall off sharply, allowing the infected machines to communicate again with the original hackers.

That has been exacerbated because the sharp increase in VPN volume led some stressed technology departments to permit less rigorous security policies.

“Everybody is trying to keep these connections up, and security controls or filtering are not keeping up at these levels,” Huttunen said.

The U.S. Department of Homeland Security’s (DHS) cybersecurity agency agreed this week that VPNs bring with them a host of new problems.

“As organizations use VPNs for telework, more vulnerabilities are being found and targeted by malicious cyber actors,” wrote DHS’ Cybersecurity and Infrastructure Security Agency.

The agency said it is harder to keep VPNs updated with security fixes because they are used at all hours, instead of on a schedule that allows for routine installations during daily boot-ups or shutdowns.

Even vigilant home users may have problems with VPNs. The DHS agency on Thursday said some hackers who broke into VPNs provided by San Jose-based Pulse Secure before patches were available a year ago had used other programs to maintain that access.

Other security experts said financially motivated hackers were using pandemic fears as bait and retooling existing malicious programs such as ransomware, which encrypts a target’s data and demands payment for its release.

(Reporting by Joseph Menn in San Francisco and Raphael Satter in Washington; Editing by Peter Henderson and Christopher Cushing)

New genre of artificial intelligence programs takes computer hacking to another level


By Joseph Menn

SAN FRANCISCO (Reuters) – The nightmare scenario for computer security – artificial intelligence programs that can learn how to evade even the best defenses – may already have arrived.

That warning from security researchers is driven home by a team from IBM Corp. that has used the artificial intelligence technique known as machine learning to build hacking programs that could slip past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.
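The concealment idea can be sketched conceptually with benign stand-ins (an assumed illustration, not IBM's actual design): derive a decryption key from an attribute only the intended target exhibits, such as a face embedding. Until the program observes that exact attribute, the embedded payload is indistinguishable from random bytes, giving behavior-based defenses nothing to match against.

```python
import hashlib

def mask(payload: bytes, target_attribute: bytes) -> bytes:
    """XOR-mask `payload` with a keystream derived from a target attribute.

    Conceptual stand-in only: without the right attribute (here a plain
    string standing in for, say, a face embedding), the masked bytes
    reveal nothing useful about the payload.
    """
    key = hashlib.sha256(target_attribute).digest()
    # Repeat the 32-byte digest to cover the payload, then XOR byte-wise.
    stream = (key * (len(payload) // len(key) + 1))[: len(payload)]
    return bytes(p ^ k for p, k in zip(payload, stream))

# Unmasking is the same XOR with the same keystream.
masked = mask(b"benign demo payload", b"target-face-embedding")
assert mask(masked, b"target-face-embedding") == b"benign demo payload"
assert mask(masked, b"someone-else-entirely") != b"benign demo payload"
```

The point of the sketch is the asymmetry: the trigger condition doubles as the key, so a defender who never supplies the right attribute cannot even determine what the dormant code would do.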

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it’s only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc’s Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”

The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by U.S. and Israeli intelligence agencies against a uranium enrichment facility in Iran.

The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.

In a demonstration using publicly available photos of a sample target, the team used a hacked version of video conferencing software that swung into action only when it detected the face of a target.

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”

At a recent New York conference, Hackers on Planet Earth, defense researcher Kevin Hodges showed off an “entry-level” automated program he made with open-source training tools that tried multiple attack approaches in succession.

“We need to start looking at this stuff now,” said Hodges. “Whoever you personally consider evil is already working on this.”

(Reporting by Joseph Menn; Editing by Jonathan Weber and Susan Fenton)