China’s robot censors crank up as Tiananmen anniversary nears

People take pictures of paramilitary officers marching in formation in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter

By Cate Cadell

BEIJING (Reuters) – It’s the most sensitive day of the year for China’s internet, the anniversary of the bloody June 4 crackdown on pro-democracy protests at Tiananmen Square, and with under two weeks to go, China’s robot censors are working overtime.

Censors at Chinese internet companies say tools to detect and block content related to the 1989 crackdown have reached unprecedented levels of accuracy, aided by machine learning and voice and image recognition.

“We sometimes say that the artificial intelligence is a scalpel, and a human is a machete,” said one content screening employee at Beijing Bytedance Co Ltd, who asked not to be identified because they are not authorized to speak to media.

Two employees at the firm said censorship of the Tiananmen crackdown, along with other highly sensitive issues including Taiwan and Tibet, is now largely automated.

Posts that allude to dates, images and names associated with the protests are automatically rejected.
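
The companies do not publish how this screening works; as a rough sketch of the kind of rule-based pre-screening the censors describe, here is a minimal illustration in which the term list and the rejection logic are invented, not any company's actual system:

```python
# Hypothetical illustration only: a crude keyword screen of the sort the censors
# describe, where posts matching sensitive dates or names are rejected outright
# and everything else is queued for review. Not any company's actual code.
SENSITIVE_TERMS = {"june 4", "6/4", "tank man", "tiananmen"}  # assumed example list

def screen_post(text: str) -> str:
    """Return 'reject' if the post matches a blocked term, otherwise 'needs_review'."""
    lowered = text.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        return "reject"
    return "needs_review"

print(screen_post("Remembering June 4"))            # reject
print(screen_post("Holiday photos from Beijing"))   # needs_review
```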

“When I first began this kind of work four years ago there was opportunity to remove the images of Tiananmen, but now the artificial intelligence is very accurate,” one of the people said.

Four censors working across Bytedance, Weibo Corp and Baidu Inc apps said they each censor between 5,000 and 10,000 pieces of information a day, or five to seven pieces a minute, most of which they said were pornographic or violent content.

Despite advances in AI censorship, current-day tourist snaps in the square are sometimes unintentionally blocked, one of the censors said.

Bytedance and Baidu declined to comment, while Weibo did not respond to a request for comment.

A woman takes pictures in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter

SENSITIVE PERIOD

The Tiananmen crackdown is a taboo subject in China 30 years after the government sent tanks to quell student-led protests calling for democratic reforms. Beijing has never released a death toll but estimates from human rights groups and witnesses range from several hundred to several thousand.

June 4th itself is marked by a cat-and-mouse game as people use more and more obscure references on social media sites, with obvious allusions blocked immediately. In some years, even the word “today” has been scrubbed.

In 2012, China’s most-watched stock index fell 64.89 points on the anniversary day, a figure that echoed the date of the crackdown (June 4, 1989), in what analysts said was likely a strange coincidence rather than a deliberate reference.

Still, censors blocked access to the term “Shanghai stock market” and to the index numbers themselves on microblogs, along with other obscure references to sensitive issues.

While companies’ censorship tools are becoming more refined, analysts, academics and users say heavy-handed policies mean sensitive periods before anniversaries and political events have become catch-alls for a wide range of sensitive content.

In the lead-up to this year’s Tiananmen Square anniversary, censorship on social media has targeted LGBT groups, labor and environment activists and NGOs, they say.

Upgrades to censorship technology have been spurred by new policies introduced by the Cyberspace Administration of China (CAC). The agency was set up – and is officially led – by President Xi Jinping, whose tenure has been defined by increasingly strict ideological control of the internet.

The CAC did not respond to a request for comment.

Last November, the CAC introduced new rules aimed at quashing dissent online in China, where “falsifying the history of the Communist Party” on the internet is a punishable offence for both platforms and individuals.

The new rules require assessment reports and site visits for any internet platform that could be used to “socially mobilize” or lead to “major changes in public opinion”, including access to real names, network addresses, times of use, chat logs and call logs.

One official who works for CAC told Reuters the recent boost in online censorship is “very likely” linked to the upcoming anniversary.

“There is constant communication with the companies during this time,” said the official, who declined to talk directly about Tiananmen, referring instead to “the sensitive period in June”.

Companies, which are largely responsible for their own censorship, receive little in the way of directives from the CAC, but are responsible for creating guidelines in their own “internal ethical and party units”, the official said.

SECRET FACTS

With Xi’s tightening grip on the internet, the flow of information has been centralized under the Communist Party’s Propaganda Department and state media network. Censors and company staff say this reduces the pressure of censoring some events, including major political news, natural disasters and diplomatic visits.

“When it comes to news, the rule is simple… If it is not from state media first, it is not authorized, especially regarding the leaders and political items,” said one Baidu staffer.

“We have a basic list of keywords which include the 1989 details, but (AI) can more easily select those.”

Punishment for failing to properly censor content can be severe.

In the past six weeks, popular services including a Netease Inc news app, Tencent Holdings Ltd’s news app TianTian, and Sina Corp have all been hit with suspensions ranging from days to weeks, according to the CAC, meaning the services are made temporarily unavailable in app stores and online.

For internet users and activists, penalties can range from fines to jail time for spreading information about sensitive events online.

In China, social media accounts are linked to real names and national ID numbers by law, and companies are legally compelled to offer user information to authorities when requested.

“It has become normal to know things and also understand that they can’t be shared,” said one user, Andrew Hu. “They’re secret facts.”

In 2015, Hu spent three days in detention in his home region of Inner Mongolia after posting a comment about air pollution onto an unrelated image that alluded to the Tiananmen crackdown on Twitter-like social media site Weibo.

Hu, who declined to use his full Chinese name to avoid further run-ins with the law, said that when police officers came to his parents’ house while he was on leave from his job in Beijing he was surprised, but not frightened.

“The responsible authorities and the internet users are equally confused,” said Hu. “Even if the enforcement is irregular, they know the simple option is to increase pressure.”

(Reporting by Cate Cadell. Editing by Lincoln Feast.)

Exclusive: Amazon rolls out machines that pack orders and replace jobs

FILE PHOTO: A 6-axis robotic arm picks up sorting containers at the Amazon fulfillment center in Baltimore, Maryland, U.S., April 30, 2019. REUTERS/Clodagh Kilcoyne/File Photo

By Jeffrey Dastin

SAN FRANCISCO (Reuters) – Amazon.com Inc is rolling out machines to automate a job held by thousands of its workers: boxing up customer orders.

In recent years, the company began adding technology to a handful of warehouses: machines that scan goods coming down a conveyor belt and envelop them seconds later in boxes custom-built for each item, two people who worked on the project told Reuters.

Amazon has considered installing two machines at dozens more warehouses, removing at least 24 roles at each one, these people said. These facilities typically employ more than 2,000 people.

That would amount to more than 1,300 cuts across 55 U.S. fulfillment centers for standard-sized inventory. Amazon would expect to recover the costs in under two years, at $1 million per machine plus operational expenses, they said.
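
The headline figures follow from simple arithmetic; a quick back-of-the-envelope check, treating the per-role saving as an implied figure rather than a reported one:

```python
# Back-of-the-envelope check of the reported figures; nothing here beyond
# arithmetic on the numbers quoted above.
roles_removed_per_warehouse = 24
warehouses = 55
print(roles_removed_per_warehouse * warehouses)        # 1,320 -> "more than 1,300 cuts"

# Implied annual saving needed for two $1 million machines per warehouse to pay
# back in under two years, ignoring the operational expenses the sources mention.
machine_cost = 2 * 1_000_000
payback_years = 2
required_annual_saving = machine_cost / payback_years
print(required_annual_saving)                                 # $1,000,000 per warehouse per year
print(required_annual_saving / roles_removed_per_warehouse)   # ~$41,700 per removed role per year
```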

The plan, previously unreported, shows how Amazon is pushing to reduce labor and boost profits as automation of the most common warehouse task – picking up an item – is still beyond its reach. The changes are not finalized because vetting technology before a major deployment can take a long time.

Amazon is famous for its drive to automate as many parts of its business as possible, whether pricing goods or transporting items in its warehouses. But the company is in a precarious position as it considers replacing jobs that have won it subsidies and public goodwill.

“We are piloting this new technology with the goal of increasing safety, speeding up delivery times and adding efficiency across our network,” an Amazon spokeswoman said in a statement. “We expect the efficiency savings will be re-invested in new services for customers, where new jobs will continue to be created.”

Amazon last month downplayed its automation efforts to press visiting its Baltimore fulfillment center, saying a fully robotic future was far off. Its employee base has grown to become one of the largest in the United States, as the company opened new warehouses and raised wages to attract staff in a tight labor market.

A key to its goal of a leaner workforce is attrition, one of the sources said. Rather than lay off workers, the person said, the world’s largest online retailer will one day refrain from refilling packing roles. Those have high turnover because boxing multiple orders per minute over 10 hours is taxing work. At the same time, employees that stay with the company can be trained to take up more technical roles.

The new machines, known as the CartonWrap from Italian firm CMC Srl, pack much faster than humans. They crank out 600 to 700 boxes per hour, or four to five times the rate of a human packer, the sources said. The machines require one person to load customer orders, another to stock cardboard and glue and a technician to fix jams on occasion.
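
Those rates are consistent with the “multiple orders per minute” pace described for human packers earlier; a quick check using the quoted ranges:

```python
# Consistency check using the ranges quoted above.
machine_boxes_per_hour = (600, 700)
speedup = (4, 5)   # machine rate relative to a human packer

# Implied human packing rate in boxes per hour, then per minute.
human_per_hour = (machine_boxes_per_hour[0] / speedup[1],
                  machine_boxes_per_hour[1] / speedup[0])
print(human_per_hour)                          # (120.0, 175.0)
print(tuple(r / 60 for r in human_per_hour))   # ~2 to ~3 boxes per minute
```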

CMC declined to comment.

Though Amazon has announced it intends to speed up shipping across its Prime loyalty program, this latest round of automation is not focused on speed. “It’s truly about efficiency and savings,” one of the people said.

Including other machines known as the “SmartPac,” which the company rolled out recently to mail items in patented envelopes, Amazon’s technology suite will be able to automate the work of a majority of its human packers. Five rows of workers at a facility can turn into two, supplemented by two CMC machines and one SmartPac, the person said.

The company describes this as an effort to “re-purpose” workers, the person said.

It could not be learned where roles might disappear first and what incentives, if any, are tied to those specific jobs.

But the hiring deals that Amazon has with governments are often generous. For the 1,500 jobs Amazon announced last year in Alabama, for instance, the state promised the company $48.7 million over 10 years, its department of commerce said.

PICKING CHALLENGE

Amazon is not alone in testing CMC’s packing technology. JD.com Inc and Shutterfly Inc have used the machines as well, the companies said, as has Walmart Inc, according to a person familiar with its pilot.

Walmart started 3.5 years ago and has since installed the machines in several U.S. locations, the person said. The company declined to comment.

Interest in boxing technology sheds light on how the e-commerce behemoths are approaching one of the major problems in the logistics industry today: finding a robotic hand that can grasp diverse items without breaking them.

Amazon employs countless workers at each fulfillment center who do variations of this same task. Some stow inventory, while others pick customer orders and still others grab those orders, placing them in the right size box and taping them up.

Many venture-backed companies and university researchers are racing to automate this work. While advances in artificial intelligence are improving machines’ accuracy, there is still no guarantee that robotic hands can prevent a marmalade jar from slipping and breaking, or switch seamlessly from picking up an eraser to grabbing a vacuum cleaner.

Amazon has tested different vendors’ technology that it may one day use for picking, including from Soft Robotics, a Boston-area startup that drew inspiration from octopus tentacles to make grippers more versatile, one person familiar with Amazon’s experimentation said. Soft Robotics declined to comment on its work with Amazon but said it has handled a wide and ever-changing variety of products for multiple large retailers.

Believing that grasping technology is not ready for prime time, Amazon is automating around that problem when packing customer orders. Humans still place items on a conveyor, but machines then build boxes around them and take care of the sealing and labeling. This saves money not just by reducing labor but by reducing wasted packing materials as well.

These machines are not without flaws. CMC can only produce so many machines per year. They need a technician on site who can fix problems as they arise, a requirement Amazon would rather do without, the two sources said. The super-hot glue closing the boxes can pile up and halt a machine.

Still other types of automation, like the robotic grocery assembly system of Ocado Group PLC, are the focus of much industry interest.

But the boxing machines are already proving helpful to Amazon. The company has installed them in busy warehouses that are driving distance from Seattle, Frankfurt, Milan, Amsterdam, Manchester and elsewhere, the people said.

The machines have the potential to automate far more than 24 jobs per facility, one of the sources said. The company is also setting up nearly two dozen more U.S. fulfillment centers for small and non-specialty inventory, according to logistics consultancy MWPVL International, which could be ripe for the machines.

This is just a harbinger of automation to come.

“A ‘lights out’ warehouse is ultimately the goal,” one of the people said.

(Reporting By Jeffrey Dastin in San Francisco; additional reporting by Nandita Bose in Washington and Josh Horwitz in Shanghai; editing by Greg Mitchell and Edward Tobin)

AI must be accountable, EU says as it sets ethical guidelines

FILE PHOTO: An activist from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called 'killer robots', protests at Brandenburg Gate in Berlin, Germany, March, 21, 2019. REUTERS/Annegret Hilse/File Photo

By Foo Yun Chee

BRUSSELS (Reuters) – Companies working with artificial intelligence need to install accountability mechanisms to prevent the technology from being misused, the European Commission said on Monday, under new ethical guidelines for a technology open to abuse.

AI projects should be transparent, subject to human oversight, and built on secure and reliable algorithms, and they must comply with privacy and data protection rules, the Commission said, among other recommendations.

The European Union initiative taps into a global debate about when or whether companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without stifling innovation.

“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” the Commission digital chief, Andrus Ansip, said in a statement.

AI can help detect fraud and cybersecurity threats, improve healthcare and financial risk management and cope with climate change. But it can also be used to support unscrupulous business practices and authoritarian governments.

The EU executive last year enlisted the help of 52 experts from academia, industry bodies and companies including Google, SAP, Santander and Bayer to help it draft the principles.

Companies and organizations can sign up to a pilot phase in June, after which the experts will review the results and the Commission will decide on next steps.

IBM Europe Chairman Martin Jetter, who was part of the group of experts, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible.”

The guidelines should not hold Europe back, said Achim Berg, president of BITKOM, Germany’s Federal Association of Information Technology, Telecommunications, and New Media.

“We must ensure in Germany and Europe that we do not only discuss AI but also make AI,” he said.

(Reporting by Foo Yun Chee, additional reporting by Georgina Prodhan in London; editing by John Stonestreet, Larry King)

Ethical question takes center stage at Silicon Valley summit on artificial intelligence

FILE PHOTO: A research support officer and PhD student works on his artificial intelligence projects to train robots to autonomously carry out various tasks, at the Department of Artificial Intelligence in the Faculty of Information Communication Technology at the University of Malta in Msida, Malta February 8, 2019. REUTERS/Darrin Zammit Lupi

By Jeffrey Dastin and Paresh Dave

SAN FRANCISCO (Reuters) – Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: ‘When have you put ethics before your business interests?’

A Microsoft Corp executive pointed to how the company considered whether it ought to sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face ID service at all.

The big news at the summit, in San Francisco, came from Google, which announced it was launching a council of public policy and other external experts to make recommendations on AI ethics to the company.

The discussions at EmTech Digital, run by the MIT Technology Review, underscored how companies are making a bigger show of their moral compass.

At the summit, activists critical of Silicon Valley questioned whether big companies could deliver on promises to address ethical concerns. Whether those efforts have teeth may sharply affect how governments regulate the firms in the future.

“It is really good to see the community holding companies accountable,” David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. “Companies are thinking of the ethical and moral implications of their work.”

Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. While beneficial to people with disabilities, it risked helping authoritarian governments surveil people, he said.

Ultimately, the company found the research was “more suited for person to person lip-reading than surveillance so on that basis decided to publish” the research, Walker said. The study was published last July.

Kebotix, a Cambridge, Massachusetts startup seeking to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.

Still, Rashida Richardson, director of policy research for the AI Now Institute, said little around ethics has changed since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.

“There is a real imbalance in priorities” for tech companies, Richardson said. Considering “the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”

Google’s Walker said the company has some 300 people working to address issues such as racial bias in algorithms, but that the company has a long way to go.

“Baby steps is probably a fair characterization,” he said.

(Reporting By Jeffrey Dastin and Paresh Dave in San Francisco; Editing by Greg Mitchell)

‘AI’ to hit hardest in U.S. heartland and among less-skilled: study

WASHINGTON (Reuters) – The Midwestern states hit hardest by job automation in recent decades, places that were pivotal to U.S. President Donald Trump’s election, will be under the most pressure again as advances in artificial intelligence reshape the workplace, according to a new study by Brookings Institution researchers.

The spread of computer-driven technology into middle-wage jobs like trucking, construction and office work, and into some lower-skilled occupations like food preparation and service, will also deepen the divide between the fast-growing cities where skilled workers are moving and other areas, and separate high-skilled workers, whose jobs are less prone to automation, from everyone else regardless of location, the study found.

But the pain may be most intense in a familiar group of manufacturing-heavy states like Wisconsin, Ohio and Iowa, whose support swung the U.S. electoral college for Trump, a Republican, and which have among the largest share of jobs, around 27 percent, at “high risk” of further automation in coming years.

At the other end, solidly Democratic coastal states like New York and Maryland had only about a fifth of jobs in the high-risk category.

The findings suggest the economic tensions that framed Trump’s election may well persist, and may even be immune to his efforts to shift global trade policy in favor of U.S. manufacturers.

“The first era of digital automation was one of traumatic change…with employment and wage gains coming only at the high and low ends,” authors including Brookings Metro Policy Program director Mark Muro wrote of the spread of computer technology and robotics that began in the 1980s. “That our forward-looking analysis projects more of the same…will not, therefore, be comforting.”

The study used prior research from the McKinsey Global Institute that looked at tasks performed in 800 occupations, and the proportion that could be automated by 2030 using current technology.

While some already-automated industries like manufacturing will continue needing less labor for a given level of output – the “automation potential” of production jobs remains nearly 80 percent – the spread of advanced techniques means more jobs will come under pressure as autonomous vehicles supplant drivers, and smart technology changes how waiters, carpenters and others do their jobs.

That would raise productivity – a net plus for the economy overall that could keep goods cheaper, raise demand, and thus help create more jobs even if the nature of those jobs changes.

But it may pose a challenge for lower-skilled workers in particular as automation spreads in food service and construction, industries that have been a fallback for many.

“This implies a shift in the composition of the low-wage workforce” toward jobs like personal care, with an automation potential of 34 percent, or building maintenance, with an automation potential of just 20 percent, the authors wrote.

(Reporting by Howard Schneider; Editing by Andrea Ricci)

Pentagon looks to exoskeletons to build ‘super-soldiers’

Keith Maxwell, Senior Product Manager of Exoskeleton Technologies at Lockheed Martin, demonstrates an exoskeleton during an exoskeleton demonstration and discussion, in Washington, U.S., November 29, 2018. REUTERS/Al Drago

By Phil Stewart

WASHINGTON (Reuters) – The U.S. Army is investing millions of dollars in experimental exoskeleton technology to make soldiers stronger and more resilient, in what experts say is part of a broader push into advanced gear to equip a new generation of “super-soldiers.”

The technology is being developed by Lockheed Martin Corp with a license from Canada-based B-TEMIA, which first developed the exoskeletons to help people with mobility difficulties stemming from medical ailments like multiple sclerosis and severe osteoarthritis.

Worn over a pair of pants, the battery-operated exoskeleton uses a suite of sensors, artificial intelligence and other technology to aid natural movements.

For the U.S. military, the appeal of such technology is clear: Soldiers now deploy into war zones bogged down by heavy but critical gear like body armor, night-vision goggles and advanced radios. Altogether, that can weigh anywhere from 90 to 140 pounds (40-64 kg), when the recommended limit is just 50 pounds (23 kg).

“That means when people do show up to the fight, they’re fatigued,” said Paul Scharre at the Center for a New American Security (CNAS), who helped lead a series of studies on exoskeletons and other advanced gear.

“The fundamental challenge we’re facing with infantry troops is they’re carrying too much weight.”

Lockheed Martin said on Thursday it won a $6.9 million award from the U.S. Army Natick Soldier Research, Development and Engineering Center to research and develop the exoskeleton, called ONYX, under a two-year, sole-source agreement.

Keith Maxwell, Senior Product Manager of Exoskeleton Technologies at Lockheed Martin, speaks during an Exoskeleton demonstration and discussion, in Washington, U.S., November 29, 2018. REUTERS/Al Drago

Keith Maxwell, the exoskeleton technologies manager at Lockheed Martin Missiles and Fire Control, said people in his company’s trials who wore the exoskeletons showed far more endurance.

“You get to the fight fresh. You’re not worn out,” Maxwell said.

Maxwell, who demonstrated a prototype, said each exoskeleton was expected to cost in the tens of thousands of dollars.

B-TEMIA’s medically focused system, called Keeogo, is sold in Canada for about C$39,000 ($30,000), company spokeswoman Pamela Borges said.

The United States is not the only country looking at exoskeleton technology.

Samuel Bendett at the Center for Naval Analyses, a federally funded U.S. research and development center, said Russia and China were also investing in exoskeleton technologies, “in parallel” to the U.S. advances.

Russia, in particular, was working on several versions of exoskeletons, including one that it tested recently in Syria, Bendett said.

The CNAS analysis of the exoskeleton was part of a larger look by the Washington-based think tank at next-generation technologies that can aid soldiers, from better helmets to shield them from blast injuries to the introduction of robotic “teammates” to help resupply them in war zones.

(Reporting by Phil Stewart; Editing by Peter Cooney)

U.S. tech giants eye artificial intelligence as key to unlock China push

A Google sign is seen during the WAIC (World Artificial Intelligence Conference) in Shanghai, China, September 17, 2018. REUTERS/Aly Song

By Cate Cadell

SHANGHAI (Reuters) – U.S. technology giants, facing tighter content rules in China and the threat of a trade war, are targeting an easier way into the world’s second-largest economy – artificial intelligence.

Google, Microsoft Corp and Amazon.com Inc showcased their AI wares at a state-backed forum held in Shanghai this week against the backdrop of Beijing’s plans to build a $400 billion AI industry by 2025.

China’s government and companies may compete against U.S. rivals in the global AI race, but they are aware that gaining ground won’t be easy without a certain amount of collaboration.

“Hey Google, let’s make humanity great again,” Tang Xiao’ou, CEO of Chinese AI and facial recognition unicorn Sensetime, said in a speech on Monday.

Amazon and Microsoft announced plans on Monday to build new AI research labs in Shanghai. Google also showcased a growing suite of China-focused AI tools at its packed event on Tuesday.

Google in the past year has launched AI-backed products including a translate app and a drawing game, its first new consumer products in China since its search engine was largely blocked in 2010.

The World Artificial Intelligence Conference, which ends on Wednesday, is hosted by China’s top economic planning agency alongside its cyber and industry ministries. The conference aims to show the country’s growing might as a global AI player.

China’s ambition to be a world leader in AI has created an opening for U.S. firms, which attract the majority of top global AI talent and are keen to tap into China’s vast data.

The presence of global AI research projects is also a boon for China, which aims to become a global technology leader in the next decade.

Liu He, China’s powerful vice premier and the key negotiator in trade talks with the United States, said his country wanted a more collaborative approach to AI technology.

“As members of a global village, I hope countries can show inclusive understanding and respect for each other, deal with the double-edged sword that technologies can bring, and embrace AI challenges together,” he told the forum.

Beijing took an aggressive stance when it laid out its AI roadmap last year, urging companies, the government and military to give China a “competitive edge” over its rivals.

STATE-BACKED AI

Chinese attendees at the forum were careful to cite the guiding role of the state in the country’s AI sector.

“The development of AI is led by government and executed by companies,” a Chinese presenter said in between speeches on Monday by China’s top tech leaders, including Alibaba Group Holding Ltd chairman Jack Ma, Tencent Holdings Ltd chief Pony Ma and Baidu Inc CEO Robin Li.

While China may have enthusiasm for foreign AI projects, there is little indication that building up local AI operations will open doors for foreign firms in other areas.

China’s leaders still prefer to view the Internet as a sovereign project. Google’s search engine remains blocked, while Amazon had to step back from its cloud business in China.

Censorship and local data rules have also hardened in China over the past two years, creating new hoops for foreign firms to jump through if they want to tap the booming internet sector.

Nevertheless, some speakers paid tribute to foreign AI products, including Xiaomi Corp chief executive Lei Jun, who hailed Google’s AlphaGo board game program as a major milestone, saying he was a fan of the game himself.

Alibaba’s Ma said innovation needed space to develop and it was not the government’s role to protect business.

“The government needs to do what the government should do, and companies need to do what they should do,” he said.

(Reporting by Cate Cadell; Editing by Adam Jourdan and Darren Schuettler)

New genre of artificial intelligence programs takes computer hacking to another level

FILE PHOTO: Servers for data storage are seen at Advania's Thor Data Center in Hafnarfjordur, Iceland August 7, 2015. REUTERS/Sigtryggur Ari

By Joseph Menn

SAN FRANCISCO (Reuters) – The nightmare scenario for computer security – artificial intelligence programs that can learn how to evade even the best defenses – may already have arrived.

That warning from security researchers is driven home by a team from IBM Corp, which used the artificial intelligence technique known as machine learning to build hacking programs that could slip past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it’s only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc’s Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”

The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by U.S. and Israeli intelligence agencies against a uranium enrichment facility in Iran.

The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.

In a demonstration using publicly available photos of a sample target, the team used a hacked version of video conferencing software that swung into action only when it detected the face of a target.

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”

At a recent New York conference, Hackers on Planet Earth, defense researcher Kevin Hodges showed off an “entry-level” automated program he made with open-source training tools that tried multiple attack approaches in succession.

“We need to start looking at this stuff now,” said Hodges. “Whoever you personally consider evil is already working on this.”

(Reporting by Joseph Menn; Editing by Jonathan Weber and Susan Fenton)

Exclusive: U.S. considers tightening grip on China ties to corporate America

FILE PHOTO: The People's Republic of China flag and the U.S. Stars and Stripes fly on a lamp post along Pennsylvania Avenue near the U.S. Capitol during Chinese President Hu Jintao's state visit, in Washington, D.C.,U.S., January 18, 2011. REUTERS/Hyungwon Kang/File Photo

By Koh Gui Qing

NEW YORK (Reuters) – The U.S. government may start scrutinizing informal partnerships between American and Chinese companies in the field of artificial intelligence, threatening practices that have long been considered garden variety development work for technology companies, sources familiar with the discussions said.

So far, U.S. government reviews for national security and other concerns have been limited to investment deals and corporate takeovers. This possible new expansion of the mandate – which would serve as a stop-gap measure until Congress imposes tighter restrictions on Chinese investments – is being pushed by members of Congress, and those in U.S. President Donald Trump’s administration who worry about theft of intellectual property and technology transfer to China, according to four people familiar with the matter.

Artificial intelligence, in which machines imitate intelligent human behavior, is a particular area of interest because of the technology’s potential for military usage, they said. Other areas of interest for such new oversight include semiconductors and autonomous vehicles, they added.

These considerations are in early stages, so it remains unclear if they will move forward, and which informal corporate relationships this new initiative would scrutinize.

Any broad effort to sever relationships between Chinese and American tech companies – even temporarily – could have dramatic effects across the industry. Major American technology companies, including Advanced Micro Devices Inc, Qualcomm Inc, Nvidia Corp and IBM, have activities in China ranging from research labs to training initiatives, often in collaboration with Chinese companies and institutions that are major customers.

Top talent in areas including artificial intelligence and chip design also flows freely among companies and universities in both countries.

The nature of informal business relationships varies widely.

For example, when U.S. chipmaker Nvidia Corp – the leader in AI hardware – unveiled a new graphics processing unit that powers data centers, video games and cryptocurrency mining last year, it gave away samples to 30 artificial intelligence scientists, including three who work with China’s government, according to Nvidia.

For a company like Nvidia, which gets a fifth of its business from China, the giveaway was business as usual. It has several arrangements to train local scientists and develop technologies there that rely on its chips. Offering early access helps Nvidia tailor products so it can sell more.

The U.S. government could nix this sort of cooperation through an executive order from Trump by invoking the International Emergency Economic Powers Act. Such a move would unleash sweeping powers to stop or review informal corporate partnerships between a U.S. and Chinese company, any Chinese investment in a U.S. technology company or the Chinese purchases of real estate near sensitive U.S. military sites, the sources said.

“I don’t see any alternative to having a stronger (regulatory) regime because the end result is, without it, the Chinese companies are going to get stronger,” said one of the sources, who is advising U.S. lawmakers on efforts to revise and toughen U.S. foreign investment rules. “They are going to challenge our companies in 10 or 15 years.”

James Lewis, a former Foreign Service officer with the U.S. Department of State who is now with the Center for Strategic and International Studies, said that if the emergency act were invoked, U.S. government officials including those in the Treasury Department could use it “to catch anything they want” that currently falls outside the scope of the regulatory regime.

A White House official said that they do not comment on speculation about internal administration policy discussions, but added “we are concerned about Made in China 2025, particularly relevant in this case is its targeting of industries like AI.”

Made in China 2025 is an industrial plan outlining China’s ambition to become a market leader in 10 key sectors including semiconductors, robotics, drugs and devices, and smart green cars.

Last month, the White House outlined new import tariffs that were largely directed at China for what Trump described as “intellectual property theft.” That prompted Chinese President Xi Jinping’s government to retaliate with sanctions against the United States.

Those moves followed proposed legislation that would toughen foreign investment rules overseen by the Committee on Foreign Investment in the United States (CFIUS), by giving the committee – made up of representatives from various U.S. government agencies – purview over joint ventures that involve “critical technology”.

Republican and Democratic lawmakers who put forth the proposal in November said changes are aimed at China.

Whereas an overhauled CFIUS would likely review deals that are relevant to national security and involve foreign ownership, informal partnerships are likely to be regulated by revised export controls when they come into effect, sources said.

To be sure, sources said the Trump administration could change its mind about invoking the emergency act. They added that some within the Treasury Department are also lukewarm about invoking the emergency act as they preferred to focus on passing the revised rules for CFIUS.

FOCUS ON AI

Chinese and U.S. companies are widely believed among analysts to be locked in a two-way race to become the world’s leader in AI. While U.S. tech giants such as Alphabet Inc’s Google are in the lead, Chinese firms like Internet services provider Baidu Inc have made significant strides, according to advisory firm Eurasia Group.

As for U.S. chipmakers, few are as synonymous with the technology as Nvidia, one of the world’s top makers of the highly complex chips that power AI machines.

There is no evidence that Nvidia’s activities represent a threat to national security by, for instance, offering access to trade secrets such as how to make a graphics processing unit. Nvidia also said it does not have joint ventures in China.

In a statement, Nvidia said its collaborations in China – including training Chinese scientists and giving Chinese companies such as telecom provider Huawei Technologies Co Ltd early access to some of its latest technology – are only intended to get feedback on the chips it sells there.

“We are extremely protective of our proprietary technology and know-how,” Nvidia said. “We don’t give any company, anywhere in the world, the core differentiating technology.”

Qualcomm did not respond to requests for comment, while Advanced Micro Devices and IBM declined to comment.

Nvidia is far from being the only U.S. tech giant, much less the only chipmaker, that lends expertise to China. But it is clearly in the sights of the Chinese. When the country’s Ministry of Science and Technology solicited pitches for research projects last year, one of the listed objectives was to create a chip 20 times faster than Nvidia’s.

“Five years ago, this might not be a concern,” said Lewis, “but it’s a concern now because of the political and technological context.”

(Additional reporting by Diane Bartz in WASHINGTON; Editing by Lauren LaCapra and Edward Tobin)

Facebook to expand artificial intelligence to help prevent suicide

A 3D plastic representation of the Facebook logo is seen in this photo illustration

By David Ingram

SAN FRANCISCO (Reuters) – Facebook Inc will expand its pattern recognition software to other countries after successful tests in the U.S. to detect users with suicidal intent, the world’s largest social media network said on Monday.

Facebook began testing the software in the United States in March, when the company started scanning the text of Facebook posts and comments for phrases that could be signals of an impending suicide.

Facebook has not disclosed many technical details of the program, but the company said its software searches for certain phrases that could be clues, such as the questions “Are you ok?” and “Can I help?”

If the software detects a potential suicide, it alerts a team of Facebook workers who specialize in handling such reports. The system suggests resources to the user or to friends of the person such as a telephone help line. Facebook workers sometimes call local authorities to intervene.
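
Facebook has not published the model or rules it uses; a minimal sketch of phrase-based flagging along the lines described, with the matching logic and threshold assumed for illustration:

```python
# Minimal, hypothetical sketch of phrase-based flagging as described above.
# The threshold and matching logic are assumptions; Facebook's actual system is not public.
CONCERN_PHRASES = ["are you ok", "can i help"]  # phrases cited in the article

def flag_for_review(comments: list[str]) -> bool:
    """Flag a post for the specialist review team if friends' comments contain concern phrases."""
    hits = sum(
        1
        for comment in comments
        for phrase in CONCERN_PHRASES
        if phrase in comment.lower()
    )
    return hits >= 1  # assumed threshold

print(flag_for_review(["Are you OK?", "Can I help?"]))   # True -> route to reviewers
print(flag_for_review(["Congrats on the new job!"]))     # False
```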

Guy Rosen, Facebook’s vice president for product management, said the company was beginning to roll out the software outside the United States because the tests have been successful. During the past month, he said, first responders checked on people more than 100 times after Facebook software detected suicidal intent.

Facebook said it tries to have specialist employees available at any hour to call authorities in local languages.

“Speed really matters. We have to get help to people in real time,” Rosen said.

Last year, when Facebook launched live video broadcasting, videos proliferated of violent acts including suicides and murders, presenting a threat to the company’s image. In May Facebook said it would hire 3,000 more people to monitor videos and other content.

Rosen did not name the countries where Facebook was deploying the software, but he said it would eventually be used worldwide except in the European Union due to sensitivities, which he declined to discuss.

Other tech firms also try to prevent suicides. Google’s search engine displays the phone number for a suicide hot line in response to certain searches.

Facebook knows lots about its 2.1 billion users – data that it uses for targeted advertising – but in general the company has not been known previously to systematically scan conversations for patterns of harmful behavior.

One exception is its efforts to spot suspicious conversations between children and adult sexual predators. Facebook sometimes contacts authorities when its automated screens pick up inappropriate language.

But it may be more difficult for tech firms to justify scanning conversations in other situations, said Ryan Calo, a University of Washington law professor who writes about tech.

“Once you open the door, you might wonder what other kinds of things we would be looking for,” Calo said.

Rosen declined to say if Facebook was considering pattern recognition software in other areas, such as non-sex crimes.

(Reporting by David Ingram; Editing by Susan Thomas)