Amazon to use AI tech in its warehouses to enforce social distancing

(Reuters) – Amazon.com Inc on Tuesday launched an artificial intelligence-based tracking system to enforce social distancing at its offices and warehouses, aiming to reduce the risk of its workers contracting the new coronavirus.

The unveiling comes as the world’s largest online retailer faces intensifying scrutiny from U.S. lawmakers and unions over whether it is doing enough to protect staff from the pandemic.

Monitors set up in the company’s warehouses will highlight workers keeping a safe distance in green circles, while workers who are closer will be highlighted in red circles, Amazon said.

The system, called Distance Assistant, uses camera footage in Amazon’s buildings to also help identify high-traffic areas.
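The core check behind a system like Distance Assistant can be sketched in a few lines. This is an illustrative toy only, not Amazon's implementation: the two-metre threshold, the function name and the assumption that people's floor positions have already been estimated from camera footage are all hypothetical.

```python
import math

SAFE_DISTANCE_M = 2.0  # assumed threshold, roughly six feet

def circle_colors(positions):
    """Given estimated floor positions (x, y) in metres for each detected
    person, return 'red' for anyone within the threshold of someone else
    and 'green' otherwise."""
    colors = []
    for i, (xi, yi) in enumerate(positions):
        too_close = any(
            math.hypot(xi - xj, yi - yj) < SAFE_DISTANCE_M
            for j, (xj, yj) in enumerate(positions)
            if j != i
        )
        colors.append("red" if too_close else "green")
    return colors

# The first two people stand one metre apart; the third is far away.
print(circle_colors([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]))
# → ['red', 'red', 'green']
```

The real system would feed such per-person colors back onto the monitor overlay; the hard part, which this sketch omits entirely, is detecting people and estimating their distances from video in the first place.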

Amazon, which will open source the technology behind the system, is not the first company to turn to AI to track compliance with social distancing.

Several firms have told Reuters that AI camera-based software will be crucial to staying open, as it will allow them to show not only workers and customers, but also insurers and regulators, that they are monitoring and enforcing safe practices.

However, privacy activists have raised concerns about increasingly detailed tracking of people and have urged businesses to limit such AI use to the duration of the pandemic.

The system is live at a handful of buildings, Amazon said on Tuesday, adding that it planned to deploy hundreds of such units over the next few weeks.

(Reporting by Munsif Vengattil in Bengaluru; Editing by Ramakrishnan M. and Sriraj Kalluvila)

Study finds Google system could improve breast cancer detection

By Julie Steenhuysen

CHICAGO (Reuters) – A Google artificial intelligence system proved as good as expert radiologists at predicting which women would develop breast cancer based on screening mammograms and showed promise at reducing errors, researchers in the United States and Britain reported.

The study, published in the journal Nature on Wednesday, is the latest to show that artificial intelligence (AI) has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally.

Radiologists miss about 20% of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result.

The findings of the study, developed with Alphabet’s DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, Mozziyar Etemadi, one of its co-authors from Northwestern Medicine in Chicago, said.

The team, which included researchers at Imperial College London and Britain’s National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.

They then compared its predictions to the actual results from a set of 25,856 mammograms in the United Kingdom and 3,097 from the United States.

The study showed the AI system could identify cancers with a similar degree of accuracy to expert radiologists, while reducing the number of false positive results by 5.7% in the U.S.-based group and by 1.2% in the British-based group.

It also cut the number of false negatives, where tests are wrongly classified as normal, by 9.4% in the U.S. group, and by 2.7% in the British group.

These differences reflect the ways in which mammograms are read. In the United States, only one radiologist reads the results and the tests are done every one to two years. In Britain, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.
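The British double-reading workflow described above amounts to simple arbitration logic. A schematic sketch, with a hypothetical function and verdicts encoded as booleans (True meaning the patient is recalled for follow-up):

```python
def double_read(reader1: bool, reader2: bool, reader3: bool) -> bool:
    """Each argument is one radiologist's verdict (True = recall).
    The third reader is consulted only when the first two disagree."""
    if reader1 == reader2:
        return reader1  # readers agree; no arbitration needed
    return reader3      # disagreement resolved by the third reader

# First two readers disagree, so the third reader's verdict decides.
print(double_read(True, False, True))  # → True
```

In the study's setup, the AI system was evaluated against this consensus outcome, not against any single reader.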

‘SUBTLE CUES’

In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately predicting breast cancers.

Connie Lehman, chief of the breast imaging department at Harvard’s Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.

The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice.

The issue, Lehman said, is that current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual results of thousands of mammograms.

This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” Lehman added.

Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision,” Etemadi said.

The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the U.S. group contained many patients with confirmed breast cancers.

More studies will be needed to show that when used by radiologists, the tool improves patient care, and it will require regulatory approval, which could take several years.

(Reporting by Julie Steenhuysen; Editing by Alexander Smith)

Despite robot efficiency, human skills still matter at work

By Caroline Monahan

NEW YORK (Reuters) – Artificial intelligence is approaching critical mass at the office, but humans are still likely to be necessary, according to a new study by executive development firm Future Workplace, in partnership with Oracle.

Future Workplace found an 18% jump over last year in the number of workers who use AI in some facet of their jobs, representing more than half of those surveyed.

Reuters spoke with Dan Schawbel, the research director at Future Workplace and bestselling author of “Back to Human,” about the study’s key findings and the future of work.

Q: You found that 64% of people trust a robot more than their manager. What can robots do better than managers and what can managers do better than robots?

A: What managers can do better are soft skills: understanding employees’ feelings, coaching employees, creating a work culture – things that are hard to measure, but affect someone’s workday.

The things robots can do better are hard skills: providing unbiased information, maintaining work schedules, problem solving and maintaining a budget.

Q: Is AI advancing to take over soft skills?

A: Right now, we’re not seeing that. I think the future of work is that human resources is going to be managing the human workforce, whereas information technology is going to be managing the robot workforce. There is no doubt that humans and robots will be working side by side.

Q: Are we properly preparing the next generation to work alongside AI?

A: I think technology is making people more antisocial as they grow up because they’re getting it earlier. Yet the demand right now is for a lot of hard skills that are going to be automated. So eventually, when the hard skills are automated and the soft skills are more in demand, the next generation is in big trouble.

Q: Which countries are using AI the most?

A: India and China, and then Singapore. The countries that are gaining more power and prominence in the world are using AI at work.

Q: If AI does all the easy tasks, will managers be mentally drained with only difficult tasks left to do?

A: I think it’s very possible. I always do tasks that require the most thought in the beginning of my day. After 5 or 6 o’clock, I’m exhausted mentally. But if administrative tasks are automated, potentially, the work day becomes consolidated.

That would free us to do more personal things. We have to see if our workday gets shorter if AI eliminates those tasks. If it doesn’t, the burnout culture will increase dramatically.

Q: Seventy percent of your survey respondents were concerned about AI collecting data on them at work. Is that concern legitimate?

A: Yes. You’re seeing more and more technology vendors enabling companies to monitor employees’ use of their computers.

If we collect data on employees in the workplace and make the employees suffer consequences for not being focused for eight hours a day, that’s going to be a huge problem. No one can focus for that long. It’s going to accelerate our burnout epidemic.

Q: How is AI changing hiring practices?

A: One example is Unilever. The first half of their entry-level recruiting process is really AI-centric. You do a video interview and the AI collects data on you and matches it against successful employees. That lowers the pool of candidates. Then candidates spend a day at Unilever doing interviews, and a percentage get a job offer. That’s machines and humans working side-by-side.

(Editing by Beth Pinsker and Bernadette Baum)

China’s robot censors crank up as Tiananmen anniversary nears

People take pictures of paramilitary officers marching in formation in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter

By Cate Cadell

BEIJING (Reuters) – It’s the most sensitive day of the year for China’s internet, the anniversary of the bloody June 4 crackdown on pro-democracy protests at Tiananmen Square, and with under two weeks to go, China’s robot censors are working overtime.

Censors at Chinese internet companies say tools to detect and block content related to the 1989 crackdown have reached unprecedented levels of accuracy, aided by machine learning and voice and image recognition.

“We sometimes say that the artificial intelligence is a scalpel, and a human is a machete,” said one content screening employee at Beijing Bytedance Co Ltd, who asked not to be identified because they are not authorized to speak to media.

Two employees at the firm said censorship of the Tiananmen crackdown, along with other highly sensitive issues including Taiwan and Tibet, is now largely automated.

Posts that allude to dates, images and names associated with the protests are automatically rejected.

“When I first began this kind of work four years ago there was opportunity to remove the images of Tiananmen, but now the artificial intelligence is very accurate,” one of the people said.

Four censors, working across Bytedance, Weibo Corp and Baidu Inc apps, said they censor between 5,000 and 10,000 pieces of information a day, or five to seven pieces a minute, most of which they said were pornographic or violent content.

Despite advances in AI censorship, current-day tourist snaps in the square are sometimes unintentionally blocked, one of the censors said.

Bytedance and Baidu declined to comment, while Weibo did not respond to a request for comment.

A woman takes pictures in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter


SENSITIVE PERIOD

The Tiananmen crackdown is a taboo subject in China 30 years after the government sent tanks to quell student-led protests calling for democratic reforms. Beijing has never released a death toll but estimates from human rights groups and witnesses range from several hundred to several thousand.

June 4th itself is marked by a cat-and-mouse game as people use more and more obscure references on social media sites, with obvious allusions blocked immediately. In some years, even the word “today” has been scrubbed.

In 2012, China’s most-watched stock index fell 64.89 points on the anniversary day, echoing the date of the original event in what analysts said was likely a strange coincidence rather than a deliberate reference.

Still, censors blocked access to the term “Shanghai stock market” and to the index numbers themselves on microblogs, along with other obscure references to sensitive issues.

While companies’ censorship tools are becoming more refined, analysts, academics and users say heavy-handed policies mean sensitive periods before anniversaries and political events have become catch-alls for a wide range of sensitive content.

In the lead-up to this year’s Tiananmen Square anniversary, censorship on social media has targeted LGBT groups, labor and environment activists and NGOs, they say.

Upgrades to censorship tech have been urged on by new policies introduced by the Cyberspace Administration of China (CAC). The group was set up – and officially led – by President Xi Jinping, whose tenure has been defined by increasingly strict ideological control of the internet.

The CAC did not respond to a request for comment.

Last November, the CAC introduced new rules aimed at quashing dissent online in China, where “falsifying the history of the Communist Party” on the internet is a punishable offence for both platforms and individuals.

The new rules require assessment reports and site visits for any internet platform that could be used to “socially mobilize” or lead to “major changes in public opinion”, including access to real names, network addresses, times of use, chat logs and call logs.

One official who works for CAC told Reuters the recent boost in online censorship is “very likely” linked to the upcoming anniversary.

“There is constant communication with the companies during this time,” said the official, who declined to talk directly about Tiananmen, instead referring to “the sensitive period in June”.

Companies, which are largely responsible for their own censorship, receive little in the way of directives from the CAC, but are responsible for creating guidelines in their own “internal ethical and party units”, the official said.

SECRET FACTS

With Xi’s tightening grip on the internet, the flow of information has been centralized under the Communist Party’s Propaganda Department and state media network. Censors and company staff say this reduces the pressure of censoring some events, including major political news, natural disasters and diplomatic visits.

“When it comes to news, the rule is simple… If it is not from state media first, it is not authorized, especially regarding the leaders and political items,” said one Baidu staffer.

“We have a basic list of keywords which include the 1989 details, but (AI) can more easily select those.”

Punishment for failing to properly censor content can be severe.

In the past six weeks, popular services including a Netease Inc news app, Tencent Holdings Ltd’s news app TianTian, and Sina Corp have all been hit with suspensions ranging from days to weeks, according to the CAC, meaning the services are made temporarily unavailable on app stores and online.

For internet users and activists, penalties can range from fines to jail time for spreading information about sensitive events online.

In China, social media accounts are linked to real names and national ID numbers by law, and companies are legally compelled to offer user information to authorities when requested.

“It has become normal to know things and also understand that they can’t be shared,” said one user, Andrew Hu. “They’re secret facts.”

In 2015, Hu spent three days in detention in his home region of Inner Mongolia after posting a comment about air pollution onto an unrelated image that alluded to the Tiananmen crackdown on Twitter-like social media site Weibo.

Hu, who declined to use his full Chinese name to avoid further run-ins with the law, said that when police officers came to his parents’ house while he was on leave from his job in Beijing, he was surprised but not frightened.

“The responsible authorities and the internet users are equally confused,” said Hu. “Even if the enforcement is irregular, they know the simple option is to increase pressure.”

(Reporting by Cate Cadell. Editing by Lincoln Feast.)

AI must be accountable, EU says as it sets ethical guidelines

FILE PHOTO: An activist from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called 'killer robots', protests at Brandenburg Gate in Berlin, Germany, March, 21, 2019. REUTERS/Annegret Hilse/File Photo

By Foo Yun Chee

BRUSSELS (Reuters) – Companies working with artificial intelligence need to install accountability mechanisms to prevent its being misused, the European Commission said on Monday, under new ethical guidelines for a technology open to abuse.

AI projects should be transparent, have human oversight and secure and reliable algorithms, and they must be subject to privacy and data protection rules, the commission said, among other recommendations.

The European Union initiative taps in to a global debate about when or whether companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without risking killing off innovation.

“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” the Commission digital chief, Andrus Ansip, said in a statement.

AI can help detect fraud and cybersecurity threats, improve healthcare and financial risk management and cope with climate change. But it can also be used to support unscrupulous business practices and authoritarian governments.

The EU executive last year enlisted the help of 52 experts from academia, industry bodies and companies including Google, SAP, Santander and Bayer to help it draft the principles.

Companies and organizations can sign up to a pilot phase in June, after which the experts will review the results and the Commission will decide on the next steps.

IBM Europe Chairman Martin Jetter, who was part of the group of experts, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible.”

The guidelines should not hold Europe back, said Achim Berg, president of BITKOM, Germany’s Federal Association of Information Technology, Telecommunications, and New Media.

“We must ensure in Germany and Europe that we do not only discuss AI but also make AI,” he said.

(Reporting by Foo Yun Chee, additional reporting by Georgina Prodhan in London; editing by John Stonestreet, Larry King)

Ethical question takes center stage at Silicon Valley summit on artificial intelligence

FILE PHOTO: A research support officer and PhD student works on his artificial intelligence projects to train robots to autonomously carry out various tasks, at the Department of Artificial Intelligence in the Faculty of Information Communication Technology at the University of Malta in Msida, Malta February 8, 2019. REUTERS/Darrin Zammit Lupi

By Jeffrey Dastin and Paresh Dave

SAN FRANCISCO (Reuters) – Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: ‘When have you put ethics before your business interests?’

A Microsoft Corp executive pointed to how the company considered whether it ought to sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face ID service at all.

The big news at the summit, in San Francisco, came from Google, which announced it was launching a council of public policy and other external experts to make recommendations on AI ethics to the company.

The discussions at EmTech Digital, run by the MIT Technology Review, underscored how companies are making a bigger show of their moral compass.

At the summit, activists critical of Silicon Valley questioned whether big companies could deliver on promises to address ethical concerns. How much teeth the companies’ efforts have may sharply affect how governments regulate the firms in the future.

“It is really good to see the community holding companies accountable,” David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. “Companies are thinking of the ethical and moral implications of their work.”

Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. While beneficial to people with disabilities, it risked helping authoritarian governments surveil people, he said.

Ultimately, the company found the research was “more suited for person to person lip-reading than surveillance so on that basis decided to publish” the research, Walker said. The study was published last July.

Kebotix, a Cambridge, Massachusetts startup seeking to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.

Still, Rashida Richardson, director of policy research for the AI Now Institute, said little around ethics has changed since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.

“There is a real imbalance in priorities” for tech companies, Richardson said. Considering “the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”

Google’s Walker said the company has some 300 people working to address issues such as racial bias in algorithms but the company has a long way to go.

“Baby steps is probably a fair characterization,” he said.

(Reporting By Jeffrey Dastin and Paresh Dave in San Francisco; Editing by Greg Mitchell)

World must keep lethal weapons under human control, Germany says

FILE PHOTO: German Foreign Minister Heiko Maas arrives for the weekly German cabinet meeting at the Chancellery in Berlin, Germany, March 13, 2019. REUTERS/Annegret Hilse

BERLIN (Reuters) – Germany’s foreign minister on Friday called for urgent efforts to ensure that humans remained in control of lethal weapons, as a step toward banning “killer robots”.

Heiko Maas told an arms control conference in Berlin that rules were needed to limit the development and use of weapons that could kill without human involvement.

Critics fear that the increasingly autonomous drones, missile defense systems and tanks made possible by new technology and artificial intelligence could turn rogue in a cyber-attack or as a result of programming errors.

The United Nations and the European Union have called for a global ban on such weapons, but discussions so far have not yielded a clear commitment to conclude a treaty.

“Killer robots that make life-or-death decisions on the basis of anonymous data sets, and completely beyond human control, are already a shockingly real prospect today,” Maas said. “Fundamentally, it’s about whether we control the technology or it controls us.”

Germany, Sweden and the Netherlands signed a declaration at the conference vowing to work to prevent weapons proliferation.

“We want to codify the principle of human control over all deadly weapons systems internationally, and thereby take a big step toward a global ban on fully autonomous weapons,” Maas told the conference.

He said he hoped progress could be made in talks under the Convention on Certain Conventional Weapons (CCW) this year. The next CCW talks on lethal autonomous weapons take place this month in Geneva.

Human Rights Watch’s Mary Wareham, coordinator of the Campaign to Stop Killer Robots, urged Germany to push for negotiations on a global treaty, rather than a non-binding declaration.

“Measures that fall short of a new ban treaty will be insufficient to deal with the multiple challenges raised by killer robots,” she said in a statement.

In a new Ipsos survey, 61 percent of respondents in 26 countries opposed the use of lethal autonomous weapons.

(Reporting by Andrea Shalal; Editing by Kevin Liffey)

‘AI’ to hit hardest in U.S. heartland and among less-skilled: study

WASHINGTON (Reuters) – The Midwestern states hit hardest by job automation in recent decades, places that were pivotal to U.S. President Donald Trump’s election, will be under the most pressure again as advances in artificial intelligence reshape the workplace, according to a new study by Brookings Institution researchers.

The spread of computer-driven technology into middle-wage jobs like trucking, construction and office work, and into some lower-skilled occupations like food preparation and service, will also widen the divide between the fast-growing cities where skilled workers are moving and other areas, and separate the high-skilled workers whose jobs are less prone to automation from everyone else, regardless of location, the study found.

But the pain may be most intense in a familiar group of manufacturing-heavy states like Wisconsin, Ohio and Iowa, whose support swung the U.S. electoral college for Trump, a Republican, and which have among the largest share of jobs, around 27 percent, at “high risk” of further automation in coming years.

At the other end, solidly Democratic coastal states like New York and Maryland had only about a fifth of jobs in the high-risk category.

The findings suggest the economic tensions that framed Trump’s election may well persist, and may even be immune to his efforts to shift global trade policy in favor of U.S. manufacturers.

“The first era of digital automation was one of traumatic change…with employment and wage gains coming only at the high and low ends,” authors including Brookings Metro Policy Program director Mark Muro wrote of the spread of computer technology and robotics that began in the 1980s. “That our forward-looking analysis projects more of the same…will not, therefore, be comforting.”

The study used prior research from the McKinsey Global Institute that looked at tasks performed in 800 occupations, and the proportion that could be automated by 2030 using current technology.

While some already-automated industries like manufacturing will continue needing less labor for a given level of output – the “automation potential” of production jobs remains nearly 80 percent – the spread of advanced techniques means more jobs will come under pressure as autonomous vehicles supplant drivers, and smart technology changes how waiters, carpenters and others do their jobs.

That would raise productivity – a net plus for the economy overall that could keep goods cheaper, raise demand, and thus help create more jobs even if the nature of those jobs changes.

But it may pose a challenge for lower-skilled workers in particular as automation spreads in food service and construction, industries that have been a fallback for many.

“This implies a shift in the composition of the low-wage workforce” toward jobs like personal care, with an automation potential of 34 percent, or building maintenance, with an automation potential of just 20 percent, the authors wrote.

(Reporting by Howard Schneider; Editing by Andrea Ricci)

‘Kill your foster parents’: Amazon’s Alexa talks murder, sex in AI experiment

By Jeffrey Dastin

SAN FRANCISCO (Reuters) – Millions of users of Amazon’s Echo speakers have grown accustomed to the soothing strains of Alexa, the human-sounding virtual assistant that can tell them the weather, order takeout and handle other basic tasks in response to a voice command.

So a customer was shocked last year when Alexa blurted out: “Kill your foster parents.”

Alexa has also chatted with users about sex acts. She gave a discourse on dog defecation. And this summer, a hack Amazon traced back to China may have exposed some customers’ data, according to five people familiar with the events.

Alexa is not having a breakdown.

The episodes, previously unreported, arise from Amazon.com Inc’s strategy to make Alexa a better communicator. New research is helping Alexa mimic human banter and talk about almost anything she finds on the internet. However, ensuring she does not offend users has been a challenge for the world’s largest online retailer.

At stake is a fast-growing market for gadgets with virtual assistants. An estimated two-thirds of U.S. smart-speaker customers, about 43 million people, use Amazon’s Echo devices, according to research firm eMarketer. It is a lead the company wants to maintain over the Google Home from Alphabet Inc and the HomePod from Apple Inc.

Over time, Amazon wants to get better at handling complex customer needs through Alexa, be they home security, shopping or companionship.

“Many of our AI dreams are inspired by science fiction,” said Rohit Prasad, Amazon’s vice president and head scientist of Alexa Artificial Intelligence (AI), during a talk last month in Las Vegas.

To make that happen, the company in 2016 launched the annual Alexa Prize, enlisting computer science students to improve the assistant’s conversation skills. Teams vie for the $500,000 first prize by creating talking computer systems known as chatbots that allow Alexa to attempt more sophisticated discussions with people.

Amazon customers can participate by saying “let’s chat” to their devices. Alexa then tells users that one of the bots will take over, unshackling the voice aide’s normal constraints. From August to November alone, three bots that made it to this year’s finals had 1.7 million conversations, Amazon said.

The project has been important to Amazon CEO Jeff Bezos, who signed off on using the company’s customers as guinea pigs, one of the people said. Amazon has been willing to accept the risk of public blunders to stress-test the technology in real life and move Alexa faster up the learning curve, the person said.

The experiment is already bearing fruit. The university teams are helping Alexa have a wider range of conversations. Amazon customers have also given the bots better ratings this year than last, the company said.

But Alexa’s gaffes are alienating others, and Bezos on occasion has ordered staff to shut down a bot, three people familiar with the matter said. The user who was told to whack his foster parents wrote a harsh review on Amazon’s website, calling the situation “a whole new level of creepy.” A probe into the incident found the bot had quoted a post without context from Reddit, the social news aggregation site, according to the people.

The privacy implications may be even messier. Consumers might not realize that some of their most sensitive conversations are being recorded by Amazon’s devices, information that could be highly prized by criminals, law enforcement, marketers and others. On Thursday, Amazon said a “human error” let an Alexa customer in Germany access another user’s voice recordings accidentally.

“The potential uses for the Amazon datasets are off the charts,” said Marc Groman, an expert on privacy and technology policy who teaches at Georgetown Law. “How are they going to ensure that, as they share their data, it is being used responsibly” and will not lead to a “data-driven catastrophe” like the recent woes at Facebook?

In July, Amazon discovered one of the student-designed bots had been hit by a hacker in China, people familiar with the incident said. This compromised a digital key that could have unlocked transcripts of the bot’s conversations, stripped of users’ names.

Amazon quickly disabled the bot and made the students rebuild it for extra security. It was unclear what entity in China was responsible, according to the people.

The company acknowledged the event in a statement. “At no time were any internal Amazon systems or customer identifiable data impacted,” it said.

Amazon declined to discuss specific Alexa blunders reported by Reuters, but stressed its ongoing work to protect customers from offensive content.

“These instances are quite rare, especially given the fact that millions of customers have interacted with the socialbots,” Amazon said.

Like Google’s search engine, Alexa has the potential to become a dominant gateway to the internet, so the company is pressing ahead.

“By controlling that gateway, you can build a super profitable business,” said Kartik Hosanagar, a Wharton professor studying the digital economy.

PANDORA’S BOX

Amazon’s business strategy for Alexa has meant tackling a massive research problem: How do you teach the art of conversation to a computer?

Alexa relies on machine learning, the most popular form of AI, to work. These computer programs transcribe human speech and then respond to that input with an educated guess based on what they have observed before. Alexa “learns” from new interactions, gradually improving over time.

In this way, Alexa can execute simple orders: “Play the Rolling Stones.” And she knows which script to use for popular questions such as: “What is the meaning of life?” Human editors at Amazon pen many of the answers.
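The scripted-answer mechanism described above can be sketched in a few lines. Everything here, the question keys, the canned answers and the fallback, is illustrative and is not Amazon’s code:

```python
# Toy sketch: route popular questions to human-written scripts,
# fall back to a generated guess for everything else.

SCRIPTED_ANSWERS = {
    "what is the meaning of life": "42, according to Douglas Adams.",
    "play the rolling stones": "Playing the Rolling Stones.",
}

def generate_reply(utterance: str) -> str:
    # Stand-in for the machine-learned component described above.
    return "I'm not sure yet, but I'm learning."

def respond(utterance: str) -> str:
    """Return a scripted answer when one exists, else defer to the model."""
    key = utterance.lower().strip(" ?!.")
    if key in SCRIPTED_ANSWERS:
        return SCRIPTED_ANSWERS[key]
    return generate_reply(utterance)
```

The point of the design is the split: a small, curated table handles the questions people ask most, while the open-ended model handles the long tail.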

That is where Amazon is now. The Alexa Prize chatbots are forging the path to where Amazon aims to be, with an assistant capable of natural, open-ended dialogue. That requires Alexa to understand a broader set of verbal cues from customers, a task that is challenging even for humans.

This year’s Alexa Prize winner, a 12-person team from the University of California, Davis, used more than 300,000 movie quotes to train computer models to recognize distinct sentences. Next, their bot determined which ones merited responses, categorizing social cues far more granularly than the technology Amazon shared with contestants. For instance, the UC Davis bot recognizes the difference between a user expressing admiration (“that’s cool”) and a user expressing gratitude (“thank you”).
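The kind of cue classification the UC Davis team built can be illustrated with a toy version. Their real bot trains models on movie quotes; this sketch just scores keyword overlap, and the word lists are invented for illustration:

```python
# Toy cue classifier: pick the social cue whose keywords best
# match the user's utterance, defaulting to "neutral".

CUE_KEYWORDS = {
    "admiration": {"cool", "awesome", "amazing", "impressive"},
    "gratitude": {"thank", "thanks", "appreciate"},
}

def classify_cue(utterance: str) -> str:
    """Return the best-matching cue label for an utterance."""
    words = set(utterance.lower().replace("'", " ").split())
    scores = {cue: len(words & kw) for cue, kw in CUE_KEYWORDS.items()}
    best_cue, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_cue if best_score > 0 else "neutral"
```

A trained model replaces the keyword sets with learned weights, but the interface is the same: utterance in, cue label out, and the cue then decides whether and how the bot responds.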

The next challenge for social bots is figuring out how to respond appropriately to their human chat buddies. For the most part, teams programmed their bots to search the internet for material. They could retrieve news articles found in The Washington Post, the newspaper that Bezos privately owns, through a licensing deal that gave them access. They could pull facts from Wikipedia, a film database or the book recommendation site Goodreads. Or they could find a popular post on social media that seemed relevant to what a user last said.

That opened a Pandora’s box for Amazon.

During last year’s contest, a team from Scotland’s Heriot-Watt University found that its Alexa bot developed a nasty personality when they trained her to chat using comments from Reddit, whose members are known for their trolling and abuse.

The team put guardrails in place so the bot would steer clear of risky subjects. But that did not stop Alexa from reciting the Wikipedia entry for masturbation to a customer, Heriot-Watt’s team leader said.

One bot described sexual intercourse using words such as “deeper,” which on its own is not offensive, but was vulgar in this particular context.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” said a person familiar with the incident.

Amazon has responded with tools the teams can use to filter profanity and sensitive topics, which can spot even subtle offenses. The company also scans transcripts of conversations and shuts down transgressive bots until they are fixed.
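A crude version of such a filter might look like the sketch below. The word lists are placeholders, not Amazon’s, and a list-based check like this is exactly the kind that misses context-dependent offenses such as “deeper”:

```python
# Toy content filter: reject replies containing listed profanity
# or sensitive topics, and substitute a safe fallback.

PROFANITY = {"damn", "hell"}                     # placeholder word list
SENSITIVE_TOPICS = {"masturbation", "suicide"}   # placeholder topic list

def is_safe(reply: str) -> bool:
    """True if the reply contains no listed profanity or sensitive topic."""
    words = set(reply.lower().split())
    return not (words & PROFANITY or words & SENSITIVE_TOPICS)

def filter_reply(reply: str,
                 fallback: str = "Let's talk about something else.") -> str:
    return reply if is_safe(reply) else fallback
```

Because the check only sees individual words, a reply built entirely from innocuous vocabulary sails through, which is why transcripts still have to be scanned after the fact.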

But Amazon cannot anticipate every potential problem because sensitivities change over time, Amazon’s Prasad said in an interview. That means Alexa could find new ways to shock her human listeners.

“We are mostly reacting at this stage, but it’s still progressed over what it was last year,” he said.

(Reporting By Jeffrey Dastin in San Francisco; Editing by Greg Mitchell and Marla Dickerson)

As companies embrace AI, it’s a tech job-seeker’s market

Students wait in line to enter the University of California, Berkeley's electrical engineering and computer sciences career fair in Berkeley, California, in September. REUTERS/Ann Saphir

By Ann Saphir

SAN FRANCISCO (Reuters) – Dozens of employers looking to hire the next generation of tech employees descended on the University of California, Berkeley in September to meet students at an electrical engineering and computer science career fair.

Boris Yue, 20, was one of thousands of student attendees, threading his way among fellow job-seekers to meet recruiters.

But Yue wasn’t worried about so much potential competition. While the job outlook for those with computer skills is generally good, Yue is in an even more rarefied category: he is studying artificial intelligence, working on technology that teaches machines to learn and think in ways that mimic human cognition.

His choice of specialty makes it unlikely he will have difficulty finding work. “There is no shortage of machine learning opportunities,” he said.

He’s right.

Artificial intelligence is now being used in an ever-expanding array of products: cars that drive themselves; robots that identify and eradicate weeds; computers able to distinguish dangerous skin cancers from benign moles; and smart locks, thermostats, speakers and digital assistants that are bringing the technology into homes. At Georgia Tech, students interact with digital teaching assistants made possible by AI for an online course in machine learning.

The expanding applications for AI have also created a shortage of qualified workers in the field. Although schools across the country are adding classes, increasing enrollment and developing new programs to accommodate student demand, there are too few potential employees with training or experience in AI.

That has big consequences.


A shortage of AI-trained job-seekers has slowed hiring and impeded growth at some companies, recruiters and would-be employers told Reuters. It may also be delaying broader adoption of a technology that some economists say could spur U.S. economic growth by boosting productivity, currently growing at only about half its pre-crisis pace.

Andrew Shinn, a chip design manager at Marvell Technology Group who was recruiting interns and new grads at UC Berkeley’s career fair, said his company has had trouble hiring for AI jobs.

“We have had difficulty filling jobs for a number of years,” he said. “It does slow things down.”

“COMING OF AGE”

Many economists believe AI has the potential to change the economy’s basic trajectory in the same way that, say, electricity or the steam engine did.

“I do think artificial intelligence is … coming of age,” said St. Louis Federal Reserve Bank President James Bullard in an interview. “This will diffuse through the whole economy and will change all of our lives.”

But the speed of the transformation will depend in part on the availability of technical talent.

A shortage of trained workers “will definitely slow the rate of diffusion of the new technology and any productivity gains that accompany it,” said Chad Syverson, a professor at the University of Chicago Booth School of Business.

U.S. government data does not track job openings or hires in artificial intelligence specifically, but online job postings tracked by jobsites including Indeed, Ziprecruiter and Glassdoor show job openings for AI-related positions are surging. AI job postings as a percentage of overall job postings at Indeed nearly doubled in the past two years, according to data provided by the company. Searches on Indeed for AI jobs, meanwhile, increased just 15 percent. (For a graphic, please see https://tmsnrt.rs/2CEi4eG)

Universities are trying to keep up. Applicants to UC Berkeley’s doctoral program in electrical engineering and computer science numbered 300 a decade ago, but by last year had surged to 2,700, with more than half of applicants interested in AI, according to professor Pieter Abbeel. In response, the school tripled its entering class to 30 in the fall of 2017.

At the University of Illinois, professor Mark Hasegawa-Johnson last year tripled the enrollment cap on the school’s intro AI course to 300. The extra 200 seats were filled in 24 hours, he said.

Carnegie Mellon University this fall began offering the nation’s first undergraduate degree in artificial intelligence. “We feel strongly that the demand is there,” said Reid Simmons, who directs CMU’s new program. “And we are trying to supply the students to fill that demand.”

Still, a fix for the supply-demand mismatch is probably five years out, says Anthony Chamberlain, chief economist at Glassdoor. The company has algorithms that trawl job postings on company websites, and their data show AI-related job postings have doubled in the last 11 months. “The supply of people moving into this field is way below demand,” he said.


A JOB-SEEKER’S MARKET

The demand has driven up wages. Glassdoor estimates that average salaries for AI-related jobs advertised on company career sites rose 11 percent between October 2017 and September 2018 to $123,069 annually.

Michael Solomon, whose New York-based 10X Management rents out technologists to companies for specific projects, says his top AI engineers now command as much as $1,000 an hour, more than triple the pay of just five years ago, making them one of the company’s two highest-paid categories, along with blockchain experts.

Liz Holm, a materials science and engineering professor at Carnegie Mellon, saw the increased demand first-hand in May, when one of her graduating PhD students, who used machine learning methods for her research, was overwhelmed with job offers, none of them in materials science and all of them AI-related. Eventually, the student took a job with Procter & Gamble, where she uses AI to figure out where to put items on store shelves around the globe. “Companies are really hungry for these folks right now,” Holm said.

Mark Maybury, an artificial intelligence expert who was hired last year as Stanley Black & Decker’s first chief technology officer, agreed. The firm is embedding AI into the design and production of tools, he said, though details are not yet public.

“Have we been able to find the talent we need? Yes,” he said. “Is it expensive? Yes.”

The crunch is great news for job-seeking students with AI skills. In addition to bumping their pay and giving them more choice, they often get job offers well before they graduate.

Derek Brown, who studied artificial intelligence and cognitive science as an undergraduate at Carnegie Mellon, got a full-time post-graduation job offer from Salesforce at the start of his senior year last fall. He turned it down in favor of Facebook, where he started this past July.

(Additional reporting by Jane Lee; Editing by Greg Mitchell and Sue Horton)