Ethical question takes center stage at Silicon Valley summit on artificial intelligence


By Jeffrey Dastin and Paresh Dave

SAN FRANCISCO (Reuters) – Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: “When have you put ethics before your business interests?”

A Microsoft Corp executive pointed to how the company considered whether it ought to sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face ID service at all.

The big news at the summit, held in San Francisco, came from Google, which announced it was launching a council of public policy and other external experts to make recommendations on AI ethics to the company.

The discussions at EmTech Digital, run by the MIT Technology Review, underscored how companies are making a bigger show of their moral compass.

At the summit, activists critical of Silicon Valley questioned whether big companies could deliver on promises to address ethical concerns. How much teeth the companies’ efforts have may sharply affect how governments regulate the firms in the future.

“It is really good to see the community holding companies accountable,” David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. “Companies are thinking of the ethical and moral implications of their work.”

Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. While beneficial to people with disabilities, it risked helping authoritarian governments surveil people, he said.

Ultimately, the company found the research was “more suited for person to person lip-reading than surveillance so on that basis decided to publish” the research, Walker said. The study was published last July.

Kebotix, a Cambridge, Massachusetts, startup seeking to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.

Still, Rashida Richardson, director of policy research for the AI Now Institute, said little around ethics has changed since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.

“There is a real imbalance in priorities” for tech companies, Richardson said. Considering “the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”

Google’s Walker said the company has some 300 people working to address issues such as racial bias in algorithms, but that the company has a long way to go.

“Baby steps is probably a fair characterization,” he said.

(Reporting by Jeffrey Dastin and Paresh Dave in San Francisco; Editing by Greg Mitchell)