Google says it’s committed to ethical AI research. Its new artificial intelligence group is led by Dr. Demis Hassabis and includes computer scientists Yuliang Yang and Ray Kurzweil. Its ethical AI team includes senior staff from other parts of Google, including search, content, and user experience. The project’s stated goal is to build and run an artificially intelligent system that can help shape the future of human interaction, both online and offline.

Ethical AI by Google:

The project is in a testing stage, and Google has released a number of prototype programs to test their cognitive abilities and decision-making processes. Hassabis envisions an eventual outcome where computers can understand not only natural language but also the complex conversations going on in the world today. He envisions a time when computers can interpret not only spoken words but also non-verbal signals, such as facial expressions, to grasp not only what a person wants but also how they feel. Google plans to compete with Apple and Microsoft in this space, and that competition will inevitably raise ethical questions.

This project joins a long list of technologies Google has acquired or bet heavily on, including self-driving cars and Android mobile devices. Will ethical questions come into play with these and other technologies? The company says it is not planning to commercialize the technology on its own, but to partner with organizations that will. The way Google plans to tackle this pressing issue is to hire researchers who work at leading research labs.

Is Ethical AI a good thing?

One example is the variety of AI-powered tools and applications developed by ONPASSIVE. But will those researchers be biased according to the product they are testing or the funding source they work for? Google doesn’t say, but if you read between the lines, you can speculate. Many prominent academics and ethicists have criticized the company for buying technology that could easily be manipulated for profit. Will Google say that it will only use reputable and independent researchers to run its projects?

Will Ethical AI meet the requirements?

If it does, that may be a good thing. But what about those working for the competitors? Are their research programs going to be bias-free? Will Google say that it is not going to buy competitors’ research and then selectively use it in its own projects? That kind of selective appropriation could easily get the company into trouble, especially if it were to engage in activities such as spying on competitors.

Or is it? Google acknowledges that it needs to hire researchers with expertise in a wide variety of topics and fields, to conduct its own projects impartially. But it also recognizes that participating in political debate or publishing controversial scientific papers will not necessarily help its business goals.

Advantages of Ethical AI:

For Google to say that it will only use researchers with a demonstrated commitment to research integrity, accuracy, objectivity, and transparency would mean that any researcher who didn’t toe that line would automatically be deemed unreliable and untrustworthy. Is that something Google wants? Probably not. So maybe we shouldn’t look at this as an either-or proposition.

What’s more, Google has actually stated that it cannot promise to disclose all of its research-related activities. So maybe we should view the whole promise in that light. Google says it is committed to conducting its research with a certain level of transparency, but can it be expected to commit to sharing all of its research findings, even those that emerge during the course of a particular project? Not likely.

Limitations of Ethical AI:

So then, what does Google stand to gain by being so forthright about its research projects? It certainly claims a commitment to human health and safety, which is a noble commitment. However, we have no real way to gauge Google’s actual progress toward those ends. We do know that Google’s self-driving car project is facing some pretty significant roadblocks, and it’s unclear whether the problem is complexity, cost, or both.

That said, if you believe the oft-stated claim that Google’s self-driving car project is on the verge of achieving something truly groundbreaking, you might as well take Google’s word for it. After all, this is the company that was willing to pay Carnegie Mellon University millions to do the research. It also announced that it will release the source code for its Project Zero car to the public once the project is complete. And last month, it announced another self-driving car effort, Project Blue, which also competes with Carnegie Mellon’s work.


So what does Google mean when it claims it’s committed to ethical research and development? If Google has made a lot of money developing self-driving vehicles and plans to release the source code soon, it’s probably doing what it says it will: building the best vehicle on the market for the future of driving. If you’re investing in the future of AI research and development, Google’s statement makes some sense.