Recently, I attended the SCCE's Compliance & Ethics Institute in Las Vegas. One of the keynote speakers was Amber Mac, a well-known speaker on business innovation, the internet of things, online safety, artificial intelligence (AI), and other topics. That morning, her keynote address was titled “Artificial Intelligence: A Day in Your Life in Compliance & Ethics.”
It was completely mind-blowing.
Her comments led me to a profound realization: ethics will be extremely important for AI and other emerging technologies as society integrates them into daily life. That integration is already underway in our homes and workplaces; “Alexa” might already be part of your family. This development is growing at an exponential rate, and there’s no slowing it down. In fact, Waymo (the self-driving subsidiary of Google parent Alphabet) is launching the first-ever commercial driverless car service next month. Yet have we stopped to consider whether an ethical “backbone” should be put in place to guide AI and all emerging technologies?
For example, a few years ago Microsoft released an AI chatbot on Twitter, a bot named Tay that would learn from the conversations it had. The goal was for the AI to grow progressively “smarter” as it chatted with regular people over the Internet. Instead, the project was an embarrassment. In no time, Tay was blurting out racist slurs, defending white supremacists, and even advocating genocide. How did this happen? The problem was that Tay’s learning was not supported by proper ethical guidance. Without grounding in basic concepts, such as the difference between truth and falsehood or the existence of racism, it was vulnerable to learning unethical speech and behavior.
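To make the failure mode concrete, here is a minimal sketch of the difference between a bot that absorbs everything it hears and one whose learning is gated by an ethical filter. Everything here (the `EchoBot` class, the `BLOCKLIST`) is invented for illustration; it is not how Tay actually worked, and a real system would use a trained content classifier rather than a word list.

```python
# Hypothetical illustration: a "learn from users" bot with and without
# an ethics gate. BLOCKLIST is a crude stand-in for a real classifier.
BLOCKLIST = {"slur", "genocide"}

class EchoBot:
    """Learns phrases from users and can repeat them later."""

    def __init__(self, filtered: bool):
        self.filtered = filtered
        self.learned = []

    def hear(self, phrase: str) -> None:
        # An unfiltered bot absorbs everything it is told.
        if self.filtered and any(w in phrase.lower() for w in BLOCKLIST):
            return  # refuse to learn flagged content
        self.learned.append(phrase)

naive, guarded = EchoBot(filtered=False), EchoBot(filtered=True)
for phrase in ["hello there", "repeat this slur"]:
    naive.hear(phrase)
    guarded.hear(phrase)

print(naive.learned)    # both phrases, including the harmful one
print(guarded.learned)  # only the benign phrase
```

The point of the sketch is not the word list, which is trivially evaded, but the architecture: without some gate between input and learning, the system's values are simply whatever its loudest users feed it.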
As another example, MIT researchers designed a multilingual online game called the “Moral Machine,” which let users around the world play while the researchers collected data on how people would want autonomous vehicles to resolve ethical dilemmas. In essence, players had to choose among possible outcomes of an unavoidable accident in which an autonomous vehicle’s brakes suddenly fail. Should the car swerve to avoid a group of pedestrians and kill the people in the car? Or should it stay its course, killing the pedestrians to spare those in the car? Does it matter whether the pedestrians are women, children, or older people? Whether they were crossing legally or jaywalking? Would their income or education matter? After seeing the results, the researchers argued that if we are going to let these vehicles on our streets, their operating systems must take ethical preferences into account.
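One way to picture what “taking ethical preferences into account” could mean in software is a scoring function whose weights encode a community's surveyed preferences. The sketch below is purely hypothetical: the factor names and weight values are invented for illustration and do not come from the Moral Machine study or from any real vehicle.

```python
# Hypothetical sketch: scoring two crash outcomes with weights that a
# survey like the Moral Machine might inform. All values are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    deaths: int
    jaywalking: bool   # were the victims crossing illegally?
    passengers: bool   # are the victims inside the vehicle?

def harm_score(o: Outcome, weights: dict) -> float:
    """Lower is better; weights encode surveyed ethical preferences."""
    score = o.deaths * weights["per_death"]
    if o.jaywalking:
        score *= weights["jaywalking_discount"]
    if o.passengers:
        score *= weights["passenger_multiplier"]
    return score

# Example weights: deaths dominate, jaywalking slightly discounts harm,
# passengers are weighted slightly above bystanders.
weights = {"per_death": 1.0, "jaywalking_discount": 0.8,
           "passenger_multiplier": 1.1}

swerve = Outcome(deaths=2, jaywalking=False, passengers=True)  # kills passengers
stay = Outcome(deaths=3, jaywalking=True, passengers=False)    # kills pedestrians

chosen = min((swerve, stay), key=lambda o: harm_score(o, weights))
print(chosen is swerve)  # swerving scores 2.2 vs. 2.4, so it is chosen
```

Notice that the entire moral content lives in the weights. Change `jaywalking_discount` or `passenger_multiplier` and the same code reaches the opposite decision, which is precisely why whose preferences get encoded is the hard question.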
This is where a universal code of ethics for AI and emerging technologies makes sense. As the MIT researchers suggested, there needs to be a global conversation on ethics for these technologies, with the input of each country and culture taken into consideration. Once this data is collected, the global community can come together and create a universal code of ethics for these technologies.
Unfortunately, establishing such a code will be harder than many people think. One problem is that ethical preferences differ across cultures, much as food tastes or political opinions do. Whose preference should be considered and enforced first? I suspect many compromises will be necessary, with one group’s preferences adopted over another’s. I cannot ignore those challenges. Yet despite them, I still believe a global conversation is the first step down the right path, and a necessary one. Eventually, I hope that discussion would lead to a universal code of ethics, a code that would be enforced, reviewed, and updated periodically.
Efforts are already happening around the world at companies such as Google and SAP, and in institutions such as the Data & Society Institute and the Institute of Electrical and Electronics Engineers (IEEE). In addition, artificial intelligence and emerging technologies have even been addressed by government bodies such as the U.S. Department of Transportation and the U.S. Department of Commerce.
These efforts are good, but they are not enough. I believe the global community must come together to discuss this matter. Indeed, I’m calling for a global discussion to develop a universal “Bible for AI.”