Let me first make clear that it is not their choice of cars. Instead, they share an opinion on the threat of artificial intelligence (AI). My first thought was: Elon Musk is pioneering self-driving cars, so how can he be concerned about AI? Bill Gates is a technologist who has always seen the value in the power of computing. Yet both men have raised issues about the ethical dilemmas of AI. They share a concern that AI presents a risk as robots overtake the knowledge capacity of their creators, with no safety or preventative measures in place to protect society.
Many of the tools that we all use, and even rely on, employ AI. Siri gets smarter the more it is used. Alexa understands speech and is the cornerstone of the smart home. Amazon predicts customers' buying choices based on past purchases. Netflix uses predictive technology to suggest movies. Pandora recommends songs based on your listening history.
The tools mentioned above support everyday life and do not seem to present ethical challenges. Most importantly, you can choose not to use any of them. So why do Gates and Musk see an ethical dilemma? AI expands the potential of technology without control. The risk is that technology will allow robots to make decisions we can't even anticipate. We need to do more than hope that robots are safe and ethical; we must ensure it.
The current situation with AI is most akin to the Wild West without a sheriff. In the Wild West, the sheriff provided safety and a sense of law. In AI, there are no guardrails, no law, and no ethical review. Lawlessness is a dangerous concept when people are in charge, but as machines learn, there is the risk that they will outthink the people who created them.
*The Malicious Use of Artificial Intelligence*, a report published in 2018, provided four recommendations to address the threat landscape:
- Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
- Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
- Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
- Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.
Now, many may consider the thought of a computer outthinking its programmer either alarmist or a positive goal of AI. Still, the possibility of a bad actor using AI for nefarious purposes is realistic. If there are no controls, no oversight, and no ethical review, the risks may be out of our hands. While this may seem unlikely, it is no more improbable than Elon Musk and Bill Gates agreeing on the risk of AI.
To continue the conversation, please join me for my webinar: "On the Horns of a Dilemma: Artificial Intelligence in Behavioral Health."