Artificial Intelligence: Are We Safe?

When we hear about artificial intelligence, we are often bombarded with visions of ultra-smart robots taking over the world, destroying humans or at least leaving us in the developmental dust. The good news, at the time of this writing, is that humans do not face that AI existential threat. The bad news, however, is that artificial intelligence nevertheless creates present and future safety concerns.

As accurately pointed out in a commentary recently posted on InformationWeek.com, the use of artificial intelligence raises the following risks:

  • AI can lead to privacy invasions;
  • Socioeconomic biases can be built into AI applications (whether intended or not);
  • AI algorithms may not be transparent or fully interpretable by humans;
  • Systems may not be put in place to hold the humans who create AI responsible and liable for algorithmic outcomes;
  • It is not clear that AI applications generally will be aligned with stakeholder values;
  • There is concern that AI-driven decision-making will not be throttled back when uncertainty is too great to support automated decisions;
  • It is unclear whether failsafe procedures will be implemented that allow humans to take back control when AI applications reach the limits of their competency (or simply are not working properly);
  • It is not certain that AI-driven applications will work in consistent, predictable patterns, free from unintended consequences;
  • Importantly, it is not known whether AI applications can be made impervious to adversarial attacks designed to exploit their vulnerabilities; and
  • We do not know whether AI algorithms ultimately will fail gracefully rather than catastrophically at the end of their useful lives.

This is quite the list of AI safety concerns, and of course, we can think of many more.

What is the bottom line?

The bottom line here is that we need not worry just yet about some distant future in which humans are the slaves of AI-robot masters. But the AI train has left the station: artificial intelligence is here and likely here to stay. So we need to focus, now and intently, on how to reduce and even eliminate AI safety risks.

Eric Sinrod (@EricSinrod on Twitter) is a partner in the San Francisco office of Duane Morris LLP, where he focuses on litigation matters of various types, including information technology and intellectual property disputes. To receive a weekly email link to Mr. Sinrod’s columns, please email him at ejsinrod@duanemorris.com with Subscribe in the Subject line. This column is prepared and published for informational purposes only and should not be construed as legal advice. The views expressed in this column are those of the author and do not necessarily reflect the views of the author’s law firm or its individual partners.
