Robots appear to be at least one of the waves of the future. For example, some 23 million of Twitter’s user accounts are autonomous Twitterbots. Why? Apparently, they exist to conduct research, boost productivity, and provide entertainment. Others, however, have been designed with less than pure intentions, at times with the express goal of wreaking havoc.
So, where do the ethics lie here? And what happens as humans develop ever more complicated and sophisticated “robots”?
One possible way to preserve ethics in developing robots, according to Digital Trends, is to hard-code ethics into their operating systems. But as noted, ethics are subjective and not always black and white. It is the human creator of a particular machine who determines the ethics of that machine. Moreover, what seems like proper ethics when a robot is first brought online could change over time as community standards morph or evolve.
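To make the idea concrete, here is a minimal, purely illustrative sketch of the hard-coded approach. Every name in it (the forbidden-action list, the `HardCodedEthics` class, the `permits` method) is hypothetical and not drawn from any real robot operating system; the point is only that the rules are fixed by the creator at build time and cannot adapt afterward.

```python
# Hypothetical sketch: ethics baked in by the creator at build time.
# The rule set reflects the creator's own judgments and never changes,
# even if community standards later evolve.

FORBIDDEN_ACTIONS = {"harm_human", "deceive_user", "spread_disinformation"}

class HardCodedEthics:
    def permits(self, action: str) -> bool:
        # The machine simply consults the fixed list; it has no way to
        # revise these judgments in light of new norms or experience.
        return action not in FORBIDDEN_ACTIONS

ethics = HardCodedEthics()
print(ethics.permits("post_tweet"))             # True
print(ethics.permits("spread_disinformation"))  # False
```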
Another approach, also set out by Digital Trends, is to provide an ethical framework or set of ethical guidelines and then allow robots to learn their own ethics based on their own development and experiences. Here too there are potential problems: the opening for misconstruing appropriate morality is enormous. As an example, consider the “meltdown” of Microsoft’s Twitter AI, Tay, which reportedly was fooled into tweeting support for genocide while also making racist comments.
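By way of contrast, here is an equally hypothetical sketch of the learning approach and of the risk the Tay episode illustrates: a system that updates its moral judgments from the feedback it receives can be steered in the wrong direction by coordinated, bad-faith input. The scoring scheme and names below are invented purely for illustration.

```python
# Hypothetical sketch: the robot starts from a seed framework supplied by
# its creator but updates its judgments from feedback it receives "in the wild".

class LearnedEthics:
    def __init__(self):
        # Seed framework: initial approval scores per action.
        self.scores = {"friendly_reply": 1.0, "hateful_reply": -1.0}

    def feedback(self, action: str, signal: float) -> None:
        # Each piece of feedback nudges the learned score; nothing stops
        # a coordinated group from supplying malicious signals.
        self.scores[action] = self.scores.get(action, 0.0) + signal

    def permits(self, action: str) -> bool:
        return self.scores.get(action, 0.0) > 0

ethics = LearnedEthics()
print(ethics.permits("hateful_reply"))   # False under the seed framework

# A flood of bad-faith "positive" feedback flips the learned judgment.
for _ in range(30):
    ethics.feedback("hateful_reply", +0.1)
print(ethics.permits("hateful_reply"))   # True: the framework has been gamed
```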
And there is the conundrum of whether ethics is truly grounded in reason or in emotion. If ethics is, or should be, grounded in emotion, are robots even capable of experiencing genuine emotion such that they can properly make ethical decisions on that basis?
If we ultimately determine that robots and machines simply do not have the capacity to make decisions based on true ethics, who is to be deemed liable for the harm they cause or the laws they break based on their decisions? Are their designers, their ultimate creators, responsible? Or can the creators argue that the robots evolved on their own to the point that the robots’ decisions are too attenuated to trace causal liability back to their creators? Would Dr. Frankenstein be responsible for any future harms caused by his supposed “monster,” or would he be off the hook if his creation was taunted by others into taking actions that inadvertently put others at risk?
So many questions, and as yet, a dearth of definitive answers!