Ethics in the Deep End

Programmers should not determine AI ethics, and the companies they work for even less so. A recent article posted on VentureBeat, “Ethical AI happens before you write the first line of code,” set off massive alarm bells in my mind. While the author’s intentions seem altruistic, the piece’s frightening naiveté only underscores the risks we face going forward.

The argument boils down to three steps: identify the company’s core values, set up a group to monitor ethical code writing, and consider AI’s impact on employees. Company values simply separate humans from personal responsibility, enabling activity that perverts the ethical and moral behavior of individuals. Rare is the public company that allows employees’ moral values to consistently trump shareholder value. An oversight committee sounds more like litigation cover than a body with real internal power. As for the final point, considering the impact on employees… well, many a business decision has been made to cut expenses by shedding employees as soon as technological advances made it possible.

As a former Wall Street analyst, I applauded those decisions because they allowed me to raise my EPS estimates and possibly upgrade a stock to Buy. I don’t know a CEO who doesn’t like the clapping of financial analysts in his ears. Nor did I come across many CEOs who didn’t chafe when an analyst dropped their stock to a Sell, pointing out a bloated expense structure.

A Biased Justice System

Enough of the moral vacuum on Wall Street. Let’s go deeper into inherent bias in our society. Humans do not understand most of their biases and lack the emotional granularity to even attempt to address them. Even if we understand a bias, often we choose to ignore it. Until we understand ourselves, we should tread very carefully when creating AI.

The current justice system in the U.S. offers an excellent example of the problems we face. For this breakdown I summarize points made by Dr. Lisa Feldman Barrett in her book How Emotions Are Made. Our judicial system focuses on justice and punishment (as opposed to restoring harmony), and rests on:

  1. the classical assumption of essentialism, which holds that emotions are hard-wired in our brains. Neuroscience offers significant evidence that this assumption is wrong.
  2. the assumption of a separation between rational and emotional thought, or “cognitive control.” Research by neuroscientists is also proving this assumption faulty.
  3. creation of the “reasonable person” in law, representing the norms of society. This legal construct results in the law being applied differently depending on emotion stereotypes based on gender, race, and sexual orientation.
  4. biology as an explanation for behavior, potentially releasing the person from responsibility. This consideration ignores the role of culture in shaping the individual, so the person is judged in a vacuum.
  5. a jury system that can determine the intent and remorse of the defendant. This is a fallacy: the more dissimilar the juror is from the defendant, the less the synchrony of emotional behavior and the more frequent the misinterpretations.
  6. an impartial judge and jury. This is another major problem, because neuroscience has demonstrated that we see what we believe, a phenomenon called affective realism. Furthermore, memories are simulations, not photographs, and are highly vulnerable to the circumstances at the time of recollection.
  7. the belief that physical harm is much graver than emotional harm, despite mounting evidence that emotional harm shortens lives and reduces quality of life.

Bottom Line

Until we, as a society, better understand the ethics and bias created by our own neurological and psychological make-up, we should hesitate before aggressively developing AI. We will likely program our own biases into the code, potentially creating a nightmare problem that we cannot easily fix.
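
To make that last point concrete, here is a minimal, hypothetical Python sketch of how bias rides into a model on its training data. Everything in it is invented for illustration: the fictional hiring records, the zip codes standing in as a proxy for a protected group, and the hire rates.

```python
# Hypothetical sketch: how historical bias passes silently into a model.
# All records, zip codes, and rates below are invented for illustration.

from collections import defaultdict

# Fictional historical hiring records: (zip_code, hired).
# Suppose zip code "100" correlates with a protected group, and past
# human decisions were biased against applicants from that zip code.
history = [("100", 0)] * 80 + [("100", 1)] * 20 + \
          [("200", 0)] * 20 + [("200", 1)] * 80

# A naive "model": memorize the historical hire rate per zip code.
rates = defaultdict(lambda: [0, 0])  # zip_code -> [hires, total]
for zip_code, hired in history:
    rates[zip_code][0] += hired
    rates[zip_code][1] += 1

def predict_hire_probability(zip_code):
    # Reproduces whatever pattern the historical data contains,
    # including the bias baked into past decisions.
    hires, total = rates[zip_code]
    return hires / total

print(predict_hire_probability("100"))  # 0.2
print(predict_hire_probability("200"))  # 0.8
```

Note that nothing in the code mentions a protected attribute, and no programmer “chose” to discriminate; the model simply learned the historical pattern. That is exactly why good intentions, company values, and oversight committees are not enough.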