Every earthworm has, at one point, been your mother.
Buddhism has many such thought experiments: ways to expand our notion of morality, to align it with what I’ll call here “universal morality”.
Universal morality is obscured by our evolved morality. Some problems cause disproportionate suffering; there are ways to better optimize the flourishing of humans and other sentient beings. Our moral psychology, however, is designed to punish those who challenge our in-group’s interests, reward those who work in our favor, and maintain our signaled moral identity. This evolved morality not only obscures universal morality but also creates an aversion to improvements to humans that would align our intuitions with actions that promote sentient well-being.
Progress on problems further from our evolved intuitions, such as those in mathematics and physics, has always been faster than progress on understanding human psychology and moral philosophy. The fewer layers of evolved psychology there are to peel away, the faster progress can be made. B.F. Skinner noted that it should be more difficult to send a man to the moon than to implement effective education or to rehabilitate criminals. He lamented the degree to which we can control the inanimate, including weapons, without the wherewithal to solve social problems. Why? Humans anthropomorphize themselves. Our understanding of psychology is clouded by intuitions, especially the strong intuition that humans are not objects in a deterministic universe.
Morality is too close to our eyes for us to see it clearly. Compounding the confusion of studying anything as intimate as our own psychology is the self-deception integral to moral psychology. We punish those who transgress while looking for loopholes for ourselves, making self-reported moral reasoning especially suspect. In the trolley problem, participants are more likely to make utilitarian choices the greater their distance from the action that causes one death instead of many (e.g., sacrificing one life to save three). There is no real difference between pushing someone onto the tracks and flipping a switch except in terms of plausible deniability. Detecting psychopaths was an important ancestral problem; thus our conscious moral reasoning is optimized to signal that we are not psychopaths. “True” evolved moral reasoning is insulated from conscious awareness. Consistency, virtue, and the capacity for self-punishment, otherwise known as guilt, are prioritized over aggregate benefit.
Moral debates dance around biting bullets and avoiding fanciful repugnant conclusions. Advocates of moral perspectives confuse morality with the desire to preserve their reputations or to align with the intuitions of their readers. Thought experiments can help us transcend our evolved psychology, but the vast majority of moral reasoning celebrates the output of essentially vestigial moral emotions. There is no wisdom in repugnance. Nor is there any wisdom in beliefs we develop to align our intuitions with those of others, preserve our reputations, or signal that we are not psychopaths.
We have progressed by leaps and bounds in moral achievement compared to the deep history of humanity and to the rest of the natural world. Some consider nonhuman animals their in-group, or at least are signaling an expanded moral circle. There are hacks that stretch our moral circle beyond our in-group, like narrative fiction and meditation. The world is a much better place than it was, but an evolutionary perspective does predict limits to this moral expansion. The flourishing of sentient beings is still impeded by moral disgust, outrage, and self-deception. Ironically, moral enhancement, such as augmenting intelligence through embryo selection, may be the only means by which we can expand our moral circle further, yet it is thwarted by feelings of disgust, neophobic anxiety, and an attachment to an amorphous, ill-defined concept of the “natural”. One prominent criticism of evolutionary psychology is that it condones immoral behavior by pointing to its natural origins. Far from it: I believe we must acknowledge the shadow of our evolved morality in order to transcend it.
The rapid development of artificial intelligence has both confused and clarified our vision of a universal morality. We don’t want our self-driving cars to choose to kill three pedestrians rather than injure one driver. Most of us would want an artificial superintelligence to work for the aggregate good, to limit suffering and increase well-being rather than signal consistency, disgust sensitivity, guilt or commitment to tradition. Why shouldn’t we modify ourselves to become more moral as well?
A moral artificial intelligence should have very different moral goals from those of the average human, but how will we be able to build one with our own evolved moral intuitions in place?
Dear Seed AI, every living being has, at some point, been your mother.
This article is from TVOL’s project titled “This View of Morality: Can an Evolutionary Perspective Reveal a Universal Morality?”