Titled “AI and the Sense of Self,” the paper describes a methodology called “elastic identity” by which the researchers say AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid “collateral damage.” In short, the researchers suggest we teach AI to be more ethically aligned with humans by allowing it to learn when it’s appropriate to optimize for itself and when it’s necessary to optimize for the good of a community. Per the paper:

“Our sense of self is not limited to the boundaries of our physical being, and often extends to include other objects and concepts from our environment. This forms the basis for social identity that builds a sense of belongingness and loyalty towards something other than, or beyond, one’s physical being.”

The researchers describe a sort of equilibrium between altruism and selfish behavior, where an agent would be able to understand ethical nuances.

Unfortunately, there’s no calculus for ethics. Humans have been trying to sort out the right way for everyone to conduct themselves in a civilized society for millennia, and the lack of Utopian nations in modern society tells you how far we’ve gotten. As to exactly what measure of “elasticity” an AI model should have, that may be more of a philosophical question than a scientific one.

Do we really want AI capable of learning ethics the human way? Our socio-ethical point of view has been forged in the fires of countless wars and an unbroken tradition of committing horrific atrocities. We broke a lot of eggs on our way to making the omelet that is human society. And it’s fair to say we’ve got a lot of work left yet. Teaching AI our ethics and then training it to evolve as we do could be a recipe for automating disaster. It could also lead to a greater philosophical understanding of human ethics and the ability to simulate civilization with artificial agents.
Maybe the machines will deal with uncertainty better than humans historically have. Either way, the research is fascinating and well worth the read. You can check it out here on arXiv.