AI HAS NO TIME FOR "HUMAN" RIGHTS

Popular media have recently reported on a White House initiative asserting companies’ “moral obligation” to limit the risks of AI products. True enough, but the issues are far broader.

At the core of the debate around AI (will it save us or destroy us?) are questions of values.
Can we tell AI how to behave safely for humans, even if in the future it has a “mind of its own”?
It is often said that AI algorithms should be “aligned with human values”. What instructions,
then, should be given to those who write the algorithms? As of now, there is nothing
approaching a consensus on what constitutes “human” values. Are they the goals of human desire as determined by social science? Common behaviors rooted in evolutionary biology? A summation of the wisdom of faiths and beliefs over time? Or should AI just be allowed to learn them on its own by observing human history and behavior? Equally important, who gets to decide? At best, a good-faith process might produce a lowest-common-denominator standard of “human
values”. There is, of course, no obvious reason to believe that “AI values” will be the same as
“human values”.

In addition to efforts to conform AI to “human values”, there are widespread attempts to provide
AI with rules to govern specific behaviors in particular situations, which by analogy to human
behavior is called “AI ethics”. The very idea of “ethics” assumes a degree of intentional choice on the part of the actor, so until AI achieves some form of conscious personhood and autonomy, the use of the word “ethics” for AI is largely metaphorical. For now, “AI ethics” means rules built into specific algorithms for decision making in particular contexts, rules that will reflect the values of the humans doing the programming. But human ethics provides no easy answers.

Ethics comes in many schools and disciplines. There are at least twenty different ethical approaches, with normative ethics based broadly either on the outcome of an act or on the inherent value of the act itself. Among them are, in no particular order: utilitarian, situational, virtue, duty, evolutionary, care, pragmatic, and post-modern ethics. Different ethical approaches may lead to different outcomes in the same situation: an autonomous vehicle programmed along utilitarian lines might swerve to minimize total casualties, while one programmed along duty-based lines might refuse to actively endanger a bystander. So it will be quite important to train an AI how to behave, particularly in ethically ambiguous situations where decisions may have to be quick and decisive, with great harm to persons and property in the balance. Some ethical approaches, e.g., utilitarian, situational, and pragmatic ethics, may produce outcomes that leave some people worse off by justifying a greater good or arguing special circumstances.

In this context of conceptual confusion, where are human rights? It seems clear that human
rights are not the same as “human values” or “AI ethics”. Yet core “human rights” are also a complex social construction, deemed by the United Nations in the 1948 Universal Declaration of Human Rights to apply equally to all humans at all times in all places. They are more extensive and specific than “human values” and, in principle, more grounded, unequivocal, and inflexible than ethics. Not everyone agrees on them, with some seeing them as simply an embodiment of Western liberal political values. To the extent
extent UN bodies have commented on AI, they have suggested that human rights be broadly
construed as guidelines or aspirational goals.

Right now, human rights are not only not at the center of the policy debates on AI, they are barely in the conversation. It is not yet clear how AI can or will engage with values, but it is critical for the future that human rights be salient in AI training, so that AI supports, rather than conflicts with, them. That conversation should be public and transparent, in multiple forums, because it affects every human being, and it needs to happen now!

*Richard D. Taylor, J.D., Ed.D., is Palmer Chair and Professor of Telecommunications Studies
and Law Emeritus at Pennsylvania State University.
