Trustworthy AI? What’s lost in translation

When telephones were first installed in homes, people apparently worried whether they were properly dressed to receive calls. Since then, we’ve come a long way in our understanding of this technology, judging from the ubiquity of phones in hotel bathrooms and the endless chatter one overhears in public toilet stalls.
Similar adjustments are taking place with regard to AI. To this end, the EU Commission set up an expert group to draft the “Ethics Guidelines for Trustworthy AI”, published in April 2019. By “trustworthy AI” the document means a system that, throughout its entire “life cycle”, proves “lawful”, “ethical”, and “robust” (although the focus falls only on the second and third elements).
In this context, “robustness” has to do with safety, security, resilience, accuracy, and reliability. If I had a car instead of AI, I suppose I’d call it “robust” if it brought me safely, efficiently, and in relative comfort from one place to another. In other words, if it did everything you’d expect a car to do, if it were “dependable”. But call it “ethical” or “trustworthy”? I can only hope that my Uber or Lyft driver is ethical and trustworthy; but their car? How and why should AI be different from a car?
The Guidelines go on to explain “ethical” AI in terms of safeguarding fundamental rights: respecting human autonomy, preventing harm, ensuring fairness, and guaranteeing explicable functioning. None of this would make sense if these requirements did not refer to the human agents who design, deploy, and use AI, for by itself, AI can do nothing.
From the standpoint of mobility, it would be no different from a car parked in a garage. It cannot respect, prevent, ensure, or guarantee anything of its own accord. Only human beings have reason to pay attention to vulnerable groups, such as children, the sick, or the poor, and to recognize their intrinsic worth beyond the semblance of weakness. Only they can value democracy, participation in political issues, or the upholding of the rule of law against arbitrariness. AI could not care less about these, or about anything else for that matter.
Further, the Guidelines establish seven requirements for AI to be deemed “trustworthy”. Again, we have to infer that the designers, deployers, and users of AI ought to support human agency and decision making (for instance, by enabling individuals to store and keep track of their personal data) and always defer to human oversight and correction (by keeping humans “in” or “on the loop” and “in command”, such as the combat pilots in northern Virginia who fly drones over Afghanistan).
AI systems should be robust: programmed to prevent harm and minimize risks to human integrity and wellbeing, for example by withstanding malicious hacking attacks. AI must also be devised to protect data privacy and integrity, ensuring that sensitive information is neither accessed nor used by the wrong people or for the wrong reasons. That’s what sets a “smart city” apart from a “surveillance city” or a panopticon.
The Guidelines likewise require that AI design be transparent in its data management and decision making. This means AI service providers should communicate in simple, understandable terms how data are gathered, stored, and used, such that the people involved can give their informed consent, trace errors, and correct the system. Moreover, AI must be used in a manner that is inclusive or non-discriminatory (for instance, not penalizing people of certain ethnicities in employee selection) and fair (by eschewing price collusion among sellers).
AI ought to be employed so as to advance societal and environmental wellbeing, through responsible energy consumption and resource usage at all stages of its life cycle. Lastly, AI should enhance accountability: enabling audits, minimizing and reporting harms, flagging trade-offs, and permitting redress when problems arise, as in the case of collateral damage from drone strikes.
Yet once more, all these demands fall on the people who design, fabricate, and operate AI. For AI is an object, not a subject, much less an ethical agent. We should not allow ourselves to be misled by the anthropomorphic metaphors and allegories we use in speaking of AI. There is no homunculus or ghost inside the machine.
In the same way that a telephone is a tool for speaking to someone out of hearing distance, and a car a means of land transport, AI is an instrument we have invented for the same ultimate purpose: achieving a better human life. Like all forms of technology, its goodness lies in its usefulness for the ends we humans freely ordain. Thus “AI ethics” is nothing else but a new kind of applied ethics at the service of individual and social flourishing.
It would be utterly foolish to expect an AI ethics code to be written, or worse, to emerge spontaneously, that would supplant human reasoning about what ought to be done and what avoided.
That’s why perhaps the most valuable part of the Guidelines is the final “examination of conscience” where those who design, deploy, and use AI, individually or in organizations, can ask themselves about their motives and behavior regarding the above-mentioned principles: “Are human users made aware that they are interacting with a non-human agent?” “Does the AI system enhance or augment human capabilities?” “Did you put in place mechanisms to ensure human control or oversight?” and so forth.
Because in the end, it is the humans operating AI who need to be trustworthy, more than the system itself. Unfortunately, that’s often lost in translation.

Alejo José G. Sison teaches at the School of Economics and Business at the University of Navarre and investigates issues at the juncture of ethics, economics, and politics from the perspective of the virtues and the common good. For the academic year 2018-2019, he is a visiting professor at the Busch School of Business at the Catholic University of America. He is an editor of the recently published “Business Ethics: A Virtue Ethics and Common Good Approach” (Routledge, 2018). He blogs at Work, Virtues, and Flourishing, from which this article has been republished with permission.
