Ain't no AI in heaven

(First of two parts)

Besides the Covid-19 pandemic and its economic consequences, perhaps nothing grabs global attention more nowadays than AI. In the popular press at the moment, AI stands in turns for the greatest threat and the greatest opportunity the world faces. Everyone seems aware that AI use is fraught with ethical issues. Yet only a few take the trouble to figure out how these problems could be systematically addressed from a sober, level-headed perspective, in line with long-established ethical principles. For indeed, although many AI affordances are new, technology itself is not, and humankind has always co-existed and co-evolved with technology, beginning with speech and writing. So rather than inventing a newfangled AI ethics, it may make more sense to examine the resources already in store and see how they can help us engage better with AI.

Definitions of AI and its Business Applications

There is no standard definition of AI, and the many definitions on offer are mutually inconsistent, each referring to particular intelligent systems applied to specific domains. AI comprises distinct, though related, technologies such as machine learning, natural language processing, chatbots, robotics, and so forth. Perhaps the simplest definition comes from Google CEO Sundar Pichai, who speaks of AI as “computer programming that learns and adapts”. “Learning” and “adapting” are the kinds of activities humans engage in, leading us to consider AI as a machine that mimics human intelligence: “strong AI” seeks to think, feel, and act with purposes of its own, becoming a “mind” and not just a model of one, while “weak AI” is meant to be a tool at the service of human designs.

Pressed by the need to legislate, the UK government has come up with its own definition: “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. However, for at least two of these functions, human intelligence is not strictly required, since even dogs are quite capable of visual perception and speech recognition.

Perhaps the best definition of AI is a combination of those offered by the Expert Groups of the Collaborations Between People and AI Systems and the European Commission: “any computational process or product that appears to demonstrate intelligence through non-biological/natural processes”, “analyzing [its] environment – with some degree of autonomy – to achieve specific goals”. Composed of digital data, algorithms, and computer hardware, AI is expected not only to perform rational functions but also to change its environment in a preset direction.

The “artificial” in AI is clear enough: it denotes the non-biological or non-natural. “Intelligence” denotes “rationality”, the abstract quality of doing things (or making them happen) for an end or purpose, as opposed to by chance. This entails an explanation, a propositional response to the question “why?”. AI displays intelligence in three ways. First, through the use of sensors (cameras, microphones, or keyboards) in machine perception, AI draws and collects data from the environment (physical features such as light, temperature, pressure, or distance) relevant to its goal (for example, an image of the floor, for a cleaning robot like Roomba). Second, through machine reasoning and “decision making”, AI interprets the relevant data (determining whether the floor is clean or not) and decides on a course of action (if clean, remain still; if not, proceed). AI is never fully autonomous, and its range of options is limited. Closely allied to AI is “machine learning” (ML), the mathematical modeling approach that uncovers statistical correlations and patterns within data sets, producing novel outputs. Third, AI shows intelligence in actuation, carrying out responses or environmental modifications virtually (with chatbots) or physically (with cleaning robots). A learning, rational AI is able to adapt its algorithms or decision-making rules depending on the success of previous interventions.
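To make this perception-decision-actuation cycle concrete, here is a minimal sketch in Python of the control loop a cleaning robot might run. It is an illustration only: the sensor and motor functions are hypothetical stand-ins, not any vendor's actual interface.

```python
# A minimal sketch of the perception-decision-actuation cycle described
# above. read_floor_sensor() and drive_motors() are hypothetical
# stand-ins for real hardware interfaces, not an actual robot API.

import random

def read_floor_sensor() -> float:
    """Perception: report a dirt level between 0 (clean) and 1 (dirty)."""
    return random.random()  # placeholder for a real optical/dust sensor

def drive_motors(action: str) -> None:
    """Actuation: carry out the chosen response in the environment."""
    print(f"robot action: {action}")

DIRT_THRESHOLD = 0.3  # the decision rule: when is the floor "dirty enough"?

def control_step() -> None:
    dirt = read_floor_sensor()       # 1. perceive the environment
    if dirt > DIRT_THRESHOLD:        # 2. interpret the data and decide
        drive_motors("vacuum and move forward")  # 3. actuate
    else:
        drive_motors("remain still")

for _ in range(3):  # a few iterations of the loop
    control_step()
```

A learning system would go one step further, adjusting the threshold or the decision rule itself in light of how well past interventions worked.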

AI systems may be purely software-based or embedded in hardware devices. Examples of the former are voice assistants, image analysis software, search engines, and speech and face recognition systems; and of the latter, advanced robots, autonomous vehicles, drones, and applications in the Internet of Things.  

Numerous AI processes or products are already widely used: computer vision, which identifies objects; natural language processing, which interprets texts; and reinforcement learning, which controls robots or game agents through feedback loops. Thus AI can identify faces, follow voice commands or read handwriting, and play (even win) board games such as chess or Go.

Business applications of AI are vast. AI can be used in decision support, taking care of repetitive tasks in finance, marketing, or project management. It can also help in predictive maintenance, anticipating machine failures on the basis of equipment data histories. Another area is customer support and relationship management, answering queries and analyzing opinions. Likewise, AI is useful in process acceleration, boosting efficiency in e-mail management or database information retrieval. The widespread use of AI in the Data Economy heralds the “Fourth Industrial Revolution”, after the adoption of steam engines, electricity, and electronics.

Links between AI and Business Ethics

Because AI is a machine, its connection with ethics isn’t obvious. AI designates processes or products imitating human intelligence, while ethics is concerned with what’s right and wrong. Although AI is human-made, it isn’t human, and it only imitates, rather than actually performs, intelligent human activity. Hence, to examine the ethics of AI per se would be akin to evaluating the color, cut, clarity, and carat of a fake diamond, misattributing features of human action to something which is not human and is incapable of human action. Like all tools or instruments, AI can only be appraised technically, on whether it produces the desired output (effectiveness) and whether it does so optimally (efficiency or economy), but not ethically. It is not enough to be a “functional equivalent” of human action to be subject to ethical judgment (think of a person holding a door open and a doorstop); agency itself has to be human, that is, it must proceed freely and purposefully from an individual of the human species. AI fails to exhibit the ethically salient causality associated with human beings. Not being alive, a fortiori AI cannot be rational, for ethical reasoning depends on a kind of life for its scaffolding. So no matter how good, effective, or efficient, there ain’t no AI in heaven.

There can only be ethical judgment, moral praise or blame, for the way humans engage with AI. Humans make use of AI to augment or enhance their own activities; AI extends, but cannot supplant, human agency. For instance, we could raise the volume of our voice with a bullhorn, but it is still we who speak (truths or lies), not the bullhorn. With AI, we can program a machine to emit sounds similar to speech; yet this can only happen thanks to our inputs, even when the resulting outputs are unforeseen or novel. AI cannot produce original speech because it depends on previous data and on algorithms which identify statistical correlations and patterns. Although AI can be “taught” to scramble letters, it cannot form new words, because it is a “nobody”, and a “nobody” cannot create words, expressing or understanding meaning. That is why ethical judgment always bears upon human agents, never on AI.

Humans develop, deploy, and use AI, oftentimes with a business intent. While acknowledging its socially transformative and revolutionary potential, we shouldn’t forget that “AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good”. Engagement with AI in business is ethical insofar as it contributes to the common good of flourishing; proper AI use could prove helpful in reaching this goal.

Businesses generally decide to use AI on utilitarian grounds, after an analysis of costs and benefits: over time, smart robots cost less than the salaries of the workers they replace. However, most ethical approaches to AI use are rule-governed, concerned with safeguarding fundamental human rights. For instance, the European Commission’s “Ethical Guidelines for Trustworthy AI” mandate that AI use respect human autonomy, prevent harm, uphold fairness, and remain explicable. Further, AI design and deployment should observe seven key requirements: support human agency and defer to human oversight; be technically robust, preventing or minimizing harm; protect data privacy; be transparent in data management and decision making; allow for diversity and inclusion, eschewing unfair discrimination; preserve societal and environmental wellbeing; and exhibit accountability.

Nothing objectionable in these rules and principles. Harm-avoidance may seem obvious, but it is no less necessary. Yet problems may arise in their application. For instance, demands for privacy and security can come into conflict with transparency and explicability. Privacy and security require that sensitive information (preferences, sex, age, religious or political views) be accessible only to authorized agents and not be used unlawfully. But at the same time, AI transparency and explicability demand precisely that data gathering, labelling, and processing be documented, so as to allow the traceability of errors and biases. Ideally, users ought to provide as much, and as accurate, information as possible for optimal results, but that could compromise privacy. For example, a correlation may be established between vegans (identified through meal preferences) and a lower propensity to miss flights due to tardiness. Is this an acceptable bias? Would it be fair to charge omnivores more? What about this particular omnivore who, historically, has always been punctual? Further, explainability depends on the technological competence of the listeners. Hence it becomes likely that the most vulnerable (children, seniors, and the disabled) suffer even greater discrimination and exclusion from the employment of AI. How then to proceed?
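To see how such a bias might surface in practice, here is a purely illustrative Python sketch. The passenger records are invented for the example; the point is only how a naive analytics pipeline could turn a meal-preference pattern into a pricing rule.

```python
# Purely illustrative: made-up passenger records showing how a simple
# statistical pipeline might surface the vegan/punctuality correlation
# discussed above. None of these figures are real.

passengers = [
    {"meal": "vegan",    "missed_flight": False},
    {"meal": "vegan",    "missed_flight": False},
    {"meal": "vegan",    "missed_flight": True},
    {"meal": "omnivore", "missed_flight": True},
    {"meal": "omnivore", "missed_flight": False},
    {"meal": "omnivore", "missed_flight": True},
]

def miss_rate(meal: str) -> float:
    """Share of passengers with this meal preference who missed a flight."""
    group = [p for p in passengers if p["meal"] == meal]
    return sum(p["missed_flight"] for p in group) / len(group)

for meal in ("vegan", "omnivore"):
    print(f"{meal}: {miss_rate(meal):.0%} missed flights")

# A pricing rule built on this group-level correlation would charge
# omnivores more, including the omnivore who is always punctual.
```

The arithmetic is trivial; the ethical question of whether the resulting differential treatment is fair is not, and the pipeline itself cannot answer it.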

On closer inspection, the rules-based approach offers little practical guidance for navigating these conflicts and tradeoffs in human-AI engagement.

That’s why we may have to explore other, less popular options such as virtue ethics. The objective isn’t so much to replace the rules-based method as to extend it, focusing more on what AI engagement does to people themselves than on business outcomes. We shall begin the second part with an explanation of the fundamental respects in which virtue ethics and the rules-based approach differ.

Alejo José G. Sison teaches ethics at the University of Navarre and Georgetown. His research focuses on issues at the juncture of ethics, economics and politics from the perspective of the virtues and the common good. He blogs at Work, Virtues, and Flourishing from which this article has been republished with permission.  
