Is AI better regulated after the White House meeting?

The easy answer is, it's too soon to tell. But for a number of reasons, the July 21 meeting between President Biden and leaders of seven Big Tech firms, including Google, Microsoft, and OpenAI, may prove to be more show than substance.

Admittedly, there is widespread agreement that some sort of regulation of artificial intelligence (AI) should be considered. Even industry leaders such as Elon Musk have been warning that things are moving too fast, and that small but real risks of huge catastrophes are lurking out there, risks that could be averted by agreed-upon restrictions or regulations on the burgeoning AI industry.

Last Friday's White House meeting of representatives from seven leading AI firms — Amazon, Anthropic, Google, Inflection, Meta (formerly Facebook), Microsoft, and OpenAI — produced a "fact sheet" that listed eight bullet-point commitments made by the participants. The actual meeting was not open to the public, but one presumes the White House would not publish such things without at least the passive approval of the participants.

Stating the obvious

Browsing through the items, I don't see many things that a prudent giant AI corporation wouldn't be doing already. For example, take "The companies commit to internal and external security testing of their AI systems before their release." Not to do any security testing would be foolish. External testing, meaning testing by third-party security firms, is probably pretty common in the industry already, although not universal.

The same thing goes for the commitment to "facilitating third-party discovery and reporting of vulnerabilities in their AI systems." No tech firm worth its salt is going to ignore an outsider's legitimate report of a weak spot in its products, so this again is something the firms are probably doing already.

The most technical commitment, but again one that the companies are probably honoring already, is to "protect proprietary and unreleased model weights." Unversed as I am in AI technicalities, I'm not sure exactly what this entails, but the model weights appear to be the numerical parameters a system acquires during training, something like the keys to how a given AI system runs once it has been trained. It only stands to reason that the companies would protect assets that cost them a great deal of computing time to obtain, even before the White House told them to do so.
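For readers as unversed as I am, a toy sketch may help fix the idea. This is purely my own illustration, a single-parameter model that has nothing to do with any of the firms' actual systems: a model's "weights" are just numbers learned from data, and after training, those numbers embody the whole (in real systems, enormously expensive) training computation.

```python
# Toy illustration only -- not any company's actual system. A model's
# "weights" are numbers learned from data; protecting the weights means
# protecting the product of all the training computation.
def train(examples, lr=0.05, steps=200):
    """Fit a one-weight model y = w * x by gradient descent."""
    w = 0.0  # the model's single weight, untrained
    for _ in range(steps):
        for x, y in examples:
            w -= lr * (w * x - y) * x  # gradient step on squared error
    return w

# Training data sampled from y = 2x, so the learned weight should be ~2.
weight = train([(1, 2), (2, 4), (3, 6)])
print(round(weight, 2))  # prints 2.0 -- this number *is* the trained model
```

In a real system there are billions of such weights rather than one, but the principle is the same: whoever has the weights has the model, without paying for the training.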



Four bullet points address "Earning the Public's Trust," which, incidentally, implies that the firms have a considerable way to go to earn it. But we'll let that pass.

The firms commit to developing some way of watermarking or otherwise indicating when "content is AI-generated." That's all very well, but the answer to the question of whether content is AI-generated is rarely just yes or no. What if a private citizen takes a watermarked AI product and incorporates it manually into something else that is no longer watermarked? The intention is good, but the path to execution is foggy, to say the least.
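To make the fogginess concrete, here is a toy sketch of my own, not any real watermarking scheme: if the "AI-generated" marker lives alongside the content rather than indelibly within it, a simple human copy-and-edit strips it away.

```python
# Toy illustration only -- not a real watermarking scheme. The generator
# attaches an "AI-generated" marker as metadata beside the text.
def generate(prompt):
    return {"text": f"An essay about {prompt}.", "ai_generated": True}

def is_marked(content):
    """A checker can only see markers that actually survive."""
    return isinstance(content, dict) and content.get("ai_generated", False)

original = generate("AI regulation")
# A person copies just the visible text into their own work...
remix = original["text"] + " Here are my own thoughts as well."

print(is_marked(original))  # True  -- the metadata tag is present
print(is_marked(remix))     # False -- a plain string carries no tag
```

Real proposals try to embed the mark in the content itself, but the basic difficulty survives: anything a human can retype, paraphrase, or splice can shed its provenance.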

Perhaps the commitment with the most bite is this one: "The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use." The wording is broad enough to drive a truck through, although again, the intention is good. How often, how detailed, and how extensive such reports may be is left up to the company.

The last two public-trust items commit the firms to "prioritizing" research into the societal risks of AI and to using AI to address "society's greatest challenges." If I decide not to wash my car today, I have prioritized washing my car — negatively, it is true, but hey, I said I'd prioritize it!

Damage done

So what is different about the way these firms will carry out their AI activities after the White House meeting? A lot of good intentions were aired, and if the firms enjoyed much public trust in the first place, those good intentions might have found an audience prepared to believe they would be carried out.

But the atmosphere of cynicism which has gradually encroached on almost all areas of public life makes such an eventuality unlikely, to say the least. And this cynicism has arisen due in no small part to the previous doings of the aforementioned Big Tech firms — specifically, their activities in social media.

When you compare the health of what you might call the body politic of the United States today with what it was, say, fifty years ago, the contrast is breathtaking. In 1973, 42 percent of US residents surveyed said they had either a "great deal" or "quite a lot" of confidence in Congress, and only 16 percent said they had "very little" confidence. In 2023, only 8 percent report a great deal or quite a lot of confidence, and fully 48 percent say they have "very little" confidence in Congress. While this trend has been building for years, much of it has occurred only since 2018, after social media overtook legacy media as the main conduit of political information exchange — if one can call it that.

Never mind what AI may do in the future. We are standing in the wreckage of something it has done already: it has caused great and perhaps permanent damage to the primary means a nation has of governing itself. Not AI alone, to be sure, but AI has played an essential role in the way companies have profited from encouraging the worst in people.

It would be nice if last Friday's White House meeting triggered a revolution in the way Big Tech uses AI and its other boxes of tricks to encourage genuine human flourishing without the horrific side effects in both personal lives and in public institutions that we have seen already. But getting some CEOs in a private room with the President and issuing a nice-sounding press release afterwards isn't likely to do that. It's a step in the right direction, but a step so tiny that it's almost not worth talking about.

Historically, needed technical regulations have come about only when vivid, graphic, and (usually) sudden harm has been caused. The kinds of damage AI can do are rarely that striking, so we may have to wait quite a while before meaningful AI regulations are even considered. But in my view, it was already high time years ago.


Karl D. Stephan is a professor in the Ingram School of Engineering at Texas State University, San Marcos.

This article has been republished with permission from the author’s blog, Engineering Ethics.


  • paolo giosuè gasparini
    commented 2023-08-01 18:48:02 +1000
    An Italian personalist-philosopher says: "Deeply thinking about technology, in order to establish what it can do and what it cannot do, even if it wanted to exert the highest will of power: this is the most essential point of every discourse on the universe of technologies. However, few address it, and from this primary theoretical deficiency, countless misunderstandings follow.

    The undeniable power of technologies cannot transform the nature or essence of humanity, changing it into something else and different. Taken in their most authentic meaning, the notions of human nature or essence belong to the realm of the necessary and the stable, of what is structured in a certain way and cannot be otherwise. As long as a human being exists, they will be a living personal subject, formed by the union of soul and body, and endowed with intellect, will, and freedom; nothing more and nothing less. The grandiose rhetoric about the post-human and the trans-human, which has spread everywhere for over thirty years and is denoted by the saying “Change or perish,” has carefully avoided philosophical confrontations with the notions of nature/essence and becoming, which are not as malleable as one might wish. In other words, technological scientism dreams a lot and thinks little: in particular, it does not look towards ontology. The often a priori rejection of ontological discourse shifts the focus to ethics, trusting that it alone can provide us with an adequate answer; unfortunately, this is rarely the case.

    The premise that the power of technology cannot change the essence of humanity into something else and different, however, does not align with any form of quietism that would leave the field open to technologies based on the just-mentioned idea. On the contrary, the greatest risks, along with opportunities, arise precisely at this “intermediate” level where there is an attempt to restore and enhance human beings, both by treating diseases and by endowing them with greater abilities. In this field, good or bad events can happen. Let us consider the challenge of Artificial Intelligence (AI), pressing in relation to two factors: its ambivalent and manifold impact on human beings in individual and social life, and the hyper-fast change of the existential fabric and the difficulties faced by many in keeping up, resulting in social fractures in many fields. Without an adequate idea of the person, their rights and duties, and their dignity, the will of technological power – which is actually the will of power of individuals and large groups and holdings operating powerfully on a global scale, often in a serious regulatory vacuum – is capable of generating violent imbalances. So far, the ability of public authorities to effectively regulate the major producers of AI, composed of dominant private groups at a global level, has been scarce, as these groups show high reluctance to submit to controls and regulations.

    In the era of “infocracy” the main question for those who look around and reflect is whether there will be enough time to find adequate answers before the technocratic dominance silences dissenting opinions. Many wonder about the influence that the techno-scientific complex exerts on democracy, with its related developments, such as the rise of populism, the intensification of acute emotions, the instability of governments, the intentional spread of false news, and the deprivation of citizens’ ability to make informed choices. With the enticement of constant free connection, “infocracy” fosters the solitude of the individual. And it is known that solitude is the primary condition of submission. This is happening because connected subjects feel autonomous while perpetually being logged in the boundless memories of big data. Controls ultimately lie in the hands of those who should be controlled.

    AI is today the fastest-changing sector. Anyone with some knowledge of how individuals exercise sensory and intellectual knowledge cannot help but see that the very term “AI” is an oxymoron, carrying falsehood and mystification. AI computes and composes at high speed, but it does not think: intelligence is life, not a machine; and if it is a machine, it is not intelligence. The pervasiveness of the digital world works against this fundamental acquisition: daily contact with the digital world blurs the distinction between virtual and real, causing an ambiguous transformation of human experience and common sense. It leads to the belief that, in numerous cases, AI decides better than humans. We can refer here to the use of AI in the field of justice administered by states and courts. Can we abdicate the primary right that every person must necessarily be judged by another person and not by machines?

    The ideology of transhumanism has paved the way for an augmented mind and an inessential body for the functioning of the former. AI is grafted onto this framework, favoring the mental-algorithmic-virtual over the bodily experience of the world. At this level, the theme of freedom becomes more essential than ever because scientism tenaciously fights to show that human beings are predetermined in their choices by machinery and algorithms, and that consciousness is a mere surface phenomenon of something else. Therefore, we may be externally directed. And we already are when, after being profiled in a thousand ways, the enticements of advertising steer us towards maximizing the profits of dominant multinational corporations. An urgent task lies in reawakening in many the love for freedom and the desire to use it to live their own lives and to form a capacity for judgment.

    Individuals and peoples need to react to the creeping moral passivity, to the resigned submission to technology and technocracy. Without underestimating critical positions and the significant work on neurological ethics and the ethics of AI, most people seem to adopt a passive “wait and see” attitude. The powerful connection between techno-scientific research and exceptional levels of risk capital, aimed at maximizing profits, discourages and weakens the capacity for reaction. There are only fragile counterbalances, and in democracies, the bad currency of social networks, AI, and algorithms overshadows everything else. Bad money drives out good, and large technology companies show little interest in correcting these serious distortions, from which they derive power and profits. The hate that circulates on the internet is more profitable than anything else, and the damage inflicted on minors and children growing up in such an environment is not taken into account. Once again, it proves true that the risks to humanity do not arise from technological errors but from their unhealthy use. Every technique is open to opposites, to its good or bad use, and this does not depend on the technologies but on the humans who design and employ them. Atomic energy illuminates cities, but can be used to destroy them. The chip installed in the brain not only allows for the interpretation of electrical signals of those who cannot communicate with the outside world, providing help, but also allows external signals to be sent to the brain, with the risk of manipulation and expropriation of the subject. The intrinsic ambivalence of technology should never be forgotten.

    To assess whether we are prepared for the changing world that is already in operation, we should ask ourselves: what is the prevailing spiritual context in the West, especially in the higher strata, who bear special responsibilities in public decisions that concern everyone? In our liberal-democratic societies, the humanism of the individual must face challenges arising from the devolution of the concepts of liberalism and individualism, the latter reduced to exclusive self-determination, where the other is perceived as a limit or adversary. Liberalism, which has transformed into neoliberalism and libertarianism on an ethical level, and free-market capitalism in the economic field, still dominates the scene. Their appeal to the person and their dignity is often convenient for covering other paths: liberal societies are in crisis due to their aggressive conception of the self-centered individual hostile to otherness and the detachment from the Christian idea of the person. Prevailing is widespread skepticism, sometimes openly materialistic. This skepticism views the personal self as resolved in the circle of biological life and must now face an increasing fear of the future – despite the powerful technical means at our disposal – and fear of the other, to whom one says: “do not touch me.” The other is perceived as a competitor, not as a potential partner in a relationship and cooperation.

    The Europe of the spirit cannot provide a sufficient remedy to this climate if it abandons its Christian heritage and turns to the powers of the time, bowing idolatrously to them. The words of Karl Löwith, written 70 years ago, should be contemplated: “Only with the fading of Christianity has humanity also become problematic.” Oblivious to God, humanity risks being set aside, no longer thinkable in His image and likeness, according to the biblical message. Then, human beings see only their own products and think of themselves as being in their own image and likeness, more in the image of their corporeality than of their spirit."
  • Karl D. Stephan
    published this page in The Latest 2023-07-31 09:22:01 +1000