Feite Kraay, Author

I’m lucky to have had the opportunity throughout my life to travel extensively. I’ve set foot on every continent except Antarctica—and that one is definitely on my bucket list. From a recent trip to the overwhelmingly bustling metropolis of Tokyo, to multiple visits at my family’s favourite village in central Italy, to the majestic grandeur of the lost civilization of Machu Picchu—travel has given me a wonderful sense of perspective on our planet and the human species. Two trips, two decades apart, are particularly noteworthy.

In 1998, my wife and I took a holiday in Argentina. After spending some time in Buenos Aires, we went on to Patagonia for a few days. While there, we toured the marvelous Perito Moreno glacier high in the Andes and learned that it was, at the time, the only glacier in the world that was still expanding. Seeing ice crash from the glacier into the lake water below was a memorable experience. Knowing that all the other glaciers in the world are shrinking really brought home to me the impact of climate change.

In 2019, our last big family vacation before the pandemic was to Portugal, with a side trip to Morocco. While in Casablanca and Rabat we observed many spires with three symbols—a crescent, a cross and an orb—which our guide proudly told us represented three major world religions coexisting in harmony. Indeed, Morocco remains a peaceful nation that is, so far, still unaffected by the sectarian strife that afflicts many other regions of the world. That visit had me thinking about inclusiveness in general—and feeling hopeful that if we put our collective minds to it, we can find ways to get along and enable everyone to share in the benefits of our global, integrated, technology-enabled economy.

In a previous post, “You’ve come a long way, AI!”, I briefly mentioned the environmental impact of the colossal data centres required to operate AI systems, and wrote about some of the problems of inherent bias and inaccuracy in those systems. In this and a couple of posts to follow, I want to expand on these themes, because they go far beyond AI—and finding solutions is going to require willpower and effort from the private sector, government and NGOs.

The polyproblem
Much has been written recently about the social risks of AI, especially generative AI. From the prospect of job loss among knowledge workers to the doomsday scenario of sentient AI turning on its human creators, the pundit-driven hype machine seems to be in overdrive. I want to look past the hype, though, and consider in detail what I see as AI’s actual and immediate risks.

First, to the environment. The data centres that support today’s large-scale AI systems draw enormous amounts of electricity and therefore carry a significant carbon footprint. Estimates of the power it took to operate one current generative AI system for just the month of January 2023 averaged approximately nine million kWh. Since KPMG’s emerging technology centre of excellence is based in Copenhagen, let’s compare this to the average Dane, who consumes some 1,600 kWh of electricity annually—133.3 kWh per month. One generative AI system, therefore, uses as much electricity as a small Danish city of 67,500 inhabitants.
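As a quick sanity check, the back-of-envelope comparison above can be reproduced in a few lines. The figures are the estimates quoted in this post, not measured data:

```python
# Back-of-envelope check of the figures quoted above (estimates, not measurements).
AI_MONTHLY_KWH = 9_000_000   # estimated monthly draw of one generative AI system, Jan 2023
DANE_ANNUAL_KWH = 1_600      # average Danish per-capita electricity use per year

dane_monthly_kwh = DANE_ANNUAL_KWH / 12               # ≈ 133.3 kWh per month
equivalent_danes = AI_MONTHLY_KWH / dane_monthly_kwh  # how many Danes' usage this equals

print(f"{dane_monthly_kwh:.1f} kWh per Dane per month")
print(f"≈ {equivalent_danes:,.0f} inhabitants")       # ≈ 67,500
```

The arithmetic works out exactly: nine million kWh a month divided by a Dane’s 133.3 kWh a month gives the 67,500 inhabitants cited above.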

That’s just one generative AI system. There are many more on the market, and their use is growing fast as they get embedded into search engines, customer service chatbots and seemingly everything else. Even so, AI is far from being the worst of the polluters. In a September 2022 paper, the US government estimated that the electricity usage of all cryptocurrency mining and crypto-asset management amounted to between 120 billion and 240 billion kWh annually—equivalent to between 0.4 and 0.9 per cent of total annual global electricity usage. Now consider the large hyperscaler public cloud providers, and even big corporations’ own private cloud infrastructures. It’s not just AI—clearly the entire IT industry’s thirst for power keeps growing, leading to a serious carbon footprint problem. This will be the focus of my next post.
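The crypto figures can be cross-checked the same way. The implied global totals below are my own inference from the quoted range, not numbers from the paper:

```python
# Cross-check: the quoted kWh range vs. the quoted share of global usage.
CRYPTO_LOW_KWH = 120e9    # low end of estimated annual crypto electricity use
CRYPTO_HIGH_KWH = 240e9   # high end
SHARE_LOW = 0.004         # 0.4 per cent of global usage
SHARE_HIGH = 0.009        # 0.9 per cent

# Implied total global electricity usage at each end of the range (kWh per year)
implied_global_low = CRYPTO_HIGH_KWH / SHARE_HIGH   # ≈ 2.67e13 kWh
implied_global_high = CRYPTO_LOW_KWH / SHARE_LOW    # 3.0e13 kWh

print(f"Implied global usage: {implied_global_low:.2e}–{implied_global_high:.2e} kWh/year")
```

Both ends of the range imply a global total in the high twenty-thousands of terawatt-hours per year, so the quoted kWh figures and percentages are at least consistent with each other.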

The second set of risks is to society at large. I’ve noted before that AI systems have exhibited inherent biases against women and the BIPOC population. AI systems in insurance, government, health care and other sectors have all shown demonstrable biases with serious consequences for people the systems were meant to help. But just because these biases originate in the data and rules we code into the AI systems, and are therefore a result of inherent prejudices in society at large, doesn’t mean it’s okay to accept them. On the contrary, given the implicit trust society puts in our computer systems, the onus is clearly on AI developers to do better. The way I see it, AI should be setting the gold standard for inclusiveness and equity instead of reflecting our own biases back at us.

AI systems have also shown a disturbing tendency to produce answers that are simply wrong. It can be amusing to try to trick commercial generative AI systems into giving absurd answers, but in other contexts the consequences are far more serious. You can’t trust a diagnosis based on AI-enhanced MRI images if you know the image enhancement could occasionally be wrong—false positives and false negatives can have equally bad outcomes. Self-driving cars with an accident rate twice that of human drivers are not particularly useful either.

When bias and inaccuracy occur, it’s possible to trace and explain how the AI came to a wrong conclusion; it’s also possible to go back and fix individual instances. But that’s not enough—what’s required is a better design approach that can avoid these problems in the first place. I’ll explore this more fully in a couple of weeks.

Clear eyes ahead
Addressing the environmental and societal impacts of AI—and the technology industry in general—is an ambitious undertaking. It will require accountability, compassion and foresight. It’s encouraging to know that KPMG and many of the large industry players are already devoting time and resources to this effort and are collaborating with the public sector and NGOs to establish a new framework. The goal is to deliver a new generation of AI systems that are inclusive, reliable and sustainable. Some call it “responsible computing” or “ethical computing,” and I’ve even seen the term “Algor-Ethics” coined. Personally, I think “computing with conscience” has a nice ring to it.

I’m confident we can do it. Clean computing and fair AI systems that leave no one behind will have a better impact on the economy, on the planet—and, really, on everyone and everything. It’s the right way forward.
