Feite Kraay, Author

An editorial cartoon in a recent edition of my morning newspaper depicts a giant, menacing robot labeled “Artificial Intelligence” holding up a scientist by the back of his white lab coat. The scientist is frantically thumbing through pages labeled “Owner’s manual.” Another one shows a man reading the headlines over breakfast and asking, “Should I be worried about this AI thing?”—to which his toaster answers with a sly grin, “Don’t give it a second thought.”

In March 2023, an open letter was published calling for an immediate six-month moratorium on all further development of generative AI technology. The pause was meant to allow for the establishment of shared safety protocols intended to ensure that AI systems are safe “beyond a reasonable doubt.” The letter quickly gathered tens of thousands of signatures and made headlines for a while, but it has since become old news. Although the letter writers meant well, I believe the idea of a moratorium is impractical, and the reasons they cite for it, much like those editorial cartoons, miss the mark on the real risks posed by AI.

Call to alarms
The open letter worried about AI becoming “human-competitive” and asked, “should we automate away all the jobs, including the fulfilling ones?” This might be a legitimate question, but not, I suggest, one to aim at AI alone. All technology has been human-competitive since, well, at least the industrial revolution. I recently listened to a webcast where the speaker suggested that knowledge workers, including many in the technology industry, are now the most vulnerable to AI. To which I would answer, perhaps flippantly: I guess it’s our turn now. But without minimizing the trauma to those affected by technological disruption, the job market has never been zero-sum. In fact, technology eventually becomes human-complementary. People have always adapted and migrated toward higher-value work, and the result is a net increase in jobs and economic activity. Knowledge workers and others can, and will, learn to use AI as a tool that just might relieve them of certain tasks while freeing them up to unleash their creativity and innovation into new fields.

The open letter also fretted about AI representing “a profound change in the history of life on Earth” and asked “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” I would argue that this is a red herring. I’ve written before about what I feel are the inherent limitations of AI with current technology, and I am confident that this kind of dystopian science-fiction scenario is extremely unlikely. Despite all the power we’re throwing at it, AI is still far from the scale or sophistication needed, and I’m not sure it will ever get there. I have suggested that an approach based on quantum computing could help, but I also admit that this is highly speculative and theoretical. Quantum computing is currently being deployed to improve specific subsets of AI such as machine learning, but it’s far from clear if or when quantum and AI might converge into anything sentient.

A third concern of the open letter read “Should we let machines flood our information channels with propaganda and untruth?” This one, at least, is a half-step in the right direction. Online propaganda has been a problem for quite a few years now. Human agents have already been using sophisticated technology to influence public opinion, and AI will be another tool at their disposal. Our defenses against propaganda are still the same—keep an open mind, learn to think critically and check your facts. Untruth from AI, on the other hand, is where the most serious problems lie—so let’s take a closer look at that.

Into the deep
The real problems with AI are more subtle than the ones outlined in the open letter because they are not the explicit result of a person using an AI system as a tool. They are the untruths, biases and inaccuracies that arise as unintended consequences of the complexity and sophistication built into AI systems as they work today. AI developers don’t deliberately set out to code discrimination or wrong answers into their systems, and yet it happens. Let’s look at a couple of examples to see how that occurs; they will also demonstrate the difficulty of solving these problems.

In a prior post, I mentioned a hospital AI system that assigned lower care standards to Black patients than to White patients. It turns out that the problem was buried deep in the data. The system assigned risk scores, and therefore treatment levels, to patients based on their cumulative one-year health care costs. This seemed a reasonable assumption: a higher cumulative cost should imply a patient with greater health care needs. However, the system didn’t catch the fact that Black patients in the same cost bracket as White patients were considerably sicker on average, with higher incidences of high blood pressure, diabetes and other illnesses. The implication is that Black patients tend to use the healthcare system less overall than White patients, likely because of systemic bias and inequitable access to care in the first place.

Looking at the data another way, a Black patient cost the healthcare system $1,800 less per year than a White patient with similar symptoms. Therefore, the cost-based decision made by the hospital AI algorithm implied that a Black patient had to be considerably sicker than a White patient before being referred for extra care. Fewer than 18 per cent of the patients referred for extra care were Black, a number that should have been over 46 per cent in an unbiased decision-making system.
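
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The numbers, group labels and referral threshold are invented for illustration; this is not the hospital’s actual data or algorithm. It shows how a referral rule that never mentions race can still disadvantage a group that incurs lower costs for the same level of illness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the same underlying illness burden...
need = rng.normal(loc=50, scale=10, size=n)   # true (unobserved) health need
group = rng.choice(["A", "B"], size=n)        # two patient populations

# ...but group B generates systematically lower costs for the same need,
# for example because of unequal access to care (illustrative numbers only).
cost = need * 100 - np.where(group == "B", 1800, 0) + rng.normal(0, 500, size=n)

# The algorithm never sees `need`; it simply refers the costliest patients for extra care.
threshold = np.quantile(cost, 0.97)
referred = cost >= threshold

for g in ("A", "B"):
    print(f"group {g}: referral rate {referred[group == g].mean():.2%}")
# Despite identical need, group B is referred far less often; the bias lives
# in the proxy (cost), not in any explicit rule about group membership.
```

The toy model’s bias enters entirely through the proxy variable: cost stands in for need, and the two diverge systematically for one group.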

It’s easy to see in retrospect how the bias occurred, but difficult to predict in advance. This was also a single, specific case in healthcare. AI systems in finance, insurance, criminal justice—all can and have exhibited bias based on completely different problems hidden several layers deep in the data and the rules interpreting the data.

A related problem is inaccuracy. Generative AI’s tendency to make up answers from time to time can be amusing, but inaccuracy in the results of AI algorithms is a problem with potentially drastic consequences. Image enhancement is a good example: it can be useful in correcting an out-of-focus smartphone photo, but in that previous post I also alluded to an AI system for enhancing MRI images that introduced errors into its results, errors that could lead to incorrect diagnoses. Again, the system developer’s intentions were good. The longer a patient stays in an MRI scanner, the better the resulting image; but shortening the time in the scanner reduces patient risk and increases the number of patients who can be scanned.

Therefore, AI systems exist to enhance image resolution, in the hope of producing a high-quality image from a short scan time. An AI system can be taught to reconstruct an image based on prior data—a large collection of other high-resolution images from which it can detect patterns similar to sections of the low-resolution image. Based on the likeliest pattern matches, it can then build up a new high-resolution image with a good probability of being correct—but the patterns it selects may not always match what is in the original blurry image, thus masking an underlying problem or introducing a nonexistent one. It’s not enough to expect a doctor evaluating the MRI image to always catch these kinds of errors and refer doubtful results for follow-up. If more reliable enhancement techniques can’t be developed, we may need to conclude that image enhancement just can’t be trusted in some situations.
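
As a purely illustrative sketch (not any actual MRI product, and with toy data and patch sizes I have made up), here is the pattern-matching idea in Python: each blurry patch is replaced by the high-resolution training patch whose blurred version matches it best, so the “enhanced” detail comes from the prior data rather than from the patient.

```python
import numpy as np

rng = np.random.default_rng(1)
patch_size, n_train = 4, 500

# "Prior data": high-resolution training patches paired with blurred versions of themselves.
train_hi = rng.random((n_train, patch_size, patch_size))
train_lo = np.broadcast_to(train_hi.mean(axis=(1, 2), keepdims=True), train_hi.shape)

def enhance(lo_patch):
    """Return the high-res training patch whose blurred version is closest to lo_patch."""
    dists = ((train_lo - lo_patch) ** 2).sum(axis=(1, 2))
    return train_hi[dists.argmin()]

# A blurry patch from the "scan": the reconstruction is whatever training patch happens
# to match best, which can add plausible-looking detail that was never in the patient.
scan_lo_patch = np.full((patch_size, patch_size), 0.6)
print(enhance(scan_lo_patch).round(2))
```

In a real system the matching is learned by a neural network rather than done by nearest neighbour, but the core risk is the same: the output is a plausible reconstruction drawn from prior data, not a direct measurement.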

Self-driving cars provide another example of inaccuracy. They do a pretty good job of relieving human drivers of mundane tasks like driving at a constant speed on a highway, but still don’t do well managing unexpected situations such as construction zones or other sudden hazards. The result is that self-driving cars experience traffic accidents at a rate of 9.1 per million miles travelled, versus 4.1 for human drivers. It turns out that the AI in self-driving cars still can’t replicate human intuition and reaction time. (On the upside, self-driving car accidents are less severe and result in fewer deaths, partly because they occur at lower speeds—self-driving cars are better than humans at obeying speed limits and other traffic laws.)

Out of the dark
There is no single solution for correcting the problems of bias and inaccuracy. Fixing the problems after the fact is akin to the proverbial Dutch boy sticking his finger in a hole in the dike—a new leak immediately springs somewhere else and before long, he has run out of fingers and toes to plug them all. The sources of the problems, and their solutions, are unique to each AI system and possibly even each use of that system. It will take a genuine commitment on the part of data scientists and AI developers to think carefully, compassionately and several levels more deeply about all the possible consequences of where they source their input data and how they encode their rules—and then to test rigorously and widely for bias and accuracy before moving systems to production.

In my last post, I wrote about the contribution of “dark data” (data that is obsolete, incomplete or inaccurate) to carbon footprints. Now let’s factor in AI risk as well. When AI systems ingest dark data as part of their training process, the result is often the biased and inaccurate outputs I’ve noted above. A recent article I read about this suggests methods to improve the extraction, transformation and loading of data. In some cases, inaccuracies can be corrected; in other cases, missing data can be interpolated using mathematical models, including the application of another layer of AI. Also, data sets can be curated or tagged to indicate the level of confidence an organization can have in their use. If data scientists and analysts make the effort to use these techniques and improve on them, we should see a noticeable improvement in the fairness and accuracy of our AI systems.
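
As a rough illustration of those last two techniques, the sketch below uses pandas to interpolate missing values with a simple mathematical model and to tag each record with a confidence level. The column names, dates and thresholds are my own assumptions, not drawn from the article.

```python
import pandas as pd

# A small, made-up table with the kinds of gaps and stale entries "dark data" contains.
df = pd.DataFrame({
    "record_id": [1, 2, 3, 4],
    "annual_cost": [1200.0, None, 3400.0, None],
    "last_updated": pd.to_datetime(["2023-01-05", "2021-06-30", "2023-02-11", "2019-12-01"]),
})

# Remember which values were missing, then fill them with a simple linear model;
# a more elaborate model (or another layer of AI) could be substituted here.
imputed = df["annual_cost"].isna()
df["annual_cost"] = df["annual_cost"].interpolate(limit_direction="both")

# Tag confidence so downstream AI training can weight or exclude weak records:
# stale rows get "medium", imputed rows get "low", everything else stays "high".
df["confidence"] = "high"
df.loc[df["last_updated"] < pd.Timestamp("2022-01-01"), "confidence"] = "medium"
df.loc[imputed, "confidence"] = "low"
print(df)
```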

Ultimately, what is the purpose of AI but to augment human intelligence, not to replace it? To get there conscientiously, I admit I wouldn’t rule out the need for some amount of public-sector regulation. In the meantime, some technology companies and NGOs have already been collaborating on a set of clear governance principles, including:

  • Transparency: The system must be traceable so that the process of achieving a result can be explained. This includes clarity about who trains the AI system, with what data, and how recommendations are generated.
  • Accountability: Business requirements must be clearly defined and ethically acceptable before development starts, and a human must always be able to take responsibility for what the system does.
  • Inclusion: The system must not discriminate against anyone, preserving the dignity of all people.
  • Impartiality: The system must not follow or create biases, so that its output helps people make fairer choices.
  • Reliability: We must be able to have confidence in the system’s accuracy.
  • Security and Privacy: The system must be held to the highest standards of security against data breaches and other intrusions, and it must respect the user’s right both to privacy and to control over their own data.

If we make a genuine effort to abide by these principles, there will be little to fear—and the world will gain.
