Feite Kraay, Author

My father, age 87, moderates a weekly discussion group at our neighbourhood church. Called “The Globe and Faith,” the group meets every Wednesday morning to discuss current events from an ethical and moral perspective. Intrigued by my first three posts, about post-quantum cryptography, Dad asked me to be a guest speaker a few weeks ago. I was immediately ready to put on my prophet of doom robes and launch into a jeremiad about the coming Y2Q apocalypse. Instead, Dad asked me to talk to the group about artificial intelligence.

I readily agreed, relishing the challenge of refreshing my mind on a subject I have been following off and on since a couple of elective courses in university. I spent a few evenings compiling relevant historical and contemporary material and had a lively and stimulating conversation with Dad’s group. We could certainly conclude that AI had matured considerably since the late 1980s, when undergraduates like me were tinkering with data structures such as stacks to try, in a limited way, to simulate short-term memory. The group was also quick to identify some serious problems with AI in its current state.
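For readers curious what that kind of tinkering might have looked like, here is a minimal sketch in Python (not the language we used back then) of a bounded stack standing in as a crude short-term memory. The class name, the capacity and the sample facts are all invented for illustration.

```python
# A toy "short-term memory" modelled as a bounded stack (illustrative only;
# the class name, capacity and sample facts are invented for this sketch).
# New items are pushed on top; once capacity is exceeded, the oldest item
# at the bottom is forgotten.

class ShortTermMemory:
    def __init__(self, capacity=7):
        # A nod to the famous "seven, plus or minus two" estimate of human short-term memory.
        self.capacity = capacity
        self._items = []

    def remember(self, item):
        """Push a new item; forget the oldest if we are over capacity."""
        self._items.append(item)
        if len(self._items) > self.capacity:
            self._items.pop(0)  # the oldest memory falls away

    def recall(self):
        """Pop and return the most recently remembered item, or None if empty."""
        return self._items.pop() if self._items else None


memory = ShortTermMemory(capacity=3)
for fact in ["name", "phone number", "address", "meeting time"]:
    memory.remember(fact)  # "name" is forgotten once capacity is exceeded

print(memory.recall())  # -> "meeting time"
print(memory.recall())  # -> "address"
```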

Unless you’ve been living under a rock, you can’t help but have noticed that AI has been a huge topic in the headlines for the last few months, driven by the recent public emergence of AI “chatbot” technology, also known as “generative AI.” A powerful new product has been released to the consumer market, the industry is scrambling to react, and pundits are posting wildly exaggerated claims. Some are breathlessly hailing it as the greatest revolution since the dawn of Computer Science (it isn’t) and others casually dismiss it as nothing more than a souped-up “type-ahead” tool (also not true). I think the actual truth is somewhere in the middle, and a bit of historical context will give us the perspective to understand the real benefits and risks inherent in AI technology, including generative AI.

Test talk
To begin, we humans have always tended to anthropomorphize our technology. Early 20th century science fiction stories were full of robots coexisting, usually benignly and helpfully, with people. Popular movies and TV series have humans conversing in plain English with computers onboard spaceships and include nearly-human android crew members. Sometimes, however, the computer turns hostile or a space-based computer network launches all-out war against its human creators. In our popular imagination, we view technology as, in a way, superior to ourselves. We appreciate that it can be useful and beneficial, but we also fear what might happen if we were to lose control over it.

This anthropomorphic view guided much of the early philosophical and mathematical work on AI in the mid to late 20th century. Maybe I should say misguided, because I believe this view delayed the development and deployment of real AI-inspired solutions that could solve specific problems. Here’s why: while the popular imagination was filled with talking computers and humanistic robots, the technology industry was busy with databases, search engines and systems that could win a chess game or a TV quiz show. The disconnect between imagination and reality contributed to an “AI winter” where investment and innovation slowed dramatically. It has only recently recovered.

One of the early giants in AI research was the brilliant British mathematician Alan Turing. In the 1950s, Turing turned his mind toward the fields of Computer Science and Artificial Intelligence, his best-known contribution being the Turing Test. This was a thought experiment designed to tell whether a computer could ever achieve true artificial intelligence; in other words, whether it could be indistinguishable from a human.

The Turing Test works like this: A human and a computer are placed in separate, enclosed rooms. Each can communicate via a teletype device with an outside, human interrogator who does not know which room holds the human and which the computer. The interrogator can ask either room any questions they want and evaluate the answers. Eventually, the interrogator must decide which room holds the computer and which the human. If the interrogator decides wrongly, or admits they cannot tell the difference, then the computer has passed the test. Note that in order to pass, a computer system would have to give answers that are plausible but also at times fallible. I submit that despite the hype, nothing to date has ever passed the Turing Test, not even close, and not even the new generative AI. Wags have taken great delight in pointing out generative AI’s tendency to make up answers when stumped, but even this “capability” is not nearly enough. (Trust me, I have tested it myself.)
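To make the mechanics of the test concrete, here is a minimal sketch of its structure in Python. Everything in it is a stand-in for illustration: the two answer functions merely represent whoever, or whatever, sits behind each teletype, and a scripted interrogator takes the place of a real human questioner.

```python
import random

# Minimal sketch of the Turing Test setup (illustrative only).
# Two "rooms" answer questions over a text channel; the interrogator must
# decide which room holds the computer. The stand-in answer functions below
# are placeholders, not a real human or a real AI.

def human_in_room(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def computer_in_room(question: str) -> str:
    # A convincing machine gives human-like, plausibly fallible answers.
    return "Honestly, I'd have to think about that one."

def run_test(questions, interrogator_guess):
    # Hide which room is which by randomly swapping the assignment.
    rooms = {"A": human_in_room, "B": computer_in_room}
    if random.random() < 0.5:
        rooms = {"A": computer_in_room, "B": human_in_room}

    transcript = [(q, rooms["A"](q), rooms["B"](q)) for q in questions]

    guess = interrogator_guess(transcript)  # the room the interrogator believes holds the computer
    truth = "A" if rooms["A"] is computer_in_room else "B"
    # The computer "passes" if the interrogator guesses wrong (i.e. cannot tell the difference).
    return guess != truth

questions = ["What did you have for breakfast?", "What is 7 times 8, roughly?"]
# A scripted interrogator who can only guess at random stands in for a real human questioner.
passed = run_test(questions, interrogator_guess=lambda transcript: random.choice(["A", "B"]))
print("Computer passed this round:", passed)
```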

Correction course
So, there’s the problem. The term “artificial intelligence” had built up inflated expectations that it was nowhere near able to live up to. What did we in the industry do instead? Well, we changed the language. We de-emphasized “artificial intelligence” in favour of other, more modest labels and other, more modest solutions that would in fact deliver real, material benefits. These include:

  • Cognitive computing: Applying human ways of learning to specific knowledge domains such as health care or pharmaceutical research.
  • Machine learning (ML): A similar approach, applying rules and algorithms to train a computer to draw inferences from a large pool of unstructured data (a toy sketch of this idea follows the list).
  • Natural language processing (NLP): Kind of an extension of ML, enabling computers to parse actual sentences and retain some conversational memory in order to provide a more user-friendly query experience.
  • Deep learning: Scaling up ML with many-layered (“deep”) neural networks trained on much larger volumes of data.
  • Large language models (LLM): Essentially a combination of ML and deep learning that delivers even better, more intuitive communication with the user and broader applicability across larger knowledge domains.
  • Robotic process automation (RPA): Combining some of the above models to automate routine tasks like processing insurance or loan application forms, handling help desk queries and similar use cases.
  • Generative AI: Combining LLM and deep learning into a system that can carry on a natural-language conversation, picking up on textual cues and relying on an enormous and growing unstructured database of information.
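As promised above, here is a toy sketch of the machine-learning idea in Python: a tiny bag-of-words model that “learns” word counts from a handful of labelled sentences and then draws an inference about a new one. The training sentences, labels and scoring rule are all invented for this example; real systems learn far subtler patterns from far larger pools of data.

```python
from collections import Counter, defaultdict

# Toy machine-learning sketch (illustrative only): learn word counts per label
# from a tiny labelled "training pool", then classify a new sentence by which
# label's vocabulary it overlaps with most. The sentences and labels below are
# invented for this example; real systems learn from vastly more data.

training_data = [
    ("the patient reports chest pain and shortness of breath", "medical"),
    ("the MRI scan shows no abnormality in the left knee", "medical"),
    ("your loan application requires proof of income", "finance"),
    ("the interest rate on this mortgage is fixed for five years", "finance"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Pick the label whose learned vocabulary best matches the new text."""
    words = text.lower().split()
    scores = {label: sum(counter[w] for w in words) for label, counter in model.items()}
    return max(scores, key=scores.get), scores

model = train(training_data)
label, scores = classify(model, "the scan of the patient's knee looks normal")
print(label, scores)  # expected: "medical" outscores "finance"
```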

The common thread is having a large database to work with as well as a good rules engine for contextualizing the data, interpreting human queries and generating natural-language responses. And, no matter what we call it, AI in a more limited form has proven to be extremely useful. It assists doctors with diagnoses and lab technicians with interpreting MRI images. It powers facial recognition technology that speeds your passage through border controls at the airport. It accelerates and improves customer service and helps retailers in making product recommendations or allocating inventory. By changing its name, AI has managed to affect nearly all aspects of our lives. AI truly has come a long way, and generative AI can be viewed as essentially a powerful next step in a long line of technology improvements.
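As a rough sketch of that common thread, the fragment below pairs a miniature “database” of facts with a crude keyword rule for interpreting a query and a template for phrasing the answer in natural language. The topics, facts and matching rule are made up for illustration and are nowhere near a production rules engine, but they show the basic shape of the pattern.

```python
# Illustrative sketch of the "data plus rules engine" pattern described above:
# a miniature knowledge base, a crude keyword rule for interpreting the user's
# query, and a template for generating a natural-language response.
# The topics, facts and wording are all invented for this example.

knowledge_base = {
    "return policy": "items can be returned within 30 days with a receipt",
    "store hours": "the store is open 9am to 9pm, Monday through Saturday",
    "shipping": "standard shipping takes three to five business days",
}

def answer(query: str) -> str:
    query_lower = query.lower()
    # Rule: match the query against known topics by keyword overlap.
    for topic, fact in knowledge_base.items():
        if any(word in query_lower for word in topic.split()):
            return f"Good question: {fact}."
    return "I'm sorry, I don't have an answer for that yet."

print(answer("What are your store hours on weekends?"))
print(answer("Can I return a jacket I bought last week?"))
```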

However, all of these versions of AI come with real problems. For example:

  • The underlying data and rules engine can encode bias. Some health care AI systems have assigned lower care standards to Black patients due to an inherent bias in how patient risk was determined, and an AI algorithm used to predict recidivism rates among parolees was also found to be biased against Black people (a simple check for this kind of disparity is sketched after this list).
  • Facial recognition software does a pretty good job identifying white men but doesn’t do well with women or the BIPOC population. In one publicized example, a facial recognition algorithm was shown a photo of Oprah Winfrey and concluded with 76 per cent confidence that this was a man.
  • Results can simply be inaccurate. An AI system designed to enhance MRI images ended up generating incorrect results leading to wrong diagnoses.
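One simple way to surface the first kind of problem is to compare a model’s error rates across groups, which the sketch below does on a small, invented set of predictions. The records and group labels are fabricated purely for illustration, but the same comparison is the starting point of a real fairness audit.

```python
from collections import defaultdict

# Illustrative fairness check: compare a model's error rate across demographic
# groups. The records below are fabricated for this sketch; a real audit would
# run the same comparison over actual predictions and outcomes.

records = [
    # (group, model_prediction, actual_outcome)
    ("group_a", "high_risk", "no_reoffence"),
    ("group_a", "high_risk", "no_reoffence"),
    ("group_a", "low_risk", "no_reoffence"),
    ("group_b", "low_risk", "no_reoffence"),
    ("group_b", "low_risk", "reoffence"),
    ("group_b", "high_risk", "reoffence"),
]

def false_positive_rate(rows):
    """Share of people who did not reoffend but were still flagged high risk."""
    non_reoffenders = [r for r in rows if r[2] == "no_reoffence"]
    flagged = [r for r in non_reoffenders if r[1] == "high_risk"]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

by_group = defaultdict(list)
for group, prediction, outcome in records:
    by_group[group].append((group, prediction, outcome))

for group, rows in sorted(by_group.items()):
    print(group, f"false positive rate: {false_positive_rate(rows):.0%}")
# A large gap between groups (as in this toy data) is a red flag for bias.
```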

I won’t go into other geopolitical issues or the environmental impact of the large data centres needed to support AI, although these are also real concerns. But just to be clear, generative AI is prone to all the same problems.

Solving the problems with AI won’t be easy. Part of the solution is in massively scaling up the data and rules—a bigger data engine can better normalize some of the inherent bias and produce more accurate results. Part of it must be improving our own societal attitudes—the more bias we remove from our own human interactions, the less bias we’ll see in the data we feed to our AI systems. Part may be a regulatory framework with some amount of public sector oversight. Finally, much of the responsibility must be shouldered by the technology industry itself. Encouragingly, an industry consortium is beginning to form with an AI manifesto centred on six core principles:

  • Transparency
  • Inclusion
  • Accountability
  • Impartiality
  • Reliability
  • Security and Privacy

If we in the technology industry take these principles to heart, I believe that AI will continue to improve and to benefit all of us. But when, if ever, will it pass the Turing Test? I don’t think it will with today’s technology. I think it will take something truly revolutionary. Something capable of breaking Moore’s Law and unleashing a completely different approach to computing and cognition.

Is artificial intelligence ready to make a quantum leap? I’ll discuss that in my next post.
