Feite Kraay, Author

The dullest statutory holiday I ever celebrated was without a doubt Saturday, January 1, 2000.

I was working on contract at one of Canada’s big five retail banks, where my team and I had spent the prior two years modernizing their network of branch servers. Part of our work included ensuring that the entire middleware and application software stack was Y2K-ready. We had gone through seemingly endless rounds of testing and verification and were highly confident in our work; yet, when the critical date arrived, our director asked all team members to be on-site just in case. Bleary-eyed, we showed up at the office at 8:00 a.m. and spent the next six hours staring at our incident dashboards, watching absolutely nothing happen. In the early afternoon we were released to go home for the remainder of the holiday. By all accounts, our experience was not unique. Globally, the number of serious Y2K incidents could probably have been counted on the fingers of one hand.

What made Y2K such a non-issue? Although the severity of the problem may have been inflated in the industry and the public press, it was still a pervasive problem with potentially serious consequences across the entire global economy. I believe three main factors contributed to Y2K’s minimal impact:

  1. Widespread public knowledge of, and concern about, the problem. It was easy for the average layperson to understand the difference between two-digit and four-digit year fields, and the impact of computer systems interpreting ‘00’ as 1900 rather than 2000 (see the short sketch after this list). Such general concern was a strong factor in forcing government and industry to act.
  2. Fixing the problem, although it was expensive and time-consuming, was technically not very difficult. Modifying and testing computer code to accommodate a four-digit year field is something most programmers are capable of; in fact, many automated scanning, discovery and remediation tools hit the market in the late 1990s.
  3. There was a definite deadline to act. Chaos would ensue if all systems weren’t repaired and tested by December 31, 1999. This deadline allowed everyone to build robust project plans with ample time for design, development, deployment and verification. Everyone was working toward the same goal and collectively got the job done.
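
To make the first point concrete, here is a minimal sketch of the bug and its fix. It is written in Python purely for illustration; real Y2K remediation mostly took place in languages such as COBOL, and the function names here are hypothetical.

```python
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    # The Y2K bug: with only two digits stored, the year 2000 is read as "00",
    # so date arithmetic behaves as if time had jumped back to 1900.
    return end_yy - start_yy

def years_elapsed_four_digit(start_yyyy: int, end_yyyy: int) -> int:
    # The fix: store and compare full four-digit years.
    return end_yyyy - start_yyyy

print(years_elapsed_two_digit(98, 0))        # -98 years "elapsed" -- the bug
print(years_elapsed_four_digit(1998, 2000))  # 2 years elapsed -- correct
```

The repair itself is as simple as widening the field; the expensive part was finding, changing and testing every place in every system where such date arithmetic occurred.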

Today, the world faces a new threat with consequences at least as severe as any ever envisioned for Y2K. It arises from the rapid developments in quantum computing.

Quantum computing has the potential to revolutionize the IT industry as we know it today. Quantum computer hardware, based on the properties of subatomic particles, can represent multiple values at once rather than only the binary digits, the zeros and ones (bits) that underpin all classical computing. How quantum computing works will be the topic of a later post. For now, what you need to know is that the promise of this radical new computer architecture is the ability to solve certain complex mathematical problems exponentially faster than classical computers can. Quantum computing will have great applicability to problems in business, science and engineering, such as forecasting large-scale weather systems, researching new pharmaceutical drugs and optimizing complex financial systems or transportation networks.
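
As a rough illustration of what “multiple values” means (standard textbook notation, not tied to any particular quantum hardware): a single qubit’s state is a weighted superposition of 0 and 1, and a register of n qubits is described by 2^n such weights, or amplitudes, at once.

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,\quad |\alpha|^2 + |\beta|^2 = 1,
\qquad\text{and for an } n\text{-qubit register}\qquad
|\Psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\,|x\rangle .
$$

It is this exponential growth in the state being manipulated that underlies the speed-ups described above.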

However, this promise also comes with a threat: cyber-criminals using large-scale quantum computers will be able to break many of the encryption algorithms in use today to protect sensitive data and online transactions. If realized, this threat could disrupt, if not destroy, the foundations of digital commerce and data privacy. Furthermore, none of the three mitigating factors that defused Y2K applies here.

  1. It is not well known to the public. It does not have a catchy name and requires lengthy explanation to be understood. The average layperson sees quantum physics, and indeed quantum computing, as somewhat futuristic science fiction. In addition, cryptography is based on complex mathematical problems that few people understand, so internet security is generally taken for granted. In other words, the severity of the problem is discounted.
  2. Fixing the problem, in addition to being expensive and time-consuming, is enormously complicated. The solution requires highly advanced knowledge of pure mathematics and number theory to develop new quantum-safe encryption methods. Testing and validating these new methods is also an uncertain process, because no quantum computer powerful enough to attack them exists today. Deploying new encryption algorithms into production will require a great deal of work within organizations and across the public internet.
  3. There is no fixed deadline for remediation, nor even a firm date when the threat will materialize. Most industry experts expect that large-scale quantum computers will become commercially available in the next five to ten years, at which point today’s encryption vulnerabilities will be exposed. This timeline reflects the major technical hurdles still to be overcome in quantum computing; however, innovations are emerging at a rapid pace that may collapse it. Meanwhile, some cyber-criminals are already harvesting encrypted data now with a view to decrypting it years later and extracting useful information. So although the problem is genuinely urgent, the deadline remains vague.

This vulnerability of current encryption methods to quantum attacks has been nicknamed “Y2Q.” Next, we’ll look at how encryption works, the root of the Y2Q vulnerability and its consequences for privacy and online safety.
