Feite Kraay, Author

My seventh marathon was in May of 2017, in Saskatoon. I had planned a visit with my parents, who called that city home for over three decades, and timed my visit to coincide with the race weekend. With a solid training cycle through the winter, I was anticipating a good race, taking advantage of a mostly flat course and cool spring temperatures. However, early in the race—about 5km in—I saw something unusual that threw me off my stride.

The on-course distance markers had started deviating from my GPS watch by about 800 meters, and the gap stayed constant for the rest of the race. Each marker showed me farther along than my GPS did, by a margin too large and too consistent to be explained by normal GPS inaccuracy. (Wearable GPS devices are accurate to within a few meters at best and can often be off by dozens or even a few hundred meters.) I asked some of my fellow competitors, and they were seeing exactly the same thing. We all assumed some error in marker placement would eventually sort itself out, and we kept running. I was feeling good, thinking that a personal best finish was within my reach.

But imagine my surprise when, almost at the end of the race, where I expected to make one final turn toward the finish line, I came upon a course marshal directing all runners onto an additional out-and-back loop of 400 meters each way. Take it from me: when you’ve run what you thought was 42km and believe you’re almost done, being forced to cover 800 meters more is absolutely excruciating. My heart sank as I realized that my dream of a personal best had just gone up in smoke. Nevertheless, I pushed myself through the extra distance and finished the race, notching what was, at the time, my second-fastest marathon finish.

Later that afternoon, I received an e-mail from the race organizers explaining what had happened. The on-course marshals had misdirected all runners at an intersection near the 5km mark, accidentally cutting about 800 meters from the full distance of the course. When they realized the mistake, the race director had to scramble to add 800 meters at the end of the course before the lead pack of runners got there. I guess you could say all’s well that ends well: two deviations from the planned racecourse caused everyone some extra difficulty and confusion, but we still ended up running the correct distance.

Going off-course
In a previous post, I wrote about our marathon to achieve quantum-safe encryption, and how the finish line appeared to be in sight. However, just as at that marathon in Saskatoon, a couple of intervening events have thrown us off our stride. Although the finish is still achievable, it may be farther off and harder to reach than first anticipated.

Remember how I wrote about RSA-2048, a form of asymmetric encryption that protects internet traffic, and how Shor’s algorithm proves that a sufficiently large quantum computer will eventually be able to break it? The expectation had always been that quantum computers would need at least five to ten more years to achieve the necessary scale. However, in December 2022 a team of Chinese researchers published a paper claiming to show that RSA-2048 can be broken using currently available quantum technology: specifically, a quantum computer of only 372 qubits. Given that 433-qubit technology is already available from one major quantum hardware manufacturer, with 1,121 qubits promised by the end of 2023, the consequences for online security could be severe. This was the first unexpected detour in our quantum-safe marathon.
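To see why factoring is the crux, here’s a toy textbook-RSA sketch (my own illustration, with deliberately tiny primes; real RSA uses two secret 1024-bit primes and proper padding). The private key falls straight out of the prime factors of the public modulus, and recovering those factors is exactly the step Shor’s algorithm makes tractable on a quantum computer.

```python
# Textbook RSA with tiny primes -- illustrative only, never use in practice.

p, q = 61, 53             # the secret primes
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient of n: 3120
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent: inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message

# Anyone who factors n back into p and q can recompute phi and d --
# which is why fast factoring breaks RSA.
```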

I’ve also previously written about the National Institute of Standards and Technology (NIST) having assembled, in July 2022, a short list of four new encryption algorithms considered the leading candidates to form the basis of post-quantum encryption. All four are based on mathematical problems believed to be hard enough that even a quantum computer couldn’t solve them efficiently. However, in February 2023, a team of American cryptographers announced that they had been able to compromise one of those algorithms, CRYSTALS-Kyber, one of the three finalists built on lattice problems in pure mathematics. If it’s compromised, how can we trust the integrity of any of the other finalists? And what are the implications for vendors who are already embedding versions of these algorithms into everything from networking devices to mainframe computers, and selling them as quantum-safe? CRYSTALS-Kyber’s sudden vulnerability is the second detour on our course.
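To give a feel for what “based on lattice problems” means, here’s a minimal (and utterly insecure) sketch of learning-with-errors encryption, the unstructured cousin of the problem underlying CRYSTALS-Kyber. The parameters and function names are my own, chosen tiny for readability; the real scheme’s security rests on how hard it is to recover the secret vector from noisy linear equations.

```python
import random

# Toy learning-with-errors (LWE) encryption of a single bit.
# Kyber builds on a structured ("module") variant of this problem;
# these parameters are deliberately tiny and NOT secure.

q, n, m = 3329, 16, 32   # modulus (Kyber's, for flavor), secret dim, samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]        # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randrange(-2, 3) for _ in range(m)]    # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)                                   # private key, public key

def encrypt(pub, bit):
    A, b = pub
    r = [random.randrange(2) for _ in range(m)]        # random 0/1 selector
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q   # = small noise + bit*q/2
    return 1 if q // 4 < d < 3 * q // 4 else 0         # round to 0 or q/2

s, pub = keygen()
assert all(decrypt(s, encrypt(pub, bit)) == bit for bit in (0, 1))
```

Without the noise term e, the public key would be a plain linear system that Gaussian elimination solves instantly; the noise is what turns recovery of the secret into a lattice problem believed hard even for quantum computers.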

So, that’s the bad news: the quantum threat to current encryption might be closer, and our best hope for a defense might be weaker, than we thought. The good news is that, in all likelihood, neither of these detours is as serious as it seems, and we still have a good chance of finishing this race.

Keep calm and soldier on
Although the Chinese researchers claimed their work proves RSA could be broken with a quantum computer of only 372 qubits, what they actually accomplished was to find the prime factors of a 48-bit integer using a 10-qubit machine. A 48-bit integer is any number up to just over 281 trillion (about 281,000,000,000,000). That’s a value we can intuitively grasp (roughly ten times the size of the US national debt), and even a classical computer can find its prime factors easily. The current implementation of RSA is based on factoring a 2048-bit integer: very roughly, a number up to the size of a 3 followed by 616 zeros, which is incomprehensibly large. Mathematicians, including Peter Shor himself, have examined the Chinese results and concluded that although the method isn’t wrong, there’s no evidence that it can scale up and run fast enough to crack RSA-2048. We’re probably safe, for now, but this episode should serve as a serious wake-up call.
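A quick experiment makes the gulf in scale vivid. The sketch below (my own illustration, not the researchers’ code) builds a random 48-bit semiprime, factors it by naive trial division in seconds, and then prints how many digits a 2048-bit modulus has:

```python
import random

# Factoring at the 48-bit scale is trivial for a classical computer;
# the 2048-bit scale of real RSA is another universe entirely.

def is_prime(num, rounds=20):            # Miller-Rabin probabilistic test
    if num < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if num % p == 0:
            return num == p
    d, r = num - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, num - 1), d, num)
        if x in (1, num - 1):
            continue
        for _ in range(r - 1):
            x = x * x % num
            if x == num - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(candidate):
            return candidate

p, q = random_prime(24), random_prime(24)
n = p * q                                               # a ~48-bit semiprime
factor = next(f for f in range(3, n, 2) if n % f == 0)  # naive trial division
print(f"{n} = {factor} x {n // factor}")                # seconds on a laptop

print(f"2**2048 has {len(str(1 << 2048))} digits")      # 617 digits: RSA scale
```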

Also, the vulnerability of CRYSTALS-Kyber wasn’t due to a flaw in the algorithm itself. Instead, it was the result of something called a “side-channel attack.” In a side-channel attack, hackers don’t try to break the algorithm mathematically. Rather, they measure things like fluctuations in electrical current, CPU execution time, and even electromagnetic emissions from the computer to reverse-engineer the secret key material being processed. Defenders of CRYSTALS-Kyber have been quick to point out that you must distinguish between the algorithm itself and its implementation. A side-channel attack targets a particular implementation of an encryption algorithm on specific hardware, not the algorithm itself, and in theory any algorithm could be susceptible. There are viable defenses against side-channel attacks, including inserting random delays into the algorithm’s execution, manipulating the computer’s clock, or better shielding its emissions, but these come at a performance cost. NIST will have more work to do, especially on efficiency and performance. For now, at least, the mathematical integrity of CRYSTALS-Kyber and its peers does not appear to have been breached.
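To make the idea tangible, here’s a self-contained toy (entirely my own construction, far cruder than the actual attack on Kyber’s implementation) showing how a comparison routine that exits at the first wrong byte leaks a secret through timing alone:

```python
import hmac, time, secrets

# A miniature timing side channel. The early-exit comparison below runs
# longer the more leading bytes of a guess are correct, so an attacker
# can recover the secret one byte at a time. The sleep exaggerates the
# per-byte cost so the effect is visible; real attacks need many more
# samples and statistical filtering. Takes a few seconds to run.

SECRET = secrets.token_bytes(8)

def leaky_equal(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit: timing now depends on the secret
            return False
        time.sleep(1e-4)
    return True

def time_guess(guess, trials=5):
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        leaky_equal(SECRET, guess)
        best = min(best, time.perf_counter() - start)
    return best

recovered = b""
for pos in range(len(SECRET)):
    pad = b"\x00" * (len(SECRET) - pos - 1)
    timings = {b: time_guess(recovered + bytes([b]) + pad) for b in range(256)}
    recovered += bytes([max(timings, key=timings.get)])  # slowest = correct

print(hmac.compare_digest(recovered, SECRET))  # True: the secret leaked
# The defense: constant-time comparison, e.g. hmac.compare_digest itself.
```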

Let’s not give up; we haven’t lost this race, not yet at any rate. But we should be grateful that these two setbacks were not as serious as they could have been. Next time we may not be so lucky, so we had better be prepared. Inaction is no longer an option. Organizations should move decisively to execute a quantum risk assessment, followed by a cryptographic inventory and a classification of data at risk. Taking these steps now will give us the flexibility to implement new quantum-safe solutions quickly before the inevitable Y2Q date arrives, the day a quantum computer can break today’s encryption, which may well be much sooner than we think.
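As a concrete starting point for that cryptographic inventory, here’s a small sketch, assuming the pyca/cryptography package and a hypothetical certs/ directory, that walks a folder of PEM certificates and flags the public-key algorithms a quantum computer could break:

```python
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# First pass at a cryptographic inventory: list certificates whose
# public keys rely on factoring or discrete logs, both of which fall
# to Shor's algorithm. A real inventory would also cover TLS endpoints,
# SSH keys, code signing, databases and hardware security modules.

def classify(cert: x509.Certificate) -> str:
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size}: quantum-vulnerable"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECC ({key.curve.name}): quantum-vulnerable"
    return f"{type(key).__name__}: review manually"

for pem in Path("certs").glob("*.pem"):          # hypothetical cert store
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    print(f"{pem.name}: {cert.subject.rfc4514_string()} -> {classify(cert)}")
```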
