Time in Cryptography
Time plays a significant role in applied cryptography. Many aspects of cryptosystem applications are directly or indirectly tied to the passage of time or to time measurements. Sometimes the impact is obvious. But more often, it's not. Let’s explore the most interesting aspects of time in cryptography applications.
Time to crack
Any cipher, except the one absolutely unbreakable cipher (the one-time pad), can be cracked, and cracking takes time. If you brute-force a 128-bit symmetric key at a trillion (10^12) keys per second, checking all 2^128 keys would take about 10^19 Earth years—ten trillion times a million. In short, that's a very long time, even if the order of magnitude is off somewhere. But seconds, years—these are all intervals of time.
For a perfect cipher, to have a decent chance of guessing the key, you'd need to search half of the key space. For 2^128 that doesn't help much: 2^127 still eats up those same "trillions of millions" of years. But if the cipher's structure has flaws that let you split the search space into smaller regions, a successful brute-force attack may take much less time. For example, the trillion-keys-per-second machine from the previous paragraph would exhaust 2^48 keys in under five minutes.
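These back-of-the-envelope numbers are easy to check. A minimal sketch (the 10^12 keys-per-second rate is just the assumption from the text above):

```python
# Back-of-the-envelope brute-force timing at a hypothetical 10^12 keys/second.
RATE = 10**12                      # keys per second (assumed)
SECONDS_PER_YEAR = 31_557_600      # Julian year

def years_to_search(bits):
    """Years needed to try all 2^bits keys at RATE keys per second."""
    return 2**bits / RATE / SECONDS_PER_YEAR

print(f"2^128: {years_to_search(128):.2e} years")   # ~1.1e+19 years
print(f"2^127: {years_to_search(127):.2e} years")   # half of that, still ~5.4e+18
print(f"2^48:  {2**48 / RATE:.0f} seconds")         # ~281 s, under five minutes
```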
When assessing cryptographic strength, time is always a parameter, but here it is measured in the number of operations. It's difficult to define a standard cryptographic "brute-force second," but you can count the elementary operations that implement the cipher's mathematical transformations. We're still talking about some computational quantum that corresponds to time. This is the time to crack—or the number of operations required to reveal the secret with high probability.
In practice, the required strength of a cipher relates to how long the information in the protected messages stays relevant. If a message becomes outdated after a day, then, technically, it's enough to protect it with a cipher that takes at least a month to crack. Or a billion years. Practical internet ciphers—for example, AES in TLS/HTTPS—have estimated strength in the realm of many billions of years of computation. Just to be absolutely sure. Although, about "absolutely sure"—no one really knows. And situations where it's unclear how long data must stay secret come up all the time. Worse: what took years to crack a quarter-century ago now takes only days—or rather, only as long as it takes to arrange access to a supercomputer, which will spit out the answer in a minute.
Acceleration of computation. That, too, is a matter of time. Quantum computers have been expected for many, many years now. A very long time. They're still not here, but the threat exists that traffic recorded today could be decrypted fifty years from now on a quantum computer. And indeed: you can always safely say that we're still twenty years away from building the necessary quantum computer—a very good strategy, fully supported by game theory.
If you thought that quantum computers don't really affect symmetric ciphers like AES, you're not quite right. Yes, the known general-purpose quantum algorithm (Grover's search) can only speed up brute force quadratically—halving the exponent, the square root, meaning 2^128 turns into 2^64—so moving to 256-bit keys is enough to restore the margin. However, that is a general-purpose algorithm, and nothing guarantees that specific ciphers won't turn out to have improved quantum attacks capable of breaking them in a reasonable amount of time.
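To put the Grover speedup into the same units as before—a sketch, again assuming the 10^12 operations-per-second rate and, as a gross simplification, counting one quantum step as one "operation":

```python
# Grover's search: brute force over 2^n keys costs roughly 2^(n/2) quantum steps.
RATE = 10**12                    # assumed operations per second, as above
SECONDS_PER_YEAR = 31_557_600

for n in (128, 256):
    quantum_steps = 2**(n // 2)  # exponent halved by the quadratic speedup
    years = quantum_steps / RATE / SECONDS_PER_YEAR
    print(f"{n}-bit key: 2^{n} classical steps, ~2^{n // 2} quantum "
          f"(~{years:.1e} years at the assumed rate)")
# 128-bit: ~2^64 quantum steps -> about half a year: no longer comic.
# 256-bit: ~2^128 quantum steps -> ~1.1e19 years: the margin is restored.
```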
The main quantum-computing leap is expected not against symmetric ciphers but against asymmetric systems, which are used for key exchange: to obtain a shared symmetric secret, an asymmetric cryptosystem is used (usually the Diffie-Hellman protocol). That cryptosystem would be attacked with Shor's quantum algorithm—if a suitable quantum computer is actually built anytime soon. For now, quantum computers can't even factor arbitrary numbers, while key-exchange systems in TLS are already protected by post-quantum cryptosystems, for which no fast quantum attacks are known.
Time as a coordinate
Physics has no time, but there are counters of periodic events. Accurate time is synchronized across different points of the planet nowadays, so counting yields a stable, increasing sequence. Such sequences in applied cryptography are called counters. They can be used to build a unified coordinate system—specifically, when calculating one-time codes.
A common example is the TOTP protocol (Time-Based One-Time Password): both parties hold a shared secret and use it to compute a token that depends on the current time. For the method to work, the two parties need to share not only the secret but also the time. If you use TOTP—a very popular source of the "second factor" these days (for some reason it's considered reliable)—then the moment the clocks of the token generator and the verifying system diverge too far, tokens stop working. This is one of the most common reasons for contacting one-time-code tech support: "Oh no, my tokens stopped working!"—"Don't worry, let's sync our clocks!"
Once again, time directly affects the cryptographic operation: the one-time code/token calculation uses the timestamp, and stability of the time coordinate is vital for the scheme to work. And if someone has stolen the TOTP secret, they can compute tokens for any time (even in the future).
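A minimal TOTP sketch in Python (per RFC 6238: HMAC-SHA1 over a 30-second time-step counter; the key below is the RFC test key, not a real secret):

```python
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC over the time-step counter, then dynamic truncation."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                         # how many steps since epoch
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

secret = b"12345678901234567890"                     # RFC 6238 test key
print(totp(secret))                                  # token for "now"
print(totp(secret, for_time=time.time() + 10 * 365 * 86400))  # token ten years ahead
```

Note the last line: with the secret in hand, nothing stops you from computing a valid token for any timestamp, past or future.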
To prove that a certain action preceded a specific moment in time, you need to take special steps. In olden times, to establish priority for a discovery, people published anagrams—which could later be decoded by someone who knew the key (the components of the important discovery). Even Newton did this. Basically, the key and the anagram—that’s cryptography.
Trusted timestamps are used in digital-signature infrastructure: they make it possible to verify that a signature existed before a specified point in time. Typically such confirmations are issued by dedicated timestamp providers, which cryptographically sign the timestamps they issue. Proper "processing of the past" is a major challenge in digital document management. Suppose there is a document signed with a key whose public part is verified by a certificate—but the certificate for this key expired five years ago. Can the signature on the document be trusted?
It seems logical to trust it: we're not signing but verifying, and at the time of signing the certificate was valid. But if ten years have passed, that's just speculation—keys have expiration dates for a reason. Perhaps in the meantime someone built a quantum computer, cracked all the keys in the chain, and signed the document retroactively, issuing fake certificates for the past. So the signature can no longer be trusted. That can be quite unfortunate, given how intense digital document circulation has become.

That's why we need sequences of chained hash values and similar solutions—the most fashionable of which is called "blockchain": the main role a blockchain plays is rigidly fixing the order of records—what came after what. For example, who spent a given part of a bitcoin first. Time coordinates are crucial here precisely because of their rigidity. Still, who can really guarantee anything about chains of hash functions? They say the math behind the difficulty of synthesizing gold is more reliable than SHA-256. Yes, even despite the philosopher's stone.
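The core idea of fixing order with a hash chain fits in a few lines. A sketch, not a production design: each record commits to the hash of the previous one, so rewriting or reordering history breaks every later link.

```python
import hashlib

def chain(records):
    """Link records so each entry commits to everything that came before it."""
    prev = b"\x00" * 32                         # genesis value
    out = []
    for rec in records:
        h = hashlib.sha256(prev + rec).digest() # hash covers the previous hash
        out.append((rec, h))
        prev = h                                # the next record depends on this
    return out

for rec, h in chain([b"doc signed", b"timestamp issued", b"key revoked"]):
    print(rec.decode(), "->", h.hex()[:16])
# Changing any earlier record changes every later hash:
# "what came after what" is fixed, as long as SHA-256 holds.
```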
Accuracy of Time Intervals
The accuracy of timekeeping matters in applied cryptography just as much. The most common case is the validity period in a TLS certificate, given by two values: notBefore and notAfter. If the clock at verification time shows something unexpected, it may turn out that, locally, the certificate hasn't started being valid yet. The special theory of relativity, in a way.
Local clocks lagging behind the clocks of the certification authority is considered such a widespread problem that the CA Let's Encrypt, for example, deliberately sets the start of a certificate's validity one hour earlier than the moment of issuance! A time machine—cryptographic, but virtual. Let's Encrypt is not the only one, though heavy backdating of TLS certificates is frowned upon. For automated systems that issue certificates quickly, like Let's Encrypt, reasonable backdating is important: a freshly issued certificate arrives at the server right away, yet users with lagging clocks couldn't reach the website, since in their "timeframe" the certificate hadn't started being valid. When validating certificates, both time boundaries matter, exactly as written: not before, not after.
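What a verifier does with these two boundaries is a simple interval check. A sketch, with the one-hour backdating applied at issuance (the 90-day lifetime mirrors Let's Encrypt; the function names are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

BACKDATE = timedelta(hours=1)   # allowance for clients whose clocks lag

def issue_validity(issued_at, lifetime_days=90):
    """Return (not_before, not_after) with notBefore backdated by one hour."""
    return issued_at - BACKDATE, issued_at + timedelta(days=lifetime_days)

def is_valid(not_before, not_after, local_now):
    # Both boundaries matter: not before, not after.
    return not_before <= local_now <= not_after

issued_at = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
nb, na = issue_validity(issued_at)
slow_clock = issued_at - timedelta(minutes=40)   # a client lagging 40 minutes
print(is_valid(nb, na, slow_clock))              # True, thanks to the backdating
```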
It's not that hard to check whether the time set in certificates lags: that's what Certificate Transparency logs are for. The logs issue their own timestamps—SCTs, Signed Certificate Timestamps—which are embedded in the certificate and mark the moment the log service saw the certificate data, so anyone can compare the times themselves (the SCT values are displayed, for example, by the TCI web-node analysis service).
Certificate Transparency (CT) is one of the large-scale showcases for the methods of applied cryptography, and the time intervals in certificates lead to an interesting effect there. In CT, every issued certificate must be recorded in logs. Strictly speaking, it is specially invented precertificates that get recorded, but here we'll call everything certificates—for a story about time in cryptography the distinction doesn't matter. Anyone can read certificates from the logs; that is the point of CT. But an enormous number of TLS certificates is issued per unit of time—the flow is so intense that CT logs bloat quickly and run out of disk space to store them. That's why time-sharding was invented.
Time-sharding in CT logs means maintaining a separate log for the certificates of each time interval. For example, there might be a separate log for the period from July to December 2026. Each certificate goes into the log whose interval contains the certificate's expiration timestamp. Pay attention—this matters: certificates are sorted by their expiration date, not their start date, which lets a log be wound down once its interval has passed, since by then every certificate in it has expired. And again, it's time that plays the main role here.
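Shard selection is then a pure function of the certificate's expiry. A sketch (the shard names and boundaries here are invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical temporal shards: each log accepts certificates expiring in its window.
SHARDS = {
    "log-2026h1": (datetime(2026, 1, 1, tzinfo=timezone.utc),
                   datetime(2026, 7, 1, tzinfo=timezone.utc)),
    "log-2026h2": (datetime(2026, 7, 1, tzinfo=timezone.utc),
                   datetime(2027, 1, 1, tzinfo=timezone.utc)),
}

def pick_shard(not_after):
    """Route a certificate by its expiry (notAfter), not its start date."""
    for name, (lo, hi) in SHARDS.items():
        if lo <= not_after < hi:
            return name
    raise LookupError("no shard accepts this expiry")

print(pick_shard(datetime(2026, 9, 15, tzinfo=timezone.utc)))  # log-2026h2
```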
Leaks through time measurement
There's another aspect: leaks of information about the internal state of a cryptosystem's implementation through differences in how long cryptographic operations take. What does that mean? Take a working cryptographic tool and measure how many microseconds it spends performing the same operation on different data—for example, sign different documents and time each signing. If you manage to find a correlation between the data and the running time of the cryptographic operation, it may be possible to recover the secret key, since the signature algorithm is known. That's why critically important operations are designed to run in constant time, regardless of the input data. This requires special algorithms and isn't always possible. Note that the scheme applies not only to computer systems but also to electromechanical machines.
A classic case, which nowadays ought to open any specialized course on applied cryptographic programming, is of course the leakage of secrets through an incorrect implementation of bit-string comparison. A "straightforward" implementation stops comparing two strings as soon as the first difference is found—indeed, a single differing bit is enough for the strings to be unequal. But then the time (more precisely, the number of cycles) needed to produce the result of comparing against a secret value depends on the content of the input string.
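The difference between the "straightforward" comparison and a constant-time one is a couple of lines of code. A sketch (Python's standard library already ships the safe variant as hmac.compare_digest):

```python
import hmac

def compare_leaky(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on where the first mismatch is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False          # leaves the loop early -> timing leak
    return True

def compare_constant_time(a: bytes, b: bytes) -> bool:
    """Touches every byte no matter where the mismatch is."""
    return hmac.compare_digest(a, b)
```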
Suppose a vulnerable system issues an authentication token only when the correct 128-bit secret string is provided. The attacking party can send arbitrary data to the system and measure the processing time of each variant. At first glance it might seem that a full brute-force search with guaranteed discovery of the secret would still require 2^128 requests—far too long. But that's an illusion. The trick is that the vulnerability "compresses" time through the information leak: the attacker can determine the position at which the bits start to differ. If the system responds extremely quickly, the very first bit didn't match. Flip that bit—now it's correct. Send the modified string and measure the verification time again; from the measurement, determine the next bit that doesn't match the secret. And so on. If the comparison is vulnerable at bit resolution, 128 bits fall in about 128 requests. The same method works in the more practical byte-oriented variant: the leak reveals the position of the last matching byte. Brute-forcing takes longer there, since a byte has 256 possible values—but that's 256 guesses per position, not an exponential search.
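The byte-level attack from the paragraph above, as a sketch. Here measure_response_time is a hypothetical oracle standing in for the network measurement; a real attack would repeat each measurement many times to average out noise:

```python
def recover_secret(length, measure_response_time):
    """Recover a secret byte-by-byte from a leaky comparison.

    measure_response_time(guess) -> seconds is a hypothetical oracle:
    the observed processing time of the vulnerable system for one guess.
    """
    known = b""
    for pos in range(length):
        timings = {}
        for candidate in range(256):
            guess = known + bytes([candidate]) + b"\x00" * (length - pos - 1)
            timings[candidate] = measure_response_time(guess)
        # The candidate that makes the check run longest matched one more
        # byte before the comparison loop's early exit.
        best = max(timings, key=timings.get)
        known += bytes([best])
    return known

# Cost: 256 * length measurements instead of 256 ** length blind guesses.
```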
Such leaks are classified as side-channel leaks: although they are algorithmic in nature, exploiting them requires measuring a parameter external to the cryptosystem—the observable execution time of its operations. And it's not only about comparisons; there are many algorithmic variants. On the other hand, measuring response time with sufficient precision is far from always possible. Sometimes you get lucky: even if the algorithmic foundation is vulnerable, timing noise reliably hides the vulnerability. But very often it's the other way around.
Pay attention to time. It doesn't exist, which is exactly why it matters so much. As a concept.