The use of computers for, well, computations is nowadays practically the only way of manipulating large numbers. Many disciplines are unthinkable without that computing power, both in research and in industry, where the numerical information may itself be the result, and in other fields where computations are merely the background for the visual, practical, or qualitative results one is after.
However, most, if not all, computations a computer performs are imprecise: they rely on approximations whose results are as close as possible (or, more rarely, as close as requested) to the true values. Theorists are satisfied when they find formulae that produce an output for any input, whereas engineers and experimentalists need concrete results, the particular ones they are interested in. Moreover, the lack of precision is, in fact, embedded in nature itself, if we take into consideration the measurement postulate of quantum mechanics. Even if we don’t go that far, any measurement device has a finite resolution, and the act of measuring simply answers the question: how many times does the measured object fit into the smallest unit the device can discern? Computers themselves bring imprecision to the table, on the one hand due to their finite working memory and processing power, and on the other due to the approximations imposed on them by algorithms.
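To make this concrete, here is a minimal sketch in Python of the imprecision baked into a computer’s arithmetic: a decimal number such as 0.1 has no exact binary floating-point representation, so even an elementary sum comes out slightly wrong, and the usual remedy is to compare within a tolerance rather than exactly.

    import math

    # 0.1 and 0.2 are stored as the nearest representable binary fractions,
    # so their sum misses 0.3 by a tiny amount.
    print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
    print(0.1 + 0.2 == 0.3)   # False

    # Comparing within a tolerance acknowledges that the stored values
    # are only approximations of the true ones.
    print(math.isclose(0.1 + 0.2, 0.3))  # True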
This article is about the flawed way computers approach numbers, a flaw by design, no less. At the same time, we ask whether this shortcoming should cause any unrest. Simply and half-jokingly put: are the engineers really wrong when they work with π = 3? And if they are, can one “fix” this error? And if one can, is it worth it?
The Continuous and The Discrete
This distinction can be traced all the way down to sets. It is therefore at the basis of most branches of mathematics and their applications, since although set theory alone cannot serve as the foundation of mathematics, the objects one works with are usually sets nonetheless. Philosophy of science also asks about the discrete or continuous nature of… nature itself.
On the face of it, things are easy to understand: a set is called discrete if its elements are clearly distinguishable. The simplest example is the set of natural numbers (the non-negative integers), {0, 1, 2, 3,…}, which is discrete, since the numbers themselves are distinguishable and, furthermore, there is no intermediate element between two consecutive ones. In other words, there is no other natural number between 0 and 1, or between 1 and 2, or 2 and 3, and so on. Of course, these statements hold within the set itself; outside it, we know that one can find infinitely many decimal numbers between 0 and 1, say.
This remark leads us to continuity: a quantity is continuous if between any two parts of it one can find at least one other part. The clearest example is that of the real numbers, and we have already sketched what Aristotle called divisibility ad infinitum. Here is how it works: assume, without proof, that the set of real numbers is continuous. In particular, there is at least one real number between 0 and 1; let it be 0.5. Then, between 0 and 0.5 and between 0.5 and 1, we can find one more real number in each interval, say 0.25 and 0.75 respectively. We can repeat the argument and keep dividing the resulting subintervals without ever stopping.
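As an aside, here is a minimal sketch in Python of what happens when a computer imitates this endless subdivision. Mathematically the halving never terminates; with 64-bit floating-point numbers it stops after about a thousand steps, once the midpoint can no longer be distinguished from an endpoint.

    lo, hi = 0.0, 1.0
    steps = 0
    while True:
        mid = (lo + hi) / 2
        if mid == lo or mid == hi:  # no representable number strictly in between
            break
        hi = mid                    # keep subdividing the left half
        steps += 1
    print(steps)                    # 1074 with standard IEEE 754 doubles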
The continuous, unlike the discrete, brings one more difficulty, for mathematicians and philosophers alike. Clearly, the elements of a discrete set can be distinguished and ordered, which makes it possible to list them, say, in increasing order: 0, 1, 2, 3,…. Furthermore, there are no intermediate elements between any two consecutive ones, hence the list is exhaustive. What about continuous sets, e.g., the real numbers? They can certainly be ordered, but one cannot answer the question of which number immediately follows 0 in the list, for example! In fact, no such number can exist: whatever candidate one proposes, there is always another real number strictly between it and 0.
This problem is related to two faces of the most important problem in the history of mathematics and of abstract thought: infinity. On the one hand, the divisibility ad infinitum method can in principle produce numbers with ever more decimals: between 0.5 and 0.6 we have 0.51. Next, between 0.5 and 0.51 we have, say, 0.501. Then between 0.5 and 0.501 we have 0.5001, and so forth. The sequence can be continued for as long as one desires, and the outcome is an infinite set of numbers getting closer and closer to 0.5.
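Written out explicitly (in notation of my own choosing, not the article’s), the construction produces the sequence

    \[
      a_1 = 0.51,\quad a_2 = 0.501,\quad a_3 = 0.5001,\ \dots,\qquad
      a_n = 0.5 + 10^{-(n+1)},
    \]

which converges to 0.5, \( \lim_{n\to\infty} a_n = 0.5 \), yet no term of it ever equals 0.5.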
On the other hand, there is another surprise: such a procedure could never exhaust the set of real numbers; in fact, no procedure can! The relevant distinction between the discrete and the continuous is that of denumerability. Discrete sets like the naturals can be enumerated, that is, used for counting: here is the 0th, here is the 1st, here is the 2nd, and so on. But what Georg Cantor (1845-1918) showed, in one of the most controversial results in the history of mathematics, is that infinite sets form a hierarchy! Intuitively, one can see that the natural numbers can be embedded in the reals and that this embedding “leaves some room”, hence the reals are “more” than the naturals. Cantor proved this rigorously: although both sets are infinite, the fact that the reals cannot be used for counting makes them more numerous than the naturals.
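For the curious, here is the standard sketch of Cantor’s diagonal argument, in my notation rather than anything from the article. Suppose the real numbers between 0 and 1 could be listed as \( x_1, x_2, x_3, \dots \), each written out by its decimals,

    \[
      x_n = 0.\,d_{n1}\,d_{n2}\,d_{n3}\ldots,
    \]

and build a new number \( y = 0.\,e_1\,e_2\,e_3\ldots \) by changing every “diagonal” digit:

    \[
      e_n =
      \begin{cases}
        5, & \text{if } d_{nn} \neq 5,\\
        6, & \text{if } d_{nn} = 5.
      \end{cases}
    \]

Then y differs from the n-th number on the list in its n-th decimal, so it appears nowhere on the list: no enumeration of the reals can ever be complete.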
Here is now the gist of it, with reference to computers: the fact that one can never completely represent an infinite set, or a number with infinitely many decimals, is a shortcoming we can live with; nothing and no one could ever do it. But the fact that there are useful quantities that cannot be enumerated means that there can be no algorithms to manipulate them with perfect accuracy. Any algorithm consists of steps which, even if we allow them to be infinite in number (imagine a computer that started working before the Big Bang and is still not done after the Apocalypse), are enumerable: there is a first step, then a second, then certainly a third, and so on, with no intermediate steps between them!
Determinism and Probability
The fundamental design that led to this is due to Alan Turing, since most computers work thanks to a theoretical model of his called the Turing machine. An essential feature of such machines is determinism, a term rich in meaning ever since Isaac Newton in the seventeenth century, or even earlier. Simply put, deterministic behaviour means that the result of a procedure is completely determined by the initial setup. Therefore, if we know all the relevant factors at the beginning of a deterministic process, then we know the result, since the process that turns the input into an output can be replicated as an algorithm, with clearly defined steps.
In physics, determinism means that one can completely describe the evolution of a system once its starting conditions are known; the process can never surprise us and decide to do something on its own. It will behave according to precise, and sometimes algorithmic, laws.
You would be right to intuit that such a theory cannot really hold, at least not to such a large extent; it seems implausible that all computers and nature itself would behave predictably. Indeed, determinism has been largely abandoned, both in computing and in physics. What replaced it? A surprisingly old and simple notion, already in use in Newton’s time if not earlier, which turned up where one least expected it: in nature itself. This is probability theory, chance, or luck if you will. The Austrian physicist Erwin Schrödinger (1887-1961) is the one who showed, through mathematical equations and physical laws, that subatomic particles behave probabilistically. In other words, if one wanted to know the position, speed, or other features of, say, an electron, one would only find that it probably has this or that value, with a probability one can compute. Certainty is banished and, through the work of a contemporary of Schrödinger’s, the German Werner Heisenberg (1901-1976), we know that such uncertainties can never be made smaller than a certain threshold. And if we turn to measurement errors, things are even worse, of course, since they are limited by the precision of our apparatuses, and even today we have no means of examining arbitrarily small scales, although we can “see” inside an atom.
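For reference, the threshold Heisenberg identified is usually written as the uncertainty relation

    \[
      \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
    \]

where \( \Delta x \) and \( \Delta p \) are the uncertainties in a particle’s position and momentum and \( \hbar \) is the reduced Planck constant: squeezing one of the two below any bound forces the other to grow.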
The relevance of all this to computers is huge. On the one hand, in the early 2000s, computer scientists developed a new discipline called probabilistic programming. Instead of fixed, certain steps, a probabilistic program allows several possibilities at each step, each with its own chance. Take a simple deterministic example: I go to the store; if I find milk, I buy 1 litre; if I don’t, I buy bread. Every time this “algorithm” executes, the same two steps occur, and we know exactly why we end up either with 1 litre of milk or a loaf of bread. But here is the probabilistic version: there is a 45% chance I go to the store; if it has milk, there is a 51% chance I buy 1 litre; if it doesn’t, in 61% of cases I buy bread. Each step is a lottery, and the outcome of a single run is impossible to know for sure, as the sketch below illustrates.
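Here is a minimal sketch of the store example as a probabilistic program in Python. The probabilities 45%, 51%, and 61% are the ones from the text; whether the store has milk at all is an extra assumption of mine, set here to a 50% chance.

    import random

    def shopping_trip():
        if random.random() < 0.45:                   # 45% chance I go to the store
            store_has_milk = random.random() < 0.50  # assumed: 50% chance milk is in stock
            if store_has_milk:
                if random.random() < 0.51:           # 51% chance I buy 1 litre of milk
                    return "milk"
            else:
                if random.random() < 0.61:           # 61% chance I buy bread instead
                    return "bread"
        return "nothing"

    # Repeating the "algorithm" only reveals the frequencies of the outcomes;
    # no amount of inspection says what a single run will return.
    runs = [shopping_trip() for _ in range(10_000)]
    for outcome in ("milk", "bread", "nothing"):
        print(outcome, runs.count(outcome) / len(runs))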
Then, more recently, the development of quantum computers brought Schrödinger’s theory into computation. Classical computers work with bits, 0s and 1s manipulated by electronic components called logic gates, in which, intuitively, 1 means the gate is open and the next step is accessible, whereas 0 means the gate is closed, so current and information do not flow and the algorithm stops. Quantum computers use qubits (a wordplay on quantum + bits), which take the same values of 0 and 1, but no longer with certainty. Not only is there merely a chance of obtaining the state 0, but quantum information theory forces one to consider both possibilities at once. In other words, if there is a 71% chance of finding 0 (i.e., a closed gate), an analysis of the algorithm must also take into consideration the 29% chance of finding the gate open, and the outcome is a “combination” (rigorously called a superposition) of the two possibilities.
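As a toy illustration (my own, and nothing like a real quantum simulator), a single qubit can be modelled in Python by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1; the 71% / 29% split is the one from the paragraph above.

    import math
    import random

    amp0 = math.sqrt(0.71)   # amplitude of state |0>, the "closed gate"
    amp1 = math.sqrt(0.29)   # amplitude of state |1>, the "open gate"

    # Until a measurement happens, both amplitudes must be carried along:
    # the qubit is in a superposition of |0> and |1>.
    p0, p1 = amp0 ** 2, amp1 ** 2
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")

    # Measuring collapses the superposition to an ordinary bit, at random.
    def measure():
        return 0 if random.random() < p0 else 1

    samples = [measure() for _ in range(10_000)]
    print("fraction of 0s observed:", samples.count(0) / len(samples))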
Does All This Matter?
Finally, it certainly matters at least to ask the question: how relevant are such difficulties and shortcomings, and what exactly do they impact and restrict? In physics, i.e., “nature” in the broadest sense, we have known for more than a century that, quantum theory notwithstanding, quantum effects are practically negligible at the macroscopic level. Hence, quantum theory is really needed only at the (sub)atomic scale, and relativity only at very high speeds or in strong gravitational fields, because for everyday objects the errors produced by Newton’s mechanics are almost unnoticeable. Here we also find the starting point for the answer for computers. Quantum supremacy, which we discussed in a previous article, is not yet a reality. In other words, the benefits that are specific to quantum computers are still hard to advocate for.
However, clumsy mathematician that I am, with few practical skills or interests, I cannot find a better closing than this: despite the almost negligible practical impact of such shortcomings, along with those due to approximations (such as π = 3 or any other value that inevitably keeps only a truncated, finite number of decimals), the distinctions are real. Consequently, they have been, are, and will remain subjects of research and fascination for mathematicians and philosophers alike, to say the least. Last but not least, history offers no small number of cases in which inspiration and practical, applied, engineering progress came precisely from such sources.