It’s a truism that the phone in your pocket today is more powerful than the computer Nasa used to land humans on the moon in 1969. But the comparison isn’t as straightforward as it appears. First, raw power isn’t everything. Yes, your phone is speedier than the Apollo Guidance Computer – many millions of times faster – but Nasa’s machines were built with unparalleled focus, designed for the sole purpose of calculating orbital trajectories, burn rates and azimuths, with no superfluous bits to slow that process down. Second, computation comes in many forms, and much of the arithmetic needed to get people into space was done by humans, who reckoned figures with pen, paper and calculators. In 1970, when the men aboard Apollo 13, barrelling through space at thousands of miles per hour, gave their laconic report to ground control, ‘Houston, we’ve had a problem,’ Nasa summoned engineers with analogue slide rules, not digital computers, to guide the crew safely back to Earth. And the astronauts themselves were equipped with the same tool, a five-inch metal slide rule, so consistent in its design over the centuries that a mathematician from the 1600s would have been able to use it without much trouble.

The need for prosthetic brainpower has been apparent throughout human history, evidenced by the continual development of techniques and technologies to compensate for our biological inadequacies. As Keith Houston documents, the first number systems were developed around five thousand years ago in Mesopotamia, making it possible for users to write down what memory might struggle to retain. An early innovation was finger-counting, or dactylonomy, which assigns numbers to joints and hand positions, transforming multiplication and addition into a business of folding fingers and brushing knuckles. The discipline was popular enough, as Houston tells it, that as late as the 15th century mathematics textbooks included instructions in finger-counting ‘as a matter of course’ – the English ‘digit’ derives from the Latin *digitus*, finger or toe.

The next step forward for calculation was the creation of the first calculating devices, generally thought to be abacus variants that represented numbers as physical tokens such as stones placed in columns, which might be lines scratched onto a flat surface, or rods or strings arranged in a frame. (The Latin word for ‘count’, *calculare*, comes from *calculus*, ‘pebble’.) Some historians have argued, on the basis of a few gnomic references in scattered texts, for the existence of ancient Sumerian abacuses in the second millennium BCE, but the earliest solid evidence of such devices is from the last few centuries BCE, in ancient Greece and China. The abacus was sufficiently familiar that from the fourth century BCE on, figures such as the philosopher Diogenes and the historian Polybius could refer to its workings metaphorically, Polybius noting that for all their striving, men are ‘like the pebbles on a reckoning-board’, whose value can be transformed in an instant ‘according to the pleasure of the reckoner’. This image highlights one of the primary benefits of the abacus: its capacity to physically represent place-value notation – which is key to complex calculations – by dividing digits, tens, hundreds and so on into separate columns. With abacus in hand, the four basic operations of arithmetic – addition, subtraction, multiplication and division – can be conducted with incredible speed by an experienced bead-shuffler. So much so that as late as 1946, when US forces occupying Japan organised a maths competition between an army private with an electric calculator and a local civil servant with a beaded *soroban* (Japanese abacus), it was the ancient calculating device that triumphed. The *Nippon Times* covered the event with due majesty: ‘Civilisation, on the threshold of the atomic age, tottered Monday afternoon.’

As the name suggests, the appeal of the pocket calculator is all about compression – the smallness of the device itself, but more important its capacity to reduce the time and effort needed to carry out calculations. In this regard, the abacus was a shabby stand-in compared to the device that came next: the slide rule, a marvel of design that compressed mathematical work to the point of invisibility. The individual who deserves most credit for its invention, the English clergyman and mathematician William Oughtred (1574-1660), came to believe the slide rule was dangerous since it removed the need for students to learn any underlying mathematical principles. ‘The true way of Art is not by Instruments, but by Demonstration,’ he complained. ‘It is a preposterous course of vulgar Teachers, to beginne with Instruments, and not with the Sciences, and so instead of Artists, to make their Schollers onely doers of tricks, and as it were jugglers.’

The tricks performed with the slide rule are logarithms, a type of mathematical operation first described by the Scottish mathematician John Napier in 1614. The logarithm of a number *q* to base *p* is how many times you need to multiply *p* by itself to reach *q*. So the logarithm of 8 to base 2 is 3, because you have to multiply 2 by itself three times (2 x 2 x 2) to reach 8. The usefulness of this may not seem immediately obvious, but the benefit of logarithms is that they allow you to transform multiplication and division into the simpler operations of addition and subtraction. Imagine you want to multiply 8 by 16. If you express these numbers as powers of two, it’s the same as multiplying 2³ by 2⁴. Instead of doing that, however, you can just add together their logarithms, 3 and 4. So 8 times 16 becomes the same as 2⁷: multiplication has become addition. Once you’ve worked out tables of logarithms for many numbers, you can render them on a logarithmic scale: a ruler with markings that become closer together as their value increases. By aligning a pair of these scales and reading off the figures, you can multiply and divide large numbers in seconds by adding or subtracting their logarithms. With a little training, advanced operations such as trigonometry or calculating roots can be performed just as quickly. As with the abacus, the slide rule works because it physically embodies mathematical operations: it turns thought into mechanics – producing, in Oughtred’s description, the ‘superficiall scumme and froth of Instrumentall tricks’.
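The slide rule’s trick can be sketched in a few lines of Python – an illustration of the principle described above, not anything from Houston’s book:

```python
import math

# A slide rule multiplies by adding lengths proportional to logarithms.
# Reproducing the example above: 8 x 16 as 2^3 x 2^4 = 2^(3+4) = 2^7.
log_8 = math.log2(8)       # 3.0
log_16 = math.log2(16)     # 4.0
product = 2 ** (log_8 + log_16)
print(product)             # 128.0
```

Sliding one logarithmic scale along another performs exactly this addition mechanically, which is why the answer can be read off in seconds.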

The simplicity and cheapness of the slide rule ensured its domination of the world of pocket calculation for centuries. Designers experimented with circular and cylindrical versions that were even more compact and accurate; additional scales were added to facilitate further types of calculation. Although it was invented in the 17th century, the slide rule peaked in popularity in the 20th, with variants catering to different professions. There were slide rules for photographers calculating light levels; slide rules for engineers to gauge the flow of liquids and gases; and slide rules for pilots to estimate ground speed and calculate fuel consumption. Slide rules not only went into space but were on board the *Enola Gay* when the US used atomic weapons for the first time (specialised versions were made to calculate radiation exposure). By the mid-1960s, more than a million slide rules were sold in the US each year. The device became for the mathematically minded what the stethoscope was for the physician: a badge of professionalism as well as a practical tool.

The slide rule’s replacement – the calculator proper – took time to develop. From the 1600s, inventors and mathematicians began to experiment with gears and drums to create the first mechanical calculators, but these were somewhat crude. Early examples such as the Pascaline, created by Blaise Pascal in 1645, or Samuel Morland’s adding machines from the 1670s, were both too intricate for mass production and too impractical to justify the expense. The polymath Robert Hooke summarily dismissed the gadgets when he witnessed them at work in 1673: ‘Saw Sir S. Morland’s Arithmetic engine. Very Silly.’ By the mid-19th century, after improvements in gearing and metalwork, mechanical calculators had become more reliable and easier to manufacture but remained limited in function, used most extensively as cash registers – simple adding machines.

Houston skips past most of this era, focusing instead on the electronic revolution of the 20th century, which introduced new functions such as memory storage and custom programs for multi-step calculations. Early machines such as the Olivetti Programma 101, released in 1965 by the Italian typewriter company, were a hybrid between computer and calculator. Users of the P101 could store programs on magnetic cards but results were printed on a roll of paper rather than displayed digitally. The miniaturisation of electronic components also led to changes in aesthetics. Rather than simply draping a shell around bulky mechanics, designers could more easily move and reposition smaller internal components to create a shape of their choice. Although the P101 was quite large (about the size of a cash register), it had smooth curves, with gill-like vents in the rear and a palm rest at the front that sloped away from the keypad like the page of an open book. Even the keypad looked modern, with full-body keycaps that fit flush into the machine’s surface, whereas old-fashioned typewriter keys floated like lily pads above the spokes of the machine. While the P101 wouldn’t look too out of place in an office today, contemporary competitors like the Mathatronics Mathatron, all sharp angles and industrial dials, looked like they belonged in a steel mill.

Improvements to the Programma 101 followed swiftly, enabled above all by the integrated circuit, or microchip, invented at Texas Instruments in 1958. Microchips drastically improved the speed, reliability and compactness of electronic systems, enabling manufacturers to bring down costs and experiment with new forms. At the time, Texas Instruments’ primary customers were industry and the military. The company wanted a commercial hit to take advantage of its invention, so it commissioned the Cal-Tech, a 1967 prototype which, being the size of a small book, is often referred to as the first handheld (but not pocket) electronic calculator. As Houston notes, what you’re willing to call a pocket calculator depends on the size of your pockets, but it’s generally agreed that the first genuine article was the Japanese Busicom Handy-LE, released in March 1971. The Handy-LE was the first calculator to have an LED display and was just five inches tall, two and a half inches wide, and less than an inch thick. A contemporary advert shows a Japanese salaryman pulling open his suit jacket and sliding the calculator out of his pocket like Superman ripping open his shirt. Such marvels came at a price: the Handy-LE sold for ¥89,900, two months’ salary for the average Japanese graduate.

Once the size barrier had been broken, the market exploded with new calculator designs, some of which look like forerunners of today’s smartphones. Take the HP-65, released by Hewlett-Packard in 1974 for $795 (a little under $5000 in today’s money). This was the first handheld calculator to be programmable: it came with little magnetic strips carrying preloaded functions that could be slotted into the device like memory cards. HP sold packs of these cards to suit the needs of different professions: the one for doctors offered a method to calculate body surface area from height and weight; the one for accountants included cards for calculating compound interest. HP even created a community around the device, with a ‘users’ library’ by means of which users could share homemade programs with one another – much like the grassroots organisations that helped drive the development of early home computers. In 1974 Hewlett-Packard even marketed the HP-65 as a ‘personal computer’, just a few years before the ‘1977 Trinity’ – the Commodore PET 2001-8, Apple II and TRS-80 Model I – properly established this product category.

It was the extended capabilities of computers that doomed calculators, which were transformed into a cheap commodity in a matter of years. In the early 1970s, a pocket calculator sold for hundreds of dollars and still had an air of mystique. ‘Look again,’ reads an article in the *New York Times* from 1972. ‘The person next to you, apparently holding a tiny radio in his hand, may actually be using a calculator. Rather than listening to rock or Bach, he could be figuring the commission on a sale, his income taxes, shopping costs or, on Madison Avenue, how much a big lunch is going to cost a client.’ Just a few years later, calculators were retailing for less than $20; sales peaked in 1975, the year after the HP-65 was released. In addition to being outclassed by computers, Houston argues, calculators were also at this point ‘dramatically overengineered’, built on technology that had long outstripped their practical requirements. By the 1990s, anyone who needed a calculator already had one. ‘Rare indeed was the kitchen drawer … that did not hide at least one broken calculator,’ Houston writes, ‘its LCD display clouded or its battery compartment crusted with the lifeblood of exploded AAAs.’

There are interesting things to say about the calculator’s story in these years, though Houston gives them short shrift. (Here and elsewhere, *Empire of the Sum* is undermined by its own format, which is more product catalogue than social history, with Houston leaning heavily on tour-guide humour to jolly readers along.) Consider the introduction of calculators to schools. In 1975, one in five schools in Ohio was using calculators during lessons, while one in three had forbidden them. Echoing William Oughtred, teachers worried that the devices would erode students’ cognitive abilities. ‘There is a real danger that if calculators are used, children will think that pushing buttons on a black box is mathematics,’ one teacher warned. But the calculator’s ubiquity in life outside school persuaded many educators that they needed to ‘prepare children for today’s world rather than yesterday’s’, and by 1980 the US National Council of Teachers of Mathematics was officially recommending that schools incorporate calculators and computers ‘at all grade levels’.

What was the effect of this? Houston refers to a few studies that suggest teaching with calculators is beneficial for students’ skills and makes maths classes more enjoyable, but doesn’t explore the topic any further. Is the triumph of the calculator in schools a sign of progress, or is something lost when thinking is outsourced to a machine? It’s a significant question, especially given current debates about the use in schools of AI writing tools like ChatGPT. In an epilogue, Houston compares the fate of the calculator to that of an alien species in the science fiction of Iain M. Banks, which becomes so advanced that it ‘sublimes’ from the physical universe altogether to settle in higher dimensions. The calculator as a gadget may have sublimed into software, but the comparison doesn’t do justice to the role of the device in the automation of calculation more generally. That’s the problem the calculator solved before it met its demise.
