Let us imagine we are in the MIT Instrumentation Laboratory, today known as Draper Labs, one night in May 1967. Lauren Hamilton, a little blonde eight-year-old, is “playing astronaut”, jumping on and off the chairs and touching every possible implement. Her mom, Margaret, watches her with one eye, while with the other, and all her mind, she is fully absorbed in the alternating white and green lines of a pile of computer printout. At some point Lauren pushes a nice, interesting red button sticking out from a row of many, and suddenly the ticking noise of the machine stops, many lights go off while another big red light starts flashing. The tape units stop rolling, and the paper stops flowing out of the printer. Lauren screams in fright and bursts into tears as she runs to her mom’s lap. Margaret hugs her and calms her down in a soft voice. Margaret Hamilton (second from the left in the attached photo) is a 33-year-old working mother, single after the break-up of her first marriage. She often takes Lauren to the MIT laboratory, on weekends and at night, because her work is too important to be stopped. And while Lauren still sobs and searches for a possible excuse for the little mess she has just made, Margaret thinks, and thinks deeply. How could Lauren’s careless gesture crash the simulator of the Apollo Guidance Computer, or AGC, on which she and her team had been working so relentlessly for months? Could the same mistake occur in flight? And could it be corrected? Margaret immediately started working on a backup solution to the problem, and designed a piece of safety code that could restart the operation after the crash. A piece of code that turned out to be marvelously useful when, months later, astronaut Jim Lovell actually made that very same mistake while orbiting the Moon on Apollo 8. The ground team led by Margaret caught the error signal and quickly fixed the situation.

The design of a guidance, navigation and control (GNC) system that could satisfy the bold expectations laid down by president John Fitzgerald Kennedy, in his landmark “We choose to go to the Moon” speech of Sep 12, 1962, had to be invented from scratch. By then, how to get a package into Earth’s orbit was more or less understood, but how do you aim the 2,800-ton Saturn V and fly to the Moon? Scientists had thought about the basic physics of it, but going from a basic architecture to how you would actually do it was still very much under debate. And how you navigate to the destination, just like ships crossing the ocean, was a huge challenge. Ordinary air or sea navigation systems rely on making observations of the outside world. However, in a featureless and empty space, the only way to find your way is to precisely measure time and sense your own acceleration, that is, inertial navigation. This problem had already been solved for launching ballistic missiles, with a system of gyroscopes and accelerometers to guide the warhead to the target (in order to properly assure the destruction of humanity, such missiles could not use electronics that might be perturbed by foreign radio signals). The software of the primitive computers that could guide nuclear missiles had been developed at the laboratory of that same Charles Stark Draper, director of the MIT Instrumentation Laboratory. It was then natural to award them the contract for the Apollo GNC system.
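To make the idea of inertial navigation concrete, here is a minimal sketch in Python (purely illustrative, not Apollo code; the function name and sample numbers are invented): starting from a known state, the computer repeatedly adds up accelerometer readings to track velocity, and velocity to track position, a form of dead reckoning.

```python
# Illustrative sketch: dead reckoning by inertial navigation.
# Given accelerometer samples in an inertial frame, integrate twice to track
# velocity and position. All names and the sample data are made up.

def dead_reckon(accel_samples, dt, v0=(0.0, 0.0, 0.0), r0=(0.0, 0.0, 0.0)):
    """Integrate acceleration samples (m/s^2) over fixed time steps dt (s)."""
    v = list(v0)
    r = list(r0)
    for a in accel_samples:
        for i in range(3):        # simple Euler integration, per axis
            v[i] += a[i] * dt     # velocity from acceleration
            r[i] += v[i] * dt     # position from velocity
    return tuple(v), tuple(r)

# Example: 10 s of constant 1 m/s^2 thrust along x, sampled every 0.1 s.
samples = [(1.0, 0.0, 0.0)] * 100
velocity, position = dead_reckon(samples, dt=0.1)
print(velocity, position)   # roughly 10 m/s and about 50 m along x
```

In a real system the same bookkeeping is done with gyroscopes to keep track of the frame orientation, and with far more careful numerical integration; the sketch only shows the principle of navigating without any outside reference.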

Up to then, all airplanes and spacecraft (including those of the Mercury project) were steered by a human pilot, by pulling levers and flipping switches. The Gemini capsule had a rudimentary onboard computer, only to assist the pilot in case he had to adjust the orbit. The Soviet unmanned Zond spacecraft had to wait until 1969 for some kind of guidance computer, while all Vostok, Voskhod and early Soyuz missions were automated and directed from the ground, with just a magnificent but purely mechanical navigation instrument, the Globus, to help the pilots. Only in 1980 would the Soyuz-T put the lives of two cosmonauts in the hands of an Argon-16 computer. Therefore, back in 1962, the mere idea that the Apollo GNC, with its sophisticated AGC hardware and software, could take command of a spaceship and guide three astronauts to land on the Moon was already a revolutionary statement in itself. It was dubbed “the fourth astronaut”. The so-called “block-II” AGC, designed by Margaret’s team of more than 100 people (close to 350 at peak times), was used not only on Apollo 7 through Apollo 17 (including all actual lunar landings), but also on the three Skylab missions, on the Apollo-Soyuz test mission, and on a research project using an F-8 aircraft. A total of 57 AGCs were constructed, with 138 display-keyboard units; none of those installed in the Lunar Modules ever made it back to Earth, so the surviving units are definitely collector’s items.

Margaret had always loved maths. She obtained her first degree at Earlham College and subsequently enrolled at Brandeis, near Boston, which she quit soon after getting involved in computer programming as an assistant to an MIT professor who did meteorological predictions and long-term weather simulations. After one year at MIT, she went on to program the Philco-Ford SAGE system for the Department of Defense. When she heard of the MIT-NASA program, around 1964, she immediately wanted to be involved. She was initially assigned to a small group in charge of the computer hardware and software, as the first female programmer, until she was designated group leader of the entire software project just one year later. However, at that time programming was merely considered the activity of punching in and storing data. At the beginning of the Apollo program there was no notion of onboard flight software; computers were only meant to acquire and display data. According to David Mindell, former NASA engineer and MIT scientist, “the original documents that established the engineering requirements of the Apollo missions did not even mention the word ‘software’. Software was not included in the program and was not part of the budget”.

The concept of a computer interface did not exist: how to make the computer communicate with all the different subsystems, how it could command a valve to open, how it could order a motor to turn on or off… By mid-1964, MIT engineers started to have a somewhat clear idea of what computer technology would have to look like to enable the mission. At the Draper labs, the blackboard with the original AGC design still exists; it has never been wiped clean. It was turned into a few sheets of paper, and then into a blueprint. Note that up to then, the race had been to make bigger and bigger computers, hosted in rooms literally filled with thousands of vacuum tubes. Nobody really trusted such machines, which could run for at most one or two days, sometimes no more than a few hours, before needing repairs. All of a sudden, the problem was instead to make a computer as small as possible, which could operate without problems for at least two weeks, at temperatures ranging from nearly zero to a few hundred kelvin, and under quite heavy cosmic radiation. The first transistor-based computers had appeared around 1958, and the first integrated circuits had been introduced by Fairchild only three years before. The AGC was designed as the first computer using integrated circuits on silicon chips, driven by a crystal clock ticking at about 1 MHz. Its hardware was based on 2,800 integrated circuits with resistor-transistor logic, each including two 3-input NOR gates, chosen to avoid the problems of mixed logic (as the engineers among you know well, NOR and NAND are functionally complete logic gates that can be combined to obtain any other Boolean function). The whole AGC was encased in an elongated aluminum box of about 33 liters.
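For the curious, here is a small Python sketch (purely illustrative, not the AGC’s actual logic equations) of why a single 3-input NOR gate is enough: NOT, OR and AND can all be derived from it, so any Boolean function can be wired up from identical packages.

```python
# NOR is functionally complete: NOT, OR and AND built from a 3-input NOR.

def nor3(a: int, b: int, c: int) -> int:
    """3-input NOR: output is 1 only when all inputs are 0."""
    return 0 if (a or b or c) else 1

def not_(a: int) -> int:
    return nor3(a, a, a)              # tying all inputs together inverts the signal

def or_(a: int, b: int) -> int:
    return not_(nor3(a, b, 0))        # OR = NOT(NOR)

def and_(a: int, b: int) -> int:
    return nor3(not_(a), not_(b), 0)  # De Morgan: AND = NOR of the inverted inputs

# Quick check over all input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b) and and_(a, b) == (a & b)
```

Using one identical gate everywhere simplified manufacturing, testing and spare parts, which mattered far more for reliability than squeezing out the last bit of circuit efficiency.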

One very interesting solution was found for the data and program storage. The AGC had very little memory by modern standards: 2,048 sixteen-bit words (15 data bits + 1 parity bit) of erasable magnetic-core RAM, occupying a volume of about 10 liters, and 36,864 sixteen-bit words of ROM in core-rope memory, occupying just about 8 liters. In the 1960s, most computers already used magnetic-core memory for RAM storage, but “core ropes” were unusual and operated differently. Erasable core memory and core rope both used magnetic cores, that is, small magnetizable rings made of ferrite. But while erasable core memory used one core for each bit, core rope stored an incredible 192 bits per core, achieving a much higher density. The trick was to put many wires through each core, thereby hardwiring the data: a 1 bit was stored by threading a wire through a core, while the wire bypassed the core to store a 0 bit. Thus, once a core rope was carefully manufactured, using half a mile of copper wire, the data and program were permanently stored in it. The program code of the AGC was hardwired into the core-rope memory by female workers hired from textile factories, skilled in threading. Some programmers nicknamed the finished product “LOL memory”, for Little Old Ladies memory, and Margaret Hamilton was “Mother rope”.
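A toy model may help visualize how a core rope “hardwires” a program. The following Python sketch is purely illustrative (the names, example words and data structure are invented, and the real sensing electronics are far subtler): manufacturing the rope amounts to fixing which wires thread which cores, and reading it back is just sensing that permanent pattern.

```python
# Toy model of core-rope ROM: a wire threaded *through* a core reads as 1,
# a wire routed *around* it reads as 0. Once "woven", the contents are fixed.

WORD_BITS = 16          # AGC words were 16 bits (15 data + 1 parity)

def weave(words):
    """'Manufacture' a rope: record which (address, bit) wires thread the core."""
    threaded = set()
    for addr, value in enumerate(words):
        for bit in range(WORD_BITS):
            if (value >> bit) & 1:
                threaded.add((addr, bit))   # wire goes through the core -> 1
    return threaded

def read(rope, addr):
    """Sense the word at addr by checking which of its wires were threaded."""
    return sum(1 << bit for bit in range(WORD_BITS) if (addr, bit) in rope)

program = [0o30001, 0o00006, 0o10003]       # arbitrary example words, in octal
rope = weave(program)
assert all(read(rope, a) == w for a, w in enumerate(program))
```

The price of this density was inflexibility: any late change to the flight program meant physically re-weaving a rope module, which is why the software had to be frozen months before each launch.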

The Apollo 11 version of the guidance computer had 145,000 lines of code, written in AGC4, a specific assembly language invented at MIT, which was processed into binary code by the YUL assembler program. A rather large instruction set of 35+10 operations was available in the processor. Among the key innovations introduced by Margaret and her team there was a high-level interpreter software module implementing a virtual machine (something unheard of at the time), which could process a set of pseudo-instructions more complex and capable than the native AGC4 language. These instructions greatly simplified the navigation work, implementing for the first time double-precision scalar and vector arithmetic, trigonometric functions, and even compact matrix/vector products (necessary to compute inertial frame rotations). Such high-level instructions were interpreted at runtime, and therefore required a longer processing time of about 24 milliseconds each. This feature allowed the Apollo 14 astronauts, when in trouble, to manually punch in new code instructions radio-transmitted from Earth, thus saving the landing sequence from an abort. Another key feature was the real-time operating system, which J. Halcombe Laning developed for the AGC. Laning invented from scratch the idea of the exec and the waitlist, still in use today for spacecraft guidance, to prioritize the various running processes in asynchronous mode. This design saved the Apollo 11 landing, when a radar interface began flooding the computer with more jobs than its memory had room for, causing the infamous 1201 and 1202 alarms: the priority scheduling allowed the GNC to recover automatically, by dropping lower-priority tasks and letting the computer restart and complete the landing protocols.
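The spirit of that recovery can be sketched in a few lines of Python. This is not the real exec (its data structures, alarm codes and table sizes are not reproduced here): just a toy scheduler with a fixed job table that, when it overflows, raises an alarm and sheds the lowest-priority work instead of halting, so the most critical job still runs.

```python
# Toy priority executive, in the spirit of the 1201/1202 recovery.
# All names and numbers below are invented for illustration.

import heapq

MAX_JOBS = 7                      # small fixed job table, as a stand-in for the real limit

class Exec:
    def __init__(self):
        self.jobs = []            # heap of (-priority, name): highest priority pops first

    def schedule(self, priority, name):
        if len(self.jobs) >= MAX_JOBS:
            self.restart()        # overload: raise the alarm and shed load
        heapq.heappush(self.jobs, (-priority, name))

    def restart(self):
        # Keep only the most critical jobs (e.g. the landing guidance),
        # discard the rest, and carry on rather than halting the computer.
        print("ALARM: job table overflow, software restart")
        self.jobs = heapq.nsmallest(3, self.jobs)   # 3 highest-priority jobs survive
        heapq.heapify(self.jobs)

    def run_next(self):
        if self.jobs:
            _, name = heapq.heappop(self.jobs)
            print("running", name)

exec_ = Exec()
for i in range(9):                              # spurious low-priority requests
    exec_.schedule(priority=1, name=f"radar job {i}")
exec_.schedule(priority=10, name="landing guidance")
exec_.run_next()                                # the landing guidance still runs
```

The design choice worth noticing is that the system degrades gracefully: rather than trusting every job to behave, the executive assumes overload can happen and decides in advance what to sacrifice.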

In Margaret’s own words, “Nobody knew what we were doing, it was a wild west, there were no classes to take, nobody could teach us what to do. Looking back, however, we were the luckiest people in the world: we had no choice other than being pioneers, and no time nor excuse to be beginners. When I started calling our work ‘software engineering’, many around us mocked us for our radical ideas. But soon, software gained the same respect as any other engineering work on the mission”. In 2003, NASA honored Margaret with the Exceptional Space Act Award, to acknowledge her contributions to software development, attaching the largest financial prize ever conferred on a single person by the agency: 37,200 dollars. President Obama awarded her the Presidential Medal of Freedom in 2016, the highest civilian honor in the USA. It is widely acknowledged that the concepts developed by Margaret and her team became core elements of modern software engineering, and many of them are still in use today.
