A review of postings about scientific integrity and intellectual honesty, with observations regarding elite centrism – Part 5: inventions, innovations, and ushering of ‘the new normal’ (iii)

(Continued from Part 5 (ii))

When the Harvard University computer pioneer Howard Aiken, inventor of the Harvard Mark I, the world’s first “automatic digital calculator”, built in the early 1940s by IBM, took early retirement from his professorship in 1961, intending to become a businessman in the industry, he and Cuthbert Hurd, director of IBM’s Electronic Data Processing Machines Division, likely had discussions about starting “the first microcomputer computer company in the world”, with design work already being done by an assistant director of engineering at Lockheed Missiles and Space Company in Sunnyvale, California, where Aiken was a regular consultant. But as in Part 5 (ii), their collaboration plan did not materialize: Aiken founded his own Howard Aiken Industries Incorporated in New York and Florida, specializing in buying ailing companies, fixing them and then selling them, while Hurd left IBM in 1962 to become board chairman of the first independent computer software company, the Computer Usage Company.

As reviewed in Part 5 (ii), Aiken had been the leading academic computer-pioneer rival to John von Neumann, but he never attained the prominence of von Neumann, who has been regarded as the “father of computers” for his role in the development of the first general-purpose electronic computer, the ENIAC, built in the mid-1940s at the University of Pennsylvania, for his advocacy of the “stored program” computer design, and for his subsequent leadership of an ambitious computer-building movement among academic and scientific institutions.

From this perspective, as I have commented in Part 5 (ii), in 1961-1962 Aiken missed a second chance to attain some sort of “father” status for a new generation of computers, namely microcomputers, and perhaps the chance to share some of the glory of founding Silicon Valley with distinguished figures such as Stanford University engineering dean and provost Frederick Terman, who had mentored the founders of the Hewlett-Packard Company in the 1930s, and the 8 young scientists and engineers who in 1957 rebelled against the difficult management style of their mentor, 1956 Nobel Physics Prize laureate William Shockley, and founded the Fairchild Semiconductor Corporation.

The 1939 founding of Hewlett-Packard by Stanford University graduates William Hewlett and David Packard in a garage in Palo Alto has since been recognized as the birth of Silicon Valley, as noted in Part 5 (ii).

A co-winner of the 1956 Nobel Physics Prize for his role in inventing the transistor, William Shockley had in late 1955 started the Shockley Semiconductor Laboratory in Mountain View near Stanford, marking the arrival of the semiconductor industry that would give Silicon Valley its name; but it was the 1957 rebellion of the 8 disciples at the Shockley Semiconductor Lab, Robert Noyce, Gordon Moore, Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last and Sheldon Roberts, and their founding of Fairchild Semiconductor that led to that industry’s success, as previously quoted in Part 5 (ii):

“In September 1955 William Shockley and Arnold Beckman agreed to found the Shockley Semiconductor Laboratory as a Division of Beckman Instruments … Shockley rented a building … in Mountain View… attracted extremely capable engineers and scientists, including Gordon Moore and Robert Noyce, Julius Blank, who learned about and developed technologies and processes related to silicon and diffusion while working there. In December 1956 Shockley shared the Nobel Prize in Physics for inventing the transistor, but his staff was becoming disenchanted with his difficult management style. They also felt the company should pursue more immediate opportunities for producing silicon transistors rather than the distant promise of a challenging four-layer p-n-p-n diode he had conceived at Bell Labs for telephone switching applications.

After unsuccessfully asking Beckman to hire a new manager, eight Shockley employees – including Moore and Noyce plus Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last and Sheldon Roberts – resigned in September 1957 and founded the Fairchild Semiconductor Corporation in Palo Alto. Many other employees, from technicians to PhDs, soon followed. Over the next decade, Fairchild grew into one of the most important and innovative companies in the semiconductor industry, laying the technological and cultural foundations of Silicon Valley while spinning off dozens of new high-tech start-ups, including Advanced Micro Devices (AMD) and Intel. …”

(“1956: Silicon Comes to Silicon Valley”, The Silicon Engine, Computer History Museum)

As quoted, Shockley’s 8 disciples had become “disenchanted with his difficult management style” and also wanted “more immediate opportunities for producing silicon transistors”.

In 1956 von Neumann, by this time a U.S. Atomic Energy Commissioner and a leading scientific adviser to the U.S. Air Force on nuclear weapons and nuclear missiles development, was in hospital for cancer treatment and decided to move from the Institute for Advanced Study in Princeton to the University of California; but as in Part 5 (ii), the move did not materialize, as von Neumann died soon afterward, in February 1957 – the year Fairchild Semiconductor was founded.

Had von Neumann moved, it would likely have been to UCLA in Southern California rather than UC Berkeley in Northern California, as reviewed in Part 5 (ii): with Southern Californian military aerospace companies active in computer development, and the Santa Monica-based Cold War think-tank RAND Corporation having him as a leading strategist and having built the JOHNNIAC computer named for him, von Neumann and RAND could have started a ‘Computer Beach’ there – at a time when Northern California’s Silicon Valley-founding Hewlett-Packard did not yet have computer development in its vision.

A few years later, in 1961, Aiken founded his new company, Howard Aiken Industries, naming it after himself much as William Shockley had named the Shockley Semiconductor Laboratory; but while Aiken’s company was ambitiously broader in its industry scope, it was neither in computer development – the field of Aiken’s academic prominence – nor in Silicon Valley.

Hurd later appeared to blame Aiken’s interest in getting rich for their – or perhaps just Hurd’s own – failure to move forward with the plan to start a microcomputer company, as previously quoted in Part 5 (ii):

“… Hurd said that he had never discussed this matter with Aiken, but that on two or three occasions when Aiken was in California, where he was a regular consultant for the Lockheed Missile and Space Division, the two of them had “talked at great length about organizing a company.” “If we had done it and if it had been successful,” Hurd mused, “it would have been the first microcomputer computer company in the world.” Hurd told me that “an Assistant Director of Engineering at Lockheed . . . was doing the design work,” and that “Howard, along with that man and me” would form the new company. Aiken, Hurd continued, “wanted me to help raise the money.” They “never followed through” with this plan. “I thought that maybe he wanted to be rich,” Hurd concluded, “and was thinking about starting the company for that reason.””

(I. Bernard Cohen, Howard Aiken: Portrait of a Computer Pioneer, 2000, The MIT Press)

So Aiken “wanted to be rich” and was “thinking about starting the company for that reason”. But what if a scientist wanted to put innovative work into commercial industrial production and needed to start a company for this purpose?

In 1957 the 8 young men who abandoned William Shockley wanted exactly that: “more immediate opportunities for producing silicon transistors”, as quoted earlier.

Unfortunately, the “Shockley Eight” found it very difficult to get financing, and their venture nearly failed: their rebellion against their Nobel laureate mentor made them unacceptable to the investment firms that might otherwise have taken a chance on them – until eventually their case was brought to the attention of a wealthy and prominent playboy businessman, Sherman Fairchild:

“Most of the transistor entrepreneurs had been backed by family money or other private capital resources. Arthur Rock at Hayden, Stone soon came to appreciate why. Every company he approached on behalf of the group of eight turned the idea down flat, without even asking to meet the men involved. Some firms may have found the pith of the letter–please give a million dollars to a group of men between the ages of 28 and 32 who think they are great and cannot abide working for a Nobel Prize winner–unpalatable. …

Undaunted, Bud Coyle mentioned the scientists to playboy-millionaire-inventor Sherman Fairchild. A meticulous man in his sixties, Fairchild was a bon vivant who frequented New York’s posh 21 Club and wore “a fresh pretty girl every few days like a new boutonniere,” according to Fortune. …

Sherman Fairchild was not involved with day-to-day operations at his companies, but he suggested to the senior management at Fairchild Camera and Instrument that it might explore the prospects of the West Coast technologists. …”

(Leslie Berlin, The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley, 2005, Oxford University Press)

Thus was born Fairchild Semiconductor, a new company bearing the name of a famous playboy businessman who invested $1.5 million – twice what the 8 Shockley mutineers had requested – on the condition that he could later choose to acquire full ownership for a pre-agreed $3 million:

“Fairchild readily put up $1.5 million to start the new company—about twice what the eight founders had originally thought necessary—in return for an option deal. If the company turned out to be successful, he would be able to buy it outright for $3 million.”

(Walter Isaacson, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, 2014, Simon and Schuster)

Sherman Fairchild was a good match as an investor for the Shockley Eight, because he himself was a scientific inventor, though not in the semiconductor field, and was the largest shareholder of IBM through family inheritance:

“It was a fine match. Fairchild, the owner of Fairchild Camera and Instrument, was an inventor, playboy, entrepreneur, and the largest single stockholder in IBM, which his father had cofounded. A great tinkerer, as a Harvard freshman he invented the first synchronized camera and flash. He went on to develop aerial photography, radar cameras, specialized airplanes, methods to illuminate tennis courts, high-speed tape recorders, lithotypes for printing newspapers, color engraving machines, and a wind-resistant match. In the process, he added a second fortune to his inheritance, and he was joyful spending it as he had been making it. …”

(Walter Isaacson, 2014, Simon and Schuster)

Like Aiken, Fairchild had attended Harvard, but he never earned a degree from any of the universities he attended, which also included the University of Arizona and Columbia University.

(Frank and Suanne Woodring, Fairchild Aircraft, 2007, Arcadia Publishing)

In contrast to what Fairchild, a businessman and IBM’s largest shareholder living a playboy lifestyle, would readily do, namely finance the start of a new technology company, wasn’t Cuthbert Hurd, an IBM executive himself, too idealistic, as reviewed in Part 5 (ii), or simply too harsh about Aiken’s interest in getting rich, when the first major semiconductor company of the fledgling Silicon Valley had already been founded in the name of someone famously wealthier, more debauched and less involved in that company’s technological work?

Absolutely so, especially since Sherman Fairchild and his father George were legendary IBM figures of the very kind Hurd liked to talk about, as in Part 5 (i), namely the Thomas Watsons, father and son:

“… His father had preceded Tom Watson as the chief executive of the company that would become International Business Machines, and thanks to the vagaries of inheritance (Tom Watson had several children, while George Fairchild had only Sherman), he was the largest shareholder in IBM.”

(Leslie Berlin, 2005, Oxford University Press)

As the executive committee chairman on IBM’s board of directors, Sherman Fairchild immediately helped the new Fairchild Semiconductor get its first industry sales order – from IBM:

“Fairchild Semiconductor reached an important milestone on 2 March 1958, when it received a purchase order from IBM Owego accepting Fairchild’s quote to provide 100 core driver transistors at the price of $150 each. The firm had booked its first sale. With IBM’s purchase order, Fairchild Semiconductor instantly gained a measure of credibility in the electronics industry. IBM, a notoriously selective and demanding customer, had chosen to buy devices from the start-up. Helping Fairchild Semiconductor to secure this order from IBM was, reportedly, a timely visit by Sherman Fairchild (the founder of Fairchild Camera and Instrument and its majority owner) and Richard Hodgson to IBM’s president, Thomas Watson Jr. Managers at IBM Owego had concerns about Fairchild’s production capabilities and financial soundness. To overcome these reservations, Sherman Fairchild—who was also IBM’s largest individual shareholder and who chaired the executive committee of IBM’s board of directors—met with Watson and asked him to buy silicon transistors from the new venture. Fairchild Semiconductor received its order from IBM shortly thereafter.”

(Christophe Lécuyer and David C. Brock, with foreword by Jay Last, Makers of the Microchip: A Documentary History of Fairchild Semiconductor, 2010, The MIT Press)

With the IBM management, of which Cuthbert Hurd was a part, so well accustomed to a wealthy son of an IBM co-founder wielding influence on the company’s board, how ‘impure’ in comparison was the early computer pioneer Howard Aiken’s goal of getting rich?

Given that Fairchild, an inventor and businessman, could easily help young scientists start a new technology company that would lay “the technological and cultural foundations of Silicon Valley”, as quoted earlier, and given that Shockley, a scientist inventor, had been able to start a new technology company, in my opinion Aiken, a scientist inventor turned businessman with a similar ambition, should have been given the chance to try.

But however much Hurd’s educational and institutional career perspectives may have biased his view of Aiken’s interest in money, as reviewed in Part 5 (ii), Hurd likely would have needed IBM’s support to co-found a new computer company with Aiken.

Therefore I have to wonder if IBM, by the 1960s, still harboured trepidation about Aiken’s ambition: as in Part 5 (ii), Thomas Watson, Jr.’s late father and predecessor as IBM president had in 1944 witnessed the inclination of Harvard and Aiken to claim all the credit for Mark I for themselves.

The Shockley Eight who founded Fairchild Semiconductor were viewed by William Shockley as the “Traitorous Eight”, but Shockley’s own difficult management style and orthodox scientific focus not only triggered their departure but also led to his company’s eventual failure. Spending the rest of his career as a Stanford professor, Shockley further descended into openly “racist” scholarship and became detested by many:

“Dubbed “the traitorous eight,” Noyce and his posse set up shop just down the road from Shockley on the outskirts of Palo Alto. Shockley Semiconductor never recovered. Six years later, Shockley gave up and joined the faculty of Stanford. His paranoia deepened, and he became obsessed with his notion that blacks were genetically inferior in terms of IQ and should be discouraged from having children. The genius who conceptualized the transistor and brought people to the promised land of Silicon Valley became a pariah who could not give a lecture without facing hecklers.”

(Walter Isaacson, 2014, Simon and Schuster)

Free of the difficult man as their boss, the Shockley Eight managed to start their new company at exactly the right time: 3 days after Fairchild Semiconductor’s founding on October 1, 1957, the Soviet Union successfully launched the world’s first satellite, Sputnik, spurring a fierce scientific and technological race in the United States to keep up with its Cold War nemesis:

“The traitorous eight who formed Fairchild Semiconductor, by contrast, turned out to be the right people at the right place at the right time. The demand for transistors was growing because of the pocket radios that Pat Haggerty had launched at Texas Instruments, and it was about to skyrocket even higher; on October 4, 1957, just three days after Fairchild Semiconductor was formed, the Russians launched the Sputnik satellite and set off a space race with the United States. The civilian space program, along with the military program to build ballistic missiles, propelled the demand for both computers and transistors. It also helped assure that the development of these two technologies became linked. Because computers had to be made small enough to fit into a rocket’s nose cone, it was imperative to find ways to cram hundreds and then thousands of transistors into tiny devices.”

(Walter Isaacson, 2014, Simon and Schuster)

In 1959 at Fairchild Semiconductor, Robert Noyce became a co-inventor of the integrated circuit, independently of Jack Kilby of Texas Instruments:

“When the transistor was invented in 1947 it was considered a revolution. Small, fast, reliable and effective, it quickly replaced the vacuum tube. …

With the small and effective transistor at their hands, electrical engineers of the 50s saw the possibilities of constructing far more advanced circuits than before. However, as the complexity of the circuits grew, problems started arising.

When building a circuit, it is very important that all connections are intact. …

Another problem was the size of the circuits. …

In the summer of 1958 Jack Kilby at Texas Instruments found a solution to this problem. …

… Kilby presented his new idea to his superiors. He was allowed to build a test version of his circuit. In September 1958, he had his first integrated circuit ready. It was tested and it worked perfectly!

Although the first integrated circuit was pretty crude and had some problems, the idea was groundbreaking. …

Robert Noyce came up with his own idea for the integrated circuit. He did it half a year later than Jack Kilby. Noyce’s circuit solved several practical problems that Kilby’s circuit had, mainly the problem of interconnecting all the components on the chip. … This made the integrated circuit more suitable for mass production. …”

(“The History of the Integrated Circuit”, May 5, 2003, Nobelprize.org)

I note that this milestone of Robert Noyce’s, co-inventing the integrated circuit, matched his former mentor William Shockley’s co-invention of the transistor.

In 2000 when Kilby learned he was to be awarded the Nobel Physics Prize for inventing the integrated circuit, he immediately mentioned Noyce, who had died 10 years earlier:

“When Kilby was told that he had won the Nobel Prize in 2000, ten years after Noyce had died, among the first things he did was praise Noyce. “I’m sorry he’s not still alive,” he told reporters. “If he were, I suspect we’d share this prize.” ….”

(Walter Isaacson, 2014, Simon and Schuster)

Ironically, Noyce died of a heart attack on June 3, 1990, at the age of 62, in Austin, Texas – the home state of Texas Instruments, Fairchild Semiconductor’s industry rival – where he led an American research consortium competing with the Japanese in the semiconductor industry:

““He was considered the mayor of Silicon Valley,” said Jim Jarrett, a spokesman for Intel. A founder of the Semiconductor Industry Association in 1975, Dr. Noyce was frequently in Washington to lobby on behalf of semiconductor manufacturers.

At the time of his death, Dr. Noyce was the president and chief executive of Sematech Inc., a research consortium in Austin that was organized by 14 corporations in an attempt to help the American computer industry catch up with the Japanese in semiconductor manufacturing technology.”

(“An Inventor of the Microchip, Robert N. Noyce, Dies at 62”, by Constance L. Hays, June 4, 1990, The New York Times)

The immediate boost the integrated circuit invention provided, in the early 1960s, was not to the commercial computer industry but to the U.S. military’s nuclear missiles development:

“The first major market for microchips was the military. In 1962 the Strategic Air Command designed a new land-based missile, the Minuteman II, that would each require two thousand microchips just for its onboard guidance system. Texas Instruments won the right to be the primary supplier. By 1965 seven Minutemen were being built each week, and the Navy was also buying microchips for its submarine-launched missile, the Polaris. With a coordinated astuteness not often found among military procurement bureaucracies, the designs of the microchips were standardized. Westinghouse and RCA began supplying them as well. So the price soon plummeted, until microchips were cost-effective for consumer products and not just missiles.”

(Walter Isaacson, 2014, Simon and Schuster)

As shown in the above quote, Texas Instruments quickly reaped big benefits from selling to the military’s missile programs.

In comparison, Fairchild Semiconductor carefully kept a distance from directly supplying military projects, instead gaining a major role supplying the U.S. civilian space program’s poster child, the Apollo missions to the moon:

“Fairchild also sold chips to weapons makers, but it was more cautious than its competitors about working with the military. In the traditional military relationship, a contractor worked hand in glove with uniformed officers, who not only managed procurement but also dictated and fiddled with design. Noyce believed such partnerships stifled innovation: “The direction of the research was being determined by people less competent in seeing where it ought to go.” He insisted that Fairchild fund the development of its chips using its own money so that it kept control of the process. If the product was good, he believed, military contractors would buy it. And they did.

America’s civilian space program was the next big booster for microchip production. In May 1961 President John F. Kennedy declared, “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth.” The Apollo program, as it became known, needed a guidance computer that could fit into a nose cone. So it was designed from scratch to use the most powerful microchips that could be made. The seventy-five Apollo Guidance Computers that were built ended up containing five thousand microchips apiece, all identical, and Fairchild landed the contract to supply them. The program beat Kennedy’s deadline by just a few months; in July 1969 Neil Armstrong set foot on the moon. By that time the Apollo program had bought more than a million microchips.”

(Walter Isaacson, 2014, Simon and Schuster)

For the Apollo moon landing missions, Fairchild’s microchips achieved a major milestone by beating out IBM computers, which did not use integrated circuits, to be chosen for the onboard Apollo Guidance Computer, thanks to the perseverance of engineers at the Massachusetts Institute of Technology:

“… One of the most interesting examples of these decisions concerned the Apollo Guidance and Navigation system, controlled by the Apollo Guidance Computer. Due to size, weight, and power constraints, the Command and Lunar Modules would each carry only one computer, which had to work. What was more, the designers of the computer, at the MIT Instrumentation Laboratory, decided to build the computer using the newly-invented integrated circuit, or silicon “chip” as we now know it. … MIT’s decision did not go unchallenged. Early in the Apollo program, NASA contracted with AT&T to provide technical and managerial assistance for select technical issues. AT&T in turn established Bellcomm, an entity that carried out these analyses. In late 1962, Bellcomm recommended that IBM, not MIT, supply the computers for the Apollo Command and Lunar Modules. … Bellcomm’s recommendation was due in part to IBM’s role as supplier of the computer that guided the Saturn V rocket into Earth orbit and then to a lunar trajectory. The IBM Launch Vehicle Digital Computer did not use integrated circuits, but rather a more conservative circuit developed at IBM called “Unit Logic Device.” What was more, the circuits in the computer were installed in threes—so called “Triple Modular Redundancy” so that a failure of a single circuit would be “outvoted” by the other two.

The engineers at the MIT Instrumentation Lab mounted a vigorous defense of their design and were able to persuade NASA to not use the IBM computers in the Command and Lunar Module. … The Lab worked closely with Fairchild Semiconductor, the California company where the integrated circuit was invented, to ensure reliability. Chips were tested under rigorous conditions of temperature, vibration, contamination, and so on. If a chip failed these tests, the entire lot from which it came from was discarded. … Although Fairchild was offering a line of chips that could be used to make a computer, MIT chose only one type, giving them an ability to test it more thoroughly and to allow the manufacturer to build up more experience making them reliably. No Apollo Guidance Computer, on either the Command or Lunar Modules, ever experienced a hardware failure during a mission.

MIT did not entirely prevail, however, as NASA specified that primary navigation for Apollo would be conducted from Houston, using its array of large mainframe computers (supplied by IBM), with the on-board system as a secondary. The wisdom of that decision was proven during Apollo 13 when the Command Module’s power was lost. In other missions, the on-board computers and navigation systems worked perfectly and worked more in tandem with Houston than as a backup. It also functioned reliably during the burns of the Service Module engine behind the Moon, when there was no communication with Houston. …”

(“Apollo Guidance Computer and the First Silicon Chips”, by Paul Ceruzzi, October 14, 2015, Smithsonian National Air and Space Museum)
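As a brief aside on the “Triple Modular Redundancy” scheme mentioned in the quote above, the idea can be sketched in a few lines of code: three copies of the same circuit compute in parallel, and a majority vote masks any single failure. The sketch below is my own illustration, not drawn from the IBM design; the function names and the simulated fault are invented for the example.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by at least two of the three redundant copies."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one copy failed")
    return value

def run_with_tmr(circuit, x, faults=(None, None, None)):
    """Run three copies of `circuit` on the same input; a single faulty copy is outvoted."""
    outputs = []
    for fault in faults:
        result = circuit(x)
        if fault is not None:       # simulate one copy producing a wrong value
            result = fault
        outputs.append(result)
    return majority_vote(outputs)

circuit = lambda x: 2 * x           # stand-in for one redundant guidance circuit
print(run_with_tmr(circuit, 21))                           # 42
print(run_with_tmr(circuit, 21, faults=(None, 99, None)))  # still 42: the bad copy is outvoted
```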

In the end, the Apollo Guidance Computer microchips designed by Fairchild Semiconductor were mass-manufactured by Philco, a company based in Philadelphia; nonetheless, the growth of Fairchild Semiconductor and the growth of Silicon Valley benefited significantly from the Apollo space program:

“… Grumman Aerospace, the builder of the Lunar Module, insisted that a small back-up controller be installed in case of a computer failure. Grumman envisioned this “Abort Guidance System” (AGS) as a modest controller intended only to get the crew off the Moon quickly and into Lunar Orbit, where they would be rescued by the Command Module pilot. As finally supplied by TRW Inc., it grew into a general-purpose computer of its own, with its own display and keyboard. Like the Apollo Guidance Computer, it also used integrated circuits. It was tested successfully during the Apollo 10 mission, but it was never needed.

… The area of Santa Clara County, where Fairchild and its competitors were located, began going by the name “Silicon Valley” by the end of the decade. The Apollo contract was not the sole reason for the transformation of the Valley, but it was a major factor. In truth, Fairchild ended up not being the main supplier of Apollo chips after all. Their design was licensed to Philco of suburban Philadelphia, which supplied the thousands of integrated circuits used in all the Apollo Guidance Computers. And because the Abort Guidance System was specified a year or two after the Apollo Guidance Computer, its designers were able to take advantage of newer circuit designs, not from Fairchild but from one of its Silicon Valley competitors, Signetics. …”

(Ceruzzi, October 14, 2015, Smithsonian National Air and Space Museum)

As summarized in the above two quotes, the Apollo program’s use of computers consisted of: 1) IBM mainframe computers in the Houston control center; 2) MIT-developed Apollo Guidance Computers using Fairchild-designed integrated-circuit chips manufactured by Philadelphia-based Philco, onboard the Command and Lunar Modules; and 3) TRW-developed computers, with microchips designed by another Silicon Valley semiconductor company Signetics, for the emergency Abort Guidance System.

As in Part 5 (ii), Philco was a company that produced transistors and computers for the military and for the commercial market, a company where Saul Rosen, a University of Pennsylvania Ph.D. and former Wayne State University professor, had worked before becoming one of the 2 initial founding members of Purdue University’s computer science department – the first such department of any U.S. university.

Also as in Part 5 (ii), Sam Conte, Purdue computer science department’s founding chairman and Rosen’s former Wayne State colleague, had worked at the military aerospace company Space Technology Laboratories; STL was a part of TRW, which was the leading contractor for the U.S. Air Force’s intercontinental ballistic missiles development.

(“Former TRW Space Park, now Northrop Grumman, designated as historic site for electronics and aerospace work”, by John Keller, December 18, 2011, Military & Aerospace Electronics)

The fact that 2 of the technology companies where the 2 Purdue computer science department founding members had worked later played key roles in the Apollo space program indicates that Purdue, in hiring these 2 former Wayne State professors in 1962 to establish the first academic computer science department in the United States, had insight into the computer industry.

Fairchild Semiconductor in fact kept a distance from directly supplying military and other government contracts. In the company’s first 2 years, only 35% of sales were direct government purchases, even though, in 1960 for instance, 80% of its transistors and 100% of its integrated circuits ended up in military use; by 1963, less than 10% of its business was contracted directly with the government; and as also quoted earlier, Fairchild Semiconductor did not use government funding for its research and development, even though other companies used government money as their primary source of R&D funding:

“Ever since Fairchild’s inception, the focus on innovation had led the company to reject most direct government contract work. Of course, Noyce knew that without the government—specifically, the Department of Defense—Fairchild Semiconductor would not exist. In the company’s first two years, direct government purchases accounted for 35 percent of Fairchild Semiconductor’s sales, and well over half of the company’s products eventually found their way into government hands. The multimillion-dollar Minuteman contract for transistors cemented the company’s success, and the vast majority of Fairchild’s other early customers were aerospace firms buying products to use in their own government contract work. In 1960, 80 percent of Fairchild’s transistors went to military uses, and fully 100 percent of the company’s early integrated circuits were used in defense functions as well. The company worked closely with military contractors in designing and building its products. …

Though Noyce welcomed the government as a customer and appreciated that federal mandates—such as one issued in April 1964, that required all televisions be equipped with UHF tuners, a law that effectively forced the introduction of transistors into every television in the United States—could benefit Fairchild Semiconductor, he believed there was something “almost unethical” about using government contract money to fund R&D projects. “Government funding of R&D has a deadening effect upon the incentives of the people,” he explained to a visitor in 1964. “They know that [their work] is for the government, that it is supported by government dollars, that there is a lot of waste. This is not the way to get creative, innovative work done.” …

… “A young organization, especially in the electronics industry has to be fast moving,” he explained in 1964. “It runs into problems with the unilateral direction mandated by government work.” By this point, the company was relatively well established, and Noyce reminisced, “We were a hard, young, hungry group. [Our attitude was] ‘We don’t give a damn what [money] you have [to offer], buddy. We’re going to do this ourselves.’” Gordon Moore shared Noyce’s beliefs. Consequently, while other firms in the early 1960s used government contracting as the primary source of R&D funding, less than 10 percent of business at Fairchild was contracted directly by the government in 1963. “And we like it that way,” Noyce hastened to tell a reporter.”

(Leslie Berlin, 2005, Oxford University Press)

As quoted earlier from a book by Walter Isaacson, in the early 1960s the U.S. military’s strategic nuclear missiles that immediately benefited from the invention of the integrated circuit were the land-based Minuteman II and the submarine-based Polaris.

Polaris was developed and produced by Lockheed Missiles and Space Company, founded in 1956 in Sunnyvale in the nascent Silicon Valley region, as cited in a quote in Part 5 (ii) – a company for which Howard Aiken was a regular consultant for over a decade until his death in 1973, as in Part 5 (ii).

But Aiken’s relationship with Lockheed had started before the creation of this missiles and space branch, with his consulting for the military aircraft manufacturer Lockheed Corporation in Los Angeles, according to an interview with him by Henry Tropp and Bernard Cohen in February 1973 – just over 2 weeks before his death, as in Part 5 (ii):

“TROPP:

Well I’ve run across some interesting unpublished documents, many of them in another environment that’s related, in terms of what happened in the computer revolution, and had similar occurrences to Harvard, because people were starting clean. That’s the aircraft industry primarily in the Los Angeles area. Communications were different in that period. The East Coast had its computing environment and the West Coast tended to be separate and distinct and they almost grew up by themselves. But there were links back and forth and one of the questions that I was going to ask you was who were some of the people from that aerospace industry who visited the Harvard Computational Lab?

AIKEN:

The first man that I can think of is Louis Ridenhauser, with whom I was very closely associated. Louis was the Vice-President of Lockheed and he almost clubbed the Board of Directors of Lockheed into starting electronics machinery. And I was associated with him in that venture and was a Lockheed consultant for many years. I was always hopping out there to Los Angeles for a week nearly every month.

Then, I was in and out of Los Angeles for the Bureau of Standards operation.

TROPP:

That’s right. That was at UCLA. Did people come from Northrop and from Hughes in the late forties? Did any of that group come to Harvard?

AIKEN:

I saw Lehmer very frequently.

TROPP:

How about Harry Huskey? Did he come?

AIKEN:

Yes, yes. He was at our place. In fact, he attended one of these symposia.”

(“Interviewee: Howard Aiken (1900-1973) Interviewers: Henry Tropp and I.B. Cohen”, February 26-27, 1973, Smithsonian National Museum of American History)

Per his recollection above, Aiken consulted for Lockheed in close association with then Lockheed vice president Louis Ridenhauser, who convinced the Lockheed board of directors to start “electronics machinery” and was a very important aerospace industry figure among the visitors to Aiken’s Harvard Computation Lab; Aiken went to Lockheed in Los Angeles for a week of consulting nearly every month, and was then in and out of Los Angeles for “the Bureau of Standards operation” at UCLA, where he saw Lehmer very frequently; Harry Huskey also visited his Harvard lab.

As in Parts 5 (i) & (ii), UC Berkeley computational mathematicians Derrick Lehmer and Harry Huskey had been with the Institute for Numerical Analysis, located at UCLA and managed by the National Bureau of Standards, where Lehmer was the director in the early 1950s and Huskey led the development of its SWAC computer, completed in 1950; the INA was terminated in 1954 due to McCarthyism-type politics; Lehmer had returned to Berkeley in August 1953, and in 1954 he recruited Huskey to UC Berkeley.

By this timeline, Aiken first started as a Lockheed consultant before the INA’s closure in 1954, helping Lockheed get into “electronics machinery” development.

I would not doubt that Howard Aiken had frequented the INA as recalled in his 1973 interview; however, in the comprehensive 1991 book by Magnus R. Hestenes and John Todd on INA history, extensively quoted in Parts 5 (i) & (ii), there exists only one reference to Aiken, and in a negative light – about his opposition to the NBS’s development of computers, as recalled by Harry Huskey (Aiken name underline emphasis added):

“The success of the ENIAC had excited mathematicians and other scientists to the possibilities now opening before them. … Government agencies, quick to see the potentials of an electronic computer, were eager to acquire one. However, the field was new, there was no background of experience… Therefore, government agencies were glad to ask the NBS to assist them in negotiating with computer companies. In early 1948, the Bureau had begun negotiating with the Eckert-Mauchly Computer Corporation and the Raytheon Corporation, and later with Engineering Research Associates.

The computers were slow in being developed. New techniques were being tried and often they did not work as well, or as soon, as had been first thought, or hoped. The personnel of the Applied Mathematics Laboratories became impatient with this slow development, and decided that they could build one faster with the help of the Electronics Laboratory at the Bureau. Also, it had become clear that in order to be able to judge effectively the probability of a new technique working they would need more “hands-on” expertise. Dr. Edward Cannon and the author convinced Dr. Curtiss that this “gamble” was worth trying, and Dr. Mina Rees of the Office of Naval Research backed them up. This was in spite of the advice of a committee, consisting of Dr. George Stibitz, Dr. John von Neumann, and Dr. Howard Aiken, which had been asked by Dr. Curtiss to consider the Bureau’s role in the computer field. Their advice had been that the NBS shouldn’t really work on computers, but should confine its work to improving components.

In May 1948, the decision was made at the Executive Council to build a machine for the Bureau’s own use in Washington. … at the October 1948 meeting of the Executive Council it was decided that the Bureau should build a second computer at the Institute for Numerical Analysis, which had by now been located in a reconverted temporary building on the campus of the University of California at Los Angeles. This machine was to be built under the direction of the author, who had joined Curtiss’s group in January 1948. He had spent the previous year at the National Physical Laboratory in Teddington, England, working under Alan Turing with James Wilkinson and others on the Automatic Computing Engine (ACE) project. He had been offered the job there on the recommendation of Professor Douglas Hartree, whom he had met while working on the ENIAC project.”

(“The SWAC: The National Bureau of Standards Western Automatic Computer”, by Harry D. Huskey, in Magnus R. Hestenes and John Todd, Mathematicians Learning to Use Computers: The Institute for Numerical Analysis UCLA 1947-1954, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

As reviewed in Part 5 (i), the NBS built 2 computers in 1950, the SEAC computer in Washington, D.C. and the SWAC computer at the INA at UCLA; both were electronic computers influenced by John von Neumann’s design, including the “stored program” concept he advocated. The INA operation was overseen by John Curtiss, cited above, who was head of the NBS’s applied mathematics division; Mina Rees, cited above as supportive of the NBS’s interest in computer development, was head of the mathematics division at the U.S. Navy’s Office of Naval Research, as in Part 5 (ii).

And as Huskey recalled in the quote above, to get into computer development the NBS had to go against the advice of a committee of 3 prominent computer pioneers, including Aiken and von Neumann.

Of the 3 committee members, Howard Aiken and George Stibitz were part of a powerful scientific establishment, led by the U.S. government’s leading science adviser Vannevar Bush, discussed in Parts 5 (i) & (ii), that had strongly opposed the development of the first general-purpose electronic computer ENIAC during World War II – the project in which von Neumann, the other committee member, had played a key role – and continued to oppose later electronic computer projects:

“The decision to build the ENIAC resulted from the Army’s pressing wartime needs. The established scientific community within the government’s Office of Scientific Research and Development, headed by Vannevar Bush, fiercely opposed the Eckert-Mauchly project. At issue were both the digital design of their proposal (an advanced analog differential analyzer was being built at MIT at the time) and their proposed use of electronic circuit elements. Samuel H. Caldwell, Bush’s MIT colleague, wrote, “The reliability of electronic equipment required great improvement before it could be used with confidence for computation purposes.” Stibitz, at the Bell labs, expressed similar sentiments and suggested using electromechanical relay technology.

After the war, when Eckert and Mauchly turned to more advanced designs, opposition from established figures in computing continued. Howard Aiken, who had struggled at Harvard before the war to seek support for constructing a large electromechanical calculator, opposed their projects, …

Well before ENIAC was finished, Eckert and Mauchly’s group began thinking about a new and improved machine. …

By then, an important addition to the Moore School team had been made. The famous mathematician John von Neumann had learned of the existence of the project and had immediately begun regular visits. Von Neumann, a superb mathematician, had become a powerful voice in the scientific establishment that ran the U.S. war effort’s research and development program. …

Von Neumann’s presence stimulated a more formal and rigorous approach to the design of the successor machine, dubbed the Electronic Discrete Variable Automatic Computer (EDVAC). …”

(Kenneth Flamm, Creating the Computer: Government, Industry and High Technology, 1988, Brookings Institution)

As described above, the powerful U.S. scientific establishment’s leading conservative views were very much like Aiken’s, reviewed in Part 5 (ii), namely that electronic components were unreliable and that the Aiken type of electromechanical relay machines should be the choice instead.

Even decades later, in the 1973 interview shortly before his unexpected death, Aiken continued to make the same point, that electronic computers built with vacuum tubes were “absolutely worthless” because they were unreliable, asserting that electronic computation really became a success only after transistors had become available:

“AIKEN:

Yes. You see, a computing machine is not reliable. It’s worthless, absolutely worthless. You can’t trust the results. And that was the reason that we checked everything that we did. Even today, people don’t go to all lengths to check what they did. … Well, we were conscious that we had ________ and the ENIAC didn’t. We had program facilities which they didn’t. We had input and output which they didn’t bother to worry about. … So that we used to say, “What’s all this speed for? What does it accomplish? We get there sooner.”

And yet, you know, I really question if electronic computation would ever have become a great success had it not been for the transistor.”

(February 26-27, 1973, Smithsonian National Museum of American History)

So, even assuming that in 1948 John von Neumann was sympathetic toward the NBS’s intent to develop computers, his view was unfortunately in the minority on the committee making the recommendation – with Howard Aiken and George Stibitz staunch opponents – just as his view had been in the minority within the leading circle of the U.S. scientific establishment in its attitude toward the ENIAC project.

Nonetheless, as the U.S. Army had done in starting the ENIAC project during World War II, in 1948 the NBS decided against the expert committee’s negative advice and started the SEAC computer project in Washington, D.C. and the SWAC computer project at the INA at UCLA.

As in Part 5 (i), von Neumann was among the “distinguished visitors” at the INA at UCLA as reported in the book by Hestenes and Todd; but that detailed INA history account published in 1991 made no other mention of Aiken, i.e., none other than his presence on the committee opposing NBS computer development – despite his own claim in the 1973 interview that he had frequented the INA.

While I do not doubt the validity of Aiken’s claim, it apparently has been ignored by the former INA members who told their stories in the 1991 book. Given Aiken’s prominence in the computing field, nearly rivaling von Neumann’s, I would infer that he frequented Los Angeles as a computer consultant for Lockheed, as recalled in his 1973 interview, and while there also visited the INA in an informal capacity, which would explain why his visits were not mentioned in the later book on INA history.

But now a related question arises: with his frequent consulting trips to Lockheed for many years since the early 1950s, what contributions did Howard Aiken actually make to Lockheed in electronic computer development – or “electronics machinery” as he called it in the 1973 interview quoted earlier?

The answer may be that, at least during his consultancy in the 1950s, Aiken did not help Lockheed develop any electronic computer.

Firstly, given Aiken’s conservatism, i.e., his view of electronic computers as “absolutely worthless”, he was slow to embrace vacuum tube technology for electronic computers even though, as in Part 5 (ii), his 1930s Harvard Ph.D. research had been in vacuum tubes.

Secondly, as quoted earlier from Nobelprize.org on the history of the integrated circuit, the transistor had only been invented in 1947, and despite the story previously cited in Part 5 (ii) that Aiken used some transistors with Mark III and Mark IV, in his 1973 interview Aiken seemed to deny it, stating that he had never had anything to do with the transistor and that the quality of the first transistors his laboratory purchased had been very bad:

“COHEN:

But this was still vacuum tube.

AIKEN:

Mark III was. You see, I never had anything to do with the transistor.

AIKEN:

The first transistors we purchased in the computer laboratory were purchased in France._____________________________________, and they were yea big, and they looked like little plastic things with three wires sticking out of them. That happens to be exactly what they were. They sold those damn things in a number of plastic molds with three wires sticking out of the thing; they had no circuit capability whatsoever at all. And like everybody else, we didn’t find that out until we paid for it.

Then I wrote to the United States Chamber of Commerce in Paris to complain, and several other people must have done the same thing because I got a letter back, saying, “Well, these people are no longer in business.” But they ripped quite a few people with their technique.”

(February 26-27, 1973, Smithsonian National Museum of American History)

Thirdly, in Part 5 (i) I have reviewed the computer development activity of the late 1940s and early 1950s at aerospace companies in Southern California, and Lockheed is not mentioned in that history, as previously quoted in Part 5 (i):

“One other pocket of activity, in historical hindsight, looms in importance as a transporter of computer technology from laboratory to market. Located on the West Coast of the United States and tied closely to the aerospace industry in Southern California, which, in turn, was very dependent on government contracts, this activity focused on scientific and engineering computing. The design of aircraft inherently required extensive mathematical calculations, as did applications such as missile guidance. Early efforts (late 1940s) were primarily housed at Northrop Aircraft and to a lesser extent at Raytheon. Both had projects funded by the U.S. government: Northrop for its Snark missile and Raytheon for a naval control processor, for example. Northrop worked with an instrument supplier (Hewlett-Packard) on early digital projects. Then, in 1950, a group of Northrop engineers formed their own computer company called Computer Research Corporation (CRC). Like ERA, it had a military sponsor the U.S. Air Force for which it built various computers in the first half of the 1950s.”

(James W. Cortada, The Computer in the United States: From Laboratory to Market, 1930 to 1960, 1993, M.E. Sharpe)

As quoted, aircraft design and missile guidance were two applications that “required extensive mathematical calculations” and would benefit from computing power.

As it appears in the above history account, Northrop and Raytheon were centers of computer development activity in that earlier period, but Lockheed likely was not.

However, Lockheed did purchase 2 IBM 701 computers – IBM’s first commercial computer, in whose development Cuthbert Hurd played a key role, as in Parts 5 (i) & (ii) – for its Southern Californian aircraft business in 1953 and 1954; they were No. 3 and No. 18 on a full list of 19 IBM 701 computers produced per IBM’s archival records, a list that includes No. 1 for IBM itself.

(“701 Customers”, International Business Machines Corporation)

And fourthly, despite Aiken’s consultancy for Lockheed in Los Angeles on “starting electronics machinery”, in 1956 when the Lockheed Missiles and Space Division was founded in the fledgling Silicon Valley region, the Lockheed engineers there relied only on slide rules and mechanical calculators, as previously quoted in Part 5 (ii):

“The Bayshore Freeway was still a two-lane road, and 275 acres of bean fields adjacent to Moffett Field were purchased in 1956 to become the home of Lockheed Missile & Space Division (now Lockheed Martin Space Systems Company). …

… the first reconnaissance satellite, called Corona, and the Polaris submarine-launched ballistic missiles (SLBMs) were designed and built in just a few short years by the company’s engineers and scientists — armed only with slide rules, mechanical calculators, the basic laws of physics and an abundance of imagination.”

(“Lockheed grew up with Sunnyvale”, Myles D. Crandall, February 25, 2007, Silicon Valley Business Journal)

Given the four pieces of evidence listed above – Aiken’s own dismissive attitude toward electronic components, Lockheed’s lack of computer development activity, and the absence of computer use in the early years at Lockheed Missiles and Space Company in Sunnyvale, where Aiken became a consultant at some point – Lockheed most likely did not develop electronic computers during the 1950s, and Aiken did not help in that regard.

Then the next logical question is: what did Howard Aiken actually do in his consulting for Lockheed, which he later referred to in his 1973 interview, quoted earlier, as the venture of “starting electronics machinery”?

An answer may be that Aiken helped develop special circuits for missile guidance, i.e., for the Polaris nuclear missiles developed and produced by Lockheed Missiles and Space Company.

This is because, as quoted earlier, in the early 1960s the Polaris missiles made heavy use of microchips following the integrated circuit’s invention by Texas Instruments’ Jack Kilby and Fairchild Semiconductor’s Robert Noyce.

In fact, the title of a 1962 conference co-organized by Aiken and William Main – a person Aiken and Hurd wanted to start a computer company with in 1970 as mentioned in Part 5 (ii) – and hosted by the Lockheed Missiles and Space Company in Sunnyvale, previously cited in Part 5 (ii), referred to “Switching theory in space technology”.

(Howard Aiken and William F. Main, eds., Switching theory in space technology: [Symposium on the Application of Switching Theory in Space Technology, held at Sunnyvale, California, February 27-28 and March 1, 1962], 1963, Stanford University Press)

What is switching theory? It is the theory of switching circuits that route data for rapid decision making – performing logical functions like a computer’s, but not necessarily requiring a general-purpose computer to accomplish them:

“Switching theory, Theory of circuits made up of ideal digital devices, including their structure, behaviour, and design. It incorporates Boolean logic (see Boolean algebra), a basic component of modern digital switching systems. Switching is essential to telephone, telegraph, data processing, and other technologies in which it is necessary to make rapid decisions about routing information.”

(“Switching theory”, by the Editors of Encyclopædia Britannica, Encyclopædia Britannica)

One can understand that “rapid decisions” were as critically needed on flying missiles as they were on a telephone network.
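As a small illustration of the Boolean “switching” described in the Britannica definition, a routing decision can be written entirely in terms of AND, OR and NOT operations on on/off signals. The sketch below is my own toy example with invented signal names, not something taken from the sources quoted here:

```python
def route_call(line_a_busy: bool, line_b_busy: bool, priority: bool) -> str:
    """A toy switching decision expressed purely in Boolean (AND/OR/NOT) logic."""
    send_to_a = not line_a_busy                           # line A free: use it
    send_to_b = line_a_busy and not line_b_busy           # A busy, B free: use B
    preempt_a = line_a_busy and line_b_busy and priority  # both busy: only priority traffic preempts
    if send_to_a:
        return "route to line A"
    if send_to_b:
        return "route to line B"
    return "preempt line A" if preempt_a else "drop the call"

print(route_call(line_a_busy=True, line_b_busy=False, priority=False))  # route to line B
```

The same truth table could just as well be wired up in relays or vacuum tubes, which is the point Aiken makes next.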

In his 1973 interview, Aiken emphasized that switching theory did not have to rely on transistors, because the switching circuits could also be made of mechanical relays or vacuum tubes:

“AIKEN:

This was the beauty of switching theory, you see. You had switches and you could take a relay or a vacuum tube or a transistor or anything else, a magnetic core, and make diagrams and draw pictures for a machine completely on the logic of it. …”

(February 26-27, 1973, Smithsonian National Museum of American History)
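Aiken’s point, that the logic design stands apart from the device realizing it, can be illustrated with a minimal sketch of my own: a one-bit exclusive-OR drawn as a fixed diagram of four NAND “switches”, evaluated unchanged against interchangeable stand-ins for relay and transistor technologies (the class names are hypothetical):

```python
class RelaySwitch:
    """Stand-in for an electromechanical relay realization of a NAND switch."""
    def nand(self, a: bool, b: bool) -> bool:
        return not (a and b)

class TransistorSwitch:
    """Stand-in for a transistor realization of the same NAND switch."""
    def nand(self, a: bool, b: bool) -> bool:
        return not (a and b)

def xor(device, a: bool, b: bool) -> bool:
    """Exclusive-OR as a fixed diagram of four NAND gates; the diagram never
    changes, only the technology behind `device` does."""
    nab = device.nand(a, b)
    return device.nand(device.nand(a, nab), device.nand(b, nab))

# The same logic diagram gives the same answers regardless of the switching technology.
for device in (RelaySwitch(), TransistorSwitch()):
    assert xor(device, True, False) is True
    assert xor(device, True, True) is False
```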

So the various pieces of evidence suggest that in the 1950s before the integrated circuit’s invention, Howard Aiken did not get to develop electronic computers as a Lockheed consultant but helped the company develop switching circuits for missile guidance, probably using electromechanical relays at first – his conservative Harvard Mark I specialty – and later using integrated circuits during the 1960s.

In an earlier quote from Walter Isaacson’s book on the innovators of the digital revolution, there was the statement that “computers had to be made small enough to fit into a rocket’s nose cone”.

An important point of distinction, related to Aiken’s recognized contributions to the computer field, arises here: those “computers” were not necessarily the general-purpose computers one commonly knows, i.e., not of the “von Neumann architecture” discussed in Part 5 (i), but special switching circuits built with electronic components; such circuits would certainly be “electronics machinery” as Aiken called them in his 1973 interview, and they have sometimes been referred to as computers of “Harvard architecture”, also known as “Aiken architecture”, as previously quoted in Part 5 (ii):

“… Aiken is sometimes held to be reactionary because he was always wary of the concept of the “stored program” and did not incorporate it into any of his later machines. This stance did put him out of step with the main lines of computer architecture in what we may call the post-Aiken era, but it must be kept in mind that there are vast fields of computer application today in which separate identity of program must be maintained, for example, in telephone technology and what is known as ROM (“read-only memory”). In fact, computers without the stored-program feature are often designated today (for instance, by Texas Instruments Corporation) as embodying “Harvard architecture,” by which is meant “Aiken architecture.””

(“Howard Hathaway Aiken”, by J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)
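To make the architectural distinction in the quote concrete, the following is a minimal sketch of my own, with invented instruction names, of a toy machine in the spirit of the “Harvard architecture” described above: the program sits in a separate read-only store, like ROM, and data operations can never overwrite it, unlike in a stored-program (von Neumann) machine where instructions and data share one writable memory.

```python
class HarvardMachine:
    """Toy machine with separate program and data memories: the running program,
    held in a read-only store, cannot be modified by any data operation."""
    def __init__(self, program, data):
        self.program = tuple(program)   # read-only instruction store (like ROM)
        self.data = list(data)          # writable data memory
        self.acc = 0                    # accumulator
        self.pc = 0                     # program counter

    def step(self):
        op, arg = self.program[self.pc]
        if op == "LOAD":
            self.acc = self.data[arg]
        elif op == "ADD":
            self.acc += self.data[arg]
        elif op == "STORE":
            self.data[arg] = self.acc   # writes touch data memory only, never the program
        self.pc += 1

    def run(self):
        while self.pc < len(self.program):
            self.step()
        return self.data

# Add two numbers and store the result; the instruction set is invented for illustration.
machine = HarvardMachine([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [2, 3, 0])
print(machine.run())   # [2, 3, 5]
```

In a stored-program design the same three instructions would sit in the writable memory alongside the data and could themselves be overwritten by a STORE, which is precisely the mingling of program and data that Aiken remained wary of.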

Unfortunately for Howard Aiken, in 1959, just as the integrated circuit was invented and a giant technological leap was about to take place in the application of switching circuits and computers to the missile and space fields, his close friend and Lockheed sponsor, Lockheed vice president Louis Ridenhauser, who had gotten the company’s board to start “electronics machinery” as cited earlier, unexpectedly died, as Aiken later recalled in his 1973 interview:

“AIKEN:

Have you ever heard Louis Ridenhauer’s definition of an airplane company? He says it’s a place where well-informed brash old men sit around all day in conference discussing irrational things. I was very, very fond of Louis. I was with him the evening he died, in Washington. It was a very unusual thing.

There was a young man who was interested in very low temperature devices in computers— what was his name?

TROPP:

I don’t know.

COHEN:

I do.

AIKEN:

Well at any rate, he and Ridenhauer and I talked ____________________________________________, and we went to Washington. I met Ridenhauer and we bummed around Washington all evening with him. And the last thing he said to me before he left was, “You’re going to the Cosmos Club. I wish I was going to the Cosmos Club but I can’t go because I’m now Vice-President of Lockheed and I’ve got that two-room suite,” and he shoved off and ________________________________________. And the next morning, I got up at a quarter to eight and Ridenhauer didn’t show up and this other man didn’t show up. Somebody wanted to know where they were and I said, “Well don’t worry about Louis, he probably has a hangover this morning.” And a couple hours later, the Manager of the Statler called up looking for me and telling me that Louis had died in bed. And almost immediately after that, we got a telegram that ________ was dead, so we just folded the meeting and everybody went home. It didn’t seem very worthwhile to go on.

COHEN:

You know who he was, don’t you?

AIKEN:

He was Dean of the Graduate School at the University of Illinois when he was 28 or some thing like that, after having been Chief Scientist in the United States Air Force.”

(February 26-27, 1973, Smithsonian National Museum of American History)

Recall, as in Part 5 (ii), that both Howard Aiken, who passed away at 73 in a St. Louis hotel during a March 1973 consulting trip to Monsanto, and his former Harvard underling, U.S. Navy Rear Admiral Grace Murray Hopper, who passed away at 86 on New Year’s Day 1992, died in their sleep.

Thus in his February 1973 interview, two weeks before his own unexpected death, Aiken unwittingly told of an earlier precedent, as quoted above: his Lockheed vice-president friend Louis Ridenhauer had died in bed in a two-room hotel suite in Washington, D.C., after a night of drinking with Aiken and another man.

By Aiken’s description, his friend Ridenhauer, a former “Dean of the Graduate School at the University of Illinois” and former “Chief Scientist in the United States Air Force”, was probably quite young when he died.

This friend’s age and year of death can be found in a different source, which identifies his name as Louis Ridenour; the biography includes positions at the University of Illinois and in the U.S. Air Force similar to those Aiken described:

“1911, Nov. 11

Born, Montclair, N. J.

1932

B.S. in physics, University of Chicago, Chicago, Ill.

1936-1938

Instructor in physics, Princeton University, Princeton, N.J.

1938

Ph.D. in physics, California Institute of Technology, Pasadena, Calif.

1938-1941

Professor of Physics, University of Pennsylvania, Philadelphia, Pa.

1941-1944

Assistant director, Radiation Laboratory, Massachusetts Institute of Technology, Boston, Mass.; headed team that developed SCR 584 gun laying radar

1942-1946

Consultant to the secretary of war; field duty as chief radar advisor in North African, European, and Pacific theaters of World War II

1947-1951

Professor of physics and dean of graduate college, University of Illinois, Urbana, Ill.

Editor-in-chief, Radiation Laboratory series of technical books

1948-1952

Chairman, scientific advisory board to the chief of staff, United States Air Force

1951-1954

Vice president, International Telemeter Corp., Los Angeles, Calif.

1952-1953

Visiting professor of engineering, University of California, Los Angeles, Calif.

1955

Director of program development, Missile System Division, Lockheed Aircraft Corp., Burbank, Calif.

1955-1957

Director of research, Missile System Division, Lockheed Aircraft Corp., Burbank, Calif.

1957-1959

Assistant general manager and chief scientist, Lockheed Aircraft Corp., Burbank, Calif.

1959

Vice president and general manager, Electronics and Avionics Division, Lockheed Aircraft Corp., Burbank Calif.

1959, May 21

Died, Washington, D.C.”

(“Ridenour, Louis N. (Louis Nicot), 1911-1959”, Social Networks and Archival Context)

Louis Ridenour was only 47 when he died in May 1959, several years younger than John von Neumann, another top scientific adviser to the U.S. Air Force, who was 53 at the time of his death in February 1957.

The timing of Ridenour’s death, i.e., in the year he was promoted to a Lockheed vice presidency, is reminiscent of the 1997 death of Diana Forsythe as in Part 5 (ii), whose late father George had been the Stanford computer science department’s founding chairman and the most influential person in the emergence of computer science as an academic discipline: Diana drowned during an Alaska backpacking trip in the year she was given an associate professorship at the University of California, San Francisco, following a string of short-term academic jobs in the Stanford area.

Aiken’s 1973 interview quoted earlier mentioned that another person died around that time, whose name has not been disclosed in the online copy of the interview cited.

As a matter of fact, 3 days after Ridenour’s May 21 death in Washington, D.C., a prominent American died in his sleep in the U.S. capital area, at the same hospital where John von Neumann had died; but like von Neumann and unlike Ridenour, U.S. Secretary of State John Foster Dulles had been stricken with cancer:

“Washington, May 24—John Foster Dulles died in his sleep Sunday morning, 39 days after his resignation as secretary of state.

The 71 year old statesman succumbed to cancer complicated by pneumonia, at 7:49 a. m. The end came quietly. He had been comatose under pain-relieving drugs for more than a week.

At the bedside in the Presidential suite of Walter Reed army hospital were his wife of 47 years, Janet; his sons, John and Avery, the latter a Jesuit priest; his brother, Allen, and his sister, Eleanor.

Ike Leaves Farm

The news was flashed first to President Eisenhower, spending a week-end at his Gettysburg, Pa., farm. He cut short his stay and returned to Washington in mid-afternoon. …”

(“Succumbs in Sleep during Cancer Fight: Military Service Scheduled at Arlington”, by Willard Edwards, May 25, 1959, Chicago Daily Tribune)

As discussed in Part 2, the mathematician John Nash, like Dulles a Princeton University alumnus, was in his first psychiatric committal at the time of Dulles’s death, having failed in his attempts to start a world peace movement at MIT and been diagnosed as a “paranoid schizophrenic” instead.

But at least Dulles and Nash were trying hard on world politics, rather than drinking heavily.

Holding a Ph.D. in physics from the California Institute of Technology in Southern California, Ridenour had a distinguished academic career that included faculty positions at Princeton and the University of Pennsylvania, wartime military research management at MIT, and the graduate deanship at the University of Illinois, according to the biography quoted earlier; his chairmanship of the scientific advisory board to the Air Force chief of staff was what Aiken referred to as having been U.S. Air Force “chief scientist”.

However, there is a discrepancy between Ridenour’s time of service at Lockheed and Aiken’s recollection of it in his 1973 interview: the biography quoted indicates that Ridenour started working at Lockheed in 1955; but Aiken said in his 1973 interview, as quoted earlier, that he had visited Lockheed in Los Angeles regularly as a consultant and then also frequented the “Bureau of Standards operation” at UCLA, i.e., the Institute for Numerical Analysis, which as mentioned earlier was terminated in 1954 – even before Ridenour had started working at Lockheed.

Perhaps Aiken’s recollection of the history with Ridenour was simply imprecise: he also got the chronological order wrong regarding Ridenour’s roles as graduate dean at Illinois and as chief scientist of the Air Force.

Or Aiken could have already been a consultant at Lockheed even before Ridenour’s arrival.

After his Air Force chief scientist stint Ridenour was a visiting professor of engineering at UCLA during 1952-1953, and the two could have connected there at the INA.

Regardless, consistent with my earlier analysis’s conclusion that Aiken’s consulting work at Lockheed was to help develop switching circuits for missile guidance, his close friend Ridenour’s management responsibilities at Lockheed from 1955 to 1957 were in the missile field, as a director within the Missile System Division – I note that Lockheed Missiles and Space Company was founded in Sunnyvale during this period.

But Ridenour’s persuading the company board to start “electronics machinery”, a venture Aiken was “associated with” as described in Aiken’s 1973 interview, may have come later in 1957 when Ridenour became Lockheed chief scientist, and surely by 1959 when he became Lockheed vice president in charge of the Electronics and Avionics Division, not long before his sudden death which Aiken was also ‘associated with’.

Before his death, Ridenour had also played a key role in Lockheed’s winning an Air Force satellite contract that would help put Lockheed in a “commanding position in the aerospace market”:

“… Before his untimely death, Ridenour won for Lockheed the Air Force satellite contract that would contribute greatly in the years afterward to that firm’s commanding position in the aerospace market.”

(R. Cargill Hall and Jacob Neufeld, eds., The U.S. Air Force in Space, 1945 to the 21st Century, September 21-22, 1995, Proceedings, Air Force Historical Foundation Symposium)

I would think that this Air Force satellite contract was a business basis for Lockheed Missiles and Space Company’s sponsorship of a 1962 symposium co-organized by Aiken, cited earlier, on “switching theory in space technology”.

In what could be another credit, if a partial one, due to Ridenour: in 1959, the year he was promoted to Lockheed vice president and general manager of its Electronics and Avionics Division, Lockheed started an electronics business, taking over an engineering company in Plainfield, New Jersey, and renaming it the Lockheed Electronics Company:

“In 1953, Stavid Engineering built an 80-acre industrial site that sits in the boroughs of Watchung and North Plainfield, N.J. Lockheed Corporation, a predecessor to Lockheed Martin Corporation, acquired the engineering company six years later.

From 1959 to 1989, Lockheed Electronics Company manufactured, tested and assembled electronic components at the site. Lockheed closed the operation in 1989, …”

(“NORTH PLAINFIELD, NEW JERSEY”, Lockheed Martin Corporation)

Rather ironically in light of the computer-pioneer rivalry between Aiken and von Neumann mentioned earlier and detailed in Part 5 (ii), in 1959 the new Lockheed Electronics Company was set up not in California where Aiken was a Lockheed consultant in Sunnyvale, but in New Jersey where von Neumann had led his IAS computer project in Princeton.

This history related to him, as reviewed so far, suggests that in 1961, when Howard Aiken took early retirement from Harvard to become a businessman in the industry, he had specific reasons for wanting to partner with Cuthbert Hurd to start a new computer company – reasons beyond his business ambition in general or his interest in making computers smaller as discussed in Part 5 (ii).

Firstly, after Aiken’s years of consulting for Lockheed on “electronics machinery”, the company was still not doing computer development, and its new electronics branch was set up in New Jersey, separate and far away from California where Aiken did missiles-and-space consulting. The recent invention of the integrated circuit opened greater potential for computer development, which had been Aiken’s specialty and was likely more appealing to him than consulting on switching theory for missile and space applications; Aiken’s predicament thus resembled William Shockley’s focus on the difficult telephone switching applications in 1957, when eight of Shockley’s disciples decided to split with him – except that in Aiken’s case the narrow focus was not his choice but Lockheed’s decision.

Secondly, Aiken’s close friend at Lockheed headquarters, vice president Louis Ridenour, had died in 1959, and now at Lockheed Missiles and Space Company Aiken socialized with persons lower in stature and lesser in influence, such as, as discussed in Part 5 (ii), George Garrett, Hurd’s former Oak Ridge National Laboratory colleague and Lockheed Missiles “Director of Computer Activities” according to Hurd, or “director of information processing” per Garrett’s obituary. Hurd, the director of the Electronic Data Processing Machines Division at the computer company IBM, would by this time be an important connection for Aiken.

Thirdly, as discussed earlier, Hurd’s IBM management role meant potential support for their new microcomputer company venture from the leading computer company that had made Aiken’s Harvard Mark I project possible, as in Part 5 (ii) – I note that IBM’s largest shareholder and influential board member Sherman Fairchild had helped Shockley’s “Traitorous Eight” found Fairchild Semiconductor in the fledgling Silicon Valley where Aiken was doing his Lockheed consulting.

And lastly, with his prominent computer-pioneer status and now his industry consulting expertise, if Aiken wanted to undertake computer development in the industry he could and should launch a new company, given that Lockheed had set up its electronics company at a faraway location, whereas “an Assistant Director of Engineering at Lockheed” in Sunnyvale was already doing computer design work, as mentioned earlier, toward starting a new computer company with him and Hurd.

So Howard Aiken must have felt that it was a good time to do it, except that the new computer venture did not then materialize.

The scenario put forth in Part 5 (ii) is that in the early 1960s Hurd was unhappy with what little Aiken would offer him, namely that Aiken wanted a new company of his own and wanted Hurd only to “help”, i.e., likely without giving Hurd a good share of the ownership – especially in light of the historical precedent of Harvard and Aiken the inventor attempting to claim sole credit for Mark I even though it was built by IBM.

Indeed, even by the time of his 1973 interview shortly before his death, Aiken continued to express scorn for IBM, in this instance about IBM’s lack of mathematical ability at the time of the Mark I project, reminiscing that IBM personnel did not know how to calculate basic arithmetic division and that he had to invent a technique for it overnight in his hotel room:

“AIKEN:

I went up to Endicott after this began to be formalized at IBM over the years. During the conversations with Lake and Durfee about what kind of machine this was that we were talking about. It was, oh I guess I made eight or ten such trips before I learned that IBM didn’t know how to divide. And that was a terrible blow. They could add, they could multiply, but by god, they didn’t know how to divide. It was almost like the bottom had dropped out. Maybe I ought to get back to Monroe—they knew how to divide.

So I stayed at a hotel in Binghamton and I can’t remember the name of it. That night when I found out that they didn’t know how to divide, I was up nearly all night, and it was that night that I invented the technique of dividing by computing by reciprocals. This is a scheme by which you can compute reciprocals, knowing how to add and multiply and you know how to do it. You could add and multiply by computing reciprocals and all you need is a first guess, and a first guess, when you are dealing with an independent variable that proceeds by fixed intervals ________, the first guess is always the reciprocal of the last name, and the convergence is beautiful. You double the number of digits of precision at each iteration.

So it was around three o’clock in the morning when this all came clear. The next day I walked into IBM and said “Well, you don’t need to worry about not being able to divide because I know how to divide with an adder and a multiplier.” And I started to derive this expression using the ___________________. And it became very clear that this was a waste of time because these men couldn’t understand this. They knew no mathematics. Even high school algebra was too much for them.”

(February 26-27, 1973, Smithsonian National Museum of American History)
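The transcript does not spell out Aiken’s exact formula, but his description – a method needing only addition and multiplication plus a first guess, and doubling the number of correct digits at each iteration – matches the standard Newton iteration for a reciprocal, x_new = x * (2 - a * x). A minimal sketch in Python, under that assumption:

```python
# A sketch consistent with Aiken's description (the interview does not give
# his exact formulation): the textbook Newton step for a reciprocal uses only
# multiplication and addition/subtraction, needs a first guess, and roughly
# doubles the number of correct digits at each iteration.

def reciprocal(a, guess, iterations=6):
    x = guess
    for _ in range(iterations):
        x = x * (2.0 - a * x)   # quadratic convergence toward 1/a
    return x

def divide(n, d, guess):
    """Divide n by d using only add/multiply plus a first guess at 1/d."""
    return n * reciprocal(d, guess)

print(divide(355.0, 113.0, guess=0.008))   # about 3.14159, i.e. 355/113
```

On this reading, his garbled remark that “the first guess is always the reciprocal of the last” presumably means that when the independent variable advances by fixed intervals, the reciprocal computed at the previous step serves as a good starting guess for the next.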

Aiken also asserted that had Thomas J. Watson, Sr. – IBM president in the era of the Mark I project as in Part 5 (ii) – not relinquished authority to his son when he did, IBM would not have become a top computer company:

“TROPP:

No, but the vision didn’t start until much later than the SSEC, it was in the 1950’s.

COHEN:

Oh, I see. That’s their statement.

TROPP:

It started after the defense calculator came into being.

COHEN:

Oh, yes.

AIKEN:

IBM got going in the computer business when young Tom was made president.

TROPP:

That’s right.

AIKEN:

And as he told me one time, the first thing that he did when he became President of the Company, replacing his father, was to sense the embarrassment the Corporation faced because Remington Rand was getting all the credit for going ahead. So that if the senior Watson had remained active for say, another two years, chances are pretty good that Sperry Rand would be the big computer company today. It was changed by young Tom.”

(February 26-27, 1973, Smithsonian National Museum of American History)

In any case, as cited in Part 5 (i), Watson, Sr. died 2 months after handing over IBM’s reins to Watson, Jr. in 1956.

The “SSEC” mentioned in the above quote was IBM’s own version of an electromechanical relay machine, made in collaboration with Columbia University, after the company’s dispute with Harvard and Aiken over the Mark I credit:

“… the SSEC contained over 21,000 electromechanical relay switches with physically moving parts.

… In these features of its physical layout, quite apart from its hardware and system architecture, the SSEC was the ancestor of IBM mainframe systems to come—after the ASCC (Automatic Sequence Controlled Calculator) or Mark I, that is. IBM had built that machine with researchers at Harvard using a team led by Howard Aiken and including Grace Hopper. T.J. Watson, Sr. believed that IBM had subsequently been given insufficient credit for the machine. Competition with its own Harvard Mark I was the impetus for IBM’s building the SSEC—and the new computer was pointedly created through a new arrangement with another Ivy League institution, Columbia, in a collaboration between IBM engineering in Endicott, New York, and the newly formed (1945) Watson Scientific Computing Laboratory at Columbia.

The Lab at Columbia was directed by the first Director of Pure Science at IBM, Wallace Eckert (no relation to Presper Eckert, the well-known developer of the ENIAC), an astronomy professor at the university. In 1944 T. J. Watson, Sr. recruited Eckert as the first IBM employee with a PhD and Eckert helped to hire a team that included the second PhD at IBM, another astronomer, Herb Grosch, as well as Robert R. (“Rex”) Seeber, who had worked on the Harvard Mark I. …”

(Steven E. Jones, Roberto Busa, S. J., and the Emergence of Humanities Computing: The Priest and the Punched Cards, 2016, Routledge)

The above-quoted history account does corroborate that at the time of the Mark I project IBM personnel likely knew very little advanced mathematics or advanced science: only after the slight by Harvard and Aiken did IBM hire its first two Ph.D.s – recruited through Columbia.

As in the last quote from Aiken’s 1973 interview, after the SSEC machine IBM developed the “defense calculator”. As in Part 5 (i), the IBM 701 Defense Calculator was IBM’s first commercial line of general-purpose computers, made during Watson, Sr.’s time; as cited earlier, Lockheed bought 2 of them in 1953 and 1954; and most importantly, as in Part 5 (ii), Cuthbert Hurd – who held a Ph.D. degree, had been head of technical research at the Oak Ridge National Lab, then founded IBM’s Applied Science Department, and later became director of IBM’s Electronic Data Processing Machines Division – was a driving force behind the development of the IBM Defense Calculator.

Therefore, in co-launching a new computer company with Hurd, Howard Aiken would be collaborating with the best of IBM managerial talent in applied science and in electronic computer development, and also with the best of Lockheed Missiles engineering talent in the person of the “assistant director of engineering” already doing computer design work for this potential new venture.

That might be the case, but Aiken likely did not regard it as much: in the last quote from his 1973 interview, when the interviewer Henry Tropp mentioned the Defense Calculator, Aiken did not even respond to it, let alone mention Hurd, crediting only Thomas Watson, Jr. for turning around IBM’s fortunes.

Howard Aiken’s sense of self-importance went beyond naming his new company after himself. Later in his 1973 interview, he cited as an example how busy he had been as president of Howard Aiken Industries, Inc.: he did not have the time to personally accept a prestigious scientific award at the Franklin Institute after attending dinner – presumably the award dinner – and later never took the time to view that award, or to wear any of his awards:

“TROPP:

Have you ever had occasion to wear all those awards?

AIKEN:

I’ve never had occasion to wear one of them.

TROPP:

This looks like the Franklin Medal.

AIKEN:

That is the Edison Medal of the Institute of Electrical Engineers. I got an award from the Franklin Institute.

TROPP:

Yes, it’s listed here.

AIKEN:

That’s in the Harvard Archives.

TROPP:

It’s the John Price Award.

AIKEN:

Yes. The night I was to go and get that award, I also had to fly away to Madrid, and so I went to St. Louis and stayed for a short time at the Franklin Institute, and then out to the airport, and my plane that I had as President of Aiken Industries at the time, flew to Boston to meet TWA to go on to Madrid. I changed out of my dinner clothes on the plane. We left Philadelphia in the aero commander after TWA left New York, and we got there and so I made the flight. So Tony Ottinger picked up the Franklin Institute Award in my place and took it to Harvard and I’ve never seen it myself.”

(February 26-27, 1973, Smithsonian National Museum of American History)

As told in the quote above, Aiken visited the Franklin Institute in Philadelphia briefly for an award event, and most likely attended a formal dinner in “dinner clothes”, but did not have the time to accept the award, instead hurrying in his Aiken Industries’ ‘presidential plane’ to catch a TWA flight to Madrid, Spain; his former Harvard Ph.D. student Tony Oettinger, previously mentioned in quotes in Part 5 (ii), accepted the award on his behalf and took it to Harvard, and Aiken never bothered to view it afterwards.

The John Price Award Aiken was given was not the most well-known award of the Franklin Institute, which would be the Franklin Medal mentioned in the above quote. In the year 1964 when Aiken was one of four co-recipients of the John Price Award, also known as the John Price Wetherill Medal, the Franklin Medal was awarded to Gregory Breit.

(“John Price Wetherill Medal”, and, “Franklin Medal”, Wikipedia)

The physicist Gregory Breit led the early stage of atomic bomb development during World War II, resigning in 1942; the role was later taken over by the physicist Robert Oppenheimer, under whose leadership the Manhattan Project succeeded, and two of the bombs were then used against Japan in the war.

(“Gregory Breit”, and, “J. Robert Oppenheimer”, Atomic Heritage Foundation)

In my review in Part 5 (i), it was noted that John von Neumann and the physicist Enrico Fermi, both important members of the Manhattan Project, died of cancer at the same age of 53, whereas Oppenheimer later died “10 years older at the age of 63”. But a more careful inspection shows that Oppenheimer was about 2 months short of his 63rd birthday of April 22 when he died on February 18, 1967.

Though it was not “10 years older”, I note that Oppenheimer’s death was precisely 10 years and 10 days past von Neumann’s death on February 8, 1957.

(“John von Neumann”, Atomic Heritage Foundation)

And there were other interesting numerical coincidences: the early computer pioneer Howard Aiken’s age of 73 at death was 20 years more than “father of computers” John von Neumann’s age of 53 at death; and the early atomic bomb development leader Gregory Breit’s age of 82 at his death on September 13, 1981, having been born on July 14, 1899, was also 20 years more than “father of the atomic bomb” Robert Oppenheimer’s age of 62 at death.

Considering that the other co-recipients of the 1964 John Price Award, John Eugene Gunzler, John Kenneth Hulm and Bernd T. Matthias, were all physicists awarded for achievements in the research frontier of “superconductive materials”, Howard Aiken was in good company indeed for an honor in memory of “America’s first scientist, Benjamin Franklin”.

(“Dr. Bernd T. Matthias to be honored by Franklin Institute”, October 7, 1964, UC San Diego, and, “JOHN EUGENE GUNZLER”, “JOHN KENNETH HULM”, and, “MISSION & HISTORY”, The Franklin Institute)

But the ambitious Howard Aiken probably did not view it that way.

In Part 5 (ii) I have offered the opinion that Aiken should have been given the Association for Computing Machinery’s A. M. Turing Award – considered the highest honor in computer science – had he not changed his career from academia to business.

However, in his 1973 interview shortly before his unexpected death, Aiken stated that from the beginning he had opposed the establishment of an organization like the ACM, that von Neumann had agreed with him, and that he now still opposed it:

“COHEN:

The other person that I was interested in, I know that you had at least two contacts with him, but for reasons that are fairly obvious, I think it would be fascinating to know what your relations with him were, when you first heard of him, was von Neumann. Now I know that Warren Weaver sent you some computations for von Neumann for the Mark I on inclusion. Secondly, you were with him on that National Academy of Sciences Commission. I have no idea what kind of relations you might have had or where you first heard of him.

AIKEN:

Well, let’s take the Commission first. Who all was in that Committee, let’s see.

TROPP:

There was you and von Neumann and Stibitz. Was Sam Alexander on that?

AIKEN:

Maybe. But there was a mathematician and what was his name? He’s presently at Miami University.

TROPP:

Oh, John Curtiss.

AIKEN:

That’s right— John Curtiss. John Curtiss proposed that we should all get together and start an association for people interested in computing machines, a new scientific society. And I said, “No, we shouldn’t do that because computation was a universal thing.” I said that what we should do was to help the mathematical economists to publish papers in their journals, using computational techniques and astronomers with theirs and the physicists with theirs and so on. That our best interests and the best interests of the scientific community as a whole would be better served to assist everybody to use machinery and to publish their work. And after all, we were tool makers, and it didn’t make very good sense for all of us specialists to get together and talk about the tools they used.

Von Neumann agreed with that completely, and so this proposal of Curtis’s’ was voted down. Curtiss then went out and formed the Association for Computing Machinery.

TROPP:

That was the Eastern Association, the original one.

AIKEN:

Yes. And neither John von Neumann during his lifetime, nor I have ever joined the Association for Computing Machinery. We opposed it and I still oppose it. Now, the basis for the opposition today is, of course, far less, because now you get all the machine builders and there are thousands and thousands of specialists, and they do have something going for them. But as of the time that it was proposed, it didn’t make very good sense, and as I say, von Neumann refused to have anything to do with it and I’m just as stubborn as von Neumann, and I never joined.

COHEN:

Did he ever come up to your Laboratory?

AIKEN:

Yes, many, many times. We did several problems for him. Then, we had a lot going back and forth. As long as he was using computing machines and helping to lead us in the discovery of numerical methods, which in some of the problems that we did, Dick Block did the programming by the way…”

(February 26-27, 1973, Smithsonian National Museum of American History)

Oh well, in that case, there was no reason for the ACM to award Aiken the Turing Award – when Aiken himself did not even see merit in the ACM’s existence.

In this context, I begin to wonder why the Turing Award to Frederick Brooks, founding chairman of the University of North Carolina at Chapel Hill’s computer science department and “one of Aiken’s most devoted disciples” as in Part 5 (ii), was given rather late, in 1999, when the most important of his achievements had been made at IBM in the early 1960s as the leader of its development team for the important and successful System/360 computer.

I notice that 1999 was 2 years after 1997, the year when, as in Part 5 (ii), John Weber Carr, III died of cancer; Carr had in 1956 become the first sitting academic – a University of Michigan mathematics associate professor – to serve as ACM president, and in 1959 became UNC Chapel Hill’s computation center director, but then left in 1962, before UNC Chapel Hill invited Brooks from IBM in 1964 to found what was the second academic computer science department in the U.S.

I cannot confidently assert that Carr opposed awarding ACM’s Turing Award to Brooks, and that Brooks could not get it until after Carr’s death due to Carr’s stature and influence as a former ACM president; but given how opinionated and stubborn some of these personalities were, Aiken in particular as extensively illustrated, I would not be surprised if this scenario was indeed the case.

A consequence of such egotism on Howard Aiken’s part was that, in the scenario in which he and Cuthbert Hurd considered launching a computer company together in the early 1960s, if he viewed himself as a William Shockley and treated Hurd like a disciple, or viewed himself as a Sherman Fairchild and treated Hurd as only a manager under him, Hurd likely would not agree, and would not accept less than a real share of the ownership.

The lack of ownership equity had in fact been an important reason why in 1956-1957 the 8 disciples of William Shockley mutinied, left Shockley Semiconductor and founded Fairchild Semiconductor with financing from Sherman Fairchild. Besides their opposition to Shockley’s difficult management style and orthodox scientific focus as previously discussed, they aspired to higher compensation for their work, including not only salaries but also equity; and in making Fairchild Semiconductor a success, the 8 started the trend of venture capital financing for technological start-ups founded by Silicon Valley scientists and engineers:

“When William Shockley founded his company in Stanford University’s Research Park in 1955, no one knew then that he was starting an industry that was to give a whole region its name: Silicon Valley. Shockley chose Palo Alto as the site for his company partly because it was where he grew up and his mother still lived there, partly because he was aware that entrepreneurial electronics companies were hatching there, and partly because Arnold Beckman, his financial backer and founder of Beckman Instruments, had located one of his divisions in the Stanford Research Park.

One of Shockley’s motivations for starting his transistor company in 1955 was his conclusion that “the most creative people were not adequately rewarded as employees in industry.” Shockley attracted to his transistor company the brightest and best young men, who formed the nucleus of entrepreneurial scientists and engineers that built the semiconductor industry in Silicon Valley. But they didn’t make their fortunes at Shockley’s company. Wealth came later when they started and built their own companies.

In 1957, eight of them left to found Fairchild Semiconductor. Robert Noyce was one of them. According to Noyce, one of their principal reasons for leaving was that they could get equity in a startup company rather than simply working for a salary for the rest of their lives. They weren’t disappointed. Seven years later, each of the eight received about $250,000 when Fairchild Semiconductor was bought out by its parent, Fairchild Instrument and Camera—not a shabby return on their original investments of $500 each. …

When the Shockley Eight launched the first company to focus exclusively on silicon devices (rather than those of germanium), they were financed by Sherman Fairchild. At the time he was the largest individual stockholder of IBM through stock inherited from his father who was one of IBM’s founders. Sherman owned Fairchild Instrument and Camera, which set up the Shockley Eight as Fairchild Semiconductor. Venture capitalist Arthur Rock helped arrange the financing.

Thus, we see a pattern emerging at the start of the semiconductor industry in Silicon Valley: a scientific breakthrough followed by commercial exploitation by entrepreneurial scientists and engineers financed with venture capital from technologically savvy, wealthy investors. …”

(William D. Bygrave and Jeffry A. Timmons, Venture Capital at the Crossroads, 1992, Harvard Business School Press)

As quoted, each of the Shockley Eight’s $500 initial investments in Fairchild Semiconductor became $250,000 when Sherman Fairchild bought the ownership 7 years later.

It is not known how much equity Howard Aiken would have been willing to give Cuthbert Hurd in a new computer company had it gotten off the ground in the early 1960s. But I can see Hurd wanting a larger share than any one of the 8 scientists starting Fairchild Semiconductor, given that Hurd already had a distinguished record as an IBM executive, and that Aiken himself could not provide the level of financing Sherman Fairchild had provided.

But as reviewed earlier, when Fairchild Semiconductor was founded in 1957, Sherman Fairchild had reserved the option of buying its full ownership. Seven years later he exercised it, paying the pre-agreed $3 million – a figure cited earlier – with each of the 8 young founders receiving $250,000; from that point on, neither the 8 nor anyone else other than Fairchild’s parent company owned shares.

In 1968 two of the 8, Robert Noyce and Gordon Moore, decided to leave Fairchild Semiconductor to form their own company. The money they had made from the ownership buyout and the experiences they had gained at Fairchild Semiconductor positioned them to launch the next big, more independent venture, the Intel Corporation; and their move spurred the next big wave of venture capital-financed start-ups in Silicon Valley, many of which, Intel the most famous, became known as “fairchildren” – companies founded by persons who had worked at Fairchild Semiconductor:

“Intel: The Fairest of the “Fairchildren”

There can be little doubt that Fairchild was the breeding ground for the technology entrepreneurs—sometimes dubbed “Fairchildren”—who built the semiconductor industry in Silicon Valley. About half the firms can trace their roots in Fairchild. It is a distinguished list of movers and shapers, among them, Intel, National Semiconductor, and Advanced Micro Devices (AMD). But the fairest of them all is Intel.

In the summer of 1959, Noyce, then director of R&D at Fairchild, invented the integrated circuit. (Jack Kilby at Texas Instruments independently discovered the same concept a few months earlier. Today, Kilby and Noyce are recognized as the co-inventors of the integrated circuit.) The importance of the integrated circuit to the development of the semiconductor industry was second only to the invention of the transistor itself. So Noyce was already a legendary figure when he and Gordon Moore resigned from Fairchild in 1968.

Noyce and Moore, with Rock as their venture capitalist, launched their next semiconductor company, Intel. Rock as lead investor raised $2.5 million and Noyce and Moore each invested about $250,000. Through years of tireless endeavor, they multiplied their original investments of $500 in Fairchild a hundred-thousand-fold. In 1982, Moore owned 9.6% of Intel, with a market value of more than $100 million, and Noyce owned 3.6%. No one deserved it more. They had been at the forefront of building a new industry. Their companies were responsible for breakthroughs that transformed not only the semiconductor industry but society itself.

Before Rock launched Intel, there had been only a handful of venture-capital-backed startups. But that was about to change. Other budding entrepreneurs were making proposals to other venture capitalists. Between 1967 and 1972, about thirty companies were started with venture capital … including such luminaries as National Semiconductor—which was started the year before Intel—and Advanced Micro Devices.”

(William D. Bygrave and Jeffry A. Timmons, 1992, Harvard Business School Press)

In addition to National Semiconductor, Intel and AMD mentioned above, one of the “Fairchildren” of note was Signetics, which as mentioned earlier designed the computer chips for the Apollo moon landing program’s emergency onboard Abort Guidance System.

(“NXP Semiconductors”, 2008, Silicon Valley Historical Association)

I note that when starting Intel, Noyce and Moore each reinvested the same amount of money they had made from their Fairchild Semiconductor equity – $250,000, earned after 7 years from a $500 initial investment – into the new start-up, and 14 years later in 1982 Noyce owned 3.6% of Intel, and Moore 9.6%. With Moore’s stake valued at more than $100 million, Noyce’s would be worth more than $37.5 million.
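As a quick check of that arithmetic – an illustration of mine, using only the percentages and the Moore valuation quoted above:

```python
# Illustrative arithmetic only, based on the figures quoted above: Moore's
# 9.6% stake being worth a bit over $100 million in 1982 implies a company
# value from which Noyce's 3.6% stake follows.

moore_share, moore_value = 0.096, 100e6          # stated in the quoted source
implied_intel_value = moore_value / moore_share  # about $1.04 billion
noyce_value = 0.036 * implied_intel_value        # about $37.5 million

print(f"implied Intel value: ${implied_intel_value/1e6:,.0f} million")
print(f"Noyce's 3.6% stake:  ${noyce_value/1e6:,.1f} million")
```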

In the corporate culture of the earlier time, back in 1957 when the Shockley Eight sought investment for their new venture, not only did their rebellion against their Nobel laureate mentor make them unpalatable to the investment community, but their request for equity and management power was also considered inappropriate:

“… Some firms may have found the pith of the letter–please give a million dollars to a group of men between the ages of 28 and 32 who think they are great and cannot abide working for a Nobel Prize winner–unpalatable. Even if a firm thought the proposal was interesting in theory, no standard operating procedure existed for the company-within-a-company undertaking Rock and Coyle recommended. What accounting procedures would be used? How could the funding firm allow this group of unknown young men to run their own operation, according to criteria of their own devising, and not permit other employees the same autonomy? In the 1950s, with its ethos of conformity, this smacked of unseemly preferential treatment.”

(Leslie Berlin, 2005, Oxford University Press)

“Company-within-a-company undertaking”? In my understanding, if it had been a group of former corporate managers requesting the same kind of power in a new company, then it would have been all right in the eyes of an investment firm, because they knew the accounting procedures, they were not unknown to the investment firm, and allowing them but not other employees to run the company would simply have been a matter of management’s right. The problem was, therefore, that scientists and engineers with special technological expertise were still viewed only as employees.

Ironically, that prevailing corporate view of the 1950s was compatible with Cuthbert Hurd’s interest in co-launching a company with Howard Aiken: Hurd had been an established IBM executive; and if he also helped “raise the money”, then he probably would not think of Aiken’s computer pioneer status and expertise as worthy of a Sherman Fairchild level of ownership power.

Both the investment firm in the Shockley Eight case and Hurd in the Aiken case also had politically correct grounds: for the investment firm, the “ethos of conformity” did not permit “unseemly preferential treatment” of some employees but not of others; and for Hurd, Aiken’s “thinking about starting the company” in order “to be rich” was not a deserving motivation.

But with the issue of fairness aside – fairness to bright scientists like the Shockley Eight, and to the prominent computer pioneer Howard Aiken aspiring to succeed in business – there is still a question of practicality regarding a company run by scientists and engineers with special technological expertise: would such management be more beneficial to the company’s other employees?

The history of Fairchild Semiconductor seems to suggest that the answer should be affirmative with regard to the company’s commercial success. The Shockley Eight started Fairchild Semiconductor not only as part-owners but also with some of them at the helm of the management, and it turned out well enough that 7 years later Sherman Fairchild exercised his $3 million buyout option to acquire the company.

In fact, Fairchild Semiconductor was very successful in the civilian commercial market sectors, such as the television market, and in the 1960s became the U.S. market leader in integrated circuits – something the company had invented independently along with Texas Instruments as discussed earlier – with a market share of 55%:

“Responding to a decline in the military demand for electronic components in the early 1960s, Fairchild Semiconductor created new markets for its transistors and integrated circuits in the commercial sector. To meet the price and volume requirements of commercial users, Fairchild’s engineers introduced mass production techniques adapted from the electrical and automotive industries and set up plants in low labor cost areas such as Hong Kong and South Korea. The firm’s application laboratory also developed novel systems such as an all-solid state television set and gave these designs at no cost to its customers, thereby seeding a market for its products. … By 1966, Fairchild had established itself as a mass producer of integrated circuits and controlled 55% of the market for such devices in the United States.”

(“Technology and Entrepreneurship in Silicon Valley”, by Christophe Lécuyer, December 3, 2001, Nobelprize.org)

But to really answer the question posed earlier, one needs to determine how much Fairchild Semiconductor’s commercial success under the leadership of the founding group of scientists translated into improved incomes for the regular employees.

It is unclear how the rest of the $3 million that Sherman Fairchild’s Fairchild Camera and Instrument paid for Fairchild Semiconductor was distributed, i.e., other than the $2 million paid to the 8 founders. After that, the company was in the hands of Sherman Fairchild. From this perspective, if one considers better employee compensation to mean not only earning wages but also gaining ownership equity, e.g., stock or stock options, then after the first 7 years no employee owned any more equity at Fairchild Semiconductor.

And that was an important rationale behind the decision by Robert Noyce and Gordon Moore to leave Fairchild Semiconductor in 1968 and found Intel, a new company where they would institute a much more egalitarian corporate structure and culture, in which not only would they not be working for a wealthy businessman in the end, but every employee could also own equity:

“In preparation for the IPO, Noyce consulted with attorneys and bankers, reviewed drafts of the prospectus, met with auditors, signed the certificates necessary for the offering, wrote explanatory letters to employees and current investors, invited employees to buy stock in the offering as “friends of the company,” …

SEC rules for public offerings required Intel to cancel the stock-purchase plan set up at the company’s establishment. Noyce dreamed of replacing this plan with options packages that would be distributed to every employee, “including janitors.” He worried, though, if people with limited educations could understand what a stock option was and how volatile the market could be. He and Moore finally decided that once a plan could be developed that met SEC guidelines for publicly held companies, Intel should reinstitute a stock-purchase plan, rather than options, for nonprofessional employees. Under this plan, which was implemented in 1972, every employee would be allowed to take up to 10 percent of base pay in Intel stock, which could be bought at 15 percent below market rates. The stock purchase plan met Noyce and Moore’s goals of giving employees a stake in the company without requiring the sophisticated financial knowledge associated with stock options.”

(Leslie Berlin, 2005, Oxford University Press)

The nonprofessional employees of Intel, “including janitors” as quoted above, were allowed to use up to 10% of their base pay to buy company stock at 15% below market prices – as quoted earlier, Intel earned the reputation of being “the fairest of the “Fairchildren””.
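For a sense of scale, here is a small illustration of mine of how the quoted plan terms would work; the base pay and share price below are hypothetical, and only the 10% and 15% figures come from the quote:

```python
# Illustrative arithmetic for the 1972 Intel stock-purchase plan as quoted:
# up to 10% of base pay, at 15% below the market price. The pay and price
# figures here are hypothetical.

base_pay = 12_000.00                         # hypothetical annual base pay
market_price = 50.00                         # hypothetical market price per share

max_contribution = 0.10 * base_pay           # at most 10% of base pay
purchase_price = market_price * (1 - 0.15)   # 15% below market
shares_bought = max_contribution / purchase_price

print(f"contribution: ${max_contribution:,.2f}")        # $1,200.00
print(f"price paid:   ${purchase_price:.2f} per share") # $42.50
print(f"shares:       {shares_bought:.1f}")             # about 28.2
```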

Intel’s egalitarian culture influenced, and spread to, the entire Silicon Valley, according to Robert Noyce’s widow Ann Bowers, speaking at a February 2013 event remembering the pioneers of Silicon Valley; Bowers herself also brought some of that culture to Apple Computer:

“Ann Bowers knew Bob Noyce, the “mayor of Silicon Valley,” better than anyone. She was married to the co-founder of Fairchild Semiconductor and Intel, and witnessed the magnetic effect he had on the people who followed him to the region that will be chronicled in an American Experience history documentary on PBS on Tuesday (Feb. 5) at 8 pm. The film is called Silicon Valley: Where the Future Was Born, and it captures the people like Noyce, who died in 1990, and how they made such a mark that their impact is still reverberating today.

Last week, Bowers … spoke about Noyce on stage at the Computer History Museum in Mountain View, Calif., where the surviving pioneers of Silicon Valley gathered to celebrate the film and the man at the center of it. …

Noyce isn’t depicted as some superhero, since he had his flaws. But, like David Packard and Bill Hewlett before him, he was a very big part of the fuel that set the valley on fire. …

At the start of the event, host Hari Sreenivasan asked Bowers about the many talents of Noyce. He played the oboe. He was the state diving champion. He lettered on the swim team. He was in the drama club. And he knew more about transistors than just about anyone on the planet. Bowers smiled and answered a quick “yes” to each one of the facts.

Noyce had migrated to the Santa Clara Valley to work for William Shockley, the Nobel-prize winning physicist who co-invented the transistor and moved West to Palo Alto, Calif., to set up Shockley Semiconductor Laboratory — and be near his mother.

But Noyce, Gordon Moore, and six other colleagues broke away, since they didn’t like his erratic behavior. They were deemed “the traitorous eight” … by Shockley. They founded Fairchild Semiconductor, a division of Fairchild Camera and Instrument.

In 1957, the Russians scared America with the launch of the first satellite, Sputnik. President Eisenhower created NASA within the next year and launched the American space program.

Noyce’s team at Fairchild was positioned to make the chips that would go into the rockets and space ships. The federal government had an “insatiable need” for what Fairchild would produce, said Leslie Berlin, author of The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley.

In contrast to Shockley’s dictatorship and the East Coast profit-milking of Fairchild’s parent company, Noyce and co-founder Gordon Moore founded Intel in 1968 and gave out stock options. They managed in a way that was more egalitarian, building a company based on meritocracy. Bowers gave credit for that culture to Moore as well as Noyce. They infused this egalitarianism in the rest of the Valley, which came to be a collection of many companies that spun out of Fairchild. Today’s valley is littered with descendants, or Fairchildren, such as Advanced Micro Devices.

Noyce once said that the stock option money didn’t seem real.

“It’s just a way of keeping score,” he said.

The Pied Piper of his generation, Noyce counseled his employees to “go off and do something wonderful.”

Bowers held jobs such as director of personnel for Intel, and she was the first vice president of  human resources for Apple. She currently serves as the chair of the Noyce Foundation.”

(“Widow speaks about Bob Noyce, telling the human side of the mayor of Silicon Valley (video)”, by Dean Takahashi, February 4, 2013, Venture Beat)

The view of Silicon Valley history presented by Ann Bowers, as cited above, holds that William Shockley’s management, which the “Traitorous Eight” rebelled against, was a “dictatorship”, and that the subsequent oversight of Fairchild Semiconductor, founded by the 8, by the Sherman Fairchild parent company was “East Coast profit-milking”.

Bowers’s view is supported by Arthur Rock, a venture capitalist instrumental in helping to start both Fairchild Semiconductor and Intel as mentioned in earlier quotes, who in a Harvard Business School interview expressed a similar opinion about East Coast business style:

“… Fairchild Camera and Instrument was really an eastern-type company; they wanted to run things their way. That’s why people then left Fairchild Semiconductor and formed Intel and other companies. …”

(“ARTHUR ROCK”, Entrepreneurs, Harvard Business School)

Rock explained what “eastern-type” business was like, that it was “old establishment and old money”:

“The problem at Fairchild Semiconductor had to do with incentives. The whole idea of giving people incentives was something foreign to most companies. That’s one of the reasons, of course, that I came out to California; I saw that people were a lot more adventuresome in California than they were in the East. In the East, it’s the old establishment and old money. People have been doing things one way for a long time and it’s very hard to change. It actually took many, many years for them to change, whereas people who came out West had some of the “Go West, young man,” Wild West aura about them, and they were willing to do things to test new ideas. I found that really the brighter, more imaginative, adventuresome people were out here rather than in the East.”

(Entrepreneurs, Harvard Business School)

As quoted, Rock contrasted the East Coast’s “old establishment and old money” with Californians’ “adventuresome” spirit.

In this Harvard Business School interview, Arthur Rock recalled how he met the Shockley Eight and decided to help, stating that starting a new company was actually his idea – except that he and his firm then approached some 35 companies, none of which would help, before he was introduced to Sherman Fairchild:

“I had done financings for a number of small companies based around New York and Boston, and I liked doing that. I liked the people who ran those companies. I liked the scientific-type person. So Eugene Kliner wrote a letter to his father’s broker who was at Hayden Stone. And that broker, knowing of my interest in these kinds of companies, showed me the letter.

… seven of whom got together and asked Eugene Kliner to write this letter. Actually, the letter was typed by Kliner’s wife. The letter said that they were unhappy with Shockley and they were going to quit, but did we know of anyone who might hire them together as a group? They weren’t looking especially to form a company. That was my idea. I came out to see them with one of the partners of Hayden Stone and we talked to them. They had all been chosen by Shockley, so I knew they were probably pretty good people, and then when we met them I was very impressed and thought we could help them. I suggested to them that they might want to set up a company, and we told them we would see if we could get financing for the company. We had a couple more meetings with them. Then they brought along an eighth fellow who was Bob Noyce, so that was the “traitorous eight.”

We, Hayden Stone together with the eight of them, put together a list of companies that might finance this group. We had about thirty-five companies, all of whom had expressed an interest in going into new fields. We talked to each of these thirty-five companies. I personally visited most of them. And their reply was, well, this is a great idea, but if you set up a separate company, then what will our other employees think; it will just upset our organization, so we don’t want to do it. So we crossed out all thirty-five and were at our wit’s end when somebody introduced me to Sherman Fairchild.”

(Entrepreneurs, Harvard Business School)

Rock pointed out that Fairchild had wealth, an inventor’s background, and a liking for young people, which were favorable factors in his decision to finance the 8 scientists to start the new company:

“Sherman Fairchild was one of the largest shareholders of IBM stock. His father had gone into partnership with Tom Watson in setting up IBM, and since Tom Watson’s family consisted of three or four children and Sherman Fairchild was the only heir in his family, he owned a lot of stock. So he had a lot of cash and he was an inventor. He had invented the aerial camera, and as a result had formed Fairchild Camera and Instrument Company. And then he had to invent the airplane that could hold the camera, so that was the basis for Fairchild Aviation. He also liked young people and saw the merit in our idea, so he agreed that Fairchild Camera and Instrument Company would finance this group. They advanced what we then named Fairchild Semiconductor a million and a half dollars in return for an option to buy all of our stock for $3 million. We then split up Fairchild Semiconductor: 10 percent to each of the eight and 20 percent to Hayden Stone.”

(“ARTHUR ROCK”, Entrepreneurs, Harvard Business School)

As recalled by Arthur Rock, Sherman Fairchild’s company provided $1.5 million in financing on the condition that it had the future option to buy all of Fairchild Semiconductor for $3 million, but otherwise did not hold any initial ownership at all: 80% of the initial ownership belonged to the Shockley Eight at 10% each, and the other 20% belonged to Rock’s firm Hayden Stone.

So it was a bold start-up arrangement in which the 8 actually owned 80% of Fairchild Semiconductor from the start, except that the company was obliged to be – and later indeed was – bought out by the East Coast parent company.

Rock offered the reflection that had he and Sherman Fairchild not helped the Shockley Eight, “there would not have been any silicon in Silicon Valley” and “a lot of them would have ended up with Texas Instruments”:

“… But there would not have been any silicon in Silicon Valley if it hadn’t been for the formation of Fairchild Semiconductor because the “treacherous eight” would probably have just and gone off and gotten jobs individually. My guess is a lot of them would have ended up with Texas Instruments, which had a similar but less successful enterprise going in Texas.”

(Entrepreneurs, Harvard Business School)

As discussed earlier, Texas Instruments was Fairchild Semiconductor’s main competitor in the invention of the integrated circuit, and did much better in directly supplying U.S. military aerospace demand.

While the U.S. civilian space program and NASA, started in 1957-1958 by President Dwight Eisenhower as quoted earlier, fueled the demand for semiconductors, the rapid growth of consumer electronics would make the market much larger. As cited earlier, Fairchild Semiconductor played a significant role in the 1960s in developing the U.S. civilian and commercial market sectors for integrated circuits and consumer electronics.

During this period, the rapid growth of consumer electronics far outpaced the growth in military demands, and permanently ended the U.S. military’s dominance in the semiconductor market, reducing the military share from, for instance, 72% of annual semiconductor sales in 1965 to just 21% in 1970:

“Prior to the outpouring of new firms and new products, semiconductors were part of the defense industry. By the end of the 1960s, consumer and industrial applications of microelectronics were expanding much more rapidly than DoD purchases. Defense sales grew relatively slowly, and quickly dropped below 20 percent of the total (Table 8-1). …

…”

(John A. Alic, Lewis M. Branscomb, Harvey Brooks, Ashton B. Carter and Gerald J. Epstein, Beyond Spinoff: Military and Commercial Technologies in a Changing World, 1992, Harvard Business School Press)

Howard Aiken’s retirement from academia and adventure in business, mentioned earlier and discussed in some detail in Part 5 (ii), took place during this same period of the 1960s and early 1970s.

Upon his early retirement from Harvard in 1961, Aiken started Howard Aiken Industries, specializing in buying, fixing and selling companies. He and Cuthbert Hurd also discussed a plan to start a new computer company that would have been the first in the world to develop computers of smaller sizes, referred to as “microcomputers”, with a Lockheed assistant director of engineering doing the design work; but they did not follow through with the plan, and Hurd later commented somewhat scornfully about Aiken’s motivation, as quoted earlier: “I thought that maybe he wanted to be rich,” “and was thinking about starting the company for that reason”.

Then in 1967 Aiken retired from heading his company and became its board vice chairman – his Aiken Industries was at some point renamed Norlin Technologies, as cited in Part 5 (ii) – and re-engaged in active consulting, now for the Monsanto Company in addition to Lockheed Missiles and Space Company, as previously quoted in Part 5 (ii):

“During the 1973 interview, Hank Tropp questioned Aiken about aspects of his life and career after leaving Harvard. Aiken referred, first of all, to his “forming Aiken Industries, beginning in 1961” and his becoming “vice-chairman of the board” in 1967. “So now,” Aiken said, “I go to board meetings, but I’m not going at it the way I used to. . . . When they kicked me out of Harvard, I had to find a new job and that was Aiken Industries. And when they kicked me out, I had to find a new job and went into the consulting business. So now I spend a good deal of time at Monsanto.” … Aiken said that he had been a consultant at Lockheed “for many years,” but that he had “quit that this year.””

(I. Bernard Cohen, 2000, The MIT Press)

What is especially interesting, but not yet reviewed here, is Aiken’s work during the 1960s and early 1970s, seen in the context of the computer development of that period.

As mentioned earlier, in 1959, while Aiken was a Harvard professor and a consultant for Lockheed, the company had started the Lockheed Electronics Company in New Jersey, away from California, where Aiken did consulting on switching circuits for missile and space applications.

Just 2 years after Aiken’s return to active computer consulting, in 1969 the Lockheed Electronics Company produced its first commercial computer, a “minicomputer”:

“Lockheed Aircraft can trace its history back to 1912. It entered the computer field in 1969 when it produced a small process control computer, the MAC-16 (also referred to as the LEC-16). The MAC was used for instrument control in large laboratory settings and for applications such as air traffic control.

The Lockheed Data Products Division supported the MAC (and its successors, MAC Jr., Sue, and System III) until the mid 1970s when Lockheed decided that they should concentrate on their core capabilities and left the commercial computer business.”

(“Company: Lockheed Electronics Company, Inc.”, Computer History Museum)

So, without Howard Aiken’s participation as a consultant, Lockheed got into the computer development field and produced a “minicomputer” for the commercial market, one that was useful in laboratory settings.

Minicomputers were smaller than the typical mainframe computers of that time, such as the various IBM systems mentioned before, but not as small as what a “microcomputer” is today, which is typically a PC, i.e., a personal computer.

(“Supercomputers, Mainframes, Minicomputers and Microcomputers: Oh My!”, December 14, 2011, Turbosoft)

It was at this stage, in 1970, that Howard Aiken again discussed with Cuthbert Hurd launching a computer company to make smaller computers, this time with “the idea of what was to become a microprocessor and personal computer”, as quoted previously in Part 5 (ii):

“Hurd… recalled that he, Aiken and William Main had met several times in 1970 “to discuss the formation of a corporation to be called PANDATA.” The three had “discussed the idea of what was to become a microprocessor and personal computer.” Aiken, according to Hurd, “had an early vision of the usefulness of such devices,” “believed that they could be mass produced at a low cost,” and “wished to form an integrated company to manufacture and sell them.” Aiken wanted Hurd “to help form the company, be chairman of the board, and raise the money.” Aiken himself “wished to make a considerable investment in the new company.” …”

(I. Bernard Cohen, 2000, The MIT Press)

I note that the first time, probably around 1961, when Aiken and Hurd discussed starting the “first microcomputer computer company in the world” as quoted earlier, there was no mention of a “microprocessor”, whereas in 1970, when they again discussed starting a computer company, “microprocessor and personal computer” were in Aiken’s vision.

That is interesting, because the modern definition of a microcomputer, such as a personal computer, requires a microprocessor as its central processing unit:

“Microcomputer, an electronic device with a microprocessor as its central processing unit (CPU). Microcomputer was formerly a commonly used term for personal computers, particularly any of a class of small digital computers whose CPU is contained on a single integrated semiconductor chip. … Smaller microcomputers first marketed in the 1970s contain a single chip on which all CPU, memory, and interface circuits are integrated.”

(“Microcomputer”, Encyclopædia Britannica)

Aiken’s “early vision”, as Hurd called it, was quite impressive. He and Hurd had planned to start a “microcomputer computer company” probably a decade before small microcomputers were “first marketed in the 1970s”, as quoted above.

But to start the world’s first “microcomputer computer company” so early, that company would have had to produce a small computer without relying on a microprocessor – despite the modern technical definition quoted above requiring a microprocessor for a “microcomputer” – because, even by 1970 when they talked about starting a company to mass-produce “microprocessor and personal computer”, the microprocessor had not yet been invented.

Around that time, in 1970, the microprocessor was only in the process of being invented, by none other than Intel, the Silicon Valley semiconductor company founded 2 years earlier by two of the Shockley Eight, Robert Noyce and Gordon Moore.

More specifically, in 1969-1970 Intel was doing a project for an advanced Japanese calculator design, and in that process Intel chip designer Marcian “Ted” Hoff proposed a new design to consolidate several central components onto a single chip, as Hoff later recalled:

“Intel was founded with the idea of doing semiconductor memory. Up until that time, most computer memory used small magnetic cores strung onto wire arrays. In most cases, they were wired by hand and some of these cores weren’t much bigger than the tip of a mechanical pencil. … While developing memory products, there was a feeling within Intel’s management that it might take a while before the computer industry would accept semiconductor memory as an alternative to cores. So it was felt that we should undertake some custom work, that is, to build chips to the specifications of a particular customer. We were contacted by a Japanese calculator company whose calculators came out under the name Busicom. They said that they would like to have us build a family of chips for a whole series of different calculator models, models that would vary in type of display, whether they had a printer or not, the amount of memory that they had and so on. A contract to make their chips was signed in April of 1969. … I was curious about the calculator design. I knew little about it, although I was fairly familiar with computer architectures and I had been at the sessions where the project had been discussed. The more I studied the design, the more concerned I became, based on what I had learned about Intel’s design and package capability and costs. It looked like it might be tough to meet the cost targets that had been set in April. The Japanese design was programmable, using read-only memory, but it seemed to me that the level of sophistication of their instruction set was too high, because there was a lot of random logic and many interconnections between different chips. There was a special chip to interface a keyboard, another chip for a multiplexed display and yet another chip for one of those little drum printers. It seemed to me that improvements could be made by simplifying the instruction set and then moving more of the capability into the read-only memory, perhaps by improving the subroutine capability of the instruction set. I mentioned some of my concerns and ideas to Bob Noyce. He was really encouraging, saying that if I had any ideas to pursue them because it was always nice to have a backup design. I did so throughout the months of July and August. … In October, the management of the calculator company came over to the U.S. for a meeting in which both approaches were presented. At that point they said they liked the Intel approach because it represented a more general purpose instruction set, a simpler processor, fewer chips to be developed and had a broader range of applications. Our proposal reduced the number of chips needed from around a dozen to only four.”

(“Ted Hoff: the birth of the microprocessor and beyond”, Alumni Profile, Stanford Engineering)

As Ted Hoff explained above, with his new design the Japanese calculator needed only 4 semiconductor chips instead of the “around a dozen” chips in the Japanese design, and – as he implied – the number of “interconnections”, i.e., the wiring between the chips, would also be reduced as a result.

Subsequently in 1970-1971 Intel turned this new idea, of using a single chip to integrate different functionalities previously spread over several chips, into a general-purpose microprocessor for computers:

“Our initial goal was never to make a microprocessor, only to solve this particular customer’s problem, this calculator design problem. But there were several aspects of the design that became more evident as it was pursued. One was, being more general purpose and faster than the original design, we figured it might be useful for a broader range of applications than just the calculator family. Dr. Federico Faggin was hired around in April of 1970 and given the responsibility for chip circuit design and layout, to turn this architecture into a physical transistor layout. … He had working parts by around January of 1971.”

(Alumni Profile, Stanford Engineering)

As Intel’s Japanese calculator example illustrates, back in the 1960s when Howard Aiken and Cuthbert Hurd discussed starting the world’s first “microcomputer computer company”, making a small microcomputer without the microprocessor – unlike the modern definition of microcomputer – was possible but the technical work would have been more complicated, the computer’s size likely not as small and its performance likely not as good.

Then by the next time, in 1970, when the two discussed launching a computer company to develop “a microprocessor and personal computer”, Intel, a company that had started only in 1968 – one year after Aiken returned to active computer consulting – was already in the process of developing the first microprocessor.

Thus, in 1970 it was too late for a yet-to-be-founded company to invent the microprocessor.

The personal computer, on the other hand, did not become a reality even by the time of Aiken’s death in 1973. Hence a new company of Aiken’s and Hurd’s could have become the first to develop and mass-produce it.

As reviewed in Part 5 (ii), Aiken died in his sleep in a St. Louis hotel during a March 1973 consulting trip to Monsanto on memory technology – also a main interest of Intel’s – with the goal of making computers smaller; Aiken had just quit consulting for Lockheed Missiles and Space Company in Silicon Valley – presumably also abandoning the hope of starting a new company with Hurd and with help from Lockheed engineers.

Commenting on that missed prospect for Aiken to develop smaller computers, Hurd later said that Aiken died before it could happen, as previously quoted in Part 5 (ii):

“… Hurd reported, however, that he “was busy at the time with other activities” and that Aiken “died before the venture could be launched.””

(I. Bernard Cohen, 2000, The MIT Press)

But as I have remarked in Part 5 (ii), there were 3 years from 1970 to 1973 for Hurd to act on their collaboration plan, and I would think that Hurd had the sense to know well that, at over 70 years of age – Howard Aiken was born in 1900, as noted in Part 5 (ii) – Aiken simply did not have much working time left to waste if he was to realize his ambition.

In the end, beyond Howard Aiken’s lifetime, the work of developing microcomputers and personal computers, and the glory of making them a success, went to a younger generation of scientists and engineers, most notable among them computer designer and Apple Computer co-founder Steve Wozniak:

“Stephen Gary Wozniak, (born Aug. 11, 1950, San Jose, Calif., U.S.), American electronics engineer, cofounder, with Steven P. Jobs, of Apple Computer, and designer of the first commercially successful personal computer.”

(“Stephen Gary Wozniak”, by William L. Hosch, Encyclopædia Britannica)

Stephen Gary Wozniak also happened to be the son of a Lockheed Missiles and Space Company engineer:

“Wozniak—or “Woz,” as he was commonly known—was the son of an electrical engineer for the Lockheed Missiles and Space Company in Sunnyvale, Calif., in what would become known as Silicon Valley. …”

(William L. Hosch, Encyclopædia Britannica)

I note that Part 5 (ii) recounted a similar case of a generation-long time frame before a goal was finally realized: Alain Fournier, a University of British Columbia computer science professor specializing in computer graphics when I taught at UBC in the late 1980s and early 1990s, had in the 1970s been a Ph.D. student at the University of Texas at Dallas interested in studying computer graphics with faculty member Henry Fuchs, without realizing that wish as Fuchs soon left for the University of North Carolina at Chapel Hill; then at UBC in 1990, Fournier brought in new Stanford Ph.D. Jack Snoeyink to fill a tenure-track faculty position, which practically ended my hope of getting a tenure-track position there; finally in 2000, as Fournier was dying of cancer, Snoeyink landed a UNC Chapel Hill professorship and became a colleague, in the computer graphics field, of Fuchs, the Federico Gil Distinguished Professor of Computer Science at UNC Chapel Hill.

It turned out that even though Intel had invented the microprocessor by 1971, i.e., while Aiken was still alive and had in 1970 talked with Hurd about developing such a device, it did not become widely available commercially until after Aiken’s death; when the Intel 8080 microprocessor became available in 1975, Wozniak, a university dropout (including from UC Berkeley in the San Francisco Bay Area) then working as an engineer at Hewlett-Packard, began designing a microcomputer he hoped HP would produce; but HP was not interested:

“… A precocious but undisciplined student with a gift for mathematics and an interest in electronics, he attended the University of Colorado at Boulder for one year (1968–69) before dropping out. Following his return to California, he attended a local community college and then the University of California, Berkeley. In 1971 Wozniak designed the “Blue Box,” a device for phreaking (hacking into the telephone network without paying for long-distance calls) that he and Jobs, a student at his old high school whom he met about this time, began selling to other students. Also during the early 1970s Wozniak worked at several small electronics firms in the San Francisco Bay area before obtaining a position with the Hewlett-Packard Company in 1975, by which time he had formally dropped out of Berkeley.

Wozniak also became involved with the Homebrew Computer Club, a San Francisco Bay area group centred around the Altair 8800 microcomputer do-it-yourself kit, which was based on one of the world’s first microprocessors, the Intel Corporation 8080, released in 1975. While working as an engineering intern at Hewlett-Packard, Wozniak designed his own microcomputer in 1976 using the new microprocessor, but the company was not interested in developing his design. …”

(William L. Hosch, Encyclopædia Britannica)

This was not the first example of Hewlett-Packard’s slow response to the prospect of developing new products. As discussed in Part 5 (i), 20 years earlier, in 1956-1957, Silicon Valley computer pioneer Douglas Engelbart was looking for a job in computer development and HP let him know that it had no plan for such, and it was only 10 years later, in 1966, that HP started its own computer product line. Now, another 10 years on, in 1976, HP was again uninterested in something new, namely Wozniak’s personal computer design.

Wozniak later recalled that as an HP employee he had pitched his personal computer design to the company no fewer than 5 times and been rejected every time, before accepting his friend Steve Jobs’s suggestion to start their own personal computer company:

“His loyalty to Hewlett Packard made him reluctant to leave the company to start Apple with Jobs. Wozniak reminded reporters last week at the Computer History Museum that he had proposed his idea for the Apple I computer to Hewlett Packard, but they “turned him down 5 times.”

According to a 2008 interview with The Telegraph, Wozniak originally thought he owed it to HP to stay, until Jobs persuaded Wozniak’s family to convince him to do it.”

(“Apple co-founder offered first computer design to HP 5 times”, by Josh Ong, December 6, 2010, Apple Insider)

As quoted, Jobs needed to get Wozniak’s family to convince Wozniak to start Apple Computer with him.

Though a university dropout, Wozniak was a proven designer of video games, having worked with Jobs in the early stage of that field, including building his own version of Pong, one of the earliest video games, and designing the original Breakout game for Atari, the company that had hired Jobs on the strength of the Pong credit:

“… Wozniak related the story of one of his first collaborative projects with Apple CEO Steve Jobs. After Wozniak built his own version of Pong, one of the earliest video games, Jobs then took the game to Atari and got a job. “They had Steve working at night so he wouldn’t be around other people,” joked Wozniak.

“Then he got us a job,” Wozniak continued, “I designed the first Breakout game for Atari. So I didn’t really work there. They tried to hire me, but I said ‘Never leave Hewlett Packard, I love my company, I’m loyal.’” Wozniak added that it took them 4 straight days and nights to design the game. “We both got the sleeping sickness, mononucleosis,” said Wozniak.”

(Josh Ong, December 6, 2010, Apple Insider)

Now together the two started Apple Computer, and the popular Apple II computer was produced in 1977:

“… Jobs, who was also a Homebrew member, showed so much enthusiasm for Wozniak’s design that they decided to work together, forming their own company, Apple Computer. Their initial capital came from selling Jobs’s automobile and Wozniak’s programmable calculator, and they set up production in the Jobs family garage to build microcomputer circuit boards. Sales of the kit were promising, so they decided to produce a finished product, the Apple II; completed in 1977, it included a built-in keyboard and support for a colour monitor. The Apple II, which combined Wozniak’s brilliant engineering with Jobs’s aesthetic sense, was the first personal computer to appeal beyond hobbyist circles. …”

(William L. Hosch, Encyclopædia Britannica)

The critical importance of collaboration between a pair of technology pioneers is highlighted by the contrasting examples of Howard Aiken and Cuthbert Hurd versus Steve Wozniak and Steve Jobs: despite Aiken’s prominent computer-pioneer status and his early retirement from Harvard with an intense interest in starting a computer company, his close associate, the former IBM executive Hurd, only talked about collaboration plans with him but never put them into action; on the other hand, Wozniak, the talented young computer designer, did not want to leave his employer Hewlett-Packard at all, and yet the entrepreneurial adventurer Jobs persuaded Wozniak’s family to convince him to do it.

Without financing, Jobs and Wozniak started their company in the garage of Jobs’s family home, just as William Hewlett and David Packard had started theirs in a house garage in 1939, as noted in Part 5 (i).

Hewlett later commented on HP’s decision in 1976 not to pursue Wozniak’s personal computer idea by saying, “You win some, you lose some”:

“Regarding the missed opportunity, HP co-founder Bill Hewlett reportedly said, “You win some, you lose some.””

(Josh Ong, December 6, 2010, Apple Insider)

Glimpses into Hewlett-Packard’s own history may help shed light on William Hewlett’s circumspect and somewhat philosophical comment regarding HP’s missed opportunity with Steve Wozniak and the personal computer.

Back in 1939, it was in the garage of David Packard’s rented house in Palo Alto that Packard and Hewlett started their company, a garage since recognized as “the birthplace of Silicon Valley” by the State of California in 1989:

“In 1938 David and Lucile Packard got married and rented the first floor of the house at 367 Addison Avenue in Palo Alto. The simple one car garage became the HP workshop and the little shack out back became Bill Hewlett’s home. In 1989 California named the garage “the birthplace of Silicon Valley” and made it a California Historical Landmark.

Dave Packard had gone to Schenectady to work at General Electric. He was told that there was no future in electronics at General Electric and that he should instead concentrate on generators, motors and other heavier equipment. Bill Hewlett was finishing up his graduate work at Stanford and the two decided to pursue their earlier plan of starting their own business. The name HP (vs. PH) was chosen by a coin toss. For $45 per month, the Packards rented the first floor of the house, which was chosen specifically because it had a garage that they could work in. Bill Hewlett moved into the little shack next to the garage.”

(“The HP Garage – The Birthplace of Silicon Valley”, The Museum of HP Calculators)

As reviewed in Part 5 (ii), the founding of Hewlett-Packard was greatly encouraged by Stanford engineering professor Frederick Terman, a protégé of leading U.S. government science adviser Vannevar Bush of the World War II era, and an influential academic administrator who grew Stanford’s scientific research by utilizing Cold War-oriented research funding.

By the early 1950s, as quoted earlier from a book by James W. Cortada on U.S. computer development history, Hewlett-Packard was an “instrument supplier” for some of the Southern California aerospace companies actively engaged in computer development activity – even though at the time HP itself had no such ambition as recalled by Douglas Engelbart.

HP’s technological expertise in instrumentation was in serious demand in the arena of military weapons research:

“Although the company never developed weapons systems, it depended heavily throughout its history on military spending, because its instrumentation has been used to develop and test military products, particularly as weapons systems have become more dependent on electronic and semiconductor technologies. The military expertise of Hewlett-Packard was underscored in 1969 when U.S. Pres. Richard M. Nixon appointed Packard deputy secretary of defense, in which position he oversaw the initial plans for the development of two of the country’s most successful jet fighter programs, the F-16 and the A-10.”

(“Hewlett-Packard Company”, by Mark Hall, Encyclopædia Britannica)

The Encyclopædia Britannica article on Hewlett-Packard quoted above relates the history of HP’s benefiting from military spending on weapons development to David Packard’s rise on the national political scene – from co-founder of a leading electronics and computer company in the growing Silicon Valley to U.S. Deputy Secretary of Defense under President Richard Nixon.

Packard was noted not only for overseeing the initial planning of the development of the U.S. Air Force’s important fighter jets, the F-16 and the A-10, but also for overseeing the revision of military acquisition policy in 1969-1971, following a public controversy just prior to the Nixon presidency over cost overruns in the Air Force’s C-5A cargo aircraft project contracted to the Lockheed Corporation:

“… As it happened, when Nixon took office in 1969, the acquisition community was already in turmoil, the result of a high-profile controversy that began in mid-1968, when A. Ernest Fitzgerald, deputy for Management Systems in the Office of the Assistant Secretary of the Air Force for Financial Management, first testified before Congress about cost overruns on the C-5A cargo aircraft program. His appearances before congressional panels resulted in a series of investigations that proved to be very embarrassing for the Air Force and the Lockheed Corporation, prime contractor for the C-5A. Subsequent allegations were made that, after testifying, Fitzgerald was the subject of career reprisals by the Air Force’s senior leadership. These accusations only drew more public attention to the controversy. …

In this connection, President Nixon appointed David Packard, one of the founders of the Hewlett-Packard Corporation and a veritable legend in American business circles, to the post of deputy secretary of Defense in January 1969. With an extensive business background and a hands-on management style that stood in stark contrast with that of former Defense Secretary Robert McNamara, Packard seemed like a logical choice to tackle the problems of defense acquisition by revising policy and working closely with subordinates to repair the cultural rift that had developed between the services and OSD during McNamara’s tenure. One observer, writing in 1972, suggested that Packard was the embodiment of a “cult of personality in reverse,” a hero called upon to “put things right for the future.” …

… Shortly after taking office, Packard had formed the Defense Systems Acquisition Review Council (DSARC) as an advisory body reporting to the secretary of Defense. The council, formed in May 1969, established three progress milestones for acquisition programs, an important enhancement to the acquisition process. The milestones were defined as “program initiation decision,”, “full-scale development decision,” and “production decision.” … The DSARC was part of a long-term scheme to promote a kind of “decentralized centralization” over defense acquisition activities. OSD retained oversight authority over new acquisition programs, but Packard wanted the services to assume a larger role in the management of the acquisition process, with many functions devolving to the services. …

The spirit of “decentralized centralization” also could be found in the language of the landmark May 1970 memo, in which Packard articulated new principle for managing acquisition in the coming years. “The prime objective of the new policy guidance is to enable the services to improve their management of programs . . . . [T]he services have the responsibility to get the job done,” wrote Packard. “[I]t is the responsibility of OSD to approve the policies which the services are to follow, to evaluate the the performance of the services in implementing the approved policies, and to make decision on proceeding into the next phrase in each major acquisition program.” …”

(Shannon A. Brown with Walton S. Moody, “Defense Acquisition in the 1970s: Retrenchment and Reform”, in Shannon A. Brown, ed., Providing the Means of War: Historical Perspectives on Defense Acquisition, 1945-2000, 2005, United States Army Center of Military History and Industrial College of the Armed Forces)

I note that David Packard’s reform vision of “decentralized centralization” cited above gave the U.S. military “services”, i.e., branches, more power in defense acquisitions.

These powers had rested with the Office of the Secretary of Defense under Robert McNamara during the John Kennedy and Lyndon Johnson presidencies; the above quote cited an observer describing Packard as the embodiment of a “cult of personality in reverse”, in contrast to McNamara.

Under McNamara, the Pentagon had centralized its decision-making powers:

“One of the most important elements in the McNamara approach to management during the 1960s was in the commitment to centralized decision making in OSD. The new Planning, Programming, Budgeting System correlated resource inputs with categories of performance … The newly created office of assistant secretary of defense for systems analysis employed more than one hundred professional personnel preparing and using parametric cost estimates in cost-benefit analyses for use by the secretary of defense and other decision makers in the Pentagon. …”

(J. Ronald Fox, Defense Acquisition Reform, 1960-2009: An Elusive Goal, 2011, Center of Military History, United States Army)

Interestingly, the former defense secretary who had taken a centralization approach to modernizing the Pentagon’s decision making under Democratic presidents Kennedy and Johnson was actually a Republican, and was also a former president of a top U.S. automaker, just like President Eisenhower’s earlier secretary of defense, Charles Wilson, reviewed in Part 5 (i):

“As a registered Republican, McNamara became the first Republican appointed to Kennedy’s Cabinet. He was a Presbyterian, married, and father of three. His rapid rise at Ford had been through the financial and accounting side of the business, where he was brought in by Henry Ford II as one of a team known as “whiz kids” right after World War II.

… He had been president of Ford just over a month; he’d been selected the day after Kennedy was elected President. And McNamara was not the first Secretary of Defense to be plucked from Detroit. Charles Wilson, President Eisenhower’s first Secretary of Defense, was a former president of General Motors.”

(“Kennedy Selects Robert McNamara as Secretary of Defense”, by David Coleman, HistoryinPieces.com)

Cases such as the U.S. Army’s initiating the ENIAC electronic computer project at the University of Pennsylvania during wartime – over the serious objection of the U.S. scientific establishment led by Vannevar Bush, as discussed earlier – and its establishing the Army Mathematics Research Center at the University of Wisconsin-Madison in peacetime, as reviewed in Part 5 (i), suggest that flexibility of acquisition decision making at the level of the military services could benefit companies like Hewlett-Packard that supplied military contracts, directly or indirectly, in the weapons technology arena.

But such a focus on the part of Hewlett-Packard and its co-founder David Packard, namely on military research and development and their benefits to the company, was by the 1970s far removed from the aspirations of a younger generation of computer designers like Steve Jobs and Steve Wozniak.

In their first-ever published press interview, Jobs proclaimed that Apple Computer would donate computers to schools for the education of kids, so that “there would be an Apple in every classroom and on every desk” – an ambitious statement later recalled by tech writer Sheila (Clarke) Craven, who had conducted the interview in Apple’s start-up garage for her February 1977 article published in Kilobaud, The Small Computer Magazine:

““My interview with the two Steves took place while they were still in the folks’ garage,” Craven tells Business Insider. She remembers it this way:

One of the things Jobs told me was that they would make certain there would be an Apple in every classroom and on every desk, because if kids grew up using and knowing the Apple, they would continue to buy Apples and so would their kids. The computers would be donated by Apple Computer. I understand that when that article came out, orders starting pouring in, and Apple Computer was seriously launched.

At the time, Apple consisted of just the two Steves in Jobs’ parents’ garage. There was no office, Craven says. Craven spent four hours with the pair, including lunch. After Wozniak booted up the machine, Jobs loaded a game of Blackjack onto it to demonstrate its powers.”

(“This is the first news article ever written about Apple”, by Jim Edwards, May 5, 2015, Business Insider)

Quite apart from making computers for kids to learn at school and to play video games on, deputy defense secretary David Packard had held the stability of civil society as a steadfast priority; in 1971, he authored a Pentagon document justifying, on the ground of “Constitutional exceptions”, the use of military rule in the United States to handle civil disturbances:

“… The United States has contingency plans for establishing martial law in this country, not only in times of war, but also if there is what the Defense Department calls “a complete breakdown in the exercise of government functions by local civilian authorities.” What’s more, there’s a little-known 1971 memorandum prepared by the deputy secretary of defense which also provides justification for military control similar to martial law.

… Martial law is expected to be proclaimed by the president, although “senior military commanders” also enjoy the power to invoke it in the absence of a presidential order, according to a Department of Defense Directive signed in 1981 by Deputy Secretary of Defense Frank Carlucci—who has since become the secretary of defense. Despite the existence of these martial law contingency plans, Justice Department spokesperson John Russell says martial law could never be invoked in the United States, pointing to “the Posse Comitatus Act, [which] bars the military from engaging in law enforcement.”

The Posse Comitatus Act provides small comfort, however, because another Pentagon document, authored by Deputy Secretary of Defense David Packard in 1971, cites two “Constitutional exceptions” to the act’s restrictions. …

The Packard directive claims that Congress intended for there to be an exception to Posse Comitatus “when unlawful obstructions or rebellion against the authority of the United States renders ordinary enforcement unworkable. . . .” Like the Carlucci document, Packard’s directive says turning over law enforcement to the army will “normally” require a Presidential Executive Order, but that this requirement can be waived in “cases of sudden and unexpected emergencies . . . which require that immediate military action be taken.”

During last year’s congressional hearings into the Iran-contra scandal, a brief reference was made to planning efforts by FEMA and by Lt. Col. Oliver North at the National Security Council to outline a martial law scenario premised upon, among other things, a domestic crisis involving national opposition to a U.S. military invasion abroad. …”

(“Could It Happen Here?”, by Dave Lindorff, April 1988, Mother Jones)

Judging from what the Reagan White House National Security Council official Lieutenant Colonel Oliver North planned for during the 1980s, namely the use of martial law to handle a crisis arising from domestic opposition to a U.S. military invasion abroad, and from what the 1971 Packard document had outlined, I can imagine that domestic military rule would have occurred in the Nixon era, or the Reagan era, had the nationwide escalating anti-Vietnam War protests of the Johnson era happened then – as in Part 2, in 1965 Stephen Smale, later my Ph.D. adviser, was a leader in starting the UC Berkeley anti-war movement that inspired the growth of the national protests, but before long returned to concentrating on mathematical research.

Given Hewlett-Packard’s history and focus as reviewed here, Bill Hewlett’s later musing about not taking up Steve Wozniak’s personal computer idea in 1976, “You win some, you lose some”, made sense.

And I would ponder and wonder: what would the win have been for Mr. Hewlett and Mr. Packard in making it easier for “kids” to play the “Pong” game and the “Breakout” game?

Besides, for HP there was the irony that Wozniak was the son of a weapons development engineer at Lockheed – the company whose overspending of Pentagon money had led Nixon to bring in Packard to correct it.

As for Steve Wozniak, a Hewlett-Packard loyalist at the time, he might not have taken adequate notice – but for Steve Jobs’s worldly enthusiasm and persistence – of the cultural evolution of Silicon Valley over the years: from the ethos of HP as its founding company, to Fairchild Semiconductor’s championing of civilian and commercial industrial development, and further to the plurality of start-up companies springing up from the late 1960s onward, as reviewed earlier.

So Hewlett-Packard didn’t help Wozniak in his quest to develop and market a personal computer. But paradoxically, neither did the Silicon Valley-trendsetting Intel, “the fairest of the ‘Fairchildren’” with its entrepreneurial spirit and egalitarian culture, and the inventor of the microprocessor, help Apple Computer with it – in the sense that Intel’s 8080 microprocessor, on which the hobbyists’ Altair 8800 was based, was too expensive for Wozniak to use in the early Apple computers:

“Steve Wozniak made a decision very early on at Apple that would prove one of the company’s most fateful ever. When “Woz,” a prank-loving 26-year-old who loved to tinker with machines, designed the very first Apple computer, he decided to use a microprocessor called the MOS Technology 6502, based on the design of Motorola Inc.’s 6800, essentially because it was cheaper than anything else he could find. Intel’s 8080 chip was selling for $179 at the time, and Motorola’s 6800 fetched $175. The MOS Technology chip, made by a Costa Mesa, California, company, cost only $25. …

The decision to go with the Motorola technology was fateful, because Intel would gain the license from IBM to make the microchips that went into almost every IBM-compatible computer. Motorola was a big company in its own right, a giant in cellular phones and pagers. But Apple, which soon after that first design by Woz began using Motorola chips exclusively, became Motorola’s only sizable customer for personal computer microprocessors. Intel’s whole life, on the other hand, revolved around microchips. In fact, it had been a young Intel engineer named Marcian E. Hoff Jr. who had invented the microchip in 1971, making the PC revolution possible.”

(“They Coulda Been a Contender”, by Jim Carlton, November 1997, Issue 5.11, WIRED)

As quoted, Wozniak chose the $25 MOS Technology 6502 microprocessor, based on the design of Motorola’s $175 6800 microprocessor, rather than Intel’s $179 8080 microprocessor, a fateful decision with the consequence that Intel later became the microprocessor supplier for IBM but not for Apple.

Wozniak had in mind making Apple computers affordable for students and teachers, with his first Apple computer debuting in a math class at Windsor Junior High School in Windsor, California:

“Before the iPod, the Macintosh, or even the formation of Apple Computer Company on April Fool’s Day 1976, there was the Apple I. Designed by Steve “Woz” Wozniak, then an engineer at Hewlett-Packard, it was less a personal computer than the bare essentials of one: the circuit board you see in the image at left is the Apple I (buyers had to hook up their own keyboards, displays, and power supplies). This computer, the very first Apple I made, was first used in a math class at Windsor Junior High School in Windsor, CA, in 1976 and donated to the LO*OP Center, a nonprofit educational organization run by Liza Loop. …

… Wozniak pored over ­integrated-­circuit specifications and engineered the Apple I so that different processes could share the same chips, reducing the overall part count. This, plus the use of cheaper items such as a $20 MOS Technology 6502 microprocessor rather than the more common $175 Motorola 6800, enabled him and Steve Jobs to offer the Apple I for the somewhat affordable price of $666.66… Loop, who is also the director of the History of Computing in Learning and Education Project, agrees: “Woz wanted this simple, low-cost design so that the Apple would be affordable for students and teachers.””

(“Hack: The Apple I”, by Daniel Turner, May 1, 2007, MIT Technology Review)
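To put those chip prices in perspective relative to the Apple I’s $666.66 price tag, here is a small illustrative Python sketch using only the figures reported in the two quotes above; note that the two sources give slightly different prices for the 6502 ($25 and $20), and Wozniak himself, quoted further below, recalled a $370 figure for the 8080, so the numbers are indicative rather than definitive.

```python
# Rough comparison of candidate microprocessor prices against the Apple I's
# $666.66 retail price, using the figures from the WIRED and MIT Technology
# Review quotes above. Reported prices vary between sources, so this is only
# indicative arithmetic, not a definitive bill of materials.

APPLE_I_PRICE = 666.66  # dollars

chip_prices = {
    "Intel 8080 (WIRED figure)": 179.00,
    "Motorola 6800 (WIRED figure)": 175.00,
    "MOS Technology 6502 (WIRED figure)": 25.00,
    "MOS Technology 6502 (Technology Review figure)": 20.00,
}

for chip, price in chip_prices.items():
    share = price / APPLE_I_PRICE
    print(f"{chip:<48} ${price:>6.2f}  ~{share:.0%} of the Apple I price")
```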

It would seem that launching a great adventure on April Fool’s Day was not an unpopular move, chosen by Steve Jobs and Steve Wozniak in starting Apple Computer, and not just by Stephen Hawking who, as in Part 4, had his first popular book, A Brief History of Time, published on that day in 1988.

The earlier of the two quotes above, from a Wired magazine article, described MOS Technology, the maker of the cheap microprocessor chip Wozniak selected, as based in Costa Mesa, which is in Southern California.

So would it not appear that, with all that had been achieved by major Silicon Valley semiconductor companies like Fairchild Semiconductor and Intel, in the 1970s Southern California was still more productive than Northern California in semiconductor chip production?

Not exactly, because MOS Technology was really based in, and its 6502 microprocessor designed and manufactured in, Valley Forge just outside of Philadelphia, Pennsylvania, even though a lower-power CMOS version of the 6502 was being developed in Costa Mesa for use in “hand-held” applications:

“VALLEY FORGE, PA—Back in the fall of 1976, before there was any microcomputer market to speak of, Commodore International acquired MOS Technology to help its struggling calculator business. MOS had designed the 6502 chip, which is found today in innumerable microcomputers, including models by Apple, Atari and Hewlett-Packard.

What finally leaves the MOS plant is integrated circuits, which then go to Japan or California for final assembly into computers.

There is also a systems/assembly facility in Santa Clara, California (see related article on page 18). A support facility in Costa Mesa, California, is working on a CMOS version of the 6502 and 6500 series of microprocessors that will provide lower-power, “equal- or better-speed microprocessor” for “use in hand-held applications”…”

(“MOS Technology is Commodore’s ‘edge’”, by David Needle, April 26, 1982, Volume 4, Number 16, InfoWorld)

So it was near Philadelphia, the birthplace of the world’s first general-purpose electronic computer ENIAC, that the MOS 6502 was designed and manufactured cheaply enough to operate in so many microcomputers, including Apple computers, computers by Atari, for which Jobs and Wozniak had done video-game design, and even Hewlett-Packard microcomputers!

Apparently Steve Wozniak started the personal computer trend off very well, and Hewlett-Packard subsequently entered the market in 1980 with the HP-85 personal computer.

(“HP-85 personal computer, 1980”, Hewlett-Packard)

As mentioned earlier, microchips designed in the 1960s by Fairchild Semiconductor for the American moon-landing program’s Apollo Guidance Computers were also manufactured by a Philadelphia-based company, Philco.

In the year 1980 when HP entered the personal computer market, Apple’s annual sales were already in the tens of millions of dollars and growing fast; IBM was also jolted into immediate action; in that year, Apple Computer shares began trading on the stock market, instantly making Apple a billion-dollar company and turning Jobs and Wozniak into $100 millionaires:

“The Apple II was first sold in 1978 and made $700,000 worth of sales that year. The following year, sales were $7 million, and year after $48 million. In 1980, sales doubled again and the Apple company went public, giving Jobs and Wozniak $100 million each.

IBM couldn’t ignore that. In July 1980, it set up a project to get into the personal computer business within a year. Because of the time constraint, the project team scrapped the usual IBM practice of making every major component in the computer themselves and decided to build their computer from standard components that anyone could buy.”

(David Manners and Tsugio Makimoto, Living with the Chip, 1995, Chapman & Hall)

Besides Jobs and Wozniak, Apple Computer’s stock market debut produced around 300 instant millionaires, about 40 of them Apple investors and employees; that was more millionaires than any other company had produced in history up to that point:

“On December 12, 1980, Apple launched its IPO (initial public offering) of its stock, selling 4.6 million shares at $22 per share with the stock symbol “AAPL” on the NASDAQ market.

The shares sold out almost immediately and the IPO generated more capital than any IPO since Ford Motor Company in 1956. Instantly, about 300 millionaires, some 40 of which are Apple employees and investors, are created. That is more millionaires than any company in history had produced at that time. Steve Jobs, the largest shareholder, made $217 million dollars alone.

By the end of the day, the stock had increased in value by almost 32% to close at $29, leaving the company with a market value of $1.778 billion.”

(“Apple IPO makes instant millionaires, December 12, 1980”, by Suzanne Deffree, December 12, 2015, EDN Network)

As quoted, Apple Computer raised more capital in the IPO, i.e., initial public offering, of its stock than any company had since Ford Motor Company’s in 1956.
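As a quick arithmetic check on the IPO figures quoted above, the short Python sketch below recomputes the capital raised at the offer price, the first-day gain, and the number of shares outstanding implied by the reported closing market value; the implied share count is my own inference from the quoted numbers, not a figure given in the article.

```python
# Back-of-the-envelope check of the December 12, 1980 Apple IPO figures
# quoted above from EDN Network. The implied shares outstanding is inferred
# from the reported closing market value, not a figure stated in the article.

shares_sold = 4_600_000    # shares offered in the IPO
offer_price = 22.00        # dollars per share at the offering
closing_price = 29.00      # reported first-day closing price
market_value = 1.778e9     # reported end-of-day market value, in dollars

capital_raised = shares_sold * offer_price
first_day_gain = (closing_price - offer_price) / offer_price
implied_shares_outstanding = market_value / closing_price

print(f"Capital raised at the offer price: ${capital_raised / 1e6:.1f} million")
print(f"First-day gain: {first_day_gain:.1%}")  # close to the 'almost 32%' reported
print(f"Implied shares outstanding: {implied_shares_outstanding / 1e6:.1f} million")
```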

So, within a few short years of HP’s rejection of Wozniak’s personal computer idea, he and Jobs had surpassed Hewlett and Packard as successful industrial entrepreneurs – reaching a level of fame closer to that of Henry Ford and Robert McNamara.

After Fairchild Semiconductor and Intel, Apple started something in entrepreneurship and in technological progress with a revolutionary impact not only on Silicon Valley but on American consumers: according to U.S. Census Bureau data, by 1984, 8.2% of American households had a computer; 5 years later, in 1989, the figure had grown to 15%; by 1993, it was 22.8%; it then reached 36.6% in 1997 and 42.1% in 1998; and by 2000, over half of American households, or 51%, had a computer.

(“Home Computers and Internet Use in the United States: August 2000”, September 2001, U.S. Census Bureau)

Still, credit for Apple’s innovative success is due not only to Jobs, Wozniak and Apple’s employees, but also to others outside of Apple, such as Silicon Valley pioneer Douglas Engelbart, who in the mid-1950s, as in Part 5 (i), had encountered the lack of interest in computer development on the part of UC Berkeley, Stanford and Hewlett-Packard, but persevered and in the early 1960s invented the computer mouse; after Engelbart, it was a group of graduates of Stanford’s interdisciplinary product design program who in the early 1980s turned the expensive and unreliable mouse into the inexpensive and very helpful user device Steve Jobs wanted:

“Dean Hovey was hungry. His young industrial design firm, Hovey-Kelley Design, had been working on projects for Apple Computer for a couple of years but wanted to develop entire products, not just casings and keyboards. Hovey had come to pitch Apple co-founder Steven Jobs some ideas. But before he could get started, the legendary high-tech pioneer interrupted him. “Stop, Dean,” Hovey recalls Jobs saying. “What you guys need to do, what we need to do together, is build a mouse.”

Hovey was dumbfounded. A what?

Jobs told him about an amazing computer, code-named Alto, he had just seen at Xerox’s Palo Alto Research Center (PARC). In early 1980, most computers (including Apple’s) required users to memorize text commands to perform tasks. The Alto had a graphical user interface—a symbolic world with little pictures of folders, documents and other icons—that users navigated with a handheld input device called a mouse. Jobs explained that Apple was working on two computers, named Lisa and Macintosh, that would bring that technology to market. The mouse would help revolutionize computers, making them more accessible to ordinary people. …

Just one problem: a commercial mouse based on the Xerox technology cost $400, malfunctioned regularly and was nearly impossible to clean. That device—a descendant of the original computer mouse invented by Douglas Englebart at the Stanford Research Institute in the early 1960s—was a masterpiece of high-concept technology, but a hopeless product. Jobs wanted a mouse that could be manufactured for $10 to $35, survive everyday use and work on his jeans.

“We thought maybe Steve wasn’t getting enough meat in his diet,” says Jim Sachs, a founding member of Hovey-Kelley, “but for $25 an hour, we’d design a solar-powered toaster if that’s what he wanted.” …

They did. The mouse’s evolution “from the laboratory to the living room,” as one of its designers puts it, is not well known—even some Apple fanatics aren’t familiar with it—but it reveals something of the personalities of its designers, the Stanford program that trained them and even the history of Silicon Valley. Everyone knows that the University has helped shape the region, but the influence is often described as a function of great individuals like Frederick Terman…

When Hovey-Kelley was asked to design the Apple mouse, the firm was a two-year-old start-up. Hovey and David Kelley, as well as most of the firm’s other early members, had met as graduate students in Stanford’s product design program. An interdisciplinary program that combines mechanical engineering, art and, often, math, physics and psychology, it was founded in 1958 by Robert McKim. …

… The Apple mouse transformed personal computing. Although the expensive Lisa flopped, the Macintosh, released in 1984, made the graphical user interface the industry standard. Microsoft responded with Windows, and its own mouse—also engineered by Jim Yurchenco. “We made a mouse mass-producible, reliable and inexpensive,” says Sachs, “and hundreds of millions of them have been made.””

(“Mighty Mouse”, by Alex Soojung-Kim Pang, March/April 2002, Stanford Magazine)

For his part, Steve Wozniak credited his success in building the first Apple computer to his having been “extremely lucky”:

“I built the first Apple prototype myself, before there was suggestion to start a company. I gave out schematics and code listings of it at the Homebrew Computer Club. …

In 1975 I decided to build a full computer, that would be able to run a programming language. In 1970 I’d told my dad that someday I’d own a 4K computer capable of running Fortran programs, which was my favorite high school pastime. We didn’t have computers in our high school, but my electronics teacher arranged for me to visit a company in Sunnyvale and program a computer there once a week. Due to not having money, I couldn’t consider a $370 Intel 8080 microprocessor. But MOS Technology came out with the 6502 for $20. More important, in a day when there was no store where you could actually buy a microprocessor, the new 6502 was to be introduced and sold over the counter at a show, Wescon, in San Francisco. All of this was extremely lucky for me.

So to build my first Apple computer (I’d actually built a smaller computer of my own design with no microprocessor, the “Cream Soda Computer,” in 1970) by joining my terminal (input and output) with the 6502 microprocessor and some RAM. I chose dynamic RAM whereas all the other cheap hobbiests chose static RAM. My goal in any design was to minimize the board space and chip count. Well, the new 4K-bit dynamic RAMs were the first RAMs to be cheaper, per bit, than magnetic core memories. It was a change in technology as significant as the scientific calculators of Hewlett Packard (which I helped design), which totally replaced slide rules.”

(“Letters-General Questions Answered”, last updated: July 18, 2001, Woz.org)

From what he said above, Wozniak’s extreme luck lay in the MOS Technology microprocessor, which not only came onto the market just when he wanted to build a full computer, but was also being sold over the counter at the Wescon show in San Francisco, where he could go buy it for $20, at a time when other microprocessors were not only expensive but unavailable at any retail store.

Besides these, Wozniak mentioned several favorable factors: he had programmed computers since high school, with his electronics teacher’s help in getting access to a computer at a Sunnyvale company; he had had chats with his father about computers; he had designed and built a small computer, the “Cream Soda Computer”, without a microprocessor in 1970; he had experience at Hewlett-Packard helping design scientific calculators; and, when he chose to use dynamic RAM as the computer memory, there were the new 4K-bit dynamic RAMs, “the first RAMs to be cheaper, per bit, than magnetic core memories” – not unlike the MOS Technology microprocessor, a product both new and affordable.

Of these factors, I would say that the emergence of the MOS Technology microprocessor and of the new 4K-bit dynamic RAMs were “lucky” for Wozniak, because the timing of the market availability and affordability of these new products coincided with his need for them. The other factors were primarily consequences of his talent, skills and studious work: he got to do calculator design at HP because he had the technical skills and the prior experience of designing and building a small computer without a microprocessor in 1970, and that prior experience no doubt benefited from his familiarity with computers, having programmed regularly in his high school years.

Of particular relevance to my earlier comment about Howard Aiken’s intent to start a microcomputer company long before the microprocessor was invented, Wozniak showed in 1970 that one could build a small computer without a microprocessor.

In addition, though Wozniak did not delve into detail in the above quote from his response to an email question, his “favorite high school pastime” of computer programming and his later building a computer on his own probably owed something to family influence, given that his father, with whom he had discussions as quoted, was an electrical engineer at Lockheed Missiles and Space Company.

In fact, on other occasions Wozniak has described in great detail how he was passionately nurtured by his father during childhood and youth; his father tutored him on electronics from before he was 4 years old, and had him start his first electronics building project at the age of 6, something that instilled in him an exciting sense of superiority over other kids; from then on it was project after project, through elementary school and through the 8th grade, guided by his “single greatest influence” – his father:

“The other thing my dad taught me was a lot about electronics. Boy, do I owe a lot to him for this. He first started telling me things and explaining things about electronics when I was really, really young—before I was even four years old. This is before he had that top secret job at Lockheed, when he worked at Electronic Data Systems in the Los Angeles area. One of my first memories is his taking me to his workplace on a weekend and showing me a few electronic parts, putting them on a table with me so I got to play with them and look at them. …

… In fact, my very first project—the crystal radio I built when I was six—was really all because of my dad. It took me a very long time in my life to appreciate the influence he had on me. He started when I was really young, helping me with these kinds of projects.

Dad was always helping me put science projects together, as far back as I can remember. When I was six, he gave me that crystal radio kit I mentioned. It was just a little project where you take a penny, scrape it off a little, put a wire on the penny, and touch it with some earphones. Sure enough, we did that and heard a radio station. Which one, I couldn’t tell you, but we heard voices, real voices, and it was darned exciting. I distinctly remember feeling something big had happened, that suddenly I was way ahead—accelerated—above any of the other little kids my age. And you know what? that was the same way I felt years later when I figured out how resistors and lightbulbs worked.

All through elementary school and through eighth grade, I was building project after electronic project. There were lots of things I worked on with Dad; he was my single greatest influence.”

(Steve Wozniak with Gina Smith, iWoz: Computer Geek to Cult Icon, 2006, W. W. Norton & Company)

In the above quote from his autobiography, “Woz” mentioned that his father’s work at Lockheed was a “top secret job”.

His dad was in the missile program and never told him any details about it, in an era when even the space program was “top secret”, as Woz explained:

“I did know Dad was an engineer, and I knew he worked in the missile program at Lockheed. That much he said, but that was pretty much it. Looking back, I figure that because this was in the late 1950s and early 1960s at the height of the Cold War, when the space program was so hot and top secret and all, probably that’s why he couldn’t tell me anything more about it. What he worked on, what he did every day at work, he’d say absolutely nothing about. Even up to the day he died, he didn’t give so much as a hint.”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

The space program could not have made Woz’s dad so tight-lipped all his life, could it? His father’s work was on Lockheed’s submarine-launched Polaris missile program, mentioned earlier, as Woz later figured out:

“Now, on my own, I managed to put together little bits and pieces. I remember seeing NASA-type pictures of rockets, and stuff related to the Polaris missile being shot from submarines or something, but he was just so closemouthed about it, the door slammed there.

I tell you this because I’m trying to point out that my dad believed in honesty. Extreme honesty. Extreme ethics, really. That’s the biggest thing he taught me. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

As discussed earlier and in Part 5 (ii), Lockheed Missiles and Space Company was founded in 1956 in the nascent Silicon Valley region around the time of the semiconductor industry’s arrival there.

Woz’s father came to this company in 1958 from Southern California, when Woz was 7 years old, the family moving to a house on Edmonton Avenue in Sunnyvale, “in the heart of” Silicon Valley and in “the best climate in America”:

“We spent most of my early years in Southern California, where my dad worked as an engineer at various companies before the secret job at Lockheed.

But where I really grew up was Sunnyvale, right in the heart of what everyone now calls Silicon Valley. Back then, it was called Santa Clara Valley. I moved there when I was seven. … Our street, Edmonton Avenue, was just a short one-block street bordered by fruit orchards on three of four sides. …

When I think of that street, looking back, I think it was the most beautiful place you could imagine growing up. It wasn’t as crowded back then, and boy, was it easy to get around. It was as moderate of temperature as anywhere else you could find. In fact, right around the time I moved there—this was 1958—I remember my mother showing me national articles declaring it to be the best climate in America. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

Woz told how his father, “an extremely good teacher and communicator”, gave him “classical electronics training” from the beginning, starting when he was 7:

“Because my dad was an engineer, there were all kinds of interesting things lying around my house. And when you’re in a house and there are resistors lying around everywhere, you ask, “What’s that? What’s a resistor?” And my dad would always give me an answer, a really good answer even a seven-year-old could understand. He was just an extremely good teacher and communicator.

He never started out by trying to explain from the top down what a resistor is. He started from the beginning, going all the way back to atoms and electrons, neutrons, and protons. He explained what they were and how everything was made from those. I remember we actually spent weeks and weeks talking about different types of atoms and then I learned how electrons can actually flow through things—like wires. Then, finally, he explained to me how the resistors work—not by calculations, because who can do calculations when you’re a second grader, but by real commonsense pictures and explanations. You see, he gave me classical electronics training from the beginning. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

With the training and guidance of his father, Woz excelled in science projects at elementary school age. In a 2012 article, he described his accomplishments during those years:

“I enjoyed early electronic kits with buttons and buzzers but that was a mild start which could have gone in other directions. Science fair projects in elementary school, really solidified my direction. A couple of simple projects, a flashlight apparatus with rubber bands instead of solder, and an electrolysis project were also not determining of my real interest. But I found a journal in a hall closet with descriptions of binary numbering and logic gates and storage devices.

When I discovered that a 9-year old could understand this stuff, I knew it would be my passion forever. I didn’t think there were jobs in computers but I would love them as a pastime. This interest was solidified by large construction projects (ham radio transmitter and receiver from kits after learning and getting my license), atomic electron orbital display (92 lights, 92 switches, tons of relays, some diodes for logic), a tic-tac-toe computer (about 100 transistor circuits for rules that I made from playing games, although later in life I minimized it to about 10-20 rules for simplicity), a 10-bit binary adder-subtractor. In no case did I copy existing logic or circuits and that forced me to learn it all well.”

(How Steve Wozniak Became the Genius Who Invented the Personal Computer, by Steve Wozniak, July 17, 2012, Gizmodo)

Very impressively, Woz not only played the game of tic-tac-toe but built a “tic-tac-toe computer” with about 100 transistor circuits for playing it – well over a decade before building his own version of the early commercial video game “Pong” and designing the first version of the “Breakout” video game, as mentioned earlier. Most impressively, as he recalled, Woz did the projects by learning the material well and doing the work on his own, not copying existing designs.

In building his “tic-tac-toe computer”, Woz benefited not only from his father’s teaching on the logic of transistors and circuits and on how to build them, but also from the assistance of none other than the Silicon Valley-trendsetting Fairchild Semiconductor, which, courtesy of his father, gave Woz hundreds of “cosmetic defect” diodes and transistors for free; a small sketch of this kind of gate-based decision logic follows the quote below:

“In sixth grade my father taught me how transistors work, leading into logic circuits. I learned how to fashion OR gates from resistors or diodes, AND gates from diodes, and invertors from transistors.

Although they were still expensive, my dad got local transistor companies (Fairchild) to donate “cosmetic defects” of hundreds of diodes and transistors to me. My dad taught me how gates could make decisions based on inputs. He said how you could combine all the inputs of a tic-tac-toe game (which squares had “X”, which had “O” and which were empty) and gates could decide the best response. Unfortunately I didn’t come up with a great simplification (along the lines I used for my 6th graders last year) and it took hundreds of gates laid out on a 3’ by 4’ piece of plywood with components soldered to nails. I tried hard but couldn’t get the “tic-tac-toe computer” into the 6th grade science fair.”

(“THE WOZ INTERVIEW!”, by Auri Rahimzadeh, 1995, Woz.org)
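Purely as an illustration of the kind of gate-based decision-making Woz described – combining the “X”, “O” and empty inputs of the board through AND and OR gates to pick a response – here is a minimal, hypothetical Python sketch. The single rule shown (complete or block a line of two marks with an empty third square) is my own simplification for illustration, not Woz’s actual circuit or rule set.

```python
# A hypothetical, simplified sketch of gate-style tic-tac-toe logic --
# not Woz's actual circuit. Board squares are indexed 0..8; the inputs are
# three boolean "wires" per square, the same kind Woz described:
# is_x[i], is_o[i], is_empty[i].

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def AND(*inputs): return all(inputs)   # an AND gate
def OR(*inputs):  return any(inputs)   # an OR gate

def respond(is_x, is_o, is_empty):
    """Pick a square to play as 'O' with one combinational rule:
    if two squares of a line hold 'O' and the third is empty, play there (win);
    otherwise, if two hold 'X' and the third is empty, play there (block)."""
    for want in (is_o, is_x):                      # first try to win, then to block
        for a, b, c in LINES:
            for t, u, v in ((a, b, c), (b, c, a), (a, c, b)):
                if AND(want[t], want[u], is_empty[v]):
                    return v
    if OR(*is_empty):                              # any empty square left?
        return is_empty.index(True)                # fall back to the first one
    return None                                    # board is full

if __name__ == "__main__":
    x = [True, True] + [False] * 7                 # "X" on squares 0 and 1
    o = [False] * 9
    empty = [not (a or b) for a, b in zip(x, o)]
    print(respond(x, o, empty))                    # prints 2: block the top row
```

A hardware version of such logic would of course be fixed wiring rather than loops, which is why, as Woz recalled, it took hundreds of gates on plywood.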

Steve Wozniak was born on August 11, 1950, as in his Encyclopædia Britannica biography cited earlier. Since 6th graders are typically 11-12 years old, Woz’s “tic-tac-toe computer” was probably made in 1961-1962. That would be the time when Howard Aiken retired from Harvard and most likely discussed with Cuthbert Hurd starting a “microcomputer computer company”, with a Lockheed assistant director of engineering doing the design work – in Sunnyvale, where Aiken was a consultant at Lockheed Missiles and Space Company.

Aiken’s goal unfulfilled in his lifetime would eventually be achieved by a Lockheed Missiles and Space Company engineer’s son, a 6th grader at that earlier time but already making a special “tic-tac-toe computer”.

In his high school years, Wozniak learned not only programming but also how to design computers on his own, as he recalled in the 2012 article cited earlier:

“In high school I got to program a computer and came across a manual for an existing minicomputer. I took my elementary school logic experience and tried to teach myself how to design a computer, given its architecture. I had no books on how to do this. I shut my door and worked alone. After a few tries over months, I had a pretty decent design with the chips of the day.

Then I started designing every minicomputer made. I’d design them over and over, making a game to save parts. I had no books but came up with good techniques because it was for myself. …”

(Steve Wozniak, July 17, 2012, Gizmodo)

The first machine Woz designed and built that was close to a real computer was an Adder/Subtractor in his 8th-grade year. To Woz’s disappointment, it won him only an honorable mention at the Bay Area Science Fair, where his experience made him feel that the competition’s judging outcomes were unfairly predetermined:

“The Adder/Subtractor wasn’t more complicated in terms of size or construction time than the tic-tac-toe machine, but this project actually had a goal that was closer to real computing. A more important purpose than tic-tac-toe. …

My project had a function, a real function that was useful. You could input numbers, add or subtract one, and see your answer.

But here’s the thing. I took it down to the Bay Area Science Fair one night, to set it up before the day of judging. Some people showed me where to put it and asked me if I’d like to tell them about it. I told them no, figuring that I’d just tell them the story on judging day. By then I’d gotten kind of shy. Looking back, I think I may have turned down the judges without knowing it.

When I showed up on judging day, all the projects already had their awards. The judging had already happened somehow! I had an honorable mention, and there were three exhibits that had higher awards that mine. I saw them and remember thinking they were trivial compared to mine, so what happened? I then looked in the fair brochure and those three were all from the school district that was putting on the fair.

I thought, Hey, I’ve been cheated. But that night, I showed the machine and talked to lots of people—including, I’m sure, the real judges—and it seemed like they really understood how big my project was, I mean, it was great and I knew it and everyone knew it. I was able to explain how I’d used logic equations and gates and how I’d combined gates and transistors with binary number (1s and 0s) arithmetic to get the whole thing working.”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

Fortunately for Woz, the U.S. Air Force gave him its top award for the Bay Area Science Fair. The ‘VIP’ treatment that came with the Air Force award gave this 13-to-14-year-old a real boost of confidence, as well as a “love for flying”; as Woz recalled in his autobiography, “that Adder/Subtractor was such a key project in my getting to be the engineer who ended up building the first personal computer”:

“After that, the Air Force gave me its top award for an electronics project for the Bay Area Science Fair, even though I was only in eighth grade and the fair went up to twelfth grade. As part of the award, they gave me a tour of the U.S. Strategic Air Command Facility at Travis Air Force Base. And they gave me a flight in a noncommercial jet, my first-ever flight in any plane. I think I might have caught my love for flying then.

When I look back, that Adder/Subtractor was such a key project in my getting to be the engineer who ended up building the first personal computer. This project was a first step to that. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

And recall, as noted earlier, that in his autobiography Wozniak referred to his Lockheed missiles engineer father as his “single greatest influence”; in awarding Woz for his 8th-grade invention, the first step towards becoming “the engineer who ended up building the first personal computer”, the U.S. Air Force likely saluted not only Woz’s accomplishment but also his father’s engineering career associated with the Air Force.

In fact, his father had helped him come up with the initial idea of building the Adder/Subtractor, according to Woz in an interview over a decade before his autobiography; a sketch of the 1-bit adder logic he mentions follows the quote below:

“In eighth grade my dad showed me a book of computer reports which were all interesting to me. I learned Boolean Algebra basics there. The book had logic diagrams of a binary adder (1 bit with carry in and out) and of a binary subtractor. We came up with the idea of building one. …”

(Auri Rahimzadeh, 1995, Woz.org)
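For readers unfamiliar with the logic diagrams mentioned – a 1-bit binary adder with carry in and carry out – here is a minimal sketch of the standard textbook full adder, written as Boolean gate equations in Python; it is offered only as a generic illustration of that kind of circuit, not a reconstruction of Woz’s Adder/Subtractor.

```python
# A generic, textbook 1-bit full adder written as Boolean gate equations --
# an illustration only, not a reconstruction of Woz's Adder/Subtractor.

def full_adder(a, b, carry_in):
    """Add bits a and b with carry_in; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in                      # two XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))      # AND and OR gates
    return sum_bit, carry_out

def add_words(a_bits, b_bits):
    """Ripple-carry addition of two equal-length, least-significant-first bit lists."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

if __name__ == "__main__":
    # 6 (0110) + 3 (0011) = 9 (1001), bits listed least significant first
    print(add_words([0, 1, 1, 0], [1, 1, 0, 0]))    # ([1, 0, 0, 1], 0)
```

Chaining such 1-bit stages, with subtraction handled by complementing one input, is the general idea behind a multi-bit Adder/Subtractor of the kind Woz built.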

Courtesy of his father, during his high school years Fairchild Semiconductor, which had given him the “cosmetic defect” transistors for his 6th-grade tic-tac-toe computer, continued to provide Woz with help in designing computers:

“… One time in high school, I was trying to get chips for a computer I’d designed. My dad drove me down to meet an engineer he knew at Fairchild Semiconductor, the company that invented the semiconductor. I told him I’d designed an existing minicomputer two ways. I found out that if I used chips by Sygnetics (a Fairchild competitor), the computer had fewer chips than if I used Fairchild chips.

The engineer asked me which Sygnetics chips I’d used.

I told him the make and model number.

He pointed out that the Sygnetics chips I’d used in the design were much larger in physical size, with many more pins and many more wires to connect, than the equivalent Fairchild chips.

I was stunned. Because he made me realise in an instant that the simpler computer design would really have fewer connections, not simply fewer chips. So my goal changed, from designing for fewer chips to trying to have the smallest board, in square inches, possible.

Usually fewer chips means fewer connections, but not always.”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

As Wozniak recalled, even in his high school years he already had the goal of designing computers that were small in size, and a conversation with a Fairchild Semiconductor engineer gave him the right technical perspective on the subject.
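To make that point concrete with purely hypothetical numbers (not taken from Woz’s designs): a design with fewer chips can still require more connections if its chips have more pins, as the small comparison below shows.

```python
# Purely hypothetical numbers, only to illustrate the point: a design with
# fewer chips can still need more connections if its chips have more pins.
designs = {
    "A (more, smaller chips)": {"chips": 18, "pins_per_chip": 16},
    "B (fewer, larger chips)": {"chips": 14, "pins_per_chip": 24},
}

for name, d in designs.items():
    connections = d["chips"] * d["pins_per_chip"]
    print(f'{name}: {d["chips"]} chips, {connections} pins to wire')
# A: 18 chips, 288 pins to wire
# B: 14 chips, 336 pins to wire -- fewer chips, yet more connections
```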

Thus, naturally, designing and building a microcomputer or personal computer was the next step for Wozniak when the technological components, most importantly the microprocessor, became available, as he recalled:

“… The Data General NOVA came out and had a very different architecture which wound up taking half as many chips, due to being designed around available parts very well. it was a very structured architecture. I told my dad I’d someday have a 4K NOVA (enough for a programming language). He said it cost as much as a house down payment. Throwing down the gauntlet, I countered that I’d live in an apartment.

The day I discovered that microprocessors were much like these minicomputers I’d designed back in high school, the formula for an affordable 4KB computer popped into my head instantly, remembering my old goal. Thankfully I’d been through a lot of stages leading up to this, building games around my TV (the only free output device) and terminals for the Arpanet. You do a lot when you love electronics and have little chance of ever having a girlfriend or wife.”

(Steve Wozniak, July 17, 2012, Gizmodo)

As Wozniak indicated, even after he became an experienced amateur computer designer, he continued to consult his father on important projects he wanted to do.

And of course, as discussed earlier, in order to persuade Steve Wozniak to start Apple Computer with him, Steve Jobs went to persuade Wozniak’s family first – no doubt as I have reviewed, Woz’s father was the key!

A reason why Woz’s electrical engineer father had so much influence on his son’s development in the computer field may be that the father was in fact a pioneer in the field of integrated circuits, as Woz recalled:

“Because my father was involved with the earliest IC’s in regard to his lockheed work, I went to trade shows when only 10 years old and saw the first chips with 2 transistors on one chip of silicon (germanium), and the promise of 6 to 10 transistors on a chip in the near future. Over the years, my father had manuals around the house that caught my attention, with the early IC’s. During high school I discovered minicomputer manuals and started putting chips together to make computer designs. I was totally self taught in this regard, designing alone in my room.”

(Auri Rahimzadeh, 1995, Woz.org)

Recall that the integrated circuit was invented independently by Robert Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments in 1958-1959. In Woz’s case, when he was 10 years old – presumably in 1960 or 1961, not long before starting his tic-tac-toe computer project – his father was involved in IC-related work at Lockheed and he got to attend trade shows with his father; then over the years his father kept various IC manuals around the house, Woz discovered minicomputer manuals in high school, and he learned to design computers alone in his room.

As reviewed earlier, the Harvard early computer pioneer Howard Aiken’s later consultancy for Lockheed Missiles and Space Company focused on switching circuit design. I would not be surprised if, in fact, Aiken was a consultant for some of the projects Wozniak’s father was a member of – as quoted earlier from Woz’s autobiography, due to secrecy his father never told him much about the work.

A very relevant and intriguing question arising from my review of this history is: had Aiken, Cuthbert Hurd, and the Lockheed assistant director of engineering already doing the computer design work gone forward in the 1960s with starting a new company, would Woz’s father have been a prospective top engineer to recruit to this “first microcomputer computer company in the world”?

I understand that Woz’s father could have been only one of many engineers Lockheed had in this field, and not as distinguished as Aiken, the consultant with a prominent pioneer status. So what would have been the chance that this one engineer, with a talented young son learning to do electronics projects at home, would be chosen for such a brave new venture in those early years?

One may reason that Steve Wozniak’s father was already famous in some way, and that this could have been a factor favoring him. In his autobiography, Woz told of his father’s former fame at the California Institute of Technology as the best football quarterback that university ever had:

“According to my birth certificate, my full name is Stephan Gary Wozniak, born in 1950 to my dad, Francis Jacob Wozniak (everyone called him Jerry), and to my mom, Margaret Louise Wozniak. My mother said she meant to name me Stephen with an e, but the birth certificate was wrong. So Stephen with an e is what I go by now.

I forgot to mention before that my dad was kind of famous, in his own way. He was a really successful football player at Caltech. People used to tell me all the time that they used to go to the games just to see Jerry Wozniak play. …

Once, at De Anza, my quantum physics teacher said, “Wozniak. That’s an unusual name. I knew a Wozniak once. There was a Wozniak who went to Caltech.”

“My father,” I said, “he went to Caltech.”

“Well, this one was a great football player.”

That was my father, I told him. He was the team’s quarterback.

“Yes,” the teacher said. “We would never go to football games. but at Caltech, you had to go just to watch Jerry Wozniak. He was famous.”

You know, I think my dad was the one good quarterback Caltech ever had. He even got scouted by the Los Angeles Rams, though I don’t think he was good enough to play pro. Still, it was neat to hear from a physics teacher that he remembered my dad for his football. It made me feel like I shared a history with him. The teacher once brought me a Caltech paper from back in those days with a picture of my dad in his uniform. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

As told, when Woz was a De Anza College student his quantum physics teacher had known his father Jerry Wozniak from the father’s star football quarterback days at Caltech.

Okay, the fame wasn’t in engineering; but it was rare that a former university star football quarterback became an engineer in the IC/computer fields at a military aerospace company, and it likely made Jerry Wozniak more noticed than his fellow engineers in one respect.

And as Woz’s De Anza quantum physics teacher said, “Wozniak” was “an unusual name” – as a matter of fact it was unusual in the context of the history of early electronic computers.

The first general-purpose electronic computer, whose development involved Howard Aiken’s competitor John von Neumann, the prominent mathematician who has been regarded as “the father of computers” as in Part 5 (i), was named ENIAC, for Electronic Numerical Integrator And Computer.

As ENIAC was being completed in the mid-1940s, the project group at the University of Pennsylvania’s Moore School of Electrical Engineering planned for the next computer to be named EDVAC; soon von Neumann, returning to the Institute for Advanced Study at Princeton, started the IAS computer project; and of particular interest, by 1950 the National Bureau of Standards developed the computers SEAC and SWAC as reviewed in Parts 5 (i) & (ii).

Most of these early computer names end with “AC”; but ENIAC was the only one ending with “NIAC”, and the name Wozniak is similar in the sense that it also has two syllables, the second of which, “niak”, differs from “NIAC” by only the last letter and is pronounced the same.

While it appeared to be a random coincidence that a person with a name similar to that of the famous first electronic computer became an engineer in the computer field, and that his son bearing that name later became famous for developing the first personal computer, there were other interesting coincidences.

As in Part 5 (i), the RAND Corporation in Santa Monica, the Cold War think-tank where von Neumann was a leading strategist, in the early 1950s developed a computer following von Neumann’s design, and named it in honor of von Neumann as JOHNNIAC, i.e., also ending with “NIAC”. JOHNNIAC’s full name is: John v. Neumann Numerical Integrator and Automatic Computer.

(“JOHNNIAC”, Wikipedia)

It looks interesting that, of the various early electronic computers with different names, the one named after “the father of computers” John von Neumann also had a name ending in “NIAC”, similar to “niak”, just like the first one von Neumann was involved in developing.

Was there some sort of pattern?

There were only 3 other early electronic computers that had names ending in “NIAC”: MANIAC I, II & III. Their full names are: Mathematical Analyzer Numerical Integrator and Computer I, II & III.

(“List of vacuum tube computers”, Wikipedia)

By inspecting the name acronyms, one can see that many end with “AC” because their names end with “and Computer” or “Automatic Computer”, whereas the few ending with “NIAC” also have “Numerical Integrator” ahead of it.

Like JOHNNIAC, MANIAC I was based on von Neumann’s computer design, and the other two were subsequent improvements: MANIAC I & II were built at the Los Alamos National Laboratory in the 1950s, and MANIAC III in the 1960s at the University of Chicago.

(“MANIAC I”, “MANIAC II”, and, “MANIAC III”, Wikipedia)

So these “NIAC”s all had influence from von Neumann in one way or another.

But the influence was more concrete. Besides von Neumann’s taking part in ENIAC’s development and JOHNNIAC’s naming in his honor, the influence was also a result of von Neumann’s leading roles in both computer development and numerical computing for nuclear bomb development: when ENIAC was completed, von Neumann brought together his ENIAC colleagues and his colleagues from the Manhattan Project, and one of the latter, the physicist Nicholas Metropolis, took up computing on ENIAC for hydrogen bomb development, led the work that became known as the Monte Carlo method of statistical computing, and then started the MANIAC computer project (a toy illustration of the Monte Carlo sampling idea follows the quote below):

“In 1942 and 1943, Metropolis accepted an appointment as a research instructor at the University of Chicago, where he worked with James Franck. Franck was a Nobel Laureate in physics, having received the award with Gustav Hertz in 1925 for discovering the laws that governed the impact of an electron upon an atom.

In early 1943, Robert Oppenheimer convinced Metropolis to come to Los Alamos. His first assignment was to develop equations of state for materials at high temperatures, pressures, and densities.

During World War II, scientists at Los Alamos used slow, clanking, electromechanical calculators when designing the first atomic weapons. These calculators proved fragile, and soon Metropolis and Richard Feynman were spending some of their time repairing these calculators.

At the end of World War II, mathematician John von Neumann brought together the developers of the first electronic computer, known as ENIAC, and several Los Alamos scientists, Metropolis among them. It then fell upon Stanley Frankel and Metropolis to develop a problem for ENIAC to solve: in 1945, the two men had the computer run complex calculations involving the design of the first hydrogen bomb.

Metropolis returned to Chicago, where he continued to work with ENIAC. Using the germ of an idea conceived by Enrico Fermi some 15 years earlier, Metropolis in 1948 led a team that carried out a series of statistical calculations on ENIAC. These statistical calculations would become collectively known as the Monte Carlo method of calculation, which since then has helped address issues such as traffic flow, economic problems, and the development of nuclear weapons.

… The Mathematical Numerical Integrator and Computer—MANIAC for short—became operational on March 15, 1952. …”

(“The Metropolis Fellowship: Who Was Nick Metropolis?”, Issue 2, 2011, National Security Science, Los Alamos National Laboratory)
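As a general illustration of what “Monte Carlo” statistical computing means – random sampling used to estimate a quantity – here is a toy Python sketch estimating π; it assumes nothing about, and is far simpler than, the actual Los Alamos calculations described above.

```python
# A toy Monte Carlo estimate of pi by random sampling -- a generic illustration
# of the statistical-sampling idea, far simpler than the Los Alamos calculations.
import random

def estimate_pi(num_samples: int) -> float:
    """Sample points uniformly in the unit square; the fraction landing inside
    the quarter circle of radius 1 approaches pi/4 as the sample count grows."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 1_000_000):
        print(n, estimate_pi(n))   # the estimate improves roughly like 1/sqrt(n)
```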

And so the “NIAC”s received such names because of these early electronic computers’ goal of enabling large-scale numerical computing, as in ENIAC and the MANIACs from “Numerical Integrator and Computer”, and JOHNNIAC from “Numerical Integrator and Automatic Computer”.

John von Neumann’s interest and expertise in numerical computing was what had brought him into the World War II Manhattan Project in the first place, and the need for greater computing power then brought him into the ENIAC project, as quoted again here from the science historian Liesbeth De Mol, with more details than previously in Part 5 (i):

“Von Neumann got particularly interested in computers for doing numerical calculations in the context of theoretical physics and thus understood, quite early, that fast computing machines could be very useful in the context of applied mathematics.

In 1943, during World War II, von Neumann was invited to join the Manhattan project – the project to develop the atomic bomb – because of his work on fluid dynamics. He soon realized that the problems he was working on involved a lot of computational work which might take years to complete. He submitted a request for help, and in 1944 he was presented a list of people he could visit. He visited Howard Aiken and saw his Harvard Mark I (ASCC) calculator. He knew about the electromechanical relay computers of George Stibitz, and about the work by Jan Schilt at the Watson Scientific Computing Laboratory at Columbia University. These machines however were still relatively slow to solve the problems von Neumann was working on. But then he accidentally met Herman Goldstine at Aberdeen railwaystation. While waiting for their train on the platform, Goldstine told him about the top-secret ENIAC project at the Moore school [13]. Von Neumann got very excited, and Goldstine made arrangements (providing the necessary clearance document) so that von Neumann could visit the ENIAC. …”

(“Doing Mathematics on the ENIAC. Von Neumann’s and Lehmer’s different visions”, by Liesbeth De Mol, in E. Wilhelmus, I. Witzke (eds.), Mathematical practice and development throughout History, 2008, Logos Verlag)

Jerry Wozniak probably had nothing to do with numerical computation. But as reviewed earlier, he was an engineer working at various companies in Southern California – where the Cold War think tank RAND Corporation and its JOHNNIAC were based – when he was recruited in 1958 to the Lockheed Missiles and Space Company, founded in Northern California in 1956, which would become a top developer of nuclear missiles. Moreover, von Neumann, who died in February 1957, had been the U.S. Air Force’s “principal adviser” on nuclear weapons, including on intercontinental ballistic missiles (ICBM), as previously quoted in Part 5 (i):

“… The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. …”

(“Passing of a Great Mind: John von Neumann, a Brilliant, Jovial Mathematician, Was a Prodigious Servant of Science and His Country”, by Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

To be specific, what Lockheed Missiles and Space Company produced, as noted in Woz’s autobiography and in my earlier review of Howard Aiken’s Lockheed consultancy, was the more specialized Polaris nuclear missile, a submarine-launched ballistic missile (SLBM) rather than an ICBM:

“Polaris missile, first U.S. submarine-launched ballistic missile (SLBM) and the mainstay of the British nuclear deterrent force during the 1970s and ’80s.”

(“Polaris missile”, Encyclopædia Britannica)

In any case, when the Los Angeles area-based Lockheed Corporation recruited the local Caltech’s former star football quarterback Jerry Wozniak to its new missiles company in Northern California in 1958, it could have been a pure coincidence that his name had “niak” like “NIAC”; but conceivably it could have occurred to the Lockheed management that bringing an engineer with a name resembling those of computers closely associated with von Neumann into the computer field of the missiles program could be a modest ‘memorial rite’ in honor of the recently deceased “great man” – as the early computer science pioneer Louis Fein dubbed him, as in Part 5 (ii) – and a scientific leader of the computer and nuclear missile fields.

In this ‘memorial rite’ scenario, the fact that the 3 MANIAC computers had been developed by von Neumann’s former colleague named “Metropolis” could be relevant because, in a similar play on words, the name Wozniak is ‘Polish’:

“August 11, 1950 – Steve Wozniak (Born)

Steve Wozniak, also known as “Woz,” is a Polish American computer engineer who invented the Apple I and Apple II computers. His invention of the Apple personal computer led to the largest computer revolution in history.”

(“Museum’s Historic Reflections Project Part 2”, August/September 2014, The Polish American News, Polish American Cultural Center, Philadelphia)

I also note that Jerry Wozniak was hired by Lockheed in 1958 when Howard Aiken was already a Lockheed consultant and Aiken’s close friend Louis Ridenour was a top Lockheed executive, having been a director at its Missile System Division, as discussed earlier.

In other words, the former U.S. Air Force chief scientist Ridenour, a Caltech Ph.D. in physics, may have had a hand in the hiring of a famous fellow Caltech alumnus, star football quarterback Jerry “Wozniak”, to become an engineer in computer technology in the new nuclear missiles program in Northern California, for which Aiken was a prominent consultant.

But then when Ridenour unexpectedly died in his sleep, in May 1959 in Washington, D.C. after an evening of drinking with Aiken as reviewed earlier, Aiken likely found that his clout and options suddenly diminished; in other words, starting “the first microcomputer computer company in the world” became harder without the help of his powerful and influential friend Louis Ridenour.

Subsequently, when Jerry Wozniak stayed in the secretive missiles program – i.e., did not get to join any company developing microcomputers as Aiken had wanted to do – and his son later became famous for inventing the personal computer, the little ‘memorial rite’, if real, may have turned into a monstrous ‘secret ritual’.

It would have been a pretty good ‘memorial rite’ in the 1960s had Aiken started the “first microcomputer computer company in the world” with the help of Cuthbert Hurd and some Lockheed engineers, and with Jerry Wozniak in a significant engineering role: besides giving some publicity to a “Wozniak”, it would have satisfied Aiken’s desire for leading the development of a new generation of electronic computers – he had been beaten by von Neumann at developing the first generation – and doing so with a “niak”, like “NIAC” as in ENIAC, working under him.

But as extensively reviewed earlier, despite Aiken’s desire such a new computer company was never launched, probably because Hurd was either intellectually sympathetic to von Neumann and dismissive of Aiken’s interest in getting rich, or could not agree on ownership terms with Aiken, or because IBM, where Hurd had been a key executive in computer development, was not in favor of a new Aiken computer venture as bad feelings from IBM’s 1940s collaboration with Harvard and Aiken on the Mark I persisted.

As I have commented earlier, the fact that both the playboy businessman and influential IBM board director Sherman Fairchild and the Nobel Prize laureate William Shockley got to start new Silicon Valley companies, while the Harvard-retired, business goal-driven Howard Aiken could not, did not seem fair to the legendary, albeit conservative, early computer pioneer.

Fortunately, while still in Southern California Jerry Wozniak had begun tutoring his 6-year-old son “Woz” on electronics, and by the 1960s when Aiken and Hurd did not go through with their plan to start a new computer company, the teenage Woz was well on his way to growing into a prolific amateur computer designer.

Along the way, Fairchild Semiconductor gave Woz significant help, the U.S. Air Force gave him an award that critically boosted the young boy’s confidence, and the future would look bright for the next generation of “Wozniak”.

Then in 1970, the year Aiken and Hurd again talked about starting a new computer company, this time to develop “a microprocessor and personal computer”, the 19-to-20-year-old Steve Wozniak built his own small computer, the “Cream Soda Computer” – without a microprocessor, as the latter was only then being invented at Intel, as discussed earlier.

From that point on, technologically speaking, what Woz needed to develop the world’s first personal computer – one that would become popular and commercially successful – was the arrival of the microprocessor plus good dynamic RAM, as earlier cited from him.

But before that became reality, in early 1973 Steve Wozniak moved an important step forward when he got his “dream job” designing calculators at Hewlett-Packard, as he later recalled:

“Now, finally, there was a time in my life—a time right after that third year at Berkeley—that I finally got my dream job. But it wasn’t a job building computers. It was a job designing calculators at Hewlett-Packard. And I really thought I would spend the rest of my life there. That place was just the most perfect company.

This was January of 1973, and for an engineer like me, there was no better place to work in the world. Unlike a lot of technology companies, Hewlett-Packard wasn’t totally run by marketing people. It really respected its engineers. And that made sense, because this was a company that had made engineering tools for years—meters, oscilloscopes, power supplies, testers of all types, even medical equipment. It made all the things engineers actually used, and it was a company driven by engineers on the inside so far as what engineers on the outside needed. Man, I loved that.

… the HP 35 was the first scientific calculator, and it was the first in history that you could actually hold in your hand. …”

(Steve Wozniak with Gina Smith, 2006, W. W. Norton & Company)

After receiving, in his earlier years and through his father’s connection, help from Silicon Valley’s technologically and culturally trendsetting company Fairchild Semiconductor, in January 1973 Woz officially became a designer at the Silicon Valley-founding Hewlett-Packard, and so was now ready for Silicon Valley – except that the company he wanted to stay at for life would repeatedly turn down his requests to develop the personal computer there, and it was his friend Steve Jobs on the outside who persuaded his family to convince him to do it with Jobs, on their own.

Two months after Wozniak’s hiring by HP, on March 14, 1973, Howard Aiken died in his sleep at a hotel in St. Louis, Missouri, as reviewed, during a consulting trip to Monsanto, having just quit consulting for Lockheed but still persisting in his quest to make computers smaller.

Recall that in Part 5 (ii), a critical interview with the early computer science pioneer Louis Fein, conducted by Pamela McCorduck on May 9, 1979, has been extensively quoted and reviewed, giving significant glimpses into academic politics in the early years of the emergence of computer science as an academic discipline, from the mid-1950s to the early 1960s.

I met Pamela in the mid-1980s after becoming acquainted with her husband Joseph Traub, who in that earlier year, 1979, had founded the computer science department at Columbia University.

Around 1990, Joe told me that Caltech gave him “one of their Fairchild Scholars”, i.e., a Sherman Fairchild Distinguished Scholarship for an academic visit and a short stay at Caltech.

Now, reading a Caltech brochure, I notice that, surprise, a Sherman Fairchild Foundation grant started the Fairchild Scholars program at Caltech in 1973 – the year Steve Wozniak got his dream job at Hewlett-Packard and Howard Aiken died – and that the scholarship was the original idea of Caltech professor Francis Clauser, someone with the same first name as Woz’s father, Francis Jacob Wozniak:

“This program was established back in 1973 by the gift of $7.5 million from the Sherman Fairchild Foundation. It was named in honor of the founder of the Fairchild Camera and Instrument Corporation and of Fairchild Industries, a man who would himself have been an ideal Fairchild Scholar. He was a pioneer – and an indefatigable inventor – in the fields of photography, aviation, and sound engineering.

Under the terms of the grant, the money was to be used over a period of ten years to underwrite the costs of visits to the Caltech campus of distinguished scholars or of young persons of outstanding promise from the worlds of academia, industry, and government. The appointments were to be made for periods ranging from a term to a year. Francis Clauser, Clark Blanchard Millikan Professor of Engineering, Emeritus, who originally suggested the idea, pointed out how much the members of the Caltech community would benefit from the opportunity to interact with the world’s intellectual leaders. And, of course, the sharing of wisdom and ideas would go both ways.”

(“The Fairchild Scholars Program”, 1981, Engineering & Science, Caltech Office of Public Relations)

In the timeline of this history, the establishment of the Sherman Fairchild Distinguished Scholars program at Caltech brought together Jerry Wozniak’s alma mater and the inventor status of the businessman who had financed the start of Fairchild Semiconductor – the company that later co-invented the integrated circuit – in the same year, 1973, when Jerry’s son Woz became officially employed as a calculator designer at Hewlett-Packard.

This timeline was significant to the Wozniak family because Fairchild Semiconductor had given Woz important help in his growth into an amateur computer designer, and because Jerry, a former star football player at Caltech, had been involved in early integrated circuit development – not at the place of its original invention, Fairchild Semiconductor, but in the secretive Lockheed missiles program.

I hope in 1973 the newly interred Aiken did not take it as a slight by Sherman Fairchild – largest shareholder and powerful board member of IBM, a company excelling and leading in building computers even though its original collaborator, namely Howard Aiken on Mark I at Harvard, did not get to go further.

No, it couldn’t have been a put-down by Sherman Fairchild, because he had passed away on March 28, 1971 – just 10 days short of his 75th birthday, in comparison to Aiken’s dying 5 or 6 days past his 73rd birthday of March 8 or 9, as cited in Part 5 (ii).

(Frank and Suanne Woodring, 2007, Arcadia Publishing)

Still I would remark, in relation to an observation in Part 5 (ii) regarding my first arrival in America and the death of Diana Forsythe, that Sherman Fairchild had lived twice as many final March days as Howard Aiken did.

(Continuing to Next Part)


A review of postings about scientific integrity and intellectual honesty, with observations regarding elite centrism – Part 5: inventions, innovations, and ushering of ‘the new normal’ (ii)

(Continued from Part 5 (i))

When I received my bachelor’s degree in computer science at Sun Yat-sen University in China in 1982, my first choice for Ph.D. study was not a mathematics department – even though the world’s largest and one of the best, at UC Berkeley, was where I later received my Ph.D. – but the Scientific Computing and Computational Mathematics program in the Stanford University computer science department, as discussed in Part 4.

Stanford’s computer science department, then considered the best in the world as I recall, also had its intellectual origin in the National Bureau of Standards’ Institute for Numerical Analysis at UCLA, a historically unique mathematical computing research institution reviewed in Part 5 (i).

George Forsythe, a Stanford numerical analyst instrumental in establishing the department and its founding chairman, had been a key member of the INA at UCLA until the institute was terminated in 1954, at which point he stayed on as a UCLA mathematics faculty member until moving to Stanford in 1957:

“George E. Forsythe was the first regular member of research staff of INA. He came to INA with a strong background in the application of mathematics to meteorology. He was a universalist in the sense that he collaborated with all members of the research staff and with other members of INA. He was very active in the computational aspects of the projects pursued at INA. He also became a leader in the educational program of the Institute. …

George E. Forsythe was one of the senior members of INA who remained with NAR. He soon was given a faculty appointment in the Department of Mathematics, where he was in charge of the educational program in numerical analysis. … In 1957 Forsythe received a very attractive offer of a faculty position at Stanford University, which would enable him to set up a program of his own. He accepted this offer. …”

(Magnus R. Hestenes and John Todd, Mathematicians Learning to Use Computers: The Institute for Numerical Analysis UCLA 1947-1954, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

The NAR cited above, which Forsythe remained with after INA’s closure due to McCarthyism-related politics, was Numerical Analysis Research – the group of former INA members who stayed on as UCLA math faculty members, as mentioned in Part 5 (i).

While at INA, Forsythe utilized his abilities and expertise to take initiatives and lead research projects:

“… Rosser and Forsythe were chiefly responsible for the study of systems of linear equations. Forsythe, in particular, undertook the task of classifying the various known methods for solving systems of linear equations.

Forsythe initiated a study of Russian mathematical progress which led to the publication of a bibliography of Russian books [24]. Pertinent articles by Russians were collected and selected ones were translated into English by C. D. Benster under the editorships of Forsythe and Blanch. Some translations were published commercially [20,42]. Several appeared as NBS reports [49,59,90]. An informal, but important, result of this program was the initiation of a class in Russian for mathematicians at UCLA and INA.”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology)

The above-mentioned project, initiated and led by Forsythe, to translate Russian mathematics articles, and the class for mathematicians to learn the Russian language, must have been quite unusual at the height of the Cold War in the 1950s – it sounded more like the situation in China where, as in Part 4, my future undergraduate thesis adviser Yuesheng Li was a student interpreter for a visiting Russian math professor.

Forsythe’s colleague J. Barkley Rosser, mentioned in the above quote, was at one time INA’s director, as in Part 5 (i), and later, in 1963, became director of the Army Mathematics Research Center at the University of Wisconsin at Madison.

My recollection that Stanford’s computer science department was the best in the early 1980s was true at least by one concrete measure, according to the book on INA history by Magnus R. Hestenes and John Todd: under George Forsythe’s leadership it became “the most influential” CS department in the U.S., attracting almost as many National Science Foundation Fellows as all other CS departments combined:

“… In 1961 he became Professor of Computer Science and Chairman of the Department of Computer Science. Under his leadership, this department became the most influential one in the country, attracting almost as many National Science Foundation Fellows as all other such departments combined. …”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology)

The year cited above, 1961, for Forsythe becoming the computer science department chairman is, however, incorrect: 1961 was when Forsythe founded the Computer Science Division within the Stanford mathematics department; the computer science department itself was founded later, in 1965, a high point of a career that had also included working as an Air Force meteorologist and introducing automatic computing to the Boeing Company:

“George was born on January 8, 1917, in State College, Pennsylvania, and moved as a small boy with his family to Ann Arbor, Michigan. His undergraduate work was at Swarthmore College, where he majored in Mathematics. His experience there had a strong influence on his life. His graduate study was in Mathematics at Brown University where he received his M.S. in 1938 and his Ph.D. in 1941. He then came to Stanford but his first year here was interrupted by service in the Air Force, in which he became a meteorologist. … He spent a year at Boeing where he introduced what may have been the first use of automatic computing in that company. He spent several years in the Institute for Numerical Analysis of the National Bureau of Standards, a special section located on the campus of the University of California, Los Angeles. He joined the Institute because he wanted to watch the development of the Standards Western Automatic Computer (SWAC), one of the first of the digital computers. …

Stanford acquired its first computer in 1953, and research and instruction in numerical mathematics and computation began to develop. Soon after this the Mathematics Department began to search for new leadership in this field, and George Forsythe was the unanimous choice of the faculty. It was in 1957 that he returned to Stanford, joining once again the Mathematics Department, this time as Professor. … Under his leadership, the Computer Science Division of the Mathematics Department was formed in 1961, and he began the slow process of gathering an outstanding group of colleagues.

The culmination of this effort was the founding of the Computer Science Department on January 1, 1965, by which time he had succeeded in attracting a nucleus of leading computer scientists. George was very skillful in bringing together many diverse points of view. … Of all his professional activities, building, and leading the department was closest to his heart. He did, however, contribute his leadership to Stanford in other but related tasks. He served as Director of the Stanford Computation Center from 1961 to 1965.  …

George had a nationwide influence on Computer Science education. The emergence of a discipline of Computer Science is due to his efforts more than to those of any other single person. …”

(“MEMORIAL RESOLUTION: GEORGE ELMER FORSYTHE (1917 – 1972)”, Computer Science Department, Stanford University)

As stated in the above 1972 Stanford memorial resolution on the occasion of his death, Forsythe had been more instrumental than any other person in “the emergence of a discipline of Computer Science”. In other words, Forsythe was not only the founder of a computer science department but in some sense a founding figure of the academic discipline of computer science.

Interestingly, this founding figure had been born in State College, Pennsylvania, the state where the electronic computer was later born; and he had grown up in Ann Arbor, Michigan – of interest to me as the place of intellectual formation of my Ph.D. adviser Stephen Smale, as in Part 2, and coincidentally also the place where the “father of computers” John von Neumann’s daughter Marina von Neumann Whitman has settled, as in Part 5 (i).

Around 1961, Forsythe began to advocate establishing computer science departments in universities, showing clear foresight for their curriculum composition, as recalled by his later Stanford colleague Donald E. Knuth:

“… It is generally agreed that he, more than any other man, is responsible for the rapid development of computer science in the world’s colleges and universities. His foresight, combined with his untiring efforts to spread the gospel of computing, have had a significant and lasting impact; one might almost regard him as the Martin Luther of the Computer Reformation!

In 1961, we find him using the term “computer science” for the first time in his writing:

By that time Forsythe knew that numerical analysis was destined to be only a part of the computing milieu; a new discipline was crystalizing which cried out to be taught. He had come to Stanford as a professor of mathematics in 1957, but now he and Professor John Herriot wanted to hire colleagues interested in programming, artificial intelligence, and such topics, which are not considered mathematics. Stanford’s administration, especially Dean Bowker (who is now Chancellor at Berkeley), also became convinced that computing is important; so George was able to found the Division of Computer Science within the Mathematics Department in 1961.

During that academic year he lectured on “Educational Implications of the Computer Revolution” at Brown University:

“… To think of a computer as made up essentially of numbers is simply a carryover from the successful use of mathematical analysis in studying models…Enough is known already of the diverse applications of computing for us to recognize the birth of a coherent body of technique, which I call computer science…Whether computers are used for engineering design, medical data processing, composing music, or other purposes, the structure of computing is much the same. We are extremely short of talented people in this field, and so we need departments, curricula, and research and degree programs in computer science…I think of the Computer Science Department as eventually including experts in Programming, Numerical Analysis, Automata Theory, Data Processing, Business Games, Adaptive Systems, Information Theory, Information Retrieval, Recursive Function Theory, Computer Linguistics, etc., as these fields emerge in structure…Universities must respond [to the computer revolution] with far-reaching changes in the educational structure.[60]”

… Louis Fein had also perceived the eventual rise of computer science; he had recommended in 1957 that Stanford establish a Graduate School of Computer Science, analogous to the Harvard Business School. …

George argued the case for computer science long and loud, and he won; at Stanford he was in fact “the producer and director, author, scene designer, and casting manager of this hit show.” …”

(“George Forsythe and the Development of Computer Science”, by Donald E. Knuth, August 1972, Volume 15, Number 8, Communications of the ACM)

As Knuth pointed out, Stanford’s Dean Bowker – by the time of Knuth’s 1972 article the Chancellor at UC Berkeley – played an important role in facilitating Forsythe’s founding of the computer science division within the math department.

It was thus sad that Forsythe, a distinguished numerical analyst, scientist, and founding figure of computer science, lived to only 55 years of age, dying on April 9, 1972, of pancreatic cancer.

(“George Elmer Forsythe”, by J. J. O’Connor and E. F. Robertson, November 2010, School of Mathematics and Statistics, University of St Andrews, Scotland)

Forsythe, “almost … the Martin Luther of the Computer Reformation” according to Knuth, was at his death only 2 years older than John von Neumann, the “father of computers”, had been at the time of his own death from cancer in February 1957, as detailed in Part 5 (i).

But at least Forsythe had not been exposed to work-related nuclear radiation, something von Neumann, who had been involved in the Manhattan Project of atomic bomb development, had feared about himself – rather, Forsythe had introduced automatic computing to Boeing aircraft manufacturing.

It turned out that in the 1960s Forsythe had already had skin cancer; still, his 1972 death came as a shock to his colleagues and former Ph.D. students, as they later recalled in July 1997:

“… Appropriately, a minisymposium at SIAM’s 45th Anniversary Meeting at Stanford commemorated the 25th anniversary of Forsythe’s death.

Rather than a formal review of Forsythe’s accomplishments, the memorial minisymposium, organized by Cleve Moler, chief scientist at The MathWorks and Forsythe’s eighth doctoral student, was a sort of Irish wake that celebrated the man as much as his science. …

To begin the conversation, Moler drew on a presentation about Forsythe given several months earlier by James Varah of the University of British Columbia, Forsythe’s 12th student. Varah had spent considerable time in Stanford’s Forsythe archives gathering material about Forsythe’s life and work.

Even after 25 years, the shock of Forsythe’s sudden illness and death seemed fresh to many of his friends. Moler somberly recalled Forsythe “telling me one day in his office that he had a doctor’s appointment that afternoon for a possible ulcer. Two weeks later he was dead.”

Knuth described writing a memorial article for him in the short week between Forsythe’s death and Knuth’s departure for a sabbatical in Norway. “His briefcase was still on top of his desk. In a drawer I found life expectancy rates for skin cancer, a disease he had in the 1960s. He knew he was ill.”

Parlett concurred: “He knew he was dying. He came to Berkeley to see me in 1971, the summer before he died. The visit wasn’t really needed. It was his way of saying goodbye.””

(“Remembering George Forsythe”, by Paul Davis, January 8, 1998, SIAM News, Society for Industrial and Applied Mathematics)

As quoted, Beresford Parlett, the Berkeley numerical analysis professor whom I considered a mentor, had been Forsythe’s Ph.D. student, as had been James Varah, the former UBC computer science department head who helped to offer me my job there following my Berkeley Ph.D. in 1988, as in Part 4.

The “minisymposium” in memory of Forsythe was held during SIAM’s 45th Anniversary Meeting at Stanford, as cited above. It was the same SIAM meeting in July 1997 mentioned in Part 4 regarding a session, “Moving-Grid Methods for Partial Differential Equations”, which was organized by a former Berkeley math Ph.D. student from China who had prepared my living arrangement before my August 1982 arrival at Berkeley, and which included several numerical analyst presenters with intriguing backgrounds at Berkeley, British Columbia and some U.S. national laboratories.

As if tragedy were to strike twice, about a month after the SIAM minisymposium remembering Forsythe, his daughter Diana drowned while backpacking in Alaska, at the age of 49:

“Diana E. Forsythe, ’69, of Palo Alto, August 14, at 49, of drowning while backpacking in Alaska. A scholar who studied the culture of science and technology, she was the daughter of George Forsythe, founder of Stanford’s computer science department, and Alexandra Forsythe, whose teaching specialty at Stanford was the use of computers in education. Though she got her undergraduate degree from Swarthmore, she studied at the Stanford-in-Britain program during the 1967-68 year. After teaching anthropology and computer science at the University of Pittsburgh, she returned to Stanford in 1995 with a fellowship from the System Development Foundation; she was also a visiting scholar at the Stanford Center for Biomedical Ethics. Her paper on the hidden cultural assumptions in the way computer systems are designed was published in December 1996. She joined the UCSF faculty this year as an associate professor in the medical anthropology program. Survivors include her husband, Bernard Shen.”

(“Obituaries”, November/December 1997, Stanford Magazine)

Somehow Diana Forsythe’s age when she died, 49, echoed the month and day of her father’s death decades earlier, April 9. The date of her death, August 14, was exactly 30 days after the minisymposium honoring her late father, held on July 15 and led by Cleve Moler of The MathWorks, another of George Forsythe’s former Ph.D. students.

(“A Tribute to the Memory of George Forsythe”, July 15, 1997, SIAM’s 45th Anniversary Meeting, Stanford University)

Closely following in her father’s footsteps, Diana graduated from George Forsythe’s alma mater, Swarthmore College, then attended Stanford before getting a teaching job at the University of Pittsburgh.

I note that Marina von Neumann, whose application for Ph.D. study at Princeton – for her, the equivalent of what Stanford was for Diana Forsythe – had been rejected as in Part 5 (i), had also ended up teaching at the University of Pittsburgh.

(““The Martian’s Daughter” by Marina von Neumann Whitman”, October 2, 2012, Gerald R. Ford School of Public Policy, University of Michigan)

From that point on, the critical career difference between the two daughters of famous fathers in the computing field was the lift Marina received from President Richard Nixon in 1972 – coincidentally the year of George Forsythe’s death – when she became the first female member of the White House Council of Economic Advisers, as in Part 5 (i), a credential that no doubt helped her later appointment as a vice president of General Motors.

As discussed in Part 5 (i), back in the 1950s Marina’s father, an important scientific adviser to the U.S. military, had been well acquainted with members of then President Dwight Eisenhower’s cabinet, at a time when Nixon was the vice president.

For Diana there was no such lift – nothing beyond the institution where her father had founded and led the most influential computer science department in the U.S. – and she eventually returned to Stanford-related jobs, analyzing “hidden cultural assumptions” in the computer field, finally landing a UC San Francisco professorship in the very year she died in Alaska.

George Forsythe’s daughter Diana Forsythe had much worse luck in life than John von Neumann’s daughter Marina Whitman – perhaps the degree of parental fame mattered for the next generation.

But it has been a consolation that, following her death, Diana was immediately remembered in her discipline of anthropology by the American Anthropological Association, which created the Diana Forsythe Prize honoring her spirit of “feminist anthropological research”:

“The Diana Forsythe Prize was created in 1998 to celebrate the best book or series of published articles in the spirit of Diana Forsythe’s feminist anthropological research on work, science, and/or technology, including biomedicine. …”

(“Awards: Diana Forsythe Prize”, GAD, American Anthropological Association)

Just as the “father of computers” John von Neumann had not been the first to develop the electronic computer, having joined the World War II ENIAC project after it started, George Forsythe, the most influential person in the emergence of the computer science discipline, was not the first to advocate for it – that credit goes to Louis Fein, a name cited in Donald Knuth’s tribute to Forsythe.

Fein, a computing education consultant based in the Stanford area of Palo Alto, had begun campaigning for computer science in the mid-to-late 1950s, producing a report for Stanford University in 1957 and in 1960 becoming chairman of the Education Committee of the Association for Computing Machinery – as in Part 3, ACM has been the main international organization for computer science:

“… the earliest significant papers on CS education appear to be by Louis Fein who was a private consultant in California and a passionate supporter of CS education in universities. He wrote an unpublished report on computing education for Stanford University in 1957 and published three papers (Fein 1959a, 1959b, 1961) and was appointed chairman of the ACM Education Committee in 1960. …

In the two similar 1959 papers that were based on his work for Stanford University, Fein explains that he had been studying the operation of university programs in computing, data processing and related fields since 1956 by holding formal and informal discussions with university administrators, computer center directors, faculty members, students and industry representatives. … The most important impact on university programs at that time was IBM selling heavily discounted IBM 650s to about 50 universities on the condition that they would offer some computing courses. The universities were offering a variety of computer courses but, in Fein’s view, most universities were putting too much emphasis on the computing equipment and courses were being designed as supplements to the equipment when equipment ought to be a supplement to the courses. …

Fein had a clear vision for computer science education in universities and he appears to have been one of the first to call the field “Computer Science” when he suggested that universities establish a Graduate School of Computer Sciences …”

(“Computer Science Curriculum Developments in the 1960s”, by G. K. Gupta, April-June 2007, Volume 29, Issue 2, IEEE Annals of the History of Computing)

Like George Forsythe, Louis Fein had moved from Southern California, though not from the INA at UCLA but from the military-oriented aerospace industry.

Fein had worked at the Raytheon Company in Massachusetts as chief engineer building the RADAC computer, had formed his own company, which he referred to as the “Three C’s”, the Computer Control Company, and had then moved to Southern California to install the RADAC for the Navy:

“FEIN: In 1954 and 1955, I was in southern California installing and operating at the Naval Air Missile Test Center at Point Magu the RADAC, the Raytheon digital automatic computer which I had built as chief engineer at Raytheon Manufacturing Company between 1948 and 1951. I had, after leaving Raytheon, formed the Computer Control Company which was an organization made up of people from Raytheon that worked for me. On October 30, 1952 I formed the Three C’s, Computer Control Company, because after the acceptance tests were passed for the RADAC, the RADAC was then whipped from Waltham, Massachusetts to Point Magu, that’s right outside of Oxnard. The Navy department asked me if I would go out to Point Magu together with some engineers and install and operate the computer center there. The RADAC was built and actually was intended for the data reduction at Point Magu. They insisted however that I must go personally. …”

(“An Interview with LOUIS FEIN”, interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In the above-cited May 9, 1979 interview by Pamela McCorduck, Fein recalled that on August 10, 1955, he left his company to become an independent consultant, soon based in Palo Alto and working especially for the Stanford Research Institute; in that capacity, and at the request of Al Bowker, then Stanford Graduate Dean, he wrote a report for Stanford on starting computer science education:

“FEIN: … In August 1955, for a variety of reasons, mostly administrative, I left Three C’s and decided that I would write, consult, and teach perhaps. As a matter of fact that was August 10, 1955. Immediately after leaving Three C’s. by August 11, I had, as I recall, seven or eight contacts already made, and by August 18, I was consulting for five or six computer outfits. … One of the first assignments I’d gotten was with SRI, the Stanford Research Institute then, who had a contract with the Bank of America to build what later became ERMA and they are right here in Menlo Park. I had also gotten an assignment from Sylvania Electronic Defense Laboratory to work on some computer hardware, and I also worked for Electro-Data, RCA, but I rather liked it up here and so we moved up to Palo Alto in January 1956. I was almost walking distance to SRI and to Sylvania and when I had to go down to Pasadena for Electro-Data or to Camden for RCA or to New York for IBM, I just went, but Palo Alto became my home base and I’ve lived here since. I’ve been a free lance independent computer consultant since August 10, 1955, which I believe also might be the longest in the world. … One of my friends introduced me to Al Bowker who is now the Chancellor at Berkeley, but then was the Provost at Stanford. I may not be remembering accurately; he was either Provost or Graduate Dean, and perhaps later became Provost. Al was interested in the proposals that IBM was making people of the following type: “We will give you a 650 for free if you will give a course in scientific computing and one in business computing.” And Al also had heard a little about this computing business and I suggested to him that I thought that computing fits in a university. I didn’t quite know how. But it was clear to me and apparently to him that computers would develop in such a way that most of the disciplines in the university would have need for them at least as auxiliary devices. The question of whether or not computer science, not yet the computer itself, was a discipline worthy of study by the university was not yet settled but in my enthusiasm I thought it might be and so Al Bowker commissioned me to make a study on what might be called “The Role of the University in Computers and Data Processing.” …”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

SRI was the same institute the Silicon Valley pioneer Douglas Engelbart went to, as in Part 5 (i), after leaving UC Berkeley in 1956 and finding out that both Stanford University and the Hewlett-Packard Company had no vision at the time for computer development. Fein had arrived a little earlier, in January 1956 as quoted above, immediately participating in SRI’s ERMA computer project for Bank of America.

ERMA was “the world’s first computer used in banking”, unveiled in September 1955 in a transcontinental videoconference hosted by the actor Ronald Reagan, a future U.S. president.

(“Our Story: Bank of America revolutionizes banking industry”, Bank of America)

While campaigning for computer science, Louis Fein found, in his interactions in the late 1950s with computing pioneers in the academia, that many – including George Forsythe, and UC Berkeley’s Harry Huskey and Derrick Lehmer who had been involved with the first electronic computer ENIAC as in Part 5 (i) – were not so positive.

In his 1979 interview, Fein recalled Forsythe’s ‘hedging the bet’ attitude regarding establishing a computer science department at Stanford, and the negative attitudes of John Herriot, later Forsythe’s co-founder of the computer science division in the mathematics department, and of math department chairman David Gilbarg:

“FEIN: …

… Herriot, you know Herriot, I used to talk locally in Palo Alto, a lot at Stanford on this idea, and he was – hostile was an understatement – to the idea. I mean he would get up and shout, “What you want it pie in the sky and you can’t have pie in the sky!” And that was the reception I got. George Forsythe… I met him at a Los Angeles Conference before I came to Stanford – and he asked me about Stanford because he was considering either Berkeley or Stanford and I told him I thought Stanford was the place to go. He was interested in numerical analysis, and he had also worked on SWAC over there, and I told him I was working for Stanford and trying to persuade them about a computer science department. a I was already in the middle of the study and I had the outline of what I was going to recommend and he finally came to Stanford. After he came to Stanford his position was – well equivocal; he could go for it or not go for it. There was a very, very strong opposition in the mathematics department and George was in the mathematics department and Dave Gilbarg who was the chairman of the department, who was a very close friend of mine, because his boy and my boy were on the same little league team, and you know we used to go to the movie together with the families. But Dave thought computing was like plumbing. We don’t have plumbers studying within the university and what does computing have to do with intellect? I am exaggerating of course. He was murder on it and he may still be to this day. And George wasn’t sure he wanted a separate department either. Herriot though the pie in the sky, crazy. So the opposition came from insiders not outsiders…”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

About the Berkeley personalities, Fein recalled how Harry Huskey – as in Part 5 (i) the leader of the SWAC computer development at the INA at UCLA, and later a Berkeley math and electrical engineering professor – reacted against having a computer science department:

“FEIN: … Well, this report came out and as I recall, one of the anecdotes, I gave a copy to West Churchman who apparently was very interested in it – West is a philosopher and all of that, and he thought it was a great thing. He organized a small seminary to be held at Berkeley (and I don’t recall exactly when that was and I regret so many times not having kept a date book on these things) and as I recall around the table, Harry Huskey, my good Derek Lehmer, Julie (Julian) Feldman, pretty sure Ed Feigenbaum…at least those and I may think of some others before we finish the interview. West introduced me, saying I had a notion that universities should be involved in computers and that there should be a separate computer science department. And I made my pitch. And Harry Huskey said he saw no need whatever for having a separate department. He was doing computing in engineering.”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

As Fein described, dismissiveness toward computer science came from the perspective of engineering as well.

And according to Fein, Derrick Lehmer – as in Part 5 (i) an early pioneer in mathematical computing and a former INA director who brought Huskey to Berkeley after INA’s closure in 1954 – called Fein and his idea “crazy”:

“FEIN: … And Derek Lehmer – he didn’t see any need for a department. He was, even at that time, trying to use the computer that Morton built (Morton was the computer man at Berkeley in those days and they were building a homemade computer). I think Lehmer was trying to use it for prime number calculations. And Derek Lehmer patted me on the head and…crazy idea…as a matter of fact, one of my sons later went to Berkeley, well two of them went to Berkeley, one of them, Danny who was with Mario Savio during those days and once he ran into Derek Lehmer and he said that Professor Lehmer said, “Oh, your father is Lou Fein, that crazy guy that wants…I thought he was joking, but…(laughter).”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

As detailed in Part 5 (i), Huskey and Lehmer were the leading computational mathematicians at UC Berkeley, both had worked with von Neumann in relation to ENIAC, and Huskey held negative opinions of von Neumann’s role in the ENIAC project while Lehmer’s mathematical interests were the opposite of von Neumann’s.

There seemed to be a common thread running through both the early electronic computer development, in which von Neumann led an ambitious computer-building movement, and the later emergence of the computer science discipline: Huskey and Lehmer were solid technical experts but skeptics about ambitious development goals.

After submitting his report to Stanford Graduate Dean Al Bowker, recommending the establishment of a graduate school of computer science, Fein did not receive a formal response for 30 months, during which the university appointed Forsythe as the Computing Center director; he eventually had to write a letter to inquire:

“FEIN: …

So I envisioned a set of courses in a department itself, actually in a graduate school to start off with, because we really didn’t know much about computers for undergraduate study, so we had to start out with a graduate program, I thought. And I laid out a set of courses, not only a set of courses for the school itself, but for other departments, computers for – medicine, computer for so–, and a research program, and some dollars required and where it might come from, and I wanted to start it with some seed money that wasn’t merely a computer center as many people were doing. …

FEIN: And this report lays it out, and in my other papers which I suppose you should see because George took a lot of his stuff from these papers. I used to talk to George a lot about this. Anyway, the relationship between the computer center and the computer department was identified at least. … and Al…there was a difference in philosophy between Al and me. I laid out a structure, an organization and I thought that even if we didn’t have a bunch of computer geniuses around in which you can set up a school…you have a bunch of slots and then you get the best person you can to fill the slot…and Al didn’t believe this; he said “It doesn’t make any difference what the structure is, because what we need is to get a von Neumann out here and then things will go well.” The great man theory.

It’s clear that with that view he didn’t think I was that great man and he didn’t want in principle to set up an organization and then fill it with people who were available, great or not. Meanwhile, Burroughs, I believe, was dangling some equipment in front of his eyes for free because all of these manufacturers are very interested in getting universities to pick up their equipment for sales promotional purposes and Al, I think, got a very good deal from Burroughs. Burroughs gave him a machine and he, I think, started what was essentially a computer center and it isn’t clear to me when he decided that George should be the one to do that and that maybe alter, I don’t know exactly what Al was thinking because he stopped talking to me about it – that’s why I had to write the letter about it’s 30 months since…I guess Al appointed George to start with the computer center and maybe to give some courses in numerical analysis. …”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

So the pioneering computer science advocate Louis Fein, in his May 9, 1979 interview conducted by Pamela McCorduck, quoted above, made two rather serious allegations: one, when George Forsythe began to advocate for computer science in the early 1960s “a lot of his stuff” came from Fein’s earlier papers; and two, Stanford Graduate Dean Al Bowker held a biased view that a new discipline needed to be started by a “great man” like John von Neumann, which Fein wasn’t.

Apparently, Forsythe came closer to being such a “great man” – and then also suffered von Neumann’s fate.

In making his allegations, Fein also gave his explanation of why his advocacy was ignored by the Stanford administration, namely that his ambition of becoming a computer science department’s founding chairman was viewed as a threat by academic insiders – possibly insinuating that Forsythe was such an insider:

“FEIN: Well, the usual political explanation – Lou Fein brings in an idea, “An outsider is bringing in an idea and it is clear that he is bringing it in because he wants to be the chairman of the department and I prefer that I be the chairman of the department and so I will resist it.” The resistance to innovation anyway, and secondly the threat.””

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Fein’s interviewer Pamela McCorduck tried to place Fein’s historical dispute with Stanford in a neutral light:

“McCORDUCK: …

The first time I came across the mention of your name as I was going through the archives of George Forsythe’s papers in the Stanford archives, was in a letter that you wrote to Frederick Terman and in this letter you said that “it has been almost thirty months since we last discussed the formation of a graduate school of computer science at Stanford as recommended in my report commissioned by Dr. A. H. Bowker.” …

McCORDUCK: Now I inferred from the text of the letter you wrote that Terman at some point must have responded to you and said, “Look, dozens of faculty members come to me with hot propositions. Yours is no hotter than anybody else’s. It’s interesting but it’s not revolutionary so please leave me alone already.” And you were arguing in your letter that it wasn’t just another hot proposition but that it was in fact revolutionary which he didn’t seem to see or at least denied.”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Frederick Terman, mentioned above as the recipient of Fein’s letter of grievance after nearly 30 months of waiting for a response to his report, had, as in Part 5 (i), been the mentor of William Hewlett and David Packard in their founding of the Hewlett-Packard Company, an event since recognized as the birth of Silicon Valley.

As Provost of Stanford, in the area where he had grown up, Terman proudly pitched the region over the East Coast and Southern California:

““The price that is paid for all these blessings is annoying traffic congestion around 8 a.m. and 5 p.m. in the 10- to 20-minute drive between home and work. But this is really a pretty small price to pay to avoid having to live in the East, or even in Southern California where the traffic is worse and the smog is denser.”

Fred Terman had just retired as provost of Stanford University when he offered that 1965 assessment of what was to become Silicon Valley…

Born in English, Ind., in 1900, Frederick Emmons Terman seemed destined from the start for intellectual glory. His father, Lewis, who suffered from chronic battles with tuberculosis, moved his family to Stanford’s sunny environs when Fred was a youngster. There, Lewis, a noted scholar and educator, invented the Stanford-Binet IQ test. He also prepared a course of home tutelage that enabled young Fred to complete grade school in just four years.”

(“THE ENGINEER WHO JUMP-STARTED SILICON VALLEY”, by Joan O’C. Hamilton, August 7, 1997, Business Week)

An MIT engineering Ph.D. graduate of the 1920s under Vannevar Bush – who, as discussed in Part 5 (i), later served as the U.S. government’s leading scientific adviser during World War II – Terman became an excellent teacher at Stanford, managed a military research lab at Harvard during the war at Bush’s request, and later received the Medal of Merit; subsequently, as Stanford Dean of Engineering and then Provost, Terman was at home with Cold War-oriented research. As a young boy he had already tinkered with ham radio with his friend Herbert Hoover, Jr.:

“By the time he was 14, the younger Terman had developed an interest in ham radio (which he pursued with his friend, Herbert Hoover Jr.), and with it, a lifelong love of electronics. He earned two Stanford degrees, in chemistry and engineering, then a PhD from MIT, where his mentor was Vannevar Bush, who later became director of the government’s Office of Scientific Research & Development. In the 1920s, Terman returned to Stanford, where he developed a reputation as an excellent teacher… Hewlett-Packard co-founder David Packard would later write in his autobiography, “The HP Way,” that “Professor Terman had the unique ability to make a complex problem seem the essence of simplicity.”

… Other contemporaries say Stanford’s early lead on Silicon Valley vis-a-vis UC-Berkeley, was largely because Stanford had the aggressive and well-connected Terman, while Berkeley had a dean of engineering who was building a much broader base with young faculty.

JAMMIN’ THE RADAR. During the early years of World War II, Vannevar Bush recruited Terman to head the Harvard Radio Research Laboratory, back at Cambridge, Mass. Terman would later be awarded the highest civilian medal – the Medal of Merit – for his work there. Under his direction, the lab developed strategies and methods to confound enemy radar…

During this period Terman both added to his already far-flung network of powerful people in industry and government and lobbied for the government to devote much more funding for science and engineering in higher education as a key to military success. When he returned to Stanford in 1946 as dean of engineering, he embarked upon what he called his “steeples of excellence” strategy to gain world reknown for Stanford. …

This approach was instrumental in nursing back to financial health a Stanford that World War II had threatened. Not only did Stanford lose a lot of tuition-paying students to the war, it got virtually no research money from the federal government, which did, in fact, pour money into Harvard, Yale, and other Eastern schools. As dean, Terman was happy and anxious to take on government contracts that would later fuel the cold war arms race. …”

(Joan O’C. Hamilton, August 7, 1997, Business Week)

The U.S. Presidential Medal for Merit was awarded to “such civilians of the nations prosecuting the war under the joint declaration of the United Nations and of other friendly foreign nations as have distinguished themselves by exceptionally meritorious conduct in the performance of outstanding services since the proclamation of an emergency by the President on September 8, 1939.”

(“Federal Register: Executive Order 9637–Medal for Merit”, U.S. National Archives)

But Terman’s achievements and pedigree could not have intimidated Fein, who had not only gone through the Southern California military aerospace industry but had worked as a military technology engineer during World War II, including in a Harvard lab – just a different lab from the one Terman led:

“Louis Fein graduated from Long Island University in 1938, with a bachelor’s degree in physics. Upon graduating, Fein entered the University of Colorado at Boulder to work on his master’s degree in physics. He graduated from Colorado in 1941 and went to work as as an instructor in mathematics at Earlham College in Richmond, Indiana. By 1943, Fein had left Earlham and went to work for the Harvard Underwater Sound Laboratory as an engineer on sonar devices, underwater sound gear, acoustic gear, and ultrasonic gear. While working at the Lab, Fein took courses at Harvard and the Massachusetts Institute of Technology (MIT) in electronics and mathematics. When the Harvard Underwater Sound Laboratory closed, Fein went to work for Submarine Signal Company, The Submarine Signal Company permitted Fein to enroll at Brown University to work on his doctorate in 1945. … In 1948, Fein applied to the Raytheon Manufacturing Company, who had a contract to make two computers under the HURRICANE project for Point Magu in California. …”

(“Interviewee: Louis Fein Interviewer: Henry S. Tropp”, May 4, 1973, Computer Oral History Collection, 1969-1973, 1977, Smithsonian National Museum of American History)

I note an interesting contrast between the two at Harvard during the war: Terman directed research on radar in the air, while Fein did engineering work on sonar underwater.

I also note that Fein received his Ph.D. from Brown, the same university from which Forsythe had received his.

With his own solid educational and engineering backgrounds – some of that wartime work done at Harvard, near Terman – it was only logical for Fein to appeal to Stanford Provost Terman after going 30 months without a reply from Graduate Dean Bowker about his consultation report on computer science.

Terman’s feedback – a suggestion to get “a little contract” from the Office of Naval Research, much as Bowker had done in starting Stanford’s statistics department – disappointed Fein:

“FEIN: …

…I had a talk with Terman, because after I presented this report, I talked a lot with Al (Bowker) and then Terman. Al had started the statistics department at Stanford and statistics had the same kind of history of resistance by academics and mathematics departments for introducing a pedestrian study like statistics, which is like plumbing, into the university. The way in which apparently Al got it going was to get a contract from ONS for one person, get a little project going and then hustle another contract from maybe the Air Force and get something going. And after 8 or 9 or 10 years he had gotten something. And I couldn’t see with the vision I had, doing computer science by getting a little dinky computer from IBM…

FEIN: … Also Terman, you know Al’s notion was the great man theory, and Terman was telling me, “Well, Lou,” he says, “We can’t just set you up like you want, but why don’t you do like what Al did with statistics: get yourself a little contract from ONR and hire yourself one person and so on and after a while do like he did.” And I couldn’t see that at all, and with hindsight I am still right.”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In fact, “a little contract from ONR” – the U.S. Navy’s Office of Naval Research, reviewed in Part 5 (i) – was not only how Bowker had started the Stanford statistics department but also how Bowker himself had been hired by Stanford; Terman had gone to Mina Rees, head of ONR’s mathematics division – previously mentioned in Part 5 (i) – to try to win an ONR statistical sampling contract, and Rees suggested for that purpose hiring Bowker, a Columbia University graduate student who had worked with her on probability studies of strategic bombing during World War II:

“… On one of Terman’s visits to the East Coast, he had talked with Mina Rees, the head of the ONR’s mathematics division, and learned that the ONR was planning to let a contract for statistical sampling work. To Terman this was a clear indication that mathematical statistics was an “important” field, and he was eager for Stanford to obtain the contract … Stanford, however, did not have any mathematical statisticians on its faculty and no money to hire any. But, as Terman was quick to realize, if Stanford could obtain the ONR contract, it could use the contract funds to cover a portion of the salaries of any statisticians the university might hire; the university’s portion of the salaries would be paid with overhead funds accumulated from government contracts. … The university might even make money, Terman speculated. …

Terman’s plan to hire “top notch” statisticians using contract funds from the ONR clearly required the cooperation of the ONR. Terman went directly to Rees with his plan and asked her for the names of the best available statisticians in the country. She suggested Albert Bowker, a graduate student at Columbia University who, during the war, had done probability studies of strategic bombing as a member, along with Rees, of the OSRD’s applied mathematics division at Columbia. Bowker was hired by Stanford; shortly afterwards, the university received from the ONR the statistical sampling contract.”

(Rebecca S. Lowen, Creating the Cold War University: The Transformation of Stanford, 1997, University of California Press)

So Graduate Dean Bowker had originally been recruited to Stanford by Terman, on the ONR’s recommendation, to start Stanford’s statistics research, and was doubtless trusted by Provost Terman.

In an interview by McCorduck on May 21, 1979, only 12 days after the Fein interview, Bowker asserted that Terman had “tended to follow” his advice on computing matters:

“McCORDUCK: Do you happen to remember why Louis Fein came to mind when you decided to commission this report?

BOWKER: Well, he had some interest in this field and was around Stanford. I’ve forgotten exactly now why I talked to him. I talked to a number of other people and many of them were quite negative.

McCORDUCK: But still you persisted. What was Terman’s role in this?

BOWKER: Well, he tended to follow my advice in these areas and I’m sure he was…He and I had worked from the very beginning to develop new machine capabilities at Stanford rather than a joint activity at my laboratory and at his when he was Dean of Engineering. We had worked together for a long time on computing matters.”

(“An Interview with ALBERT BOWKER”, interview by Pamela McCorduck, May 21, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Hence, I can infer that when Terman told Fein to start with “a little contract from ONR”, as Bowker had done in starting the statistics department, it was also Bowker’s view that the computer science department should start this way.

My view, based on my review of the history in Part 5 (i), is that funding through a contract with a military or military-oriented research agency was a cornerstone of U.S. government support for scientific research, first put in place under the leadership of Vannevar Bush, Frederick Terman’s mentor – Louis Fein must have known it well, as he had been a chief computer engineer at the military aerospace company Raytheon, a company co-founded by Bush.

(“Raytheon Company: History”, Raytheon Company)

In fact, the place where Bowker as a student had worked with Rees during World War II, “the OSRD’s applied mathematics division at Columbia” as in the quote before last, belonged to the U.S. government’s Office of Scientific Research and Development under Bush, which, as in Part 5 (i), managed science funding contracts during the war.

Louis Fein might not have liked the “little contract” much, but George Forsythe proceeded to found and build up a Stanford computer science department beyond such limits – a department with unmatched national influence and an unparalleled number of National Science Foundation Fellows – thus Graduate Dean Al Bowker was right about the “great man”.

I note that Bowker’s background did not seem to include real military technology research of the kind Terman and Fein did at Harvard radar and sonar labs during wartime, only the probability studies with Rees at Columbia; at Stanford, the “little contracts” he got from defense research agencies to start and build up the statistics department – as Fein described – were thus presumably partly a favor from Rees at ONR.

Rather, Al Bowker was a lifelong academic who had grown up in the U.S. capital, where his father worked for the Bureau of Standards – as detailed in Part 5 (i), the government agency whose management of the INA at UCLA was cut short during the McCarthy era, before George Forsythe’s move to Stanford. Following the dispute with Fein, Bowker left Stanford in 1963 to become Chancellor of the City University of New York and, beginning in 1971, Chancellor of UC Berkeley, before joining U.S. President Jimmy Carter’s administration in 1980 – prior to my 1982 arrival at Berkeley under the helm of his successor, Ira Michael Heyman:

“Albert Hosmer Bowker, a former chancellor of the University of California, Berkeley, an expert in statistics and an innovative administrator during his decades-long career in higher education across the country, died Sunday in a retirement home in Portola Valley, Calif. He was 88 and had been suffering from pancreatic cancer.

Bowker was chancellor of UC Berkeley, which he called a “wild and wonderful place,” from 1971 to 1980…

Ira Michael Heyman, a UC Berkeley professor emeritus who served as vice chancellor under Bowker and as chancellor from 1980 to 1990, said that during Bowker’s term as chancellor dwindling state funding made it difficult to maintain existing programs and almost impossible to launch anything new. He applauded Bowker’s role in setting up the UC Berkeley Foundation.

Born in Winchendon, Mass., in 1919, Bowker grew up in Washington, D.C., where his father worked for the federal Bureau of Standards.

He earned his B.S. in mathematics at Massachusetts Institute of Technology (MIT) in 1941 and a Ph.D. in statistics at Columbia University in 1949. …

He began his professional career in 1941 as a research assistant in MIT’s Department of Mathematics, and took a post as an associate mathematical statistician at Columbia University from 1943 to 1945.

Bowker became an assistant professor of math and statistics at Stanford University in 1947, and was chair of its statistics department from 1948 to 1959. He is credited with setting up a mathematical statistics research lab and a computer center at Stanford, where he served as the graduate division dean from 1959 to 1963.

Bowker was chancellor of the City University of New York (CUNY) from 1963 to 1971, where he supported a plan to provide free tuition for full-time CUNY undergraduates. …

Bowker’s role in higher education continued after he left UC Berkeley in 1980. During the Carter administration, he accepted a position as assistant secretary for postsecondary education for the newly-formed U.S. Department of Education. He served there from 1980 to 1981, then took a post as dean of the School of Public Affairs at the University of Maryland from 1981 to 1984.

He was executive vice president of the University of Maryland from 1984 to 1986. He returned to CUNY as vice president for planning at its research foundation from 1986-1993.”

(“Albert Bowker, innovative UC Berkeley chancellor during 1970s, dies at age 88”, by Kathleen Maclay, January 22, 2008, UC Berkeley News)

Reading the UC Berkeley obituary above, I notice that Fein’s allegation of Bowker getting a “little contract” here and there for “8 or 9 or 10 years” was inaccurate: Bowker had in fact founded the statistics department in 1948 immediately after his 1947 arrival at Stanford – he had not even earned his Ph.D. when he became founding chairman of Stanford’s statistics department, but apparently Mina Rees’s recommendation meant everything to a Stanford eager to get the ONR statistical sampling contract.

But I can understand that, given the “little contract” mode of that era, it likely took many more years for Bowker to get sufficient funding to build up a strong department.

Even though Fein did not make it explicit, I can detect that in lamenting Bowker’s “little contract” he was also somewhat belittling Bowker’s academic credentials at the time of founding the Stanford statistics department, compared with Fein’s own long list of accomplishments when he began advocating for computer science.

I notice that Bowker’s top-level academic leadership positions were at left-leaning public universities, CUNY and UC Berkeley, and that after Stanford all of his positions were at public institutions.

After arriving at Berkeley, Bowker became a lifelong member of a close-knit academic community there:

““Al Bowker was an outstanding chancellor who paved the way for UC Berkeley into the modern era,” said UC Berkeley Chancellor Robert J. Birgeneau. “For 28 years after stepping down as chancellor, Al Bowker remained an integral part of the Cal community, offering advice for the chancellors who came after him. I was always delighted to see him at the Faculty Club, entertaining colleagues and participating in campus life. He will be greatly missed.”

Bowker maintained a home in Berkeley at University Terrace, a condominium community for university faculty and staff, having moved to a residence in Portola Valley several years ago to be close to his grandchildren.”

(Kathleen Maclay, January 22, 2008, UC Berkeley News)

So it is possible that Bowker’s purer academic perspective gave him a high sense of intellectual self-esteem that helped keep at bay the enthusiasm of Louis Fein – a person of strong but mostly military-oriented engineering credentials – to wade into the academia for some founding status in a new discipline.

Consistent with this explanation of why Al Bowker did not act on Louis Fein’s consultation report on computer science but helped George Forsythe take the initiative is a crucial fact reviewed in Part 5 (i): Mina Rees, Bowker’s wartime mentor at Columbia and later head of ONR’s mathematics division, in the 1950s expressed strong opposition to the U.S. Army’s direct role in academic mathematical research, namely the establishment of the Army Mathematics Research Center at Wisconsin-Madison.

A further convergence of Bowker and Rees later actually happened: in 1963, when Bowker became Chancellor of CUNY, Rees was already CUNY Dean of Graduate Studies. Rees spent her entire academic career, before and after serving in government and military research agencies, in the CUNY public education system: at New York City’s Hunter College, CUNY and the CUNY Graduate Center.

(“FORMER PRESIDENTS: Mina S. Rees”, The Graduate Center, City University of New York; and, “Mina Rees: August 2, 1902 – October 25, 1997”, Biographies of Women Mathematicians, Agnes Scott College)

But despite living to a good 88 years, Al Bowker still died of the same pancreatic cancer that his former Stanford underling George Forsythe had died of at 55 – and in a different coincidence, Forsythe’s daughter Diana and Mina Rees died in the same year, 1997, Diana in August at 49 before Mina in October at 95!

Mina lived to nearly twice Diana’s age, though not quite. On the other hand, the day in August when I had arrived at the San Francisco International Airport and Berkeley from China 15 years earlier, the 28th, happened to be twice the August day, the 14th, on which Diana Forsythe died in Alaska.

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 3) – when violence and motive are subtle and pervasive”, March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)
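As an aside, the simple date arithmetic behind these coincidences is easy to re-check; the following minimal Python sketch, which is mine and not from any source cited here, merely recomputes the figures mentioned above using the dates and ages as given in the text.

# A minimal sketch re-checking the date and age arithmetic cited above;
# the dates and ages come from the text, not from any new source.
from datetime import date

minisymposium = date(1997, 7, 15)   # SIAM tribute to George Forsythe
diana_death = date(1997, 8, 14)     # Diana Forsythe's death in Alaska

print((diana_death - minisymposium).days)   # 30 -> "exactly 30 days" later
print(28 == 2 * diana_death.day)            # True -> August 28 is twice August 14
print(2 * 49, 95)                           # 98 vs 95 -> "nearly twice", though not quite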

And despite George Forsythe’s talents and ambitious drive, Stanford wasn’t the first university to start a computer science department.

The honor of being first belongs to Purdue University, which established its computer science department in 1962.

In his May 1979 interview, Fein also recalled his interactions with Purdue, citing them as an example of his idea of a new department being viewed as a threat by an existing department:

“McCORDUCK: Do you think people thought of this as a threat? In what way?

FEIN: Yes, it’s threatening in the sense that they themselves typically weren’t in the field and monies that might normally come to them might be diverted to this new department. I had this experience at Purdue. I was invited to Purdue by Tom Jones who is now a Vice President of MIT. He got hold of one of my papers and he thought it was the greatest thing since the wheel. He invited me to Purdue where he was Chairman of the Department. And his people almost murdered me – they didn’t want anything to do with this. Oh, after I left, two years later, they had one…And that’s my explanation. I think anyone who brings in a new idea anywhere that requires money from a common budget poses a financial threat and obviously a professional one.”

(interview by Pamela McCorduck, May 9, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In his subsequent interview by McCorduck, Bowker did confirm that Stanford mathematicians were resistant to the discipline of computer science and even to that of applied mathematics:

“McCORDUCK: … Apparently one of the things you said to George Forsythe (I got all my information, by the way, from the Stanford archives. George kept superb notes to himself and things like that; notes exist and I must say if you ever go read these, you’d be flattered out of your mind by some of these things. They are very, very complimentary to you.) One of the things you said in your farewell conversation with him, apparently you telephoned to say goodby to him when you were leaving. …

McCORDUCK: … When you called him to say goodby, you said to him, you recommended to him, that the Computer Science Division not stay in Math, in fact do its best to get out of Math, because you detected some resistance on the part of mathematicians, resistance to hiring non-mathematical but nevertheless legitimate computer science types.

BOWKER: Yes, well, I think it’s true around the country where very few departments of mathematics have taken the lead in this field. There is some activity in ours and other areas but generally mathematics departments tend to emphasize the pure mathematics. The Stanford department had more emphasis on classical analysis than on applicable mathematics if you like, than most in this country, but still was not terribly interested in becoming primarily an applied mathematics department and still isn’t.”

(interview by Pamela McCorduck, May 21, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

But Bowker politely reminded McCorduck that the Stanford administration – Terman in this case, and presumably Bowker himself, since the two had worked together on computing matters as mentioned earlier – had controlled the pace of change:

“McCORDUCK: … What I’m curious to know is how you managed to convince a lot of people who were resistant to this that it wasn’t just another “hot project.”

BOWKER: Well, I think Stanford was trying to create activities growing out of research largely, but activities that in some sense fell in between basic science and engineering. …

I don’t remember exactly when Dr. Terman changed his mind. In fact, my recollection is that the department may not have come into being until after I left or at the time I left.”

(interview by Pamela McCorduck, May 21, 1979, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In my view, Louis Fein’s grievance about Stanford, and about existing academic departments in general, consisted of two different matters: one, a university’s attitude toward establishing a computer science department; and two, how the university treated Fein’s active lobbying.

Reviewing the Purdue facts carefully, I come to the conclusion that, rather than Fein’s outsider role having been treated as a threat, it was his lack of university faculty experience that hindered his ambition – a conclusion consistent with my earlier remark on why Stanford Graduate Dean Bowker resisted Fein’s effort to gain some computer science founding status, namely that Fein’s primary background had been in the military technology industry and not the academia.

In 1961, Fein was briefly considered a possible candidate for the directorship of Purdue’s Computer Sciences Center:

“The Statistical and Computing Laboratory was moved into the Schools of Engineering along with the Department of Mathematics and Statistics. Hawkins separated the Computing Laboratory from the Statistical Laboratory and it was renamed the Computer Sciences Center in 1961. … Efforts were made to find a senior person in the computer field to be director of the Computer Sciences Center, and correspondence between Pyle and T.F. Jones mentions Louis Fein and Bernard Galler as possibilities, but the search was not pressed very vigorously until Felix Haas took charge early in 1962 …”

(“The Origins of Computing and Computer Science at Purdue University”, by Saul Rosen and John R. Rice, August 1990, Department of Computer Science, Purdue University)

But Fein’s prospect was short-lived. Unlike Stanford, which took a step-by-step approach, Purdue quickly decided in 1962 to establish a computer science department and to let the founding department chairman also assume the center directorship:

“During the spring and summer of 1961 there was an active search for a permanent head of the Division of Mathematical Sciences. There were negotiations with Dr. F. Joachim Weyl who was director of the Naval Analysis Group in the Office of Naval Research. Weyl withdrew his name from consideration and the position was offered to Felix Haas who was head of the Mathematics Department at Wayne State University. Haas accepted the position during the summer, with the understanding that he would start at Purdue in January, 1962.

… By the beginning of 1962 many universities, including most Big Ten universities, had requested, and some had already received, grants from the National Science Foundation under a program initiated in 1956 in support of institutional computer facilities at universities. … The most important effort of the Computer Sciences Center under Haas and Pyle in the spring of 1962 was to produce an NSF proposal that requested $920,000 to help finance the acquisition of the IBM 7044 and to support related programs in computer service, research and instruction.

… Haas recalls a meeting with Hawkins and Hovde, probably before he officially started at Purdue, in which they agreed that the Division of Mathematical Sciences would be organized internally into three academic departments, Mathematics, Statistics, and Computer Science, plus a Computer Sciences Center that would provide computing services for the whole university. It was considered only natural then that the head of the Department of Computer Sciences would also be the director of the Computer Sciences Center.”

(Saul Rosen and John R. Rice, August 1990, Department of Computer Science, Purdue University)

As noted above, the National Science Foundation had initiated a program in 1956 to support universities in establishing computer facilities, and it helped make university computer science viable.

The year 1956 saw a considerable number of interesting events in the computing field, as in Part 5 (i): John von Neumann was hospitalized for cancer treatment and made the decision to move from the Institute for Advanced Study in Princeton to the University of California, a move that never happened as he died in early 1957; the Army math research center at Wisconsin-Madison was founded; the long reign of Thomas J. Watson, Sr. at IBM ended and leadership passed to his son, Watson, Jr.; and IBM established its San Jose research laboratory in the future Silicon Valley in California.

In 1956 John von Neumann also played a key role in the NSF initiative to fund academic computing facilities:

“… during the early 1950s, NSF support for computer science was modest and was channelled through its mathematics research program. This picture changed as a result of the 1956 endorsement by the Advisory Panel on University Computing Facilities (chaired by John von Neumann) of a specialized NSF program for the support of computer science, the 1957 launch of Sputnik, and the passage of the National Defense Education Act in 1958. NSF support for computer science research grew rapidly after 1958, and was especially important in meeting the critical need of academic researchers for computer equipment. Between 1957 and 1972, the National Science Foundation expended $85 million to support the purchase by more than 200 universities of computer hardware.”

(“The Federal Government Role in the Development of the American Software Industry: An Assessment”, by Richard N. Langlois and David C. Mowery, in David C. Mowery, ed., The International Computer Software Industry: A Comparative Study of Industrial Evolution and Structure, 1996, Oxford University Press)

In the academic computing field that I was in during the 1980s and 1990s, Purdue University was known as a practically inclined Midwest school, different from the intellectually inclined West Coast schools UC Berkeley and Stanford.

The last quote from the article by Saul Rosen and John R. Rice shows that practical speediness: in 1961 a new head of Purdue’s Division of Mathematical Sciences, Felix Haas, was hired to start in 1962; prior to his arrival, Haas had already reached an agreement with Purdue to start a new computer science department alongside the existing mathematics and statistics departments in the division; upon arrival, Haas immediately applied for NSF funding to purchase a new IBM computer and to support “computer service, research and instruction” while preparation for the new computer science department was underway.

Like Louis Fein, Purdue computer science department’s founding chairman Sam Conte had a background working in military-oriented technology companies; but unlike Fein, Conte had been a university professor:

“Having decided that there was going to be a Computer Sciences Department, Haas moved rapidly to recruit a department head. Bill Miller, who was then head of the Division of Applied Mathematics at Argonne, was approached, but when he removed himself from consideration the position was offered to Sam Conte. Conte had been an Associate Professor in the Mathematics Department at Wayne State University up to 1956, before Felix Haas came to Wayne State. Since 1956 Conte had worked for five years at Space Technology Laboratories and then at Aerospace Corporation in California, where he was Manager of the Department of Programming and Analysis in the Computing Laboratory. At a meeting of the University Executive Council on March 12, 1962 Felix Haas announced that “Samuel Conte, a distinguished scientist currently with Aerospace Corporation, will join the Purdue staff on July 1 to become Director of the Computer Sciences Center.” …

On October 24, 1962 President Hovde asked for and received approval from the Board of Trustees to change “. . . the internal administrative organization of the Division of Mathematical Sciences . . . effective October 1, 1962.” The Department of Computer Sciences and the Computer Sciences Center were listed as components of the division, along with Departments of Mathematics and Statistics, and a Statistical Laboratory. Professor S. D. Conte was listed as chairman of the Department of Computer Sciences and as Director of the Computer Sciences Center. …

After Sam Conte, the first faculty member hired for the new Computer Sciences Department was Saul Rosen. Conte had known Rosen at Wayne State University before they both left that university in 1956. Rosen then worked in the software area for Burroughs and Philco corporation and then as an independent consultant. He inquired of Conte about possible consulting work on the west coast and Conte suggested that he join the new Computer Sciences Department which he was forming at Purdue. …”

(Saul Rosen and John R. Rice, August 1990, Department of Computer Science, Purdue University)

As cited above, Conte had been an associate professor at Wayne State University before joining the aerospace industry.

Saul Rosen, cited above as the first faculty member hired for the new Purdue computer science department after Conte, had also been a Wayne State associate professor before joining the industry; Rosen had also earned his math Ph.D. from the University of Pennsylvania, where ENIAC had been born.

(“Saul Rosen: 1922–1991”, RCAC, Purdue University)

Interestingly, the year when both Conte and Rosen left Wayne State for the industry was 1956 – the same year as a number of the significant events in the computing field listed earlier.

In contrast, in his early career Louis Fein had only been a college math instructor with a master’s degree, as quoted earlier. Later he taught some computer courses here and there, including at Wayne State and Stanford:

“… Fein taught from 1952 to 1953 at Wayne State University a course on digital computer systems and then later in 1956, at Stanford University. …”

(May 4, 1973, Computer Oral History Collection, 1969-1973, 1977, Smithsonian National Museum of American History)

So, in my opinion Fein’s not getting what he wanted at Stanford or Purdue was due to his lack of established academic credentials: universities, whether the elite Stanford or the practical Purdue, would choose established faculty members to lead their new departments, not someone with only temporary teaching experience, even if, as in Fein’s case, that person had a strong record of working in the industry and campaigning for such new departments – the rare case of Al Bowker’s being hired at Stanford and founding a new department before receiving his Ph.D. was not, in my view, an exception, but another instance of the expediency of the inside track in academia.

But as the last quote from the Purdue history article by Saul Rosen and John R. Rice indicates, within Purdue the hiring of Conte and Rosen emphasized their industry pedigrees, with Conte referred to as “a distinguished scientist currently with Aerospace Corporation”. The two men’s most recent jobs had been at military-oriented technology companies: Space Technology Laboratories and Aerospace Corporation for Conte, and Philco Corporation for Rosen.

Space Technology Laboratories was the leading contractor for intercontinental ballistic missile development, in which John von Neumann had played a key role as the U.S. Air Force’s principal scientific adviser, as in Part 5 (i); Aerospace Corporation conducted research and development for the Air Force’s space and missile program; and Philco Corporation produced transistors and built computers for the U.S. Navy and the National Security Agency, as well as for the commercial market.

(“Former TRW Space Park, now Northrop Grumman, designated as historic site for electronics and aerospace work”, by John Keller, December 18, 2011, Military & Aerospace Electronics; “ABOUT US: PROVIDING TECHNICAL AND SCIENTIFIC EXPERTISE FOR MORE THAN 55 YEARS”, Aerospace Corporation; and, “First-Hand: The Navy Codebreakers and Their Digital Computers – Chapter 2 of the Story of the Naval Tactical Data System”, by David L. Boslaugh, Engineering and Technology History Wiki)

Nonetheless, in that Purdue history I do notice another sign of a close-knit academia, beyond my earlier comment on Al Bowker’s style as a lifelong academic: both of the Purdue computer science department’s founding members had been faculty members at Wayne State University, from which Purdue had just hired Felix Haas to lead the Division of Mathematical Sciences, overseeing the math and stats departments and establishing the new computer science department – apparently Haas then brought in former Wayne State associates to fill the founding roles, in another example of the expediency of the inside track in academia.

In October 1962, as recorded in the minutes of the Purdue board of trustees, the university founded what has been claimed as the first academic computer science department in the U.S.:

“… The October 24 entry on the minutes of the board of trustees makes it very clear that a Department of Computer Sciences was officially established in the fall of 1962, and provides a firm basis for the claim that the first Computer Science Department at an American university was established at Purdue.”

(Saul Rosen and John R. Rice, August 1990, Department of Computer Science, Purdue University)

The mathematician and computer scientist Carl de Boor – a leader of the very successful spline functions research at the Army Math Research Center at Wisconsin-Madison, and the Ph.D. study choice recommended to me by my undergraduate adviser in 1981-1982, as discussed in Part 4 and Part 5 (i) – received his University of Michigan Ph.D. in 1966 and first became a Purdue computer science faculty member:

“The first task of Samuel Conte as new department head was to hire some faculty and define a graduate program. … In the very first year, there were seven teaching faculty, including Conte, a numerical analyst. … Four were already at Purdue. Two new faculty were hired, Robert Korfhage in theory and Saul Rosen in programming systems. …

In 1963 there were three new faculty members: Richard Buchi in theory, Walter Gautschi in numerical analysis, and John Steele in programming systems. … The following year John Rice was hired in numerical analysis, and this completed the initial phase of hiring.

No new faculty was hired in 1965, and only one, Carl de Boor in numerical analysis, was hired in 1966. De Boor was the first of a number of young Ph.D.s hired who became influential members of the department. …”

(“History of the Computer Sciences Department at Purdue University”, by John R. Rice and Saul Rosen, in John R. Rice and Richard A. DeMillo, eds., Studies in Computer Science: In Honor of Samuel D. Conte, Springer Science+Business Media, 1994)

De Boor later moved to the Army Math Research Center in 1972 – the year George Forsythe died and Marina von Neumann Whitman was appointed to the White House Council of Economic Advisers, as mentioned earlier.

(“CURRICULUM VITÆ: CARL(-WILHELM REINHOLD) de BOOR”, Department of Computer Sciences, University of Wisconsin-Madison)

So the elite Stanford was behind the practical Purdue.

And despite George Forsythe’s historical reputation as the most influential advocate for establishing computer science as a discipline, Stanford wasn’t even the second university to found a CS department.

The runner-up claim belongs to the University of North Carolina at Chapel Hill, which established its computer science department in 1964 – a year before Stanford’s in 1965.

UNC had made a very forward-looking move as early as February 1959, hiring John W. Carr, III as its Computation Center director.

Like Louis Fein, Carr had a pioneering background in computer development, but it was in academic settings at MIT and the University of Michigan after World War II service as a Navy electronics officer:

“A pioneer in the computer world, Mr. Carr received a doctorate in mathematics at the Massachusetts Institute of Technology and began work there in 1949 with the university’s groundbreaking electronic computer, “Project Whirlwind.”

Then, after a year as a Fulbright scholar at the Sorbonne in Paris in 1950, he joined the staff at the University of Michigan, where he taught the first courses on computer applications and from 1952 to 1955 headed the construction and design of a digital computer.

While serving in the Navy during World War II, he studied radar design and became a lieutenant and the electronics officer aboard the Boxer, an Essex-class aircraft carrier.”

(“John Carr, Emeritus Professor At Penn”, by Bill Price, April 12, 1997, philly.com)

At Michigan, Carr not only became an associate professor of mathematics – like Sam Conte and Saul Rosen at Wayne State – but also a leader of the broader computing community as the president of the Association for Computing Machinery, before moving to UNC Chapel Hill; at UNC, the computation center he directed obtained a new computer, with support from the maker Sperry-Rand Corporation, the U.S. Bureau of the Census, and the National Science Foundation:

“In May, 1959, the Consolidated University of North Carolina will install a new Univac Scientific ERA-1105 Digital Computer in the new Physics and Mathematics Building now being built at Chapel Hill. Purchase of this machine was made possible through the support and cooperation of the Sperry-Rand Corporation, the Bureau of the Census, and the National Science Foundation.

… Beginning in February of this year Dr. John W. Carr, III, Associate Professor of Mathematics at the University of Michigan and former President of the Association for Computing Machinery assumed the post of Director of the Computation Center and Associate Professor of Mathematics at the University in Chapel Hill.”

(“INAUGURATION OF THE RESEARCH COMPUTATION CENTER AT THE UNIVERSITY OF NORTH CAROLINA”, 1959, The University of North Carolina, Chapel Hill)

John Carr’s academic credentials, in both teaching and computer development, were significant when he became the 6th president of the ACM, for a 2-year term starting in 1956. It added one more milestone to the year 1956, when other significant events happened in the computing field as mentioned earlier, including an NSF initiative, driven by von Neumann’s efforts, to start a special program to support university computing facilities: in Carr’s case, he became the first sitting university academic to serve as ACM president.

ACM’s founding president was John H. Curtiss, the head of the Applied Mathematics Division at the National Bureau of Standards, as in Part 5 (i) with a leadership role at the INA at UCLA; ACM’s 2nd president was John W. Mauchly, as in Part 5 (i) one of the lead inventors of ENIAC at the University of Pennsylvania’s Moore School of Electrical Engineering, who by the time of his ACM presidency had co-founded his own computer company with ENIAC co-inventor John P. Eckert; ACM’s 3rd president was Franz Alt, who as in a quote in Part 5 (i) had been on the same Computations Committee with Derrick Lehmer overseeing ENIAC at the Aberdeen Proving Ground, and who then became deputy chief of the Computation Laboratory of the NBS; ACM’s 4th president was Samuel B. Williams, a retired Bell Laboratories computer pioneer, and an NBS consultant at the time of his ACM presidency; and ACM’s 5th president was Alston S. Householder, then director of the Mathematics Division at Oak Ridge National Laboratory.

(“Margaret R. Fox Papers, 1935-1976. Finding Aid”, by Pat Hennessy, Kevin D. Corbitt and John L. Jackson, August 1993, Charles Babbage Institute, Center for the History of Information Technology, University of Minnesota; “Alston Scott Householder”, by G.W. Stewart, October 1993, Volume 26, SIAM News; J. A. N. Lee, eds., International Biographical Dictionary of Computer Pioneers, 1995, Institute of Electrical and Electronics Engineers; “Dr. Franz Alt”, by Atsushi Akera, January/February 2006, ACM Oral History interviews; “John W. Mauchly”, Encyclopædia Britannica; and, “ACM Past Presidents”, Association for Computing Machinery)

From my standpoint, John Carr’s becoming the first sitting university academic to serve as ACM president signalled that by 1956 the wider computing community had come to view academia as ready to play a significant role in the computing field.

Also of interest to me is the fact that this ACM president happened to be a professor in the same university math department – Michigan’s – where Stephen Smale, later my Berkeley Ph.D. adviser, earned his Ph.D. in 1957, while Smale was a leading activist in left-wing and anti-war politics as detailed in Part 2.

(“Biography: Steve Smale”, Department of Mathematics, University of California, Berkeley)

However, in 1964, when the UNC Chapel Hill computer science department was founded, John Carr was not only not the founding chairman but was no longer even at UNC. He had left in 1962 for the University of Pennsylvania:

“Mr. Carr joined the faculty at the University of Pennsylvania in 1962 as professor of electrical engineering and later taught computer science. From 1965 to 1971, he served as graduate group chairman in computer science and taught there until retiring in 1993.”

(Bill Price, April 12, 1997, philly.com)

That was electrical engineering at the University of Pennsylvania, i.e., the Moore School where the first electronic computer ENIAC had been born; there, in 1965, Carr became the graduate group chairman in computer science – distinguished indeed, even if it wasn’t a computer science department.

And to Carr’s credit, in that same year 1965, Moore School Ph.D. student Richard L. Wexelblat became the world’s “first person to receive a Doctorate in Computer Science from a recognized graduate program in Computer Science”.

(“History of CIS at Penn”, Department of Computer and Information Science, Penn Engineering)

Oh well, North Carolina’s loss was Pennsylvania’s gain.

Still, I wonder why Carr left, rather unexpectedly given how the history then unfolded. Was it because UNC would not be in time to become the first university to found a computer science department? That could be a reason, since Penn, where he moved, was the birthplace of the electronic computer – a consolation prize for him, that is.

Over the years at Penn, Carr reached out internationally and brought computer science knowledge to important places in the world, including China’s Jiao Tong University, and Egypt’s Air Force Academy as its computer science department head:

“Mr. Carr developed contacts with computer scientists around the world, lecturing in the former Soviet Union and in China, where he was appointed adjunct professor of computer science at Jiao Tong University in Shanghai.

In the early 1970s, he was a visiting professor at the Mathematisch Centrum in Amsterdam, Netherlands, and at the University of Sydney, Australia.

In 1987, he headed the department of computer science at the Egyptian Air Force Academy near Cairo and oversaw the construction of a computer laboratory and curriculum development for cadets.”

(Bill Price, April 12, 1997, philly.com)

But unfortunately, John Weber Carr also became a victim of pancreatic cancer, in 1997:

“John Weber Carr 3d, 73, emeritus professor of computer science at the University of Pennsylvania School of Engineering, died of pancreatic cancer Tuesday at his home in Bryn Mawr.”

(Bill Price, April 12, 1997, philly.com)

Compared to the most influential computer science advocate, George Forsythe, a pancreatic cancer victim at 55, Carr lived a long life to 73; on the other hand, compared to Al Bowker – the Stanford administrator who had facilitated Forsythe’s initial modest step of founding the computer science division within the math department, who like Carr had moved on to another institution before a historic computer science department was founded, and who was a pancreatic cancer victim at 88 – Carr’s life was short.

I note that Carr died in the same year as Diana Forsythe and Mina Rees, i.e., George Forsythe’s daughter and Al Bowker’s mentor, respectively.

And Chapel Hill’s loss could also be a blessing in disguise.

In 1964, establishing the second computer science department among U.S. universities, UNC Chapel Hill recruited Frederick Brooks to be the founding chairman; a distinguished computer scientist who decades later, in 1999, received the ACM’s A. M. Turing Award, the highest honor in computer science as mentioned in Part 3, Brooks has enjoyed a long life – now in retirement after 51 years at Chapel Hill – and a long memory of a life even before John Carr’s days:

“Fred Brooks retired in 2015 after 51 years at UNC — but he started teaching long before he got here.

“I started regular teaching when I was in high school,” he said. “My senior year, one of the teachers came down with cancer mid-year and I got sworn in to teach geometry and trig because there wasn’t anyone else around to do it.”

Brooks started UNC’s computer science department and has worked with its faculty, staff and students since the 1960s. “Fred founded the department in the mid 1960s, and it is probably very difficult to believe this, but at the time, the notion of forming a free-standing computer science department at a liberal arts university was unheard of,” Kevin Jeffay, chairperson of the Department of Computer Science, said.

“So for that reason we are actually the second oldest computer science department in the country. So it was actually a bit of an experiment, and obviously one that worked very well.” Aside from founding the computer science department at UNC, Brooks also received the 1999 A. M. Turing Award, one of the most prestigious awards in the field of computer science. “It’s the equivalent of the Nobel Prize in computer science, so he’s internationally recognized as one of the brilliant computer scientists of our time,” Jeffay said.

Despite his achievements, Jeffay said he is very humble.

“He’s very modest and generous, always giving the credit to his students and to his collaborators,” he said. “He’s a wonderful colleague.”

Gary Bishop, a professor in the department, said Brooks is more than just his accomplishments. “He’s a giant, but he’s also a nice guy,” Bishop said.”

(“Founding UNC’s computer science department was an experiment — but it paid off for Fred Brooks”, by Maggie Budd, March 7, 2016, Daily Tar Heel)

Chapel Hill lost a former ACM president but gained a future winner of the ACM’s Turing Award. Isn’t a scientist of the highest professional honor as worthy as a top leader of the professional association?

Take a look at Brooks’s achievements before Chapel Hill, listed in his Turing Award biography, and one is truly impressed:

“Frederick Phillips Brooks, Jr. was born April 19, 1931, in Durham, North Carolina. … he earned his AB in physics at Duke University in 1953. Brooks then joined the pioneering degree program in computer science at Harvard University, where he earned his SM in 1955 and his PhD in 1956. At Harvard he was a student of Howard Aiken, who during World War II developed the Harvard Mark I, one of the largest electromechanical calculators ever built, and the first automatic digital calculator built in the United States.

After graduation Brooks was recruited by IBM, where for the first several years of his career he served in various positions in Poughkeepsie and Yorktown Heights, New York. During that time he helped design the IBM 7030 “Stretch” supercomputer… Stretch was IBM’s first transistorized computer, containing some 150,000 transistors. Although it was a commercial failure, it pioneered a number of advanced concepts quite important to contemporary computing… Brooks went on to participate in the design of the architecture of the IBM Harvest, a variant of the Stretch with special features for the National Security Agency. He later helped the government assess the computing capability of the Soviet Union.

Brooks was next assigned to help design the IBM 8000, a new transistorized mainframe computer intended to replace the IBM 700/7000 series. But by the early 1960s, the global market for computers was incredibly crowded, with numerous companies offering incompatible, proprietary systems. As customers replaced their older systems with faster ones, they realized that their investment in software was a growing problem, because they had to rewrite it for every new system. Bob Evans promoted IBM’s vision to develop a single product line of general purpose computers with a common instruction set that permitted customers to preserve their investment in software as they moved from slower machines to faster ones. Evans assigned Brooks to lead the team to design this product line, called the System/360, which was announced in 1964. Brooks coined the term “computer architecture” to mean the structure and behavior of computer processors and associated devices, as separate from the details of any particular hardware implementation.

The importance of the System/360 cannot be understated: it was a widely successful project that transformed the face of business computing and reshaped the landscape of the computer companies throughout the world. …

While the hardware architecture for the System/360 was well underway, it was clear that there was considerable risk in delivering the operating system for the new series of machines. Brooks was assigned to lead the software team in building what was perhaps the largest operating system project of its time. …

After the successful delivery of the System/360 and its operating system, Brooks was invited to the University of North Carolina, where he founded the University’s computer science department in 1964. …”

(“FREDERICK (“FRED”) BROOKS”, A. M. Turing Award, Association for Computing Machinery)

As described above, Frederick Brooks was the team leader for IBM’s revolutionary and hugely successful System/360 computer that changed the face of business computing, and in 1964, the same year this IBM product was announced, he was invited to be the founding chairman of Chapel Hill’s computer science department – at the age of only 33.

Brooks’s moving on from IBM to a prominent role in academic computer science was consistent with the fact that by the late 1950s and early 1960s IBM had become the main backer of, and influence on, academic computing activities:

“… Indeed, a survey of US academic computing in late 1959 argued that “it is fair to say that, in many cases, to the extent that a university computing activity has a purpose at all, it has been made for them by IBM.””

(Matti Tedre, The Science of Computing: Shaping a Discipline, 2015, CRC Press)

I notice that Brooks received his Harvard Ph.D. in 1956, the year so many significant events occurred in the computing field.

However, I note that according to his Turing Award biography Brooks had not been a university faculty member prior to the Chapel Hill founding chairman job, having worked exclusively at IBM – even if, in the 2015 story of his retirement quoted earlier, Brooks emphasized that he had begun teaching as a high school senior substituting for a cancer-stricken math teacher.

In light of the above fact, I have to modify my earlier-stated conclusion regarding Louis Fein’s grievance over not having been given a founding position in computer science: Brooks’s case illustrates that someone without established academic credentials, i.e., someone who had not been a university professor, could still become the founding chairman of the second, if not the first, computer science department in the U.S., provided the person had an exceptionally strong computer industry record – in this case with an IBM brand name.

In other words, Fein could wish he had been a chief engineer for a brand-name IBM computer, rather than a Raytheon computer few knew about.

Experience in a military-oriented technology company like Raytheon, or those Purdue’s Conte and Rosen had worked in, was a significant credential; but Brooks’s experience at IBM developing a computer for the National Security Agency seemed an adequate substitute for such a military-orientation prerequisite, if there was one – he was also a graduate of Duke University, where the U.S. Army Office of Ordnance Research was located, as in Part 5 (i).

Still, Brooks’s IBM achievements may not have been the only reason that Chapel Hill invited him to found the second academic computer science department in the U.S.: as quoted from his Turing Award biography, Brooks had received his Harvard Ph.D. under Howard Aiken, who had during World War II led the development of the Harvard Mark I, “one of the largest electromechanical calculators ever built, and the first automatic digital calculator built in the United States”.

In the historical timeline, the Harvard Mark I had been built before ENIAC, and von Neumann had paid a visit to Aiken and Mark I before taking part in the ENIAC project, as previously quoted in Part 5 (i):

“In 1943, during World War II, von Neumann was invited to join the Manhattan project – the project to develop the atomic bomb – because of his work on fluid dynamics. He soon realized that the problems he was working on involved a lot of computational work which might take years to complete. He submitted a request for help, and in 1944 he was presented a list of people he could visit. He visited Howard Aiken and saw his Harvard Mark I (ASCC) calculator. …”

(“Doing Mathematics on the ENIAC. Von Neumann’s and Lehmer’s different visions”, by Liesbeth De Mol, in E. Wilhelmus, I. Witzke (eds.), Mathematical practice and development throughout History, 2008, Logos Verlag)

Thus Aiken had been a prominent computer pioneer ahead of von Neumann, the “father of computers”.

But Mark I was not an “electronic” computer, only “electromechanical”, and was not Harvard-built but IBM-built under Harvard professor Howard Aiken’s guidance:

“During Aiken’s initial years as a Harvard graduate student, he followed the usual program of studies. He then shifted his allegiance to the field of electronics, the physics of vacuum tubes, and the properties of circuits, working directly under Professor E. Leon Chaffee… He began teaching in his second year as a graduate student and, after receiving his PhD in 1938, was appointed a faculty instructor… Aiken never published any of the results of his thesis research; all of his published writings dealt with one or another aspect of computing and computers.

Aiken’s 1937 proposal for a calculating machine began with a series of paragraphs devoted to an account of the pioneers in machine calculation: Pascal, Moreland, Leibniz, and, above all, Babbage. …

… On May 10, 1939, about a year and a half after Aiken’s first approach to IBM, James Bryce wrote Aiken that all the papers had been signed and that he was now “engaged in getting an appropriation put through.” He would then “issue the shop orders” and “begin the actual work of designing and constructing the calculating machine.” …

In January 1943, the Harvard machine was completed in the North Street Laboratory at Endicott, N.Y., and ran a test problem. But only in December 1943 was the machine demonstrated to members of the Harvard faculty. …

On April 17, 1944, Harvard’s president, James Bryant Conant, reported to IBM’s president, Thomas J. Watson, Sr., that “the calculating machine” had been “put into operative condition.” …

The IBM ASCC (the Harvard Mark I) was the first of a series of four computers associated with Howard Aiken. Mark I and Mark II were electromagnetic, using relays, but Mark III and Mark IV had a variety of electronic components, including vacuum tubes and solid-state transistors. …”

(“Howard Hathaway Aiken”, by J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

Besides leading the development of “the first automatic digital calculator” built in the U.S. and the subsequent Mark II, III & IV machines, Aiken made another pioneering contribution to the computer field: founding the world’s first academic program in the future discipline of computer science – years before computer science departments, influenced by it, came into existence:

“Howard Aiken’s place in the history of computers, however, is not to be measured by these four machines, interesting and important as they may have been. He recognized from the start that the computers being planned and constructed would require mathematicians to program them, and he was aware of the shortage of such mathematically trained men and women. To fill this need, Aiken convinced Harvard to establish a course of studies leading to the master’s degree, and eventually also the doctorate, in what was to become computer science. Just as Aiken–by the force of his success, abetted by his ability to find outside funding for his programs–achieved tenure and rose to become the first full professor in the new domain of computer science, so he inaugurated at Harvard what appears to have been the first such academic program anywhere in the world. The roster of his students contains the names of many who became well known in this subject, including Gerrit Blaauw, Frederick Brooks, Jr., Kenneth Iverson, and Anthony Oettinger. As other later programs came into being, they drew directly or indirectly on Aiken’s experience at Harvard. …”

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

Therefore, Frederick Brooks’s IBM achievements and founding of the UNC Chapel Hill computer science department also reflected the pride and distinction of his scientific lineage, following his teacher, who had been the inventor of the first automatic digital calculator in the U.S. and the founder of the first academic computer science program in the world.

A question then naturally arises in comparison to von Neumann: given Aiken’s prominent pioneering achievements, why hasn’t he been regarded, as von Neumann has, as the “father of computers”, or at least a “father of computers”?

One clear reason is that Aiken’s first “calculating machine” wasn’t electronic. As quoted earlier, his Ph.D. research had been in electronics, especially vacuum tubes, which would become the basic electronic building blocks, but he never published in that field, instead spending his time on the calculating machines project which, for Mark I and Mark II, did not use vacuum tubes as ENIAC did.

Aiken was very conservative, preferring to use only reliable components, and in those days mechanical relays were much more reliable than electronics, albeit much slower:

“… Of the four, Mark I was the most memorable because it produced such reliable results, and could run continuously for 24 hours a day, seven days a week. Thus, although it was very slow compared with any of the electronic machines, it produced a huge output–since unlike its electronic rivals, which had long “down times”–it ran continuously. …

… The Mark I was used at Harvard by a US Navy crew that included Grace Murray Hopper and Richard Bloch. Aiken was extremely conservative in his use of well-tested, well-understood elements, using electromechanical decimal rotary counters and relays…”

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

As a result of that conservatism, Aiken’s technology for Mark I & II was soon rendered obsolete by the emergence of the electronic ENIAC:

“… Of course, by 1946, when Mark II was becoming operational, ENIAC (the Electronic Numerical Integrator and Computer, built for the Ballistic Research Laboratory at Aberdeen, Maryland, by the University of Pennsylvania’s Moore School of Electrical Engineering) had been completed and had demonstrated the enormous advantage of electronic elements over relays in large-scale computing machines. The path to the future thus was shifted from Aiken’s machines to ENIAC. Although Mark I and Mark II continued to do useful work for many years (which may be taken as an index of the increasing national need for computing services), their technology was obsolete.”

(I. Bernard Cohen and Gregory W. Welch with Robert V. D. Campbell, eds., Makin’ Numbers: Howard Aiken and the Computer, 1999, MIT Press)

There might also be doubt as to whether Aiken’s Harvard Ph.D. expertise in vacuum tubes was actually applicable to building an electronic computer:

“Although Aiken’s field of science for his doctorate was electron physics, and although the subject of his dissertation was space charge within vacuum tubes (or electron tubes), his expertise was in the physics of vacuum tubes and not in electronics, not in the design and application of circuits using vacuum tubes. …”

(I. Bernard Cohen, Howard Aiken: Portrait of a Computer Pioneer, 2000, The MIT Press)

A second reason why Aiken has not but von Neumann has been regarded as the “father of computers” is von Neumann’s strong advocacy for the “stored program”, which led to the notion of the “von Neumann architecture” discussed in Part 5 (i).

Aiken, in his conservatism, did not trust the “stored program” concept, and so his Mark machines – I, II, III & IV – never became modern general-purpose computers:

“… Aiken is sometimes held to be reactionary because he was always wary of the concept of the “stored program” and did not incorporate it into any of his later machines. This stance did put him out of step with the main lines of computer architecture in what we may call the post-Aiken era, but it must be kept in mind that there are vast fields of computer application today in which separate identity of program must be maintained, for example, in telephone technology and what is known as ROM (“read-only memory”). In fact, computers without the stored-program feature are often designated today (for instance, by Texas Instruments Corporation) as embodying “Harvard architecture,” by which is meant “Aiken architecture.””

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)
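To make the architectural distinction in the above quote concrete, here is a minimal toy sketch – my own illustration in Python, not modeled on any actual historical machine – contrasting a “Harvard architecture” organization, where the program resides in its own separate store (as in ROM) apart from the data, with a “stored-program” organization, where instructions and data share one memory and a running program can in principle read, or even rewrite, its own instructions as data:

# Toy illustration only (not any historical machine's actual design):
# a Harvard-style machine keeps instructions in a separate, fixed store,
# while a stored-program machine keeps instructions and data in one
# shared memory, so code can be treated as data.

class HarvardMachine:
    def __init__(self, program, data):
        self.program = tuple(program)   # separate instruction store, effectively read-only
        self.data = list(data)          # separate writable data store

    def run(self):
        for op, addr, operand in self.program:
            if op == "ADD":
                self.data[addr] += operand
        return self.data

class StoredProgramMachine:
    def __init__(self, memory, code_start, code_end):
        self.memory = list(memory)      # one memory holds both code and data
        self.code_start, self.code_end = code_start, code_end

    def run(self):
        for i in range(self.code_start, self.code_end):
            op, addr, operand = self.memory[i]
            if op == "ADD":
                self.memory[addr] += operand   # addr may even point at an instruction cell
        return self.memory

if __name__ == "__main__":
    prog = [("ADD", 0, 5), ("ADD", 1, 7)]
    print(HarvardMachine(prog, [0, 0]).run())         # -> [5, 7]
    mem = [0, 0] + prog                                # cells 0-1: data, cells 2-3: code
    print(StoredProgramMachine(mem, 2, 4).run()[:2])   # -> [5, 7]

In the toy stored-program machine the instruction cells and data cells are interchangeable addresses in one memory – exactly the flexibility Aiken was wary of – whereas in the Harvard-style machine the program sits in a fixed, separate store that the running computation cannot touch.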

Aiken’s conservatism also meant that his work did not carry the broader relevance and potential of the work of von Neumann, who subsequently became a prominent leader of U.S. science as detailed in Part 5 (i).

On the other hand, compared to John von Neumann, Howard Aiken enjoyed many more years of life.

Depending on the biographical source, Aiken was born on March 8 or 9, 1900, in New Jersey.

(“Howard H. Aiken: 1964 Harry H. Goode Memorial Award Recipient”, IEEE Computer Society; and, “Howard Hathaway Aiken”, Encyclopædia Britannica)

So when von Neumann died of cancer in February 1957 at the age of 53, Aiken was already nearly 57. He was alive, but had become even more conservative, and out of date in his scientific knowledge – a fact that even Fred Brooks, a Harvard Ph.D. graduate under Aiken and one of Aiken’s “most devoted disciples”, later acknowledged:

“In certain respects, Aiken had rapidly become a conservative figure in the world of computing. In the 1950s he was already “old” by the standards of this rapidly advancing science, art, and technology. Computer science and invention had become a young man’s game. … In the words of Maurice Wilkes, the new computer innovators were young men with “green fingers for electronic circuits,” many of whom had come from experience with radar and “were used to wide bandwidths and short pulses.”

… Even Fred Brooks, one of Aiken’s most devoted disciples, admitted in retrospect that the Harvard Comp Lab had a “Charles River view of the world”—that the students were not fully aware of the developments taking place in other institutions (including MIT, just down the river).”

(I. Bernard Cohen, 2000, The MIT Press)

But Howard Aiken had his own bigger ambitions, namely to become a businessman and to start his own computer company.

Like others in the computing field, such as Louis Fein, Aiken was a consultant to military research projects, in his case at Lockheed Missile and Space Division in California; in 1961 Aiken retired early from Harvard with the intention of starting his own company, to develop microcomputers with the help of a “close associate”, Cuthbert Hurd, and a Lockheed assistant director of engineering:

“In 1961, Aiken decided to take advantage of Harvard’s provision for early retirement and to begin a new career. He could have continued in his professorship for another five years, and possibly even for a few years after that. Instead, he chose to take advantage of a university rule that permitted tenured members of the faculty to retire at the age of 60. … Mary Aiken recalls that he had a disagreement with some member of the university administration and decided that the time had come to start a new life. …

Tony Oettinger has written that Aiken had always said that he was at least as smart as most businessmen and wanted to prove that he was right. Fred Brooks concurs in this opinion. In a telephone conversation with Cuthbert Hurd, who was a close associate of Aiken’s, especially after his retirement from Harvard, I was given confirmation of this reason for Aiken’s having retired early from Harvard. Hurd said that he had never discussed this matter with Aiken, but that on two or three occasions when Aiken was in California, where he was a regular consultant for the Lockheed Missile and Space Division, the two of them had “talked at great length about organizing a company.” “If we had done it and if it had been successful,” Hurd mused, “it would have been the first microcomputer computer company in the world.” Hurd told me that “an Assistant Director of Engineering at Lockheed . . . was doing the design work,” and that “Howard, along with that man and me” would form the new company. …”

(I. Bernard Cohen, 2000, The MIT Press)

“It would have been the first microcomputer computer company in the world”, as Aiken’s close associate Cuthbert Hurd later recalled, had they gone ahead and formed the company together.

Rather intriguingly, Lockheed Missile and Space Division, based in the future Silicon Valley region of Northern California, had been founded in 1956 – the same year as many of the significant events in the computing field mentioned earlier, such as the founding of the IBM San Jose research laboratory; Lockheed settled in the neighboring city of Sunnyvale:

“The Bayshore Freeway was still a two-lane road, and 275 acres of bean fields adjacent to Moffett Field were purchased in 1956 to become the home of Lockheed Missile & Space Division (now Lockheed Martin Space Systems Company). The company chose the site because of the stellar talent pool provided by nearby colleges and universities, the good weather, quality of living and proximity to an airport.

But while times seemed quiet, America was shrouded in uncertainty, and Lockheed was expanding its mission to confront the challenges of the Cold War. The Soviet Union had developed an offensive nuclear capability, and the United States was in desperate need of better intelligence to characterize the potential threat and respond accordingly.

Addressing both concerns, the first reconnaissance satellite, called Corona, and the Polaris submarine-launched ballistic missiles (SLBMs) were designed and built in just a few short years by the company’s engineers and scientists — armed only with slide rules, mechanical calculators, the basic laws of physics and an abundance of imagination.

By 1960, the Sunnyvale population had reached 53,000 — a five-fold increase over what it was just 10 years earlier. Employees at Lockheed, by that time, topped 20,000. In parallel to the population boom, Sunnyvale would become a preferred location for many semiconductor and high-technology companies.”

(“Lockheed grew up with Sunnyvale”, Myles D. Crandall, February 25, 2007, Silicon Valley Business Journal)

It’s hard to believe but true that the Lockheed military aerospace engineers came in 1956 “armed only with slide rules, mechanical calculators” – so primitive compared to the Southern California military aerospace companies like Raytheon, which had had their own computer development activities even in the early 1950s, as reviewed in Part 5 (i) and as shown in Fein’s past work.

But at least they came; and soon some of them were eager – I would assume there were others besides an assistant director of engineering at a management level – to form a new company with Howard Aiken, the ‘godfather’ of mechanical calculators if I may say so, to develop microcomputers.

These Lockheed engineers may have also wished for computing help from Hewlett-Packard, the nascent Silicon Valley’s founding company – but HP wasn’t into it as later recalled by Silicon Valley pioneer Douglas Engelbart about that era, as discussed in Part 5 (i).

Nonetheless it was one more factor turning things in favor of Northern California in 1956, when in his hospital bed John von Neumann planned a move to the University of California, likely choosing Southern California’s UCLA over Northern California’s UC Berkeley as reviewed in Part 5 (i).

Lockheed’s 1956 arrival, initially at Stanford’s industrial park, also boosted that university’s Cold War-oriented scientific research guided by Provost Frederick Terman, with Lockheed bringing in two leading missiles and dynamics experts, Nicholas Hoff and Daniel Bershader, to Stanford’s faculty:

“… Lockheed’s Space and Missile Division, which decided to locate in Stanford’s industrial park in 1956, suggested that Terman let the company offer an appointment at Stanford to one of the nation’s leading aeronautical engineers, Nicholas Hoff, whom the company believed it would be unable to hire without this incentive. Hoff, head of aeronautical engineering at Brooklyn Polytechnic Institute, had received his degree from Stanford in the early 1940s and after the war had begun research on supersonic aircraft and missiles with the support of substantial military contracts. To Terman, Lockheed’s proposal was yet another way to link firmly the engineering school and industry and, at no cost to Stanford, to improve the aeronautical engineering program, which had been languishing for lack of funds since World War II. … Lockheed also arranged for the head of its gas dynamics division, Daniel Bershader, to teach part-time at Stanford; he soon received a permanent appointment. As a result of Lockheed’s selection of Hoff and Bershader, Stanford’s aeronautical engineering program shifted decisively from research on commercial airplane structures to research related to guided missiles and space vehicles. … ”

(Rebecca S. Lowen, 1997, University of California Press)

In 1956, even more important to Silicon Valley, and to the landscape of high technology, was the arrival of the semiconductor industry that would give meaning to the name “Silicon Valley”:

“In September 1955 William Shockley and Arnold Beckman agreed to found the Shockley Semiconductor Laboratory as a Division of Beckman Instruments “to engage promptly and vigorously in activities related to semiconductors.” Shockley rented a building … in Mountain View, California, began recruiting “the most creative team in the world for developing and producing transistors.” He attracted extremely capable engineers and scientists, including Gordon Moore and Robert Noyce, who learned about and developed technologies and processes related to silicon and diffusion while working there. In December 1956 Shockley shared the Nobel Prize in Physics for inventing the transistor, but his staff was becoming disenchanted with his difficult management style. They also felt the company should pursue more immediate opportunities for producing silicon transistors rather than the distant promise of a challenging four-layer p-n-p-n diode he had conceived at Bell Labs for telephone switching applications.

After unsuccessfully asking Beckman to hire a new manager, eight Shockley employees – including Moore and Noyce plus Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last and Sheldon Roberts – resigned in September 1957 and founded the Fairchild Semiconductor Corporation in Palo Alto. Many other employees, from technicians to PhDs, soon followed. Over the next decade, Fairchild grew into one of the most important and innovative companies in the semiconductor industry, laying the technological and cultural foundations of Silicon Valley while spinning off dozens of new high-tech start-ups, including Advanced Micro Devices (AMD) and Intel. Shockley continued pursuing his four-layer diode but his company never realized a profit. Beckman sold the operation to Clevite Corporation in 1960. Shockley became a professor of electrical engineering and applied science at Stanford University.

(“1956: Silicon Comes to Silicon Valley”, The Silicon Engine, Computer History Museum)

As described above, the 1957 rebellion at the Shockley Semiconductor Laboratory by 8 disciples of William Shockley – a 1956 Nobel Physics Prize winner for his role in the invention of the transistor – against his “difficult management style” and orthodox scientific focus was a watershed event that led to the founding of the Fairchild Semiconductor Corporation by this younger generation of scientists and engineers, a company that would lay “the technological and cultural foundations of Silicon Valley”.

In that bustling environment, amidst a new semiconductor industry and Lockheed engineers eager to take part, Howard Aiken – once a prominent computer pioneer and leading competitor to John von Neumann, but never winning recognition as a “father of computers” – had a second chance in 1961, upon retiring from Harvard, to become the ‘father of microcomputers’, if I may use the phrase.

Doing so in the fledgling Silicon Valley might even let Aiken share some glories of its founding with distinguished others, most notably Stanford Provost Frederick Terman – born like Aiken at the turn of the century and mentor to the Hewlett-Packard founders – and the Fairchild Semiconductor founding group – rebels against their conservative mentor William Shockley.

But this second chance did not materialize for Aiken, as Hurd later recalled that they did not follow through with the plan:

“… Aiken, Hurd continued, “wanted me to help raise the money.” They “never followed through” with this plan. “I thought that maybe he wanted to be rich,” Hurd concluded, “and was thinking about starting the company for that reason.””

(I. Bernard Cohen, 2000, The MIT Press)

Now I am curious. Why did Aiken and Hurd not follow through with their plan, given that, as quoted earlier, “an Assistant Director of Engineering at Lockheed . . . was doing the design work” already?

Hurd’s somewhat coy explanation was that Aiken “wanted me to help raise the money”, and he thought Aiken “wanted to be rich” and “was thinking about starting the company for that reason”.

Reading Hurd’s descriptions, I can see two possible scenarios for why they did not follow through, both to do with money: one, the two men could not agree on ownership sharing, i.e., Hurd was unwilling to settle for the role of ‘helper’, feeling that Aiken wanted too much; and two, Hurd was idealistic and viewed Aiken as too money-oriented to go into computer development with.

Thus, finding out who Cuthbert Hurd was would help in understanding his mindset in relation to Aiken.

Hurd had also been a computer pioneer, a key driving force behind IBM’s first commercial general-purpose computer, the IBM Model 701, the so-called “Defense Calculator” discussed in Part 5 (i), which was launched during the Korean War era:

“Dr. Hurd was a mathematician at the Atomic Energy Commission laboratory in Oak Ridge, Tenn., when he joined I.B.M. in 1949 as its director of applied science. A year later, after the outbreak of the Korean War, he was one of two people assigned to determine how I.B.M. could contribute to the war effort.

Making a bold proposal, Dr. Hurd and his partner, James Birkenstock, recommended that the company design and build a general-purpose computer, bearing the heavy expense itself so that I.B.M. would own the patents. The new machine, the I.B.M. 701, cost $3 million to develop and was introduced with great fanfare in 1952, putting I.B.M. on the path to becoming the dominant force in the computer industry.

Dr. Hurd went on to help develop several other I.B.M. computers and served as a consultant to the company for years after leaving in 1962 to become chairman of the board of the Computer Usage Company, the first independent computer software company. …”

(“Cuthbert Hurd, 85, Computer Pioneer at I.B.M.”, by Laurence Zuckerman, June 2, 1996, The New York Times)

Given Hurd’s background as above, the first scenario for why Aiken and Hurd did not realize their plan to start a computer company is quite possible: having been a key IBM computer-development executive, Hurd would not have been content with just helping “raise the money” for the more famous Aiken while settling for much less himself. Interestingly, Hurd left IBM in 1962, the year after Aiken had left Harvard in 1961, so the timing was right for the two to consider starting a computer company together; and the fact that Hurd, originally a mathematician, left IBM to become the board chairman of “the first independent computer software company”, the Computer Usage Company, confirmed that he deserved a top leadership position in a smaller company.

It also made logical sense in this context that, without collaboration with Aiken, Hurd would focus on computer software and usage, not on developing computers.

Not getting to found the world’s first microcomputer company, Aiken – who decades earlier, before attending graduate school, had been the chief engineer at the Madison Gas and Electric Company in Wisconsin – in 1961 indeed became a businessman in New York, while also spending part of his academic retirement time in Fort Lauderdale, Florida, as the University of Miami’s “Distinguished Professor of Information Technology”:

“… While in high school he also worked twelve hours a night at the Indianapolis Light and Heat Company. In 1919 he entered the University of Wisconsin, in Madison, supporting himself throughout his four years there by working as an operating engineer at the Madison Gas and Electric Company. Aiken received a degree in electrical engineering in 1923; he continued to work for Madison Gas, now as chief engineer, responsible for the design and reconstruction of the company’s electric generating station. He remained with Madison Gas until 1928.

After spending ten years as an electrical engineer, Aiken felt he had chosen the wrong field. He decided to study mathematics and physics, and enrolled for a year at the University of Chicago for that purpose. He continued his studies at Harvard, where he obtained a master’s degree in physics in 1937 and a doctorate in physics in 1939. …

In 1961 Aiken retired from Harvard and moved to Fort Lauderdale, Florida. He became Distinguished Professor of Information Technology at the University of Miami, helping the school set up a computer science program and a computing center. He also founded Howard Aiken Industries Incorporated, a New York consulting concern. He had always told friends that a good professor with half a mind should be able to run circles around people in industry. Now he would prove it. He said he would spend the remainder of his life trying to make money, and he did just that.”

(Robert Slater, Portraits in Silicon, 1987, The MIT Press)

As indicated in the above, Aiken’s friends, not just Hurd, knew that Aiken had always wanted to make money as a businessman in the industry; in my opinion, even the name he chose for his company, Howard Aiken Industries Incorporated, showed very clearly his ambition that these would be his own industries.

Aiken’s company specialized in taking over ailing companies, fixing and then selling them:

“… Essentially, Aiken Industries specialized in taking over companies that were ailing and bringing them back to good health, at which point they were sold. He could not help but be active at the university level, and he accepted a part-time teaching post at the nearby University of Miami, becoming a colleague of John Curtiss.”

(I. Bernard Cohen, 2000, The MIT Press)

I have to wonder how much Aiken, busy making money in his industries, actually worked on setting up “a computer science program and a computing center” at the University of Miami, as stated in the second-to-last quote, since, as the above quote notes, John Curtiss was already a professor there. As reviewed earlier and in Part 5 (i), Curtiss, a former head of the applied mathematics division at the National Bureau of Standards overseeing the INA at UCLA, and the founding president of the ACM, had been fired from the Bureau in 1953 – a victim of McCarthyism-type politics.

Richard McGrath, a lawyer in a senior role at Aiken Industries, later confirmed that he and others were “helping him to build his company” – Howard Aiken Industries Inc., later called Norlin Technologies Inc. – that they “assisted” Aiken and “managed … for Howard”: the kind of role Cuthbert Hurd had likely been unwilling to accept:

“I was able to learn more about Aiken’s activities during the years after his retirement from Harvard from Richard McGrath, an attorney who was closely associated with Aiken from 1962 to 1973. During these years, McGrath—accompanied by Martin Flaherty and James Marsh—was “almost constantly on the road” with Aiken, “helping him to build his company (which was originally known as Howard Aiken Industries Inc. but was later called Norlin Technologies Inc.).” The three of them “assisted” Aiken in the “process of acquiring companies that became divisions of Aiken Industries.” According to McGrath, Flaherty and Marsh “then managed several of the companies for Howard” while he served in a legal capacity.”

(I. Bernard Cohen, 2000, The MIT Press)

According to McGrath, Aiken’s management style was “like a visiting fireman”:

“McGrath remembers Aiken as “a born teacher and mentor” who “left an indelible imprint on all our lives.” … What McGrath found particularly noteworthy was Aiken’s “management style,” the way in which “he put together a highly successful high technology company with multiple divisions, without ever having an office or secretary of his own.” Aiken, McGrath concluded, “literally worked out of his hat,” perfecting “the art of visitation, traveling from division to division like a visiting fireman.””

(I. Bernard Cohen, 2000, The MIT Press)

I can imagine Charles River water literally pouring out of the magic hat of this “visiting fireman”.

If Howard Aiken thought of himself and his name as the real worth, that shouldn’t have been unexpected. The relationship between Aiken, Harvard and IBM in the development of the Harvard Mark I had illustrated the prominent sense of special importance with which Aiken and Harvard regarded themselves.

The Harvard-IBM agreement for building the Mark machines clearly defined the special privilege position Harvard had over IBM:

“By March 31, 1939, the final agreement had been drawn up and signed. IBM agreed (1) “to construct for Harvard an automatic computing plant comprising machines for automatically carrying out a series of mathematical computations adaptable for the solution of problems in scientific fields.” Harvard agreed (2) to furnish “without charge” the structural foundation, and (3) to appoint “certain members of the faculty or staff or student body” to cooperate with “the engineering and research divisions of IBM in completing the design and testing.” It was agreed (4) that all Harvard personnel assigned to this project would sign a standard “nondisclosure” agreement to protect IBM’s proprietary technical and inventive rights. IBM (5) would receive no compensation, nor were any charges to be made to Harvard. The finished “plant” would become “the property of Harvard.” …”

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

In the 5-point agreement above between IBM and Harvard, other than Point (2), which logically assigned Harvard the role of providing the space to house the machines, and Point (4), which prevented Harvard personnel from disclosing IBM proprietary information, the stipulations were in Harvard’s favor: they obliged IBM to pay for building the Mark machines, to do the construction with the involvement of some Harvard personnel, and to give the finished product’s full ownership to Harvard.

Moreover, after Mark I’s completion, when IBM president Thomas Watson, Sr. was going to attend the Harvard ceremonies for its dedication, Harvard was so brimming with self-esteem that its press release scarcely acknowledged that the machine had been built by IBM, instead emphasizing that “the inventor, Commander Howard H. Aiken, U.S.N.R,” was in charge of the project; Harvard’s attitude really irritated Watson:

“The Harvard News Office, in close consultation with Aiken, prepared a news release. It was evidently not considered necessary to clear the release with IBM… The release was headed “World’s greatest mathematical calculator” and bore the statement: “The NAVY, which has sole use of the machine, has approved this story and set this release date [Monday papers, August 7, 1944].” The first five paragraphs … stated that the machine would be presented to Harvard by IBM, that it would solve many types of mathematical problems, that the presentation would be made “by Mr. Thomas J. Watson, president of International Business Machines Corporation,” that the machine was “new in principle,” and was an “algebraic super-brain.” Then followed the bold unqualified statement that “In charge of the activity…is the inventor, Commander Howard H. Aiken, U.S.N.R,” who “worked out the theory which made the machine possible.” It may be observed that not only was Aiken designated “the inventor,” but no reason had been given thus far for IBM being the donor-it had not even been mentioned that IBM had actually constructed the machine. In fact, in the whole eight pages, the only reference to IBM’s contribution was a single paragraph later on in the release.

Two years of research were required to develop the basic theory. Six years of design, construction, and testing were necessary to transform Commander Aiken’s original conception into a completed machine. This work was carried on at the Engineering Laboratory of the International Business Machines Corporation at Endicott, N.Y., under the joint direction of Commander Aiken and Clair D. Lake. They were assisted in the detailed design of the machine by Frank E. Hamilton and Benjamin M. Durfee.

It is said that when Watson arrived in Boston accompanied by his wife and first saw the news story, he became so irate that he even planned to return to New York without attending either the ceremonial luncheon or the formal dedication ceremonies. When Watson arrived at his hotel, he telephoned–so the story goes–to his Harvard hosts, threatening to boycott the ceremonies on the following day. Conant and Aiken thereupon rushed from Cambridge to Boston to placate Watson, who launched into a furious tirade against Aiken and (presumably) Harvard. Evidently Conant and Aiken succeeded in calming Watson, who did attend the dedication on the following day and gave a star performance.

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

Learning how Harvard and Aiken had treated IBM president Thomas Watson in the days of Mark I helps one understand why, when a mid-level IBM executive like Cuthbert Hurd was interested in getting into business with Aiken in 1961-1962, it would likely have had to be Hurd helping Aiken build Aiken’s company – something lawyer Richard McGrath then did.

In 1967 Aiken retired again, this time from his own company – Howard Aiken Industries Inc., later renamed Norlin Technologies Inc., as quoted earlier – and became its vice chairman of the board; once more, Aiken did not fully retire but returned to working as a computer consultant for big technology companies, now Monsanto in addition to Lockheed:

“During the 1973 interview, Hank Tropp questioned Aiken about aspects of his life and career after leaving Harvard. Aiken referred, first of all, to his “forming Aiken Industries, beginning in 1961” and his becoming “vice-chairman of the board” in 1967. “So now,” Aiken said, “I go to board meetings, but I’m not going at it the way I used to. . . . When they kicked me out of Harvard, I had to find a new job and that was Aiken Industries. And when they kicked me out, I had to find a new job and went into the consulting business. So now I spend a good deal of time at Monsanto.” … Aiken said that he had been a consultant at Lockheed “for many years,” but that he had “quit that this year.”

Once the subject had been brought up, Aiken felt the need to discuss the subject at length. “You can’t quit,” he said. “At the time you quit, you’ve had it.” “If I were to quit work and sit here in this study,” he continued, “I think I’d be dead very soon. I don’t think I’d last.”

… During his last years, Aiken continued his long-time service for the aerospace industry in California, making periodic visits to Lockheed Missiles Company. Cuthbert Hurd gave me a list of some people with whom Aiken was associated in California when he used to come out to the West Coast on his regular visits. One of them was George Garrett, who, Hurd informed me, “was for a while the Director of Computer Activities at Lockheed Missiles. Howard and George saw a great deal of each other for a certain period.” …”

(I. Bernard Cohen, 2000, The MIT Press)

As I have remarked earlier, Howard Aiken was a “godfather” of what Lockheed Missiles was doing in the computing field – and he did his work by visiting regularly, like a fireman.

By now Aiken was rich, and he continued to care about developing microcomputers, or what would become “personal computers”; in 1970, Aiken again discussed with Cuthbert Hurd a plan to start a new computer company, PANDATA – this time, as the investor, he would offer Hurd the position of board chairman:

“Hurd, on another occasion, gave some further details on a new company that Aiken had proposed to form. He recalled that he, Aiken and William Main had met several times in 1970 “to discuss the formation of a corporation to be called PANDATA.” The three had “discussed the idea of what was to become a microprocessor and personal computer.” Aiken, according to Hurd, “had an early vision of the usefulness of such devices,” “believed that they could be mass produced at a low cost,” and “wished to form an integrated company to manufacture and sell them.” Aiken wanted Hurd “to help form the company, be chairman of the board, and raise the money.” Aiken himself “wished to make a considerable investment in the new company.” …”

(I. Bernard Cohen, 2000, The MIT Press)

So it appears that in 1970 Howard Aiken had a third chance – with his idea of “a microprocessor and personal computer” – to become, this time, the ‘father of personal computers’.

The name of the third partner this time – the previous time it would have been a Lockheed assistant director of engineering – was reported in Bernard Cohen’s biography of Aiken: William Main.

Main had been Aiken’s co-organizer of a 1962 symposium on switching theory in space technology, sponsored by the Air Force Office of Scientific Research and the Lockheed Missiles and Space Company in Sunnyvale.

(Howard Aiken and William F. Main, eds., Switching theory in space technology: [Symposium on the Application of Switching Theory in Space Technology, held at Sunnyvale, California, February 27-28 and March 1, 1962], 1963, Stanford University Press)

But again, a new computer company with Aiken and Hurd together did not materialize:

“… Hurd reported, however, that he “was busy at the time with other activities” and that Aiken “died before the venture could be launched.””

(I. Bernard Cohen, 2000, The MIT Press)

At least Hurd was interested this time, in 1970, as quoted – apparently because he was offered the board chairmanship – but understandably he was already board chairman of the Computer Usage Company and so would have preferred to do it with Aiken sometime later.

With or without Hurd, Aiken continued to pursue his vision of miniaturization of computers – now with the Monsanto Chemical Company:

“One of Aiken’s final computer-related assignments in these post-retirement years was for the Monsanto Chemical Company, which was trying to develop magnetic bubbles as the basis of a new memory technology. At the time of the 1973 interview, Aiken was enthusiastic about magnetic-bubble technology. This led him to talk about miniaturization in general. He had on the table a hand-held Bowman electronic calculator, and he showed an obvious sense of delight as he discussed how powerful a tool this small device was. He said that he foresaw a time when a machine the size of this calculator would be more powerful than mainframe computers. …”

(I. Bernard Cohen, 2000, The MIT Press)

Aiken also found someone else to help him: Dick Bloch, who had started a new company, Genesis, to invest in new technology ideas – a concept reminiscent of Aiken Industries, but through starting new companies rather than buying old ones, and then selling the stake:

“… Aiken had never been concerned to patent his innovations while a member of the Harvard faculty, and the innovations he produced for Monsanto were patented in the company’s name rather than his. But he was concerned with a patent for one of his own inventions, the creation of his last retirement years. The invention in question was related to the general problems of encryption and decoding and the security of computer data.

Aiken went into some detail about this most recent invention and the company that was in the process of being organized to exploit his innovations in relation to the security of computer information. “That’s the Information Security Corporation,” Aiken said. It was “being formed by Dick Bloch to exploit a cryptographic invention of mine.” The parent company was called Genesis. In an earlier interview with Bloch, Tropp discovered that the primary mission of Genesis was to seek new ideas that could be exploited commercially and then to find financing. Once the venture was started, Genesis would provide the early management; as soon as the company was able to stand on its own feet, however, Bloch and Genesis would, in a sense, “get out and look for something else.” …”

(I. Bernard Cohen, 2000, The MIT Press)

The man Aiken referred to, Dick Bloch – or Richard Bloch, as in an earlier quote about a Navy crew using Harvard Mark I – had been the chief operations officer of Aiken’s Mark I project, and later an executive in the aerospace and computer industries:

“Richard M. Bloch, a pioneer in the development and design of digital computers, died of cancer on May 22 in Framingham, Mass. He was 78 and lived in Marlborough, Mass.

As chief operations officer at Harvard University’s Computation Laboratory in the 1940’s, Mr. Bloch helped design and program the first automatic digital computer, the Mark 1.

Over the succeeding years, Mr. Bloch held a number of administrative positions in the rapidly growing computer industry, including general manager of the computer division of Raytheon, vice president for technical operations at Honeywell, vice president for corporate development at the Auerbach Corporation and vice president of the advanced systems division of General Electric.

He was also chairman and chief executive of the Artificial Intelligence Corporation and the Meiko Scientific Corporation.”

(“Richard Bloch, 78, Pioneer in Digital Computers”, by William H. Honan, May 29, 2000, The New York Times)

What a pity! The inventor of the world’s first “automatic digital computer” – officially a “calculating machine”, as cited earlier – after becoming a businessman for over a decade and getting rich, in the end still had to rely on a former deputy from his first Harvard project three decades earlier to get his new technology ideas into business.

And that company, Genesis, and its subsidiary Information Security Corporation, set up to exploit a cryptographic invention of Aiken’s as quoted earlier, likely did not succeed, since Bloch’s New York Times obituary made no mention of either.

Noteworthy in Bloch’s May 29, 2000 obituary is the fact that he was once the general manager of Raytheon’s computer division – presumably the place where Louis Fein had once worked developing a computer for the Navy in Southern California.

I note that Richard Bloch was also a victim of cancer, although at the age of 78 he fared better than John Carr at 73.

Aiken’s 1973 interview by Henry Tropp, quoted in Bernard Cohen’s biography of Aiken – in which Aiken talked about his consulting for Monsanto, his quitting Lockheed consulting that year, and a new company with Dick Bloch – was conducted on February 26-27.

(“Interviewee: Howard Aiken (1900-1973) Interviewers: Henry Tropp and I.B. Cohen”, February 26-27, 1973, Smithsonian National Museum of American History)

About 10 days later it was Aiken’s birthday – March 8 or 9 as mentioned earlier – and a few days after that, on March 14 Howard Aiken died in his sleep in a hotel in St. Louis, Missouri, during a consulting visit to Monsanto:

“Howard Aiken died in his sleep in a hotel in St. Louis on 14 March 1973. He was in St. Louis for one of his regular consultations with Monsanto. He was 73 years old.”

(I. Bernard Cohen, 2000, The MIT Press)

Aiken was 73, the same age at which John Carr would die decades later, although Aiken was not a victim of cancer as Carr would be.

In comparison, John Curtiss – Aiken’s retirement colleague at the University of Miami, founding president of the ACM and a victim of McCarthyism-type politics at the National Bureau of Standards – died in 1977 at only 67; Curtiss was also a Harvard Ph.D. alumnus.

(J. A. N. Lee, eds., 1995, Institute of Electrical and Electronics Engineers)

From my angle of analysis, Aiken lived a relatively long life – 20 years longer – compared to his contemporary leading computer pioneer John von Neumann, who is regarded as the “father of computers” that Aiken isn’t; whereas Carr lived a relatively short life compared to his contemporary computer science pioneer Frederick Brooks, who is still living and in 2015 retired from UNC Chapel Hill at 84.

But Aiken missed at least two further chances, following his early retirement from Harvard to enter the industry, to become the ‘father’ of something in the computer world – microcomputers, then personal computers – because his would-be collaborator Cuthbert Hurd did not go forward with it each time, by Hurd’s own candid admission.

As quoted earlier, Cuthbert Hurd later said that Aiken “died before the venture could be launched”, referring to the company PANDATA that he, Aiken and William Main had talked about in 1970 – the second and last such chance for Aiken – for developing “a microprocessor and personal computer”.

The year after Aiken’s death, in 1974 Hurd left his board chairmanship at Computer Usage Company and started his own company, Cuthbert C. Hurd Associates:

“… From 1949 to 1962 he worked at IBM, where he founded the Applied Science Department and pushed reluctant management into the world of computing. Hurd later became director of the IBM Electronic Data Processing Machines Division.

After 1962 he served as chairman of the Computer Usage Company, the first independent computer software company, until 1974 when he formed Cuthbert C. Hurd Associates.”

(“Cuthbert Hurd: Biography”, Engineering and Technology History Wiki)

Now I begin to doubt Hurd’s sincerity.

That Hurd could leave IBM in 1962, following Aiken’s 1961 retirement from Harvard and founding of Aiken Industries, suggests that he could have gone into a new business with Aiken, but that the two did not reach an agreement that time.

But if Hurd could then leave the board chairmanship of the independent computer software company in 1974, after Aiken’s 1973 death, and had in effect agreed to “the venture” with Aiken in 1970 but just hadn’t “launched” it, why couldn’t he have left to launch it earlier? Aiken was already 70 years of age in 1970 when he asked Hurd to “help form the company, be chairman of the board, and raise the money” – at that age Aiken’s prospects of starting a new company were fast dwindling in the twilight.

The timing of Hurd’s departures from corporate positions he had held for over a decade – in both 1962 and 1974, the year after something had happened with Aiken – shows that Hurd was repeatedly capable of leaving a long-time career job, just not a little earlier in order to start a business with Aiken.

That leads to the second possible scenario raised earlier regarding why Hurd did not follow through with the plans to start a company with Aiken: Hurd may have been idealistic, and dismissive or even contemptuous of Aiken’s preoccupation with getting rich. Note that the first scenario, Hurd and Aiken not agreeing on ownership sharing, became less likely in 1970, as discussed, when Hurd was offered the board chair position and in effect agreed to launching a venture later.

It turns out that Cuthbert Hurd was not only a close associate of Howard Aiken’s but had also been a close associate of John von Neumann’s; a comparison of his attitudes toward Aiken and toward von Neumann can therefore help determine whether the second scenario above was likely the case.

In a 1981 interview by Nancy Stern on the history of computer development as he personally experienced it, Hurd talked in great depth about von Neumann, referring to the name over 60 times, but mentioned Aiken only twice, and only in the context of a von Neumann-Aiken rivalry, as follows:

“STERN: Can you tell me about when you first met Von Neumann?

HURD: I’m fuzzy, I don’t know whether it was 1947, or 1948. I met him at some meeting of the American Mathematical Society. I don’t know whether it was in Washington or New York. Some place east with the American Mathematical Society. And of course he was known as a great mathematician. It was also known he was interested in computers. …

STERN: Now when did you meet him again after that initial [time]?

HURD: I met him at normal times, up between whenever that was until the time I joined IBM. I met him at normal times in the sense of, there were a few conferences which we would now call computer conferences. And I’d see him around, at one of those places that he was on the program.

STERN: Can you recall where some of those conferences were? We’re talking about the Harvard computing conferences?

HURD: Yes. I would be fairly sure but not certain that he would not be at one.

STERN: Why not?

HURD: I don’t think that he and Aiken were close.

STERN: Well they both sat on the National Research Council Meeting.

HURD: I don’t think they were close.

STERN: Was it a kind of competition because they were both doing computing projects, do you think?

HURD: I think so, although neither one ever said that to me. Neither one ever said, I knew Howard Aiken very well. Neither of them ever said anything to me, derogatory or anything whatsoever about the other, but I just observed that those two gentlemen were not necessarily very close. …”

(“An Interview with CUTHBERT C. HURD”, by Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In the above, Hurd clearly tried to emphasize that Aiken and von Neumann did not much like each other. Given that, Hurd’s barely mentioning Aiken in this 1981 interview, while detailing his relationship with, knowledge about and admiration of von Neumann, indicated his preference for von Neumann over Aiken.

Hurd talked about von Neumann’s quick mastery of the body of knowledge of psychiatry while von Neumann was terminally ill with cancer:

“STERN: Well, I was thinking even in terms of things that had nothing to do with what people tell me, but the conversion to Catholicism at the last minute would not be something one would expect from someone like Von Neumann.

HURD: That’s right and that was the one thing I was going to mention to you. And I never understood that. I didn’t. I didn’t have anything against it. He and I never talked about religion at all. We didn’t talk about philosophy, and I spent two and a half days a month for this period or whatever it was. Plus other times. And I had the impression somewhere along the line he became interested in psychiatry. And their was a colonel I think in the air force who was his psychiatrist, and somebody told me that when John got interested in psychiatry, that he quickly learned more about formal psychiatry, than the people who the psychiatrist knew. And I didn’t understand the significance of that. But I was told this, maybe by Klari. But other than that, I was surprised, because at the time of the funeral, let’s see, I went down and I hired a car, and I went to where Klari was and his family. Took them to church, wherever they went afterwards. It was a Catholic church. And I was a little surprised, because he never talked about religion.

STERN: … But was Klari surprised about the conversion?

HURD: I don’t remember her saying so. I don’t think she was Catholic either.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In the above, Hurd did not explain why von Neumann had a psychiatrist, provided by the Air Force – as in Part 5 (i), von Neumann was a leading scientific adviser for the U.S. Air Force – presumably it was to help him cope with his terminal illness.

As quoted, at the time of von Neumann’s death Hurd was very close to von Neumann’s family, and was the person who chauffeured and accompanied them to von Neumann’s church funeral and wherever they went afterwards.

Both Hurd and the interviewer Nancy Stern expressed perplexity about von Neumann’s conversion to Catholicism prior to his death – von Neumann’s quick mastery of psychiatry happened around that time, when he also had an Air Force colonel psychiatrist.

On related scientific subjects, Hurd pointed out that von Neumann had full knowledge of neurophysiology, and had pioneering ideas for the mathematical analysis of the brain:

“STERN: Now in terms of this interest in psychiatry that you mentioned, he also had this interest in the McCulloch-Pitts research that related to the computer, or the brain as a kind of a computer. Did he speak about that at all, to you?

HURD: Well, yes. I think it would be clearer that he knew at that time as much about neurophysiology as was known.

STERN: That’s quite a statement considering he was a mathematician.

HURD: He believed, I am sure, that it was possible to find out how the brain works. There were ways to do that. And I think he had in his mind the way of going about discovering it. And he thought it was associated with what we now call software. The kind of coding, programs in the brain, and I could never find out how he intended to go about that. I’m convinced that he thought he could do that. Well we talked about that. And we also talked about his paper on how he proved that unreliable components can be used to produce a reliable machine. We used to talk about that. And how we arrived at that numerical analysis. …”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

In the above two quotes, Hurd told of a von Neumann who could quickly master the full body of knowledge of various scientific disciplines, and who would try to pioneer their study from mathematical perspectives.

In what Hurd described, I can already see an instance of contrast between Aiken and von Neumann: the conservative Aiken would only use reliable components for his Mark I, and thus mechanical relays rather than electronic vacuum tubes, but the ambitious and brilliant von Neumann proved mathematically that “unreliable components can be used to produce a reliable machine”.
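
As an aside, the flavor of that result can be illustrated with a toy redundancy calculation – my own illustrative sketch, not von Neumann’s actual “multiplexing” construction: if a component fails independently with probability p, then three copies combined by a majority vote fail only when at least two copies fail, i.e. with probability 3p²(1−p) + p³, which for p = 0.1 is about 0.028 – already a severalfold improvement, and one that can be compounded by iterating the scheme. A minimal Python simulation of this majority-vote idea (the function names are mine, purely for illustration):

import random

def noisy_component(correct_bit, p_fail):
    # Return the correct bit, but flip it with probability p_fail.
    return correct_bit if random.random() > p_fail else 1 - correct_bit

def majority_of_three(correct_bit, p_fail):
    # Triple redundancy with a majority vote (illustrative only).
    votes = [noisy_component(correct_bit, p_fail) for _ in range(3)]
    return 1 if sum(votes) >= 2 else 0

def observed_failure_rate(trials, p_fail):
    errors = sum(majority_of_three(1, p_fail) != 1 for _ in range(trials))
    return errors / trials

if __name__ == "__main__":
    print("single component failure rate:", 0.1)
    # Expected to print roughly 0.028 = 3(0.1)^2(0.9) + (0.1)^3
    print("majority-of-three failure rate:", observed_failure_rate(100000, 0.1))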

Hurd described von Neumann’s personality as friendly and approachable in spite of his greatness:

“HURD: … You asked once of a great man talking to the subordinates, in our association he never talked down to me. But in almost every case he always was far ahead of me in his thoughts about a subject. He never talked down to me, but I know he had lots of thoughts including the one that you just talked about, but he never discussed them.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

This particular view of von Neumann that Hurd gave is one with which John Nash, whose Ph.D. research idea was quickly dismissed by von Neumann, would likely disagree; as in Part 2, Nash’s impressions included that von Neumann, according to Sylvia Nasar in her book “A Beautiful Mind”, “had the preoccupied air of a busy executive”.

To the scientists close to von Neumann, such as Hurd, it was a mystery that a mathematical genius – the only genius Hungary had produced in that era, according to Nobel Physics Prize winner Eugene Wigner as quoted in Part 5 (i) – who was born Jewish, converted to Catholicism near the end of his life; as Hurd said, quoted earlier, “I never understood that”, and “I was a little surprised”.

The economist Oskar Morgenstern, von Neumann’s collaborator and co-founder of game theory in mathematical economics, was quite critical of von Neumann’s religious conversion:

“Of this deathbed conversion, Morgenstern told Heims, “He was of course completely agnostic all his life, and then he suddenly turned Catholic—it doesn’t agree with anything whatsoever in his attitude, outlook and thinking when he was healthy.” …”

(William Poundstone, Prisoner’s Dilemma, 1993, Anchor Books)

Perhaps in his hospital bed von Neumann was no longer “agnostic” as Morgenstern had viewed him, and no longer believed that the physical world at the quantum level was not governed by causality – a worldview he had advocated in relation to his mathematics of quantum mechanics, discussed in Part 4.

The conversion occurred when von Neumann knew he was terminally ill; he decided to do so suddenly, and Father Anselm Strittmatter, a Benedictine monk, baptized him and acted as his priest in his last year of life:

“As the end neared, von Neumann converted to Catholicism, this time sincerely. A Benedictine monk, Father Anselm Strittmatter, was found to preside over his conversion and baptism. Von Neumann saw him regularly the last year of his life.

… The conversion did not give von Neumann much peace. Until the end he remained terrified of death, Strittmatter recalled.”

(William Poundstone, 1993, Anchor Books)

I wonder if Father Anselm Strittmatter’s counselling influenced the decision John von Neumann made in 1956 while in hospital – still continuing his work as a U.S. Atomic Energy Commissioner and key scientific adviser to the U.S. Air Force – to move to the University of California and focus on computer research, most likely at UCLA as in Part 5 (i); and if so, whether it was due to increased fear of nuclear radiation – a factor disfavoring UC Berkeley as his choice, as reviewed in Part 5 (i) – or to some higher spiritual thought.

Originally, in 1929 in Hungary, when von Neumann married his first wife Mariette – Marina von Neumann’s mother – the marriage came with his acceptance of her Catholic faith, which von Neumann did not take seriously at the time:

“In 1929, … he was invited to lecture on quantum theory for a semester at Princeton. Upon being offered the job, he resolved to marry his girlfriend, Mariette Koevesi. He wrote back to Oswald Veblen of Princeton that he had to attend to some personal matters before he could accept. Von Neumann returned to Budapest and popped the question.

His fiancée, daughter of a Budapest doctor, agreed to marry him in December. Mariette was Catholic. Von Neumann accepted his wife’s faith for the marriage. Most evidence indicates that he did not take this “conversion” very seriously. …”

(William Poundstone, 1993, Anchor Books)

So von Neumann had not taken Catholicism seriously until near death. But back in 1929 his own family and Mariette’s family were all new or recent Jewish converts to Catholicism:

“Von Neumann married Mariette Kövesi in 1929 and their daughter Marina was born in 1935. The marriage broke up in 1936 and they divorced in 1937. In 1938, von Neumann went back to Budapest and married Klára (more tenderly, Klári) Dán. Both his wives came from converted Jewish families. Von Neumann and his family did not convert until after his father had died in 1929; then, they converted to Catholicism.”

(Balazs Hargittai and István Hargittai, Wisdom of the Martians of Science: In Their Own Words with Commentaries, 2016, World Scientific)

It isn’t clear from the above story whether von Neumann’s nominal conversion to Catholicism in 1929 was a result of his marriage or of his parental family’s insistence. But the last time, before his death, von Neumann initiated the conversion himself in a sincere manner. A Life magazine story on the life of John von Neumann following his death described it in the same context as the prominent honors bestowed on him around that time:

“In April 1956 Von Neumann moved into Walter Reed Hospital for good. Honors were now coming from all directions. He was awarded Yeshiva University’s first Einstein prize. In a special White House ceremony President Eisenhower presented him with the Medal of Freedom. In April the AEC gave him the Enrico Fermi award for his contribution to the theory and design of computing machines, accompanied by a $50,000 tax-free grant.

Although born of Jewish parents, Von Neumann had never practised Judaism. After his arrival in the U.S. he had been baptized a Roman Catholic. But his divorce from Mariette had put him beyond the sacraments of the Catholic Church for almost 19 years. Now he felt an urge to return. One morning he said to Klara, “I want to see a priest.” He added, “But he will have to be a special kind of priest, one that will be intellectually compatible.” Arrangements were made for special instruction to be given by a Catholic scholar from Washington. After a few weeks Von Neumann began once again to receive the sacraments.”

(“Passing of a Great Mind: John von Neumann, a Brilliant, Jovial Mathematician, Was a Prodigious Servant of Science and His Country”, by Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

According to the above high-profile Life story, von Neumann had never practised Judaism to begin with, was baptized a Roman Catholic after his arrival in the U.S. during his first marriage, and remained within the Church’s sacraments until his divorce from Mariette.

In this sense, the late-life request for conversion could better be described as a ‘reconciliation’; and as his stated condition of an “intellectually compatible” priest showed, to the end von Neumann continued to perceive differences between the Catholic Church and his intellectual interests.

In a subsequent tragedy that serves as an eerie hidden context for the earlier comparison between the “great man” John von Neumann, the “father of computers”, and the “great man” George Forsythe, the most influential person in the emergence of computer science, von Neumann’s second wife Klara (Klari) later died by drowning in 1963 – decades before the 1997 drowning of Forsythe’s daughter Diana, whose life I have earlier contrasted with that of von Neumann’s daughter Marina. Here the incident was described by von Neumann’s friend, the mathematician Stanislaw Ulam:

“Klari was a moody person, extremely intelligent, very nervous, and I often had the feeling that she felt that people paid attention to her mostly because she was the wife of the famous von Neumann. This was not really the case, for she was a very interesting person in her own right. Nevertheless, she had these apprehensions, which made her even more nervous. She had been married twice before (and married a fourth time after von Neumann’s death). She died in 1963 in tragic and mysterious circumstances. After leaving a party given in honor of Nobel Prize-winner Maria Mayer, she was found drowned on the beach at La Jolla, California.”

(S. M. Ulam, Adventures of a Mathematician, 1991, University of California Press)

In his 1981 interview Cuthbert Hurd was also asked about his own contributions to the computing field, and he answered by casting the most important of his own contributions in the spirit of von Neumann’s: von Neumann interacted with and gave confidence to senior decision makers – Hurd cited the examples of the nuclear physicist Edward Teller, heading the Lawrence Livermore national lab, and IBM president Thomas Watson – about the benefit of computers, while Hurd himself interacted and communicated with engineers:

“STERN: To summarize what, what would you say Von Neumann’s most significant contributions to the computing field were?

HURD: I think they were, they were two quite different things. I think irrespective of who really had the ideas first, that the publication, the promulgation of those papers which he and Burks and Goldstine wrote. I think those were very important documents. The other thing was, Von Neumann because of his reputation as a mathematician would gain the confidence of people and because he was so highly articulate gave people confidence in the fact that a computer if built would work, it would be a success. And I want to support that thing by two, two instances. When I first met Teller, and I don’t know when that was, in 1954, and Livermore got one of the first Univacs and Livermore got one of the first 701s. When I first met him, Teller said to me that the fact that Von Neumann, for whom he had the greatest respect, and who had some experience with computers felt, that use of the computer would be highly useful at Livermore, that that fact was a deciding influence on Teller’s part, on deciding to make the investment. … It was a big investment to make a computer, in the stand point that you have to have people, so the confidence that he gave Teller was a very important thing for Teller, and of course as soon as Teller made the decision other people would make the decision on the same basis, just because Teller did. I can illustrate it another way. I talked to, I just happened to be talking to Tom Watson, over the phone to Watson, and he said “Cuthbert I always remember the time you brought Von Neumann up to see us, and Von Neumann gave us confidence in what we were doing”. …

STERN: Kind of legitimating the computing field in general?

HURD: Yes.

STERN: How about your own contributions if you were to sum them up? Your most significant to the computing field.

HURD: I want to be careful not to compare myself to Von Neumann in anyway, but in the same way that Von Neumann gave some key people confidence in the field, I think the fact, that I could talk with engineers and understand what they were doing in any detail I wanted to. I was also able to communicate with people about what I knew about computing. I think that’s it in contributions. …”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Reading this January 20, 1981 interview, in which he emphasized the von Neumann-Aiken rivalry as computer pioneers and gave glowing praise of von Neumann, but otherwise mentioned Aiken only scantily, I can’t help but wonder how much Cuthbert Hurd, affected by such a mindset, really thought Aiken’s role was worth in any collaboration to start a new computer company.

But perhaps the most telling sign of Hurd’s views of von Neumann versus Aiken is Hurd’s description, in the 1981 interview, of von Neumann’s attitude toward money, which can be contrasted with Hurd’s dismissiveness toward Aiken’s focus on getting rich.

Hurd had played key roles in recruiting von Neumann for certain scientific consultancies in the atomic energy field and in the computer industry.

The first time, in the late 1940s, Hurd recruited von Neumann to be a consultant at the Oak Ridge National Laboratory:

“HURD: … One of my first conversations with him, would he become a consultant with Oak Ridge and we were working on a design of a gaseous fusion plant. There was a lot of numerical analysis what we would now call computing, and I thought and a colleague of mine, Dr. George Garrett, who was my boss, he’s a mathematician, thought that John Von Neumann would be useful, and when we told Alston Householder about this, Alston thought it would be a good idea to get John to become a consultant. John was already a government consultant, so I talked to him about this. He examined his schedule and after some period of months I guess, decided he would have time and the interest to become a consultant also at Oak Ridge so he became a consultant. But the first time he visited was after I had left.

STERN: Now, when he was a consultant for Oak Ridge was it a financial arrangement that he had with Oak Ridge or did he do this as part of his consulting with the government?

HURD: That’s it.

STERN: It was part of his consulting with the government?

HURD: That’s it.

STERN: And the expectation would be that he’d come down once a month, twice a month, what was the frequency?

HURD: I think he just came once for a preliminary visit and then it would be decided how often he came. I think he went once or twice. …

STERN: Well Oak Ridge did eventually build a IAS type computer.

HURD: Yes.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

The Oak Ridge national lab had begun as the Clinton laboratories of the Clinton Engineer Works, the leading production site for nuclear bomb materials during the Manhattan Project.

(Bruce Cameron Reed, The History and Science of the Manhattan Project, 2014, Springer; and, “OAK RIDGE AND HANFORD COME THROUGH (Oak Ridge [Clinton] and Hanford, 1944-1945)”, The Manhattan Project, U.S. Department of Energy)

Nuclear materials production had not been a part of von Neumann’s Manhattan Project roles, which were in bomb design at the Los Alamos national lab, and in analyzing and selecting the bombing targets – including proposing the ancient Japanese capital of Kyoto as one.

(Kati Marton, The Great Escape: Nine Jews who Fled Hitler and Changed the World, 2006, Simon and Schuster; “The enduring legacy of John von Neumann”, by John Waelti, October 14, 2011, The Monroe Times; and, “IMPLOSION BECOMES A NECESSITY (Los Alamos: Laboratory, 1944)”, The Manhattan Project, U.S. Department of Energy)

After World War II, von Neumann had his focus on designing and advocating for the hydrogen bomb and for intercontinental ballistic missiles, as discussed in Part 5 (i).

But as Hurd explained, there was “a lot of numerical analysis”, i.e., computing, for which von Neumann’s talents would be useful; von Neumann accepted Oak Ridge’s invitation, and Oak Ridge later built a von Neumann IAS-type computer – as a part of the von Neumann-led computer-building movement among academia and scientific institutions reviewed in Part 5 (i).

As in the above quote, after recruiting von Neumann for consulting, Hurd left Oak Ridge before von Neumann began visiting the lab. As quoted earlier, Hurd moved to IBM in 1949 to found and direct its applied science department, and became a key driving force behind IBM’s first commercial computer, the IBM 701 “Defense Calculator” of the Korean War era.

The second time, in the early 1950s, when recruiting von Neumann to be a consultant for IBM, Hurd made an observation about von Neumann’s attitude toward money: von Neumann was “interested in money”, but it was not a primary reason for his doing consulting work, as Hurd later told:

“STERN: Bullom in the obituary you showed me, frequently talked about Von Neumann being interested in power, if you recall. Did you have any sense of that being the case?

HURD: No.

STERN: What do you think his reasons for consulting for IBM were?

STERN: Or just speculate?

HURD: I think he liked the opportunity to be with a group of bright people. He clearly enjoyed his consulting projects. He liked it. And there were some problems to be solved in IBM which were challenges which he could solve. For example we discussed the possibility of writing a simulator of the Endicott plant. And we decided it was silly. I think he was interested in money. I don’t think that was a primary reason but when we came to discuss the terms why he was clearly interested in what the honorarium would be. I think I remember at first he thought it was terribly high. I don’t know what Von Neumann thought later. I think he was also interested in having his ideas put into a practical application. I don’t think our association was a deterministic thing, because he and I could have seen each other [at times other] than when he consulted.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Both von Neumann and Aiken were interested in money. However, when von Neumann was told of the IBM honorarium figure for his consulting, he thought “it was terribly high”, whereas Aiken’s goal, reviewed earlier, was “to be rich”.

And when Hurd said he did not think his association with von Neumann was “a deterministic thing”, he probably meant that it was the outcome of their interactions that led them to associate more.

By the same logic, was it, or was it not, “a deterministic thing” that Hurd’s interactions with Howard Aiken – which led to Hurd’s being cited as a “close associate” of Aiken’s – did not lead to their starting a business together, even though he could have done so and Aiken repeatedly expressed an interest?

I think the various anecdotes Hurd told that I have reviewed are surprisingly revealing of Hurd’s own inclination towards von Neumann, and partially explain why he twice succeeded in recruiting von Neumann to major scientific and technological consultancies, and yet twice failed to proceed with forming a company with Aiken to develop microcomputers and personal computers.

But there was something more specific linking Hurd’s recruiting of von Neumann for consultancies to his later discussions with Aiken about co-starting a company: in Bernard Cohen’s biography of Aiken, “Howard Aiken: Portrait of a Computer Pioneer”, quoted earlier, Hurd was quoted as saying that Aiken was very close to George Garrett, Lockheed Missiles Company’s Director of Computer Activities; and in his interview with Nancy Stern quoted earlier, Hurd said that he and his Oak Ridge national lab colleague and boss George Garrett, a mathematician, decided to recruit von Neumann as a consultant.

The George Garrett at Oak Ridge and the one at Lockheed Missiles were in fact the same person – like Cuthbert Hurd, a mathematician first in the atomic energy field and then in the computing field, though in Garrett’s case, at Lockheed Missiles, he continued to work in a military arena:

“George A. Garrett, 84, a 21-year resident of Menlo Park, died May 21 after an illness of several months. Born in Sardis, Miss., he graduated from the University of Mississippi and received a doctorate in mathematics from Rice University in Houston. He began his career as the party chief of a seismographic crew, until World War II, when he worked as a civilian for the Navy at MIT, developing and testing airplane sonar. After the war, he worked in the burgeoning nuclear field at Oak Ridge, Tenn., where he developed peaceful uses for atomic energy and helped to design the nuclear power plant installed at Paducah, Ky. After years at Oak Ridge, he became director of information processing for Lockheed Missiles and Space Company in Sunnyvale. In 1977 he transferred to the position of senior scientist for Bechtel Corp. in San Francisco. …”

(“Deaths”, May 31, 1995, Palo Alto Online)

Now I can see that the two engaged in very specific activities in a human-resources sense: Hurd and Garrett together associated with von Neumann while in the atomic energy field, and then with Aiken in the computing field. First, with von Neumann, Garrett no doubt worked with him at Oak Ridge on both nuclear energy and the computer built there following von Neumann’s design, while Hurd moved to IBM, where he recruited von Neumann a second time. Later, with Aiken, Garrett most likely had a role in recruiting him to Lockheed Missiles in the fledgling Silicon Valley, while Hurd went there to hold discussions with Aiken – and their discussions and negotiations would determine whether Hurd, by now an established computer industry executive, would take the second step in Aiken’s case, i.e., starting a new computer company together.

Apparently Hurd did not take that second step with Aiken that he had taken with von Neumann.

Possibly due to disillusionment, Howard Aiken – as told in his February 1973 interview quoted earlier from Bernard Cohen’s biography of him – ended his consultancy with Lockheed that year and concentrated on consulting for Monsanto toward the same goal of making computers small; unfortunately, and somewhat prophetically given something he had said in that interview, Aiken died in pursuit of that goal.

Cuthbert Hurd was of course not the only industry executive in the computer field to whom Aiken was close; Aiken’s former Mark I underling Richard Bloch, who at some point became general manager of Raytheon’s computer division as quoted earlier, was another. But as the example of Louis Fein illustrates, in the computing field Raytheon was no match for IBM.

Given that Hurd had been IBM’s director of the Applied Science Department and then director of the Electronic Data Processing Machines Division, he had the full breadth of management experience in computer development and computer applications.

At IBM, Hurd had a lot of interactions with Thomas Watson father and son:

“HURD: … I was hired with some notion about helping IBM get in the computer field. It was vague. When I came in I talked to Mr. Watson Sr. who was the chairman, Mr. Phillips who was the president and Tom Watson Jr. who was executive vice president, the director of sales, the director of product planning and John McPherson who I guess was vice president of engineering. I talked to everybody, we all talked about this subject.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

Loud arguments between Watson, Sr. and Watson, Jr. – the father against and the son for IBM computer development as first noted in Part 5 (i) – were often overheard by Hurd and other executives:

“STERN: … Now you’ve mentioned and I’ve read elsewhere, that in the late ’40s, Watson Jr. was really interested in getting into the computing field and Watson Sr. was not. Did you experience any kind of difference of opinion over this issue?

HURD: First let me say as I said earlier that I think, in 1949, Tom did not think specifically that IBM should get into the computer field, because the computer field was not defined. He used to talk about electronics. He talked about two things. He’d talk about electronics, and he’d talk about magnetic tape, and he understood instinctively that something had to be done with it, but we were all aware now–what do I mean by all? Half dozen people were aware– that there was a difference of opinion. Which would sometimes have a very strong expression between Mr. Watson Sr. and Mr. Watson Jr. The away we knew about this was, Mr. Watson Sr.’s on the seventeenth floor, Mr. Watson Jr.’s office was on the sixteenth floor. My office was on the fifteenth floor. We’d sometimes have meetings on the sixteenth floor, and Tom would disappear, and sometimes if somebody happened to go up or down, they never took the elevator. There was a stairway. You’d go outside Tom’s office, and in fifteen or twenty feet there was a door and it was a stairway, and it was not unusual to hear very loud voices.

STERN: [Laugh.]

HURD: And nobody ever said that you know explicitly that this is what this is going on. But everybody had the impression that these two gentlemen, who had very strong minds were disagreeing about something. …”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

As quoted earlier, after he left IBM for the chairmanship of the Computer Usage Company, Hurd continued to be a consultant for IBM. It was only logical because that independent company, founded in 1955, produced software for use on IBM computers.

(Paul E. Ceruzzi, A History of Modern Computing, 2003, The MIT Press)

Hurd’s important responsibilities within IBM, and his continuing close ties as a consultant to IBM afterwards, could in fact have been a main reason why Aiken repeatedly discussed with him ideas of starting a new company to develop smaller computers. After all, from the very beginning it had been IBM that made Aiken’s Mark I project a reality.

As reviewed in detail in Part 5 (i), major U.S. government funding support for science and technology was first jumpstarted by considerations of military use during the Second World War; the first general-purpose electronic computer ENIAC, with John von Neumann involved in a key development role, was made during that time for the U.S. Army’s Ballistic Research Laboratory at Aberdeen Proving Ground; subsequently, IBM’s first commercial computer, the Model 701 Defense Calculator, with Cuthbert Hurd in a key development role, was made in response to the Korean War.

Aiken’s own invention, the Mark machines, was no exception: although he had independently proposed it to IBM in 1937, several years before the U.S. entered World War II, ultimately the machines were all made to serve the U.S. military’s needs:

“The four large-scale calculators which Aiken developed were:

Automatic Sequence Controlled Calculator (the Harvard Mark I, known within IBM as the ASCC): conceived by Aiken in 1937, designed by IBM engineers and by Aiken, built by IBM as a gift to Harvard. The Mark I was used at Harvard by a US Navy crew that included Grace Murray Hopper and Richard Bloch. …

Mark II: Designed and built at Harvard for the Naval Proving Ground at Dahlgren, Va., for the development of ballistics tables. …

Mark III: Like Mark II, this machine was designed and built at Harvard for Dahlgren. …

Mark IV: Designed, built, and operated at Harvard for the US Air Force…”

(J. A. N. Lee, 1995, Institute of Electrical and Electronics Engineers Inc.)

As for Aiken himself, the Harvard press release for Mark I’s dedication, discussed earlier, referred to him as “Commander Howard H. Aiken” of the U.S. Navy Reserve; clearly the Navy, which made good use of the first two, non-electronic Mark machines, was a point of pride for Aiken and Harvard.

So for Howard Aiken, his consultancy at Lockheed Missiles and Space Company – where he worked with George Garrett and had discussions with Cuthbert Hurd about starting a new company – was probably, in his understanding, a necessary military-affiliation ingredient for launching the venture he desired; with the proposed technical help of some Lockheed engineering personnel, as mentioned earlier, it could make him the “father” of a new generation of computers – an accolade that had eluded him in his Harvard Mark projects.

The U.S. Navy certainly prized its association with the Harvard Mark machines. Grace Murray Hopper – as cited above, Richard Bloch’s colleague in the Navy team using Mark I, and another of Aiken’s Harvard underlings – eventually rose in the Navy to become a rear admiral, on the merits of her leadership in computer work:

“Grace Brewster Murray was born on December 9, 1906 in New York City. In 1928 she graduated from Vassar College with a BA in mathematics and physics and joined the Vassar faculty. While an instructor at Vassar, she continued her studies in mathematics at Yale University, where she earned an MA in 1930 and a PhD in 1934. …

In 1930 Grace Murray married Vincent Foster Hopper. (He died in 1945 during World War II, and they had no children.) She remained at Vassar as an associate professor until 1943, when she joined the United States Naval Reserve to assist her country in its wartime challenges. … she was assigned to the Bureau of Ordnance Computation Project at Harvard University, where she worked at Harvard’s Cruft Laboratories on the Mark series of computers. In 1946 Admiral Hopper resigned her leave of absence from Vassar to become a research fellow in engineering and applied physics at Harvard’s Computation Laboratory. In 1949 she joined the Eckert-Mauchly Computer Corporation as a Senior Mathematician. This group was purchased by Remington Rand in 1950, which in turn merged into the Sperry Corporation in 1955. …

Throughout her years in academia and industry, Admiral Hopper was a consultant and lecturer for the United States Naval Reserve. After a seven-month retirement, she returned to active duty in the Navy in 1967 as a leader in the Naval Data Automation Command. Upon her retirement from the Navy in 1986 with the rank of Rear Admiral, she immediately became a senior consultant to Digital Equipment Corporation, and remained there several years, working well into her eighties. She died in her sleep in Arlington, Virginia on January 1, 1992.”

(“Grace Murray Hopper”, 1994, Grace Hopper Celebration of Women in Computing, as posted by Yale University Department of Computer Science)

Like her former Harvard mentor Howard Aiken, Grace Murray Hopper also died in her sleep. But at 86, on New Year’s Day 1992, she had enjoyed 13 more years of life than Aiken.

In Part 3 I mentioned a women’s computing conference in October 2014 that Microsoft CEO Satya Nadella was invited to attend, where Maria Klawe disagreed with something he said; that annual conference, the Grace Hopper Celebration of Women in Computing, is named in honor of Grace Murray Hopper.

(“About”, Anita Borg Institute Grace Hopper Celebration of Women in Computing)

Subtly different, Cuthbert Hurd, whom Aiken counted on to co-launch a computer industry venture, had a World War II working background at the U.S. Coast Guard Academy:

“HURD: I started mathematics with a Ph.D. [from] the University of Illinois, 1936. Dissertation concerned asymptotic solutions of differential equations. Taught at Michigan State College, now Michigan State University, until the war. And organized and staffed department for reserve officers of The Coast Guard Academy, and was educational officer for the academy, helped the admiral revise the curriculum, and was briefly dean of Allegheny College, joined the Oak Ridge Project in nuclear energy. In 1947 was a Technical Research Head. Did work which involved dealing with lots of data. Felt we needed something more, became acquainted with IBM. Called on Mr. Watson Sr. told him I wanted to work for IBM. Joined IBM, organized the Applied Science department. Got IBM’s first computer announced in about a year and a half.”

(Nancy Stern, January 20, 1981, Charles Babbage Institute, The Center for the History of Information Processing, University of Minnesota)

As described, at the Coast Guard Academy Hurd was an organizer of reserve officers, and an educational officer, helping the admiral revise the curriculum.

Hmm, once helping, always helping the admiral revise ‘The Coast Guard Academy curriculum’, so to speak; now I begin to wonder: might it not have been Hurd’s closer empathy for von Neumann than for Aiken, but rather a more subtle form of institutional bias, that prevented Hurd from helping Aiken’s ambition of developing and commercializing a new generation of computers?

More specifically, prior to his Oak Ridge career Hurd’s interests had all been in education: college teaching and research, personnel education and administration during World War II at the Coast Guard Academy, and academic administration as a college dean; then at Oak Ridge national lab, Hurd began managing technical research.

Hurd’s starting role at IBM, founding director of the applied science department, was thus likely offered to him because of his prior long career in administering education and scientific research.

After IBM, the independent computer software company of which Hurd became board chairman, the Computer Usage Company, emphasized the “usage” of computers – in fact IBM computers, as mentioned earlier.

Even later, Hurd became board chairman of a company specializing in “educational software”:

“… He was later appointed chairman of the Picodyne Corporation, which specialized in educational software, and in 1984 he co-founded Quintus Computer Systems, which was devoted to the commercialization of artificial intelligence. At the time of his death, he was the chief scientist of Northpoint Software Ventures Inc., a developer of risk management software.”

(Laurence Zuckerman, June 2, 1996, The New York Times)

Hence my metaphor of Cuthbert Hurd always helping the admiral revise the curriculum: it alludes to a detectable disdain – tinted by Hurd’s educational-institutional perspective, albeit perhaps not as decidedly biased as Al Bowker’s lifelong academic perspective – toward the business passion and acumen of Howard Aiken, a retired Harvard professor; as a result, Hurd talked about, but never took a real step toward, forming a computer company with Aiken.

John von Neumann, stricken with cancer in his early 50s, wanted to move to California to engage in “research on the computer and its possible future uses, with considerable commercial sponsorship”, but did not live long enough to do so, as detailed in my review in Part 5 (i).

Howard Aiken, von Neumann’s academic computer-pioneer rival, lived two decades longer, retired from academia, embarked on a businessman’s career, and was a computer consultant in California’s growing Silicon Valley for over a decade, at Lockheed Missiles and Space Company; but despite his repeatedly expressed wishes to start a company to develop smaller computers, with technological ideas proposed, the promise he was given in return was never fulfilled.

Finally Aiken gave up on Lockheed and Silicon Valley, pinning his hopes on Monsanto’s computer technology ambition; but sadly, he soon met his end.

What undoubtedly consoled Howard Aiken was that within a short three years of his early retirement from Harvard, Frederick Brooks, one of his “most devoted disciples” as referred to earlier, made significant achievements as the development team leader of a revolutionary and successful IBM commercial computer, and became the founding chairman of the second academic computer science department in the U.S.

Decades after Aiken’s death, in 1999 Brooks received computer science’s highest honor, the A. M. Turing Award – an honor that in my opinion Howard Aiken very much deserved and should have been given, had it not been for his career conversion to businessman.

Since he founded the University of North Carolina at Chapel Hill’s computer science department, Brooks’s research has specialised in computer graphics:

“After the successful delivery of the System/360 and its operating system, Brooks was invited to the University of North Carolina, where he founded the University’s computer science department in 1964. He chaired the department from 1964 to 1984, and served as the Kenan Professor of Computer Science. His principal research area, real-time three-dimensional graphics, provides virtual environments that let biochemists reason about the structure of complex molecules, and let architects walk through buildings under design. Brooks has also pioneered the use of a haptic force feedback display to supplement visual graphics.”

(“FREDERICK (“FRED”) BROOKS”, A. M. Turing Award, Association for Computing Machinery)

As discussed in Part 3, Chapel Hill’s computer science department was where Jack Snoeyink moved in 2000 from the University of British Columbia; Snoeyink had a Stanford computer science Ph.D. with a specialisation in computer graphics, and his 1990 hiring by the UBC computer science department had ended my hope for a UBC tenure-track faculty position.

I note that Snoeyink became Brooks’s colleague the year after Brooks won the Turing Award.

As also reviewed in Part 3, upon graduating from Stanford, Snoeyink was recruited by UBC computer graphics professor Alain Fournier, who had taught courses at Stanford and been acquainted with Snoeyink’s Ph.D. adviser Leonidas Guibas; later, at Chapel Hill, Snoeyink became a colleague of Henry Fuchs, a senior professor in computer graphics who had been on the faculty of the University of Texas at Dallas in the mid-to-late 1970s, when Fournier was a Ph.D. student there interested in specializing under Fuchs before Fuchs’s departure for Chapel Hill.

Clearly, UNC Chapel Hill’s computer science leader Frederick Brooks recognized talents like Henry Fuchs and Jack Snoeyink.

But mindful of Brooks being a former devoted disciple of Howard Aiken, and mindful of the computer-pioneer rivalry between Aiken and John von Neumann, I note that back in the Manhattan Project era von Neumann had had an important collaborator also named Fuchs – Klaus Fuchs – in the pioneering design of the hydrogen bomb, who turned out to be a spy handing U.S. nuclear weapons secrets to the Soviet Union:

“Fuchs, the Quaker son of a Lutheran pastor, was born in Russelsheim, Germany in 1911. He attended both Leipzig and Kiel Universities but, as a Communist, was persecuted by the Nazis. He fled through Switzerland and France to Britain, where he attended in succession Bristol and then Edinburgh Universities. In 1940 he was taken into custody as a German ‘enemy alien’ and shipped off to an internment camp in Quebec, Canada.

In 1941, with the help of his old professor at Edinburgh, Max Born (himself a Jewish refugee from Germany) he was back in Britain — and in 1942 he obtained a job with the ‘Tube Alloys’ project (code name for the British atomic research programme).

He was naturalised British in 1942 and, ironically, he signed the Official Secrets Act at about the same time as he started meeting ‘the girl from Banbury’, really Ruth Werner, a German communist working for the Soviets. …

In late 1943, he was transferred to New York to work on the US atomic bomb programme at Columbia University; then, in the summer of the following year, he started work at the Theoretical Physics Division on the hydrogen bomb at Los Alamos in New Mexico … Together with John von Neumann he filed a patent, far ahead of its time, for initiating fusion and implosion in an H-bomb.

In 1947, he became the first head of the Theoretical Physics Division at the Harwell Atomic Energy Research Establishment, which had been set up at the instigation of Frederick Lindemann, Lord Cherwell (1886-1957), Churchill’s scientific adviser …

While at Harwell, Fuchs met Soviet agent Alexander Feklisov (1914-2007) at least twice… On the first occasion, he gave top-secret information about the American super H-bomb and how scientists Fermi and Teller had proved its workability. On the second, he gave away secrets which… “played an exceptional role in the subsequent course of the Soviet thermonuclear programme”.

(“Harwell head gave away H-bomb secrets”, by Chris Koenig, March 9, 2011, Oxford Times)

So I wonder if Professor Frederick Brooks was intrigued not only by Professor Henry Fuchs’s talents but also by the Fuchs name when he hired the latter, who today is the Federico Gil Distinguished Professor of Computer Science at UNC Chapel Hill.

(“Henry Fuchs: Federico Gil Distinguished Professor”, Department of Computer Science, University of North Carolina at Chapel Hill)

In any case, per his Turing Award biography, in his IBM days Brooks had participated in developing a computer for the NSA and in helping the U.S. government assess the computing capability of the Soviet Union, and so he had the nerve for such matters.

Alain Fournier did not get to become Henry Fuchs’s Ph.D. student in Dallas, but decades later Jack Snoeyink, whom he mentored, became Fuchs’s faculty colleague at Chapel Hill, and so an intention was realized by someone of a younger generation.

Despite Jack’s role in UBC academic politics to my detriment, mentioned in a quote in Part 1, it is still fitting that a Ph.D. from the most influential academic computer science department in the U.S., founded by the late George Forsythe, “almost … the Martin Luther of the Computer Reformation”, became a professor in the U.S.’s second academic computer science department, founded ahead of Stanford’s, and in the same computer graphics field as that department’s founder Frederick Brooks.

Snoeyink went to Chapel Hill in the same year 2000 when Fournier died of cancer, as in Part 3. Fournier was 56 – only a year older than George Forsythe at his death decades earlier.

(“Alain Fournier, a life in pictures”, Pierre Poulin, Département d’informatique et de recherche opérationnelle, Université de Montréal)

Then in 2015, the same year Brooks retired after 51 years at UNC, Snoeyink became a program director at the U.S. National Science Foundation, as noted in Part 3 – I can only hope that this is an indication of progress.

(Part 5 continues in (iii))


A review of postings about scientific integrity and intellectual honesty, with observations regarding elite centrism – Part 5: inventions, innovations, and ushering of ‘the new normal’ (i)

(Continued from Part 4)

The electronic computer ranks among the foremost innovations in the history of science and technology. The mathematician John von Neumann, who played important roles in World War II U.S. military science projects, is often regarded as the “father of computers” for his key participation in the development of ENIAC, the first general-purpose electronic computer, at the Moore School of Electrical Engineering of the University of Pennsylvania, and for his subsequent leading role in spreading the development of computers:

“The Moore School signed a research contract with the Ballistic Research Laboratory (BRL) of the U.S. Army, and in an August 1942 memorandum Mauchly proposed that the school build a high-speed calculator out of vacuum tube technology for the war effort. In 1943, the army granted funds to build the Electronic Numerical Integrator and Computer (ENIAC) to create artillery ballistic tables. Eckert served as chief engineer of a team of fifty engineers and technical staff on the ENIAC project.

Completed in 1945, the ENIAC consisted of 49-foot-high cabinets, almost 18,000 vacuum tubes, and many miles of wiring, and weighed 30 tons. …

… While building the ENIAC, Mauchly and Eckert developed the idea of the stored program for their next computer project, where data and program code resided together in memory. The concept allowed computers to be programmed dynamically so that the actual electronics or plugboards did not have to be changed with every program.

… During World War II, von Neumann worked on the Manhattan Project to build the atomic bomb and also lent his wide expertise as a consultant on other defense projects.

After becoming involved in the ENIAC project, von Neumann expanded on the concept of stored programs and laid the theoretical foundations of all modern computers in a 1945 report and through later work. His ideas came to be known as the “von Neumann Architecture,” and von Neumann is often called the “father of computers.” … After the war, von Neumann went back to Princeton and persuaded the Institute for Advanced Study to build their own pioneering digital computer, the IAS (derived from the initials of the institute), which he designed.

Eckert and Mauchly deserve equal credit with von Neumann for their innovations, though von Neumann’s elaboration of their initial ideas and his considerable prestige lent credibility to the budding movement to build electronic computers. …”

(Eric G. Swedin and David L. Ferro, Computers: The Life Story of a Technology, 2005, The Johns Hopkins University Press)
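The stored-program idea described in the quote – data and program code residing together in memory, so that re-programming means loading new memory contents rather than re-wiring plugboards – can be made concrete with a minimal sketch. The toy machine below, with its opcodes and word layout, is my own hypothetical illustration and not the ENIAC’s, EDVAC’s, or any historical design:

# A minimal sketch (hypothetical, for illustration only) of the "stored program"
# concept: instructions and data share one memory, and the machine is
# re-programmed simply by loading different words into that memory.

def run(memory, pc=0):
    """Fetch-decode-execute loop over a shared instruction/data memory."""
    acc = 0  # a single accumulator register
    while True:
        word = memory[pc]                    # fetch: the program is just memory contents
        opcode, operand = divmod(word, 100)  # decode: last two digits are the address field
        pc += 1
        if opcode == 1:      # LOAD  addr: acc = memory[addr]
            acc = memory[operand]
        elif opcode == 2:    # ADD   addr: acc += memory[addr]
            acc += memory[operand]
        elif opcode == 3:    # STORE addr: memory[addr] = acc
            memory[operand] = acc
        elif opcode == 0:    # HALT
            return memory

# Program and data share one memory: cells 0-3 hold instructions, cells 8-10 hold data.
memory = [0] * 16
memory[0] = 108   # LOAD 8
memory[1] = 209   # ADD 9
memory[2] = 310   # STORE 10
memory[3] = 0     # HALT
memory[8], memory[9] = 30, 12

run(memory)
print(memory[10])  # prints 42

Changing the words in cells 0 through 3 re-programs the toy machine without touching any “hardware”, which is the essence of what the stored-program design made possible.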

In a February 2013 blog post, I wondered about the mid-1950s prospect that von Neumann, then stricken with cancer, would move from the Institute for Advanced Study in Princeton, New Jersey, to the University of California; “I bet it was my alma mater UC Berkeley”, I wrote, marvelling at how much of a boost his coming would have been to Berkeley’s science and the nascent Silicon Valley’s technology:

“In any case, it was a pity for the University of California that John von Neumann died at his prime age, as before his death he had decided to resign from the Institute for Advanced Study in Princeton and move to one of the UC campuses, as also revealed in the book quoted about him:

“When Johnny was in hospital in 1956, with what proved to be his terminal cancer, he wrote to Oppenheimer and explained, although not yet for publication, that he was not in fact going to come back to the IAS. He had privately accepted an offer to be professor at large at the University of California: he would live near one of its campuses (it had not been quite decided which) and proceed with research on the computer and its possible future uses, with considerable commercial sponsorship. We cannot know how much he would then have enriched our lives, with cellular automata, with totally new lines for the computer, with new sorts of mathematics.”

I bet it was my alma mater UC Berkeley John von Neumann was about to move to, given its close collaborations with several National Labs that had nuclear science and weapons researches, and its proximity to what would become Silicon Valley around Stanford University across the Bay as discussed in Part 1.

The scientific and technological history of the Bay Area would have looked very different, so much more – had “Johnny” come to his senses earlier, I sigh.”

(“Guinevere and Lancelot – a metaphor of comedy or tragedy, without Shakespeare but with shocking ends to wonderful lives (Part 2)”, February 28, 2013, Feng Gao’s Posts – Rites of Spring)

What I quoted from in the above was Norman MacRae’s book, John Von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, originally published in 1992.

Von Neumann soon died, and his intended move has left room for imagination since the University of California had several major campuses and “it had not been quite decided which” according to MacRae.

My educated guess of UC Berkeley was based on the close relations Berkeley had with the nearby national laboratories – the Lawrence Berkeley National Lab and the Lawrence Livermore National Lab discussed in Part 4 – in nuclear science and weapons research, of top interest to von Neumann.

For instance, in his involvement in the development of the hydrogen bomb, von Neumann had spent time at the Livermore lab with the physicist Edward Teller, Cold War strategist Herman Kahn, and others:

“Herman Kahn … went to U.C.L.A. and majored in physics. During the war, he served in the Pacific theatre in a non-combat position, then finished his B.S. and entered a Ph.D. program at Cal Tech. He failed to graduate… went to work at RAND. He became involved in the development of the hydrogen bomb, and commuted to the Livermore Laboratory, near Berkeley, where he worked with Edward Teller, John von Neumann, and Hans Bethe.”

(“Fat Man: Herman Kahn and the nuclear age”, by Louis Menand, June 27, 2005, The New Yorker)

On the other hand, UCLA has claimed that von Neumann had planned to move there, before his premature death in February 1957, because of his close association with a computer project at RAND Corporation in Santa Monica in the Los Angeles region:

1950_____________________________________

At RAND Corporation in Santa Monica, a project to build a von Neumann type machine was closely tracking the ongoing development at the Institute for Advanced Studies in Princeton (von Neumann often came West to consult at RAND and, in fact, his plan to relocate to UCLA was aborted by his death in February 1957).”

(“THE HISTORY OF COMPUTER ENGINEERING & COMPUTER SCIENCE AT UCLA”, by Gerald Estrin, Computer Science Department, UCLA Engineering)

As cited, the RAND computer project closely followed von Neumann’s computer design at the IAS in Princeton.

RAND was the Cold War think-tank from which John Nash, whose Princeton Ph.D. thesis idea in game theory had once been dismissed by von Neumann, was expelled in 1954 due to a police arrest for homosexual activity in a public restroom, here with more details than a previous quote in Part 2:

“That August… He spent hours at a time walking on the sand or along the promenade in Palisades Park, watching the bodybuilders on Muscle Beach…

One morning at the very end of the month, the head of RAND’s security detail got a call from the Santa Monica police station, which, as it happened, wasn’t far from RAND’s new headquarters on the far side of Main. It seemed that two cops in vice, one decoy and one arresting officer named John Otto Mattson, had picked up a young guy in a men’s bathroom in Palisades Park in the very early morning. … The man, who looked to be in his mid-twenties, claimed that he was a mathematician employed by RAND. Was he?

… Nash had a top-secret security clearance. He’d been picked up in a “police trap.” …

Nash was not the first RAND employee to be caught in one of the Santa Monica police traps. Muscle Beach, between the Santa Monica pier and the little beach community of Venice, was a magnet for bodybuilders and the biggest homosexual pickup scene in the Malibu Bay area. …”

(Sylvia Nasar, A Beautiful Mind, 1998, Simon & Schuster)

Unlike John Nash, a libidinous young gay man incessantly roaming public beaches and parks and thus a misfit for RAND’s security sensitivities, John von Neumann was a leading Cold War brain of the think-tank, alongside Herman Kahn:

“… RAND is a civilian think tank … described by Fortune in 1951 as “the Air Force’s big-brain-buying venture”, where brilliant academics pondered nuclear war and the new theory of games. …

Nothing like the RAND of the early 1950s has existed before or since. It was the original think tank, a strange hybrid of which the unique mission was to apply rational analysis and the latest quantitative methods to the problem of how to use the terrifying new nuclear weaponry to forestall war with Russia – or to win a war if deterrence failed. The people of RAND were there to think the unthinkable, in Herman Kahn’s famous phrase. … And Kahn and von Neumann, RAND’s most celebrated thinkers, were among the alleged models for Dr. Strangelove. … RAND had its roots in World War II, when the American military, for the first time in its history, had recruited legions of scientists, mathematicians, and economists and used them to help win the war. …”

(Sylvia Nasar, 1998, Simon & Schuster)

Beyond RAND or Livermore, von Neumann’s advocacy for the use of nuclear weapons was famous, rare among scientists, and framed in the mindset of “world government” – as in Part 2, a political ideal Nash was also drawn to and attempted to campaign for:

“After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade, Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable—and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at that time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

(“Passing of a Great Mind: John von Neumann, a Brilliant, Jovial Mathematician, Was a Prodigious Servant of Science and His Country”, by Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

So for von Neumann, world government needed to be achieved through war, even nuclear war, rather than through the kind of peace movement Nash wished for but failed to start.

Von Neumann’s contributions as a military technology consultant and a Cold War adviser were immense:

“His fellow scientists… knew that during World War II at Los Alamos Von Neumann’s development of the idea of implosion speeded up the making of the atomic bomb by at least a full year. His later work with electronic computers quickened U.S. development of the H-bomb by months. The chief designer of the H-bomb, Physicist Edward Teller, once said with wry humor that Von Neumann was “one of those rare mathematicians who could descend to the level of the physicists.” …

… The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. …”

(Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

Without von Neumann’s ambitious push, the United States would have fallen behind the Soviet Union in the development of intercontinental ballistic missiles (ICBM), namely long-range strategic nuclear missiles:

“In the early 1950s, the champion of strategic bombers in the United States was the famous, truculent, imperious Gen. Curtis LeMay, the chief of the Strategic Air Command, who, during the last months of World War II, had tried to break Japan’s will and avert the necessity of an American invasion by dropping 150,000 tons of firebombs on Japanese cities. …

In the Pentagon of the 1950s, LeMay was “king of the mountain,” as one colleague put it, known for pulverizing those few men who tried to stand in his way. …

Lacking LeMay’s blinders, Bennie Schriever realized that the Soviets planned to rest their future defense not on bombers but on intercontinental ballistic missiles capable of striking the United States with only 15 minutes of advance warning. The Kremlin was also fast improving batteries of surface-to-air missiles that could knock LeMay’s beloved bombers out of the sky. …

Schriever’s new way of thinking began in 1953, when he was still a colonel. During a briefing on intermediate-range bombers at Maxwell Air Force Base in Alabama, he had a fateful conversation with the legendary refugee scientists Edward Teller and John von Neumann. They predicted that by 1960, the United States would be creating hydrogen bombs so lightweight that missiles could carry them. The following year, Schriever, by then a general, was asked to supervise, on highest priority, the creation of some kind of ICBM force. …”

(“Missile Defense”, by Michael Beschloss, October 1, 2009, The New York Times)

But as discussed earlier, in hospital for cancer treatment in 1956 von Neumann expressed the wish to move to California to conduct computer research.

The computer developed at RAND in association with von Neumann was named JOHNNIAC in his honor, and was one of the most utilized computers of that early generation:

JOHNNIAC (circa 1952-1966)

The JOHNNIAC (John von Neumann Integrator and Automatic Computer) was a product of the RAND Corporation. It was yet another machine based on the Princeton Institute IAS architecture. The JOHNNIAC was named in von Neumann’s honor, although it seems that von Neumann disapproved. JOHNNIAC is arguably the longest-lived of the early computers. It was in use almost continuously from the end of the Korean War, until finally shut down on 11 February 1966 after logging over 50,000 operational hours. After two “rescues” from the scrap heap, the machine currently resides at the Computer History Museum.”

(Marshall William McMurran, ACHIEVING ACCURACY: A Legacy of Computers and Missiles, 2008, Xlibris Corporation)

UCLA’s claim that it and RAND were von Neumann’s choice in 1956 is thus consistent with MacRae’s account of von Neumann’s wish, that he wanted to concentrate on “research on the computer and its possible future uses, with considerable commercial sponsorship” – von Neumann’s prior association with RAND and JOHNNIAC made UCLA, rather than UC Berkeley, the more likely choice.

In that vein I would guess anew that, with von Neumann’s guidance, RAND could have started a ‘Computer Beach’ at Santa Monica. It could have resembled Silicon Valley’s start in a Palo Alto garage by the Hewlett-Packard Company’s founders under the mentorship of Frederick Terman, their Stanford University professor:

The Rise of Silicon Valley

In 1939, with the encouragement of their professor and mentor, Frederick Terman, Stanford alumni David Packard and William Hewlett established a little electronics company in a Palo Alto garage. That garage would later be dubbed “the Birthplace of Silicon Valley.””

(“History of Stanford”, Stanford University)

There would have been plenty of time for von Neumann and RAND to build a computer industry, as RAND was much more than a garage and Hewlett-Packard’s first computer would come out only 10 years later, in 1966 – the year of JOHNNIAC’s retirement.

(“Hewlett-Packard”, 2008, Silicon Valley Historical Association)

My imagined Computer Beach versus the nascent Silicon Valley would have been one consideration, while UCLA versus UC Berkeley was another factor von Neumann likely weighed.

Stricken with cancer, von Neumann likely came to think of his involvement in the atomic bomb development as an occupational hazard, as I discussed in my February 2013 blog post, again quoting from MacRae:

“After his intimate participations in advanced military researches during World War II and afterwards, including in the development of the nuclear bomb, John von Neumann died of cancer in 1957 at only 53, and there has been a question whether his premature death had been work-related:

“It is plausible that in 1955 the then-fifty-one-year-old Johnny’s cancer sprang from his attendance at the 1946 Bikini nuclear tests. The safety precautions at Bikini were based on calculations that were meant to keep any observer’s exposure to radiation well below what had given Japanese at Hiroshima even a 1 percent extra risk of cancer. But Johnny, like some other great scientists, underestimated risks at that time. He was startled when radiation probably caused the cancer and death in 1954 at age fifty-three of his great friend Fermi, whose 1930s experiments with nuclear bombardment in Italy were not accompanied by proper precautions. Soon after a Soviet nuclear test in 1953 Sakharov and Vyacheslav Malyshev walked near the site to assess its results. Sakharov ascribed Malyshev’s death from leukemia in 1954, and possibly his own terminal illness thirty-five years later, to that walk.”

So von Neumann and some other great scientists in the nuclear bomb development may have “underestimated” the health risks, and he and Enrico Fermi who had discovered nuclear reaction, both died at the age of 53.

Hmm, I doubt that a mathematician of von Neumann’s caliber would have incorrectly calculated the cancer risks from radiation.”

(February 28, 2013, Feng Gao’s Posts – Rites of Spring)

I doubted in that blog post, and still do, that a mathematician and scientist of John von Neumann’s caliber would have underestimated his risk of getting cancer from nuclear radiation. But the facts remain that both von Neumann and Enrico Fermi, whose discovery of the nuclear chain reaction made the atomic bomb possible, died of cancer at the same age of 53.

Moreover, as I noted in that blog post and as previously quoted in Part 4, former UC Berkeley physicist Robert Oppenheimer, leader of the atomic bomb development and director of the IAS, to whom von Neumann had communicated his wish to move to UC, later also died of cancer – 10 years older, at the age of 63:

“The physicist Robert Oppenheimer, the director of IAS at Princeton with whom von Neumann discussed his pending move in 1956, had hailed from UC Berkeley to become “father of the atomic bomb”, leading the development of nuclear bombs at Los Alamos National Lab founded by him in northern New Mexico; Oppenheimer later also died of cancer, at the age of 63.”

(February 28, 2013, Feng Gao’s Posts – Rites of Spring)

So it is possible that in 1956 cancer prompted von Neumann’s final decision to stay away from UC Berkeley and the nearby nuclear labs.

But von Neumann was also a professor of mathematics, at Princeton’s Institute for Advanced Study and, had he moved to California, prospectively at UCLA or UC Berkeley, the two oldest primary UC campuses as in Part 4. Academic factors could also have swayed his decision.

At UCLA there was a computer, SWAC (Standards Western Automatic Computer), developed at the Institute for Numerical Analysis sponsored by the National Bureau of Standards and funded by the Office of Naval Research:

1947____________________________________________

The Institute for Numerical Analysis was set up on the UCLA campus under sponsorship of the National Bureau of Standards and with funding from the Office of Naval Research. The primary function of INA was “to conduct research and training in the types of mathematics pertinent to the efficient exploitation and further development of high-speed automatic digital computing machinery.” INA attracted a stream of internationally recognized applied mathematicians. Harry Huskey completed the SWAC (Standards Western Automatic Computer) development project in 1950, and it became one of the very few places where modern numerical experiments could be conducted. The SWAC provided a testing ground for computer engineers, programmers and applied mathematicians. …”

(Gerald Estrin, Computer Science Department, UCLA Engineering)

As quoted, upon its 1950 completion the SWAC computer at UCLA became “one of the very few places where modern numerical experiments could be conducted”.

The National Bureau of Standards wanted computers for practical needs, and so SWAC and its Eastern sibling SEAC were quickly built, completed even before von Neumann’s IAS computer in Princeton:

“The SEAC (Standards Eastern Automatic Computer) and the SWAC (Standards Western Automatic Computer) were built by the National Bureau of Standards in Washington and Los Angeles respectively. Both of these computers were designed to be completed rapidly in order to satisfy the computing needs of the National Bureau of Standards until either the IAS machine or the UNIVAC was completed. … In May of 1950 the SEAC became the first post-ENIAC American computer to be fully operational. … SWAC was completed by July 1950…”

(Louis A. Girifalco, Dynamics of Technological Change, 1991, Van Nostrand Reinhold)

Von Neumann, on the other hand, was the leader of a larger computer-building movement at universities and scientific institutions in the U.S. and internationally, distributing his computer design plans far and wide:

“At Cambridge, computer development was led by Maurice Wilkes, who constituted the staff of the “University Mathematical Laboratory,” which was founded in 1937 to tend the university’s Differential Analyzer. In May of 1946, Wilkes read von Neumann’s “First Draft on the EDVAC,” which he obtained from L. J. Comrie who had just visited the United States. Wilkes was invited to the Moore School 1946 summer lectures and returned home determined to build an electronic computer. … He called his machine the EDSAC (Electronic Delay Storage Automatic Calculator). …

EDSAC also led to the first computer to be used for commercial data processing. …

In spite of these great achievements, the future course of digital computers was largely determined in the United States. The machines built immediately after the ENIAC that set this course were the EDVAC (1952), the IAS machine of von Neumann (1951), the SEAC (1950) and SWAC (1950) of the Bureau of Standards, the ERA 1101 (1950), the UNIVAC (1951), and Whirlwind (1951). Although the EDVAC was the last of this group to become operational, its design was well known and had a profound influence on all post-ENIAC machines, as has been already noted.

The Institute for Advanced Study machine was funded by the Army through the efforts of von Neumann and Goldstine. Von Neumann’s great prestige and widespread contacts ensured that the IAS machine would be widely used. In fact, von Neumann’s original plan was to distribute design plans to a number of institutions as they were developed so that the other organizations could rapidly build copies. A number of copies, with some alterations and improvements, were actually built at various institutions, including the Rand Corporation, the Los Alamos National Laboratory, the Argonne National Laboratory, and the University of Illinois. All of these were paid for by the United States government. Von Neumann’s machines were not limited to the United States. Several versions were built abroad, including the SILLIAC in Australia.”

(Louis A. Girifalco, 1991, Van Nostrand Reinhold)

The Institute for Advanced Study’s list of historical computers influenced by von Neumann is longer and includes international ones such as those in Stockholm, Moscow, Munich and Sydney:

“Differences over individual contributions and patents divided the group at the Moore School. In keeping with the spirit of academic enquiry, von Neumann was determined that advances be kept in the public domain. ECP progress reports were widely disseminated. As a consequence, the project had widespread influence. Copies of the IAS machine appeared nationally: AVIDAC at Argonne National Laboratory; ILLIAC at the University of Illinois; JOHNNIAC at RAND Corporation; MANIAC at Los Alamos Scientific Laboratory; ORACLE at Oak Ridge National Laboratory; ORDVAC at Aberdeen Proving Grounds; and internationally: BESK in Stockholm; BESM in Moscow; DASK in Denmark; PERM in Munich; SILLIAC in Sydney; and WEIZAC in Rehovoth, to mention a few. …”

(“Electronic Computer Project”, Institute for Advanced Study)

In the lists in the above two quotes, the U.S. institutions building computers, many following von Neumann’s design, included the National Bureau of Standards’ SWAC at UCLA, RAND’s JOHNNIAC, and the MANIAC at the Los Alamos national lab, but none at UC Berkeley or at the nearby Lawrence Berkeley and Lawrence Livermore national labs.

With UCLA having a computer built by the National Bureau of Standards, wasn’t UC Berkeley in an obviously inferior position when von Neumann cast his eyes on moving to the University of California to focus on computer research and development?

Yes and no.

UC Berkeley was building CALDIC, the first computer built in the San Francisco Bay Area of Northern California and the first by a university on the West Coast, although compared to those in the Los Angeles region it was a modest one, credited mostly for educational training and completed only in 1954:

“In the immediate aftermath of World War II, the nascent West Coast computer industry was concentrated in Los Angeles. A number of Southern California aerospace firms received military funding to develop computers, many of which were meant to support aircraft design and research (Eckdahl, Reed, and Sarkissian, 2003; Norberg, 1976). The CALDIC (California Digital Computer) built at UC Berkeley in 1954 was the first computer developed in the Bay Area and the first computer developed at a West Coast university. Supported by the Office of Naval Research, in 1948 Professors Paul Morton (EE), Leland Cunningham (astronomy), and Richard Lehmer (mathematics) began building the CALDIC, which was meant to be an intermediate-size computer (Hoagland, 2010: 15). Like many university-developed computers during this period, the CALDIC was not commercialized, nor were any patents issued on the results of the work. Instead, the project’s main contribution to the local economy was the graduate students supported by the project, several of whom later became industry leaders.

For example, Albert Hoagland, Roy Houg, and Louis Stevens worked on CALDIC’s magnetic data storage system and on graduation joined the newly formed IBM research laboratory that had been established in San Jose in 1956 (Flamm, 1988: 20ff). … IBM’s San Jose Laboratory soon became a global center for digital magnetic storage innovation. … Another CALDIC PhD student, Douglas Engelbart, went to the Stanford Research Institute and developed some of the cornerstones of personal computing such as the mouse, “windowed” user interfaces, and hypertext (Bardini, 2000). Students trained through the CALDIC project thus emerged as leading industrial researchers in the Bay Area computer industry of the 1960s.”

(Martin Kenney and David C. Mowery, eds., Public Universities and Regional Growth: Insights from the University of California, 2014, Stanford Business Books)

But according to Douglas Engelbart, cited above as a Berkeley Ph.D. student in the CALDIC project, that computer wasn’t really completed at the time of his Ph.D. graduation in 1955, Berkeley wasn’t receptive to a creative pioneer like him on its faculty, and he soon left:

“… After completing his B.S. in Electrical Engineering in 1948, he settled contentedly on the San Francisco peninsula as an electrical engineer at NACA Ames Laboratory (forerunner of NASA).

He began to envision people sitting in front of cathode-ray-tube displays, “flying around” in an information space where they could formulate and portray their thoughts in ways that could better harness their sensory, perceptual and cognitive capabilities which had heretofore gone untapped. And they would communicate and collectively organize their ideas with incredible speed and flexibility. So he applied to the graduate program in electrical engineering at the University of California, Berkeley, and off he went to launch his crusade. At that time, there was no computer science department and the closest working computer was probably on the eastern side of the country, with MIT’s Project Whirlwind. Berkeley did have a serious R&D program developing a general-purpose digital computer, the CalDiC, but remained unfinished throughout his time there.

He obtained his Ph.D. in 1955, along with a half dozen patents in “bi-stable gaseous plasma digital devices,” and stayed on at Berkeley as an acting assistant professor. Within a year, however, he was tipped off by a colleague that if he kept talking about his “wild ideas” he’d be an acting assistant professor forever. So he ventured back down into what is now Silicon Valley, in search of more suitable employment.

He settled on a research position at Stanford Research Institute, now SRI International, in 1957. …”

(“A Lifetime Pursuit”, by Christina Engelbart, Doug Engelbart Institute)

Before settling on a career at Stanford Research Institute beginning in 1957, Engelbart also approached both Stanford University and Hewlett-Packard – where the future Silicon Valley had begun – and learned that neither had plans for computer development:

Were you working with computers at Ames in 1951?

No! The closest computer was somewhere on the east coast.

I left Ames to attend the University of California and received a Ph.D., and taught there for a year. …

So I contacted the Dean of the School of Engineering at Stanford, just across the bay from Berkeley, and received a nice letter that said something like, “Dear Dr. Engelbart. Thank you for your interest in Stanford. Unfortunately, our School of Engineering is a small department, and we have chosen to focus only on those areas which we feel offer real potential. Since computers are only useful to service entities, we have no interest in developing a focus in them. Best of luck, etc.”

Wow – I can’t believe they were so myopic! It’s hard to believe that an institution so closely allied with the birth of Silicon Valley could have missed that one…

They weren’t alone! I also spoke with David Packard (of Hewlett-Packard). We had a great conversation, and I was all set to work for them. Then, as I was driving home from the interview, a question forced its way into my mind. About a quarter of the way home, I stopped and called the vice president of engineering at HP I was going to work for, and asked, “I assume HP is planning on going into digital instruments and digital computers, and I’ll get a chance to work in those areas, right?” And he replied that they didn’t think there was much potential there, so the answer was no.”

(“Doug Engelbart: Father of the Mouse”, by Andrew Maisel, editor-in-chief, SuperKids Educational Software Review)

Engelbart’s early experiences of discouragement were a remarkable contrast to his subsequent pioneering work and achievements, which later in 1997 won him the A. M. Turing Award – computer science’s highest honor as mentioned in Part 3:

“… He is not a computer scientist, but an engineer by training and an inventor by choice. His numerous technological innovations (including the computer mouse, hypertext, and the split screen interface) were crucial to the development of personal computing and the Internet. …”

(“DOUGLAS ENGELBART”, A. M. Turing Award, Association for Computing Machinery)

Amazingly, an engineer with pioneering ideas for computers was ignored by the fledgling Silicon Valley’s leading engineering centers – Hewlett-Packard, Stanford and Berkeley – but went on to success, receiving the computer science community’s top honor.

During the 1950s in Southern California, including Los Angeles, the military-oriented aerospace technology companies were very active in computer research, receiving substantial funding from the U.S. military:

“One other pocket of activity, in historical hindsight, looms in importance as a transporter of computer technology from laboratory to market. Located on the West Coast of the United States and tied closely to the aerospace industry in Southern California, which, in turn, was very dependent on government contracts, this activity focused on scientific and engineering computing. The design of aircraft inherently required extensive mathematical calculations, as did applications such as missile guidance. Early efforts (late 1940s) were primarily housed at Northrop Aircraft and to a lesser extent at Raytheon. Both had projects funded by the U.S. government: Northrop for its Snark missile and Raytheon for a naval control processor, for example. Northrop worked with an instrument supplier (Hewlett-Packard) on early digital projects. Then, in 1950, a group of Northrop engineers formed their own computer company called Computer Research Corporation (CRC). Like ERA, it had a military sponsor, the U.S. Air Force, for which it built various computers in the first half of the 1950s.”

(James W. Cortada, The Computer in the United States: From Laboratory to Market, 1930 to 1960, 1993, M.E. Sharpe)

As quoted, while Southern California’s aerospace companies were building computers, Northern California’s Hewlett-Packard was only “an instrument supplier” for some of them.

In fact, when the future leading computer maker IBM first entered the electronic computer market in the early 1950s with its IBM 701 machine, there were very few buyers and half of them were companies and organizations using the SWAC computer at INA at UCLA:

“By 1951 the demands for computational assistance were so great that it was difficult for the Computational Unit to fulfill its obligations. Accordingly, a new unit of INA, called the Mathematical Services Unit, was formed under the supervision of Harry Huskey. It was funded by the United States Air Force. One of the purposes of this unit was to encourage Federal Government contractors to learn how to use electronic computers. Accordingly, computational services using SWAC were made available to them. Many of these contractors made use of this service. Effectively, the NBS offer to these contractors was to augment their contracts by providing free computational services of a type which was not as yet available elsewhere. It is interesting to note that when IBM announced that they would build a “Defense Calculator” if they get 12 orders, 6 of the 12 orders came from INA’s computer customers. The “Defense Calculator” became the IBM 701 – their entry into the “Electronic Computer Age.””

(Magnus R. Hestenes and John Todd, Mathematicians Learning to Use Computers: The Institute for Numerical Analysis UCLA 1947-1954, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

So in 1956 when von Neumann was planning to move to California to focus on computer research, the future Silicon Valley region in Northern California, be it academically or industrially in the computer field, paled in comparison to the Los Angeles region in Southern California.

Moreover, von Neumann’s role as the U.S. Air Force’s leading scientific adviser on ICBMs helped bring additional military aerospace presence to the Los Angeles region:

“In July 1955, along with von Neumann and others, Schriever had an audience with President Eisenhower in the West Wing. He explained not only the paramount importance of ICBMs and the “radical” new organization he had established near Los Angeles to develop them, but also why he had not handed the project over to commercial aircraft contractors…

“Most impressive!” Ike declared. … Eisenhower secretly ordered the Pentagon to build ICBMs with “maximum urgency.” That same summer, Schriever learned from intelligence sources how little time they had: the Soviets were already testing intermediate-range ballistic missiles.”

(Michael Beschloss, October 1, 2009, The New York Times)

But von Neumann was also a great mathematician, and so the specifics of mathematical computing and numerical analysis could also have been factors influencing his choice between UCLA and UC Berkeley.

UC Berkeley had mathematics professor Richard Lehmer, cited in an earlier quote as one of the leaders of the CALDIC computer project.

I know it might not sound like much to someone of von Neumann’s ambitions. But recall, as previously quoted in Part 4, that Berkeley math professor Derrick H. Lehmer – the same person – became the director of UCLA’s Institute for Numerical Analysis at the time of the Loyalty Oath controversy in the early 1950s, when he joined 3 other Berkeley math professors, among 29 tenured Berkeley professors and 2 at UCLA who objected, in expressing opposition to McCarthyism:

“Wanting to show proof of loyalty, Robert Gordon Sproul, then President of the University of California, proposed the Loyalty Oath which would have all professors declare they were not and never had been communists.

Some 29 tenured professors from UC Berkeley and two from UCLA (one of whom later became a UC President) refused to sign. …

The Regents of the time mandated that all professors had to sign, or be fired. In the Mathematics Department, three professors refused: John Kelley, Hans Lewy, and Pauline Sperry. Another professor, D.H. Lehmer, attempted to avoid signing by taking a leave of absence to take a federal job at UCLA as Director at the Institute for Numerical Analysis. …”

(“Loyalty Oath Controversy: Interview with Leon Henkin”, Fall 2000, Vol. VII, No. 1, Berkeley Mathematics Newsletter)

Derrick Lehmer’s former directorship at the INA at UCLA was a strong credential in the computing field and thus an encouraging factor; John von Neumann himself had been a distinguished visitor at INA:

“INA attracted many distinguished visitors such as, John von Neumann, Solomon Lefschetz, Edward Teller, Norbert Wiener, and many others, …”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

But I have to concede that the contrast between 29 Berkeley professors refusing to sign the loyalty oath and only 2 refusals at UCLA, unfortunately, ran counter to von Neumann’s Cold War strategizing at RAND and his close collaboration with nuclear weapons scientists like Edward Teller, also a distinguished visitor at INA as cited above.

Lehmer returned to Berkeley in 1953, and in 1954 the National Bureau of Standards ended the Institute for Numerical Analysis following a decision by the U.S. Department of Defense, leaving the SWAC computer to UCLA:

THE PERIOD SUMMER 1953 THROUGH SPRING 1954

D. H. Lehmer returned to the University of California at Berkeley in August and C. B. Tompkins took over the Directorship of INA. …

The National Bureau of Standards was a co-sponsor with the American Mathematical Society of a Symposium on Numerical Analysis held at Santa Monica City College, August 26-28. John H. Curtiss was the chairman of the organizing committee. The symposium was entitled “American Mathematical Society Sixth Symposium in Applied Mathematics: Numerical Analysis.” …

… A large number of the participants had been associated with NBS and INA as visiting scientists. NBS and INA were represented by C. B. Tompkins, Olga Taussky-Todd, Emma Lehmer, M. R. Hestenes, T. S. Motzkin, and W. R. Wasow. …

David Saxon returned to his position in the Department of Physics at UCLA. He had a distinguished career in research and in administration. In 1975 he became President of the University of California. In 1983 he was appointed Chairman of the Corporation of Massachusetts Institute of Technology.

We now begin the final period of existence of INA. …

The decision of Secretary of Defense, Charles E. Wilson, to no longer permit a non-DOD Government agency to serve as administrator of projects carried out at a university but supported entirely, or in large part, by DOD funds, caused the National Bureau of Standards to give up its administration of INA by June 30, 1954. The University of California was invited to take over this administration. The university was not in a position to take over all sections of INA. However, UCLA agreed to take over the administration of the research group, the SWAC and its maintenance, and the Library. … The question of faculty status of INA members was to be dealt with after the takeover had been accomplished. The new organization was to be called Numerical Analysis Research (NAR). …”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

As quoted, the INA at UCLA had been under the administration of the National Bureau of Standards; then U.S. Secretary of Defense Charles E. Wilson decided that a non-defense government agency should no longer manage projects with substantial defense funding, and so the INA, administered by the NBS and funded by the ONR, had to end.

Nevertheless, I note that the physicist David Saxon mentioned above was one of the only 2 UCLA professors who refused to sign the McCarthy-era loyalty oath, “one of whom later became a UC President” as quoted earlier: Saxon was that future president of the University of California, which encompasses UCLA, UC Berkeley and many other campuses.

(“David S. Saxon, 85; Physicist Forced Out in McCarthy Era Later Led UC System in a Time of Tight Budgets”, by Elaine Woo, December 9, 2005, Los Angeles Times; and, “IN MEMORIAM: David Stephen Saxon: President Emeritus University of California: 1920 – 2005”, by Richard C. Atkinson, The University of California)

And I am struck by the contrast between the title of the above-quoted 1991 book by Magnus R. Hestenes and John Todd on the history of NBS’s INA at UCLA in 1947-1954, “Mathematicians Learning to Use Computers”, and the title of a New York Times article discussed in Part 4 about the 1986 International Congress of Mathematicians held at UC Berkeley, featuring my Ph.D. adviser Stephen Smale as the leading plenary speaker, “MATHEMATICIANS FINALLY LOG ON”.

Mathematicians were “learning to use computers” in the 1940s and 1950s, and yet it took them until the 1980s to “finally log on”!

A timeline, in the context of the facts reviewed earlier, seemed to be: in the late 1940s and early 1950s, mathematicians were “learning to use computers” at the Institute for Numerical Analysis run by the National Bureau of Standards at UCLA, until 1954, when the institute was terminated due to a Pentagon decision; in 1956 John von Neumann, a leading mathematician and the “father of computers”, was deciding which UC campus to move to for computer research and chose UCLA, which no longer held a dominant lead over UC Berkeley but was near his Cold War think-tank RAND, which had also followed his computer design; in 1957 von Neumann died of cancer at the early age of 53; and then it took another 3 decades for mathematicians to “finally log on” to computers.

I know my timeline appears to stretch the facts, namely in suggesting that the death of one mathematician, as great as von Neumann was, could have such a devastating impact on the history of mathematicians’ acquaintance with computers.

But there were other intriguing and tell-tale facts hinting at a similar timeline.

Within a few years following the world’s leading gathering of mathematicians at Berkeley in 1986, several Berkeley mathematicians in the computational fields who influenced my study there – Richard Karp, Stephen Smale, Andrew Majda, William Kahan, and also the University of Wisconsin’s Carl de Boor, all mentioned in Part 4 – received the John von Neumann Lecture honor of the Society for Industrial and Applied Mathematics:

“The John von Neumann lecturers:

  • 1986 Jacques-Louis Lions
  • 1987 Richard M. Karp
  • 1988 Germund G. Dahlquist
  • 1989 Stephen Smale
  • 1990 Andrew J. Majda
  • 1992 R. Tyrrell Rockafellar
  • 1994 Martin D. Kruskal
  • 1996 Carl de Boor
  • 1997 William (Velvel) Kahan

…”

(“The John von Neumann Lecture”, Society for Industrial and Applied Mathematics)

Not only did these Berkeley professors receive the John von Neumann Lecture honor starting in 1987, but none of the prior recipients in SIAM’s list cited above – which begins in 1960 and includes mathematicians, physicists and other scientists – had been from Berkeley, as far as I know.

So, as The New York Times put it, mathematicians would “finally log on” at the 1986 ICM held at UC Berkeley, and the UC Berkeley mathematicians – i.e., those whose work facilitated it – would finally receive an honor named for John von Neumann; now imagine if von Neumann himself had been living and leading during the missing decades!

Recall an anecdote told in Part 4: in the fall of 1983, when I was contemplating Ph.D. study with either the mathematician Smale or the numerical analyst Andrew Majda, Majda commented to me that Smale “knows nothing about numerical analysis”; I note here that Majda had in fact been a professor at UCLA before Berkeley, and was then moving to Princeton, where von Neumann had been famous.

But despite Majda’s opinion, SIAM awarded Smale the John von Neumann Lecture honor a year before according it to Majda, illustrating that the contributions to the computational fields by more pure math-inclined mathematicians like Stephen Smale were appreciated by the applied mathematics community.

Back in August 1953, when the National Bureau of Standards and the Institute for Numerical Analysis hosted in Santa Monica an event titled “American Mathematical Society Sixth Symposium in Applied Mathematics: Numerical Analysis” – as earlier quoted from the book by Hestenes and Todd – it was a first for computing: the AMS Symposia in Applied Mathematics had begun in the late 1940s, and this 6th symposium was the first devoted to computational issues.

(“Proceedings of Symposia in Applied Mathematics”, AMS eBook Collections, American Mathematical Society)

In the summer of 1953 John von Neumann was in fact the outgoing AMS president, having served from 1951 to 1953. At the time, von Neumann’s nuclear science expertise was giving him increasingly prominent national responsibilities: he became a General Advisory Committee member of the U.S. Atomic Energy Commission in 1952, and a member of the Technical Advisory Panel on Atomic Energy in 1953.

(Herman H. Goldstine, The Computer from Pascal to von Neumann, 1972, Princeton University Press)

The INA’s closure in 1954 was a major setback to the mathematical computing field, but John von Neumann was moving to the top in the nuclear arena, appointed a U.S. Atomic Energy Commissioner by President Dwight Eisenhower – and he was on that job for only 6 months before being diagnosed with cancer:

“In October 1954 Eisenhower appointed Von Neumann to the Atomic Energy Commission. Von Neumann accepted, although the Air Force and the senators who confirmed him insisted that he retain his chairmanship of the Air Force ballistic missile panel.

Von Neumann had been on the new job only six months when the pain first struck in the left shoulder. After two examinations, the physicians at Bethesda Naval Hospital suspected cancer. Within a month Von Neumann was wheeled into surgery at the New England Deaconess Hospital in Boston. A leading pathologist, Dr. Shields Warren, examined the biopsy tissue and confirmed that the pain was a secondary cancer. Doctors began to race to discover the primary locations. Several weeks later they found it in the prostate. Von Neumann, they agreed, did not have long to live.”

(Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

When the INA was closing, most of its scientists and engineers left for jobs elsewhere, including in industry and at RAND, while a few went to UC faculty jobs:

“At this time, a Numerical Analysis section was set up at NBS in Washington with John Todd as Chief and with, on a smaller scale, a mission similar to that of INA. …

The engineers resigned on November 1, 1953 and accepted positions with the Magnavox Corporation. …

By June 30, 1954, various members of INA had accepted positions in industry and in various departments of universities. For example, B. Handy, A. D. Hestenes, M. Howard, and E. C. Yowell were employed by National Cash Register. S. Marks and A. Rosenthal went to the Rand Corporation. …

During his leave of absence from INA, Lanczos was employed by North American Aviation as a specialist in computing. In 1954 at the invitation of Eamon de Valera, who was at that time Prime Minister of the Republic of Ireland, Lanczos accepted the post of Senior Professor in the School of Theoretical Physics of the Dublin Institute for Advanced Studies. …

In 1954 Harry Huskey accepted a faculty position at UC-Berkeley, where he continued to make significant contributions in the computer field. In 1967 he moved to UC-Santa Cruz to serve as Professor of Computer and Information Science. There he set up the UCSC Computer Center and served as its Director from 1967-1977. Internationally, he was in great demand as a consultant to various computer centers, e.g., centers in India, Pakistan, Burma, Brazil, and Jordan. …

Charles B. Tompkins became a member of the Department of Mathematics at UCLA. He was in charge of the NAR Project … He continued to make the computing facility available to all interested faculty and students. …”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

The above quote notes that the National Bureau of Standards had not had a numerical analysis group in the U.S. capital until its INA in Los Angeles was shutting down.

Another key fact cited above was that Harry Huskey, quoted earlier as leader of the SWAC computer development and leader of computer training for U.S. government contractors, moved to UC Berkeley in 1954.

Huskey had been an original member of the first electronic computer ENIAC project and met von Neumann there, although, as he later recalled, he and his fellow ENIAC engineers did not have a high opinion of von Neumann because the latter did not pay attention to details:

“… I heard that there were projects at the electrical engineering department of the Moore School at the university, and I applied for part time work. Since their work was classified they couldn’t tell me what they were doing, so I had no idea what I would be doing. When finally clearance came I was showed the ENIAC and I’ve worked in computers ever since.

… The von Neumann report was not helpful, in my opinion. So I think the answer– well, we had general meetings in which von Neumann participated. And I think the people who were actually working on the project took the feeling that, “Well, he doesn’t worry about the details. He waves his hand.” That sort of position. …”

(“Oral History of Harry Huskey”, interview by William Aspray, February 7, 2006, Computer History Museum)

So later, in 1956, when von Neumann contemplated whether to move to UCLA or UC Berkeley, a key computer development leader who had moved from the INA at UCLA to Berkeley was not so positive about him.

In 1954 it was Derrick Lehmer and also Paul Morton, cited earlier as leaders of the CALDIC computer project, who offered Huskey his Berkeley professorship in both mathematics and electrical engineering departments:

“The fact that INA was a project under the Bureau of Standards caused it to be terminated as a Bureau project. The SWAC computer was given to the Engineering Department of UCLA, and the mathematical research part of INA was set up as a project in the Math Department of UCLA. And so that ended that phase of things. I had gone on leave to Wayne University with Jacobson, with the charter to set up a computer center there, and so I spent the year working on that, and when I came back to the Bureau, all this other stuff had happened. So the question was, what is the future? And at that point, Lehmer and Paul Morton at Berkeley offered me a position, so I took that. It was an associate professorship.

It was half math and half EE, and so on July 1st of that year, I moved to Berkeley. That’s about the whole story.”

(interview by William Aspray, February 7, 2006, Computer History Museum)

As mentioned earlier, Berkeley’s CALDIC computer project was reportedly completed in 1954, yet Douglas Engelbart later said it was still unfinished when he graduated in 1955. So it is possible that Huskey’s arrival helped finish it.

Like Huskey’s, Lehmer’s association with the computer and with von Neumann had begun earlier; in 1945-1946 Lehmer was a member of the Computations Committee planning ENIAC’s use at the U.S. Army’s Ballistic Research Laboratory at the Aberdeen Proving Ground:

“… A Computations Committee had been established in 1945 to provide a group of experts to plan for the arrival of ENIAC at the BRL and to see that it would be applied productively. The members of the Computations Committee included the mathematicians Haskell Curry, Franz Alt, and Derrick Lehmer and the astronomer Leland Cunningham. All of them had come to Aberdeen during the war to assist with the BRL’s computational work, and they retained a connection with the lab for several years afterward—some as employees, others as frequent visitors. …”

(Thomas Haigh, Mark Priestley and Crispin Rope, ENIAC in Action: Making and Remaking the Modern Computer, 2016, The MIT Press)

The Aberdeen Proving Ground, as previously mentioned in a quote in Part 4, was a military research facility where Berkeley math professor Hans Lewy – later one of Lehmer’s fellow objectors to the McCarthy-era UC loyalty oath – had worked during World War II.

Lehmer had strong historical credentials for overseeing ENIAC computing; he had been a pioneer in building electro-mechanical computing devices:

“Lehmer made contributions to many parts of number theory, but he was especially interested in relevant numerical calculations. He was unsurpassed in this field. …

While still an undergraduate, Lehmer realized that it would be helpful to have a mechanical device for combining linear congruences, and at various times, he supervised the construction of several such machines. These special-purpose computers, known as sieves, were particularly useful in factoring large numbers. The first model, constructed in 1927, used 19 bicycle chains. …

In 1932, an improved sieve was constructed and displayed at the 1933 World’s Fair in Chicago. Here, instead of bicycle chains, disk gears with various numbers of teeth were used, with holes opposite each tooth. For a given problem, the unwanted holes were plugged, and a photoelectric cell was used to stop the machine when open holes were lined up. …

Lehmer was a pioneer in the development of modern computing machines and in their use in the solution of scientific problems, particularly those arising in number theory. In 1945-46 he was called to the Ballistic Research Laboratory of the Aberdeen Proving Ground to prepare that laboratory for the installation of the ENIAC, the first general-purpose electronic computer. he observed the completion of that computer in Philadelphia and took part in its testing in Aberdeen. …”

(“Derrick H. Lehmer, Mathematics: Berkeley: 1905-1991 Professor Emeritus”, John L. Kelley, Raphael M. Robinson, Abraham H. Taub, and P. Emery Thomas, 1991, University of California)

I note that the Lehmer sieves built in the 1920s and 1930s as explained above could only do specific mathematical computations – unlike the later ENIAC, the first “general-purpose” electronic computer.
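For readers unfamiliar with such sieves, the following is a minimal sketch – my own illustration in Python, not Lehmer’s hardware or any historical code – of the idea his machines mechanized: scan the integers and report those that simultaneously satisfy a prescribed set of congruence conditions, much as the aligned open holes stopped his photoelectric sieve.

  # A minimal sketch, my own and not Lehmer's hardware or code, of the sieve idea:
  # keep an integer only if, for every modulus, its residue lies in an allowed set,
  # the way aligned open holes stopped Lehmer's photoelectric machine.

  def sieve_search(conditions, limit):
      """conditions: list of (modulus, allowed residues); return all matches below limit."""
      return [n for n in range(limit)
              if all(n % m in allowed for m, allowed in conditions)]

  # Hypothetical example: numbers that are 1 mod 4, 1 or 4 mod 5, and 2 or 3 mod 7.
  print(sieve_search([(4, {1}), (5, {1, 4}), (7, {2, 3})], 500))

In factoring applications the allowed residues would come from number-theoretic conditions on the number being factored; the machines’ advantage was speed in running exactly this kind of exhaustive scan.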

As a mathematician, Lehmer’s passion was computation for number theory, even on ENIAC, as in the following anecdote of a July 4 holiday weekend that he and his wife Emma chose to spend on the machine:

“Lehmer’s Holiday Computations

Another well-documented calculation from 1946 was carried out by the Berkeley number theorist Derrick Lehmer. Lehmer spent the year 1945-46 at the Ballistic Research Lab as a member of a group helping to plan for ENIAC’s use. He experimented with the machine by running “little problems” when it was otherwise not in use. …

As Derrick Lehmer later recounted, he and his family descended on ENIAC over the July 4 weekend, a weekend during which very little work is done in the United States. (Lehmer’s wife, Emma, was a noted mathematician who did much of the computational work required to get ENIAC’s output from this visit into publishable form.) With help from John Mauchly, they were allowed to “pull everything off the machine” and set up their own problem.

Lehmer credited Mauchly with the idea of implementing a sieve on ENIAC. Lehmer’s program, as partially reconstructed by the historians Maarten Bullynck and Liesbeth de Mol, made use of ENIAC’s ability to perform several parts of a computation at once. In the reconstruction, fourteen accumulators were used to simultaneously test a single number against different prime numbers. Lehmer’s paper does not provide enough information to make it certain that his original implementation exploited that technique, but in discussing the computation he later complained that ENIAC “was a highly parallel machine, before von Neumann spoiled it.” …”

(Thomas Haigh, Mark Priestley and Crispin Rope, 2016, The MIT Press)

While doing research in parallel computation beginning in the mid-to-late 1980s, I became familiar with the term “the von Neumann bottleneck” – related to what Lehmer said in the above quote – coined by IBM computer scientist John Backus:

“… What is a von Neumann computer? When von Neumann and others conceived it over thirty years ago, it was an elegant, practical, and unifying idea that simplified a number of engineering and programming problems that existed then. Although the conditions that produced its architecture have changed radically, we nevertheless still identify the notion of “computer” with this thirty year old concept.

In its simplest form a von Neumann computer has three parts: a central processing unit (or CPU), a store, and a connecting tube that can transmit a single word between the CPU and the store (and send an address to the store). I propose to call this tube the von Neumann bottleneck. …

… Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. …”

(“Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs”, by John Backus, 1977 ACM Turing Award Lecture, Association for Computing Machinery)
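To make Backus’s point concrete in present-day terms, here is a small sketch of my own – not Backus’s FP notation – contrasting “word-at-a-time” code, in which each loop step shuttles one value through the tube between store and CPU, with the same inner product stated over whole data aggregates, the style of thinking he advocated:

  # My own illustration of Backus's point, not his FP notation: the same inner
  # product written word-at-a-time versus stated over whole data aggregates.

  def inner_product_word_at_a_time(a, b):
      total = 0
      for i in range(len(a)):      # each step fetches one word from the "store"
          total += a[i] * b[i]     # and sends one word back through the "tube"
      return total

  def inner_product_aggregate(a, b):
      # The computation stated over the aggregates as a whole; how it is
      # scheduled on the hardware is left to the implementation.
      return sum(x * y for x, y in zip(a, b))

  a, b = [1, 2, 3, 4], [10, 20, 30, 40]
  assert inner_product_word_at_a_time(a, b) == inner_product_aggregate(a, b) == 300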

Whatever the limitations of the von Neumann computer design, back in the 1940s and 1950s the kind of mathematical delicacies Lehmer enjoyed on the early computers was the exception rather than the norm; the norm was serious military research led by von Neumann, especially for atomic bomb development:

“But ENIAC’s more profound contributions to advances in military science and technology came with Cold War work that would have been prohibitively expensive to attempt by hand. ENIAC simulated explosions of atomic and hydrogen bombs, airflow at supersonic speeds, and designs for nuclear reactors. With the considerable assistance of John von Neumann, it established the digital computer as a vital tool within the emerging military-industrial-academic complex carrying out cutting-edge research and development work during the early years of the Cold War. A few years later, IBM launched its first commercial computer, the Model 701, as the “defense calculator” and sold it almost exclusively to defense contractors. The United States Government even managed the delivery queue for IBM, making sure that computers were dispatched first to the firms doing the most important work.”

(Thomas Haigh, Mark Priestley and Crispin Rope, 2016, The MIT Press)

As the above quote indicates, in the 1940s-1950s military priorities ranked highest among the “cutting-edge research and development work”, and the allocation of computer use was centrally managed by the U.S. government – whether for academic, industrial or commercial usage.

From this perspective, one could regard it as U.S. government generosity that for 7 years, 1947-1954, mathematicians got to go to the Institute for Numerical Analysis at UCLA to learn to use the SWAC computer run by the National Bureau of Standards – for 2 of those years even under the directorship of Derrick Lehmer, an objector to the UC Loyalty Oath – before Secretary of Defense Charles Wilson pulled the plug in 1954.

The historian of science Liesbeth De Mol has done a comparison showing the contrasting mathematical focuses of Lehmer and von Neumann, i.e., Lehmer’s pure mathematics interests versus von Neumann’s applied mathematics ambitions.

De Mol wrote of Derrick Lehmer the mathematician:

“Derrick H. Lehmer (1905-1991) was born into number theory. His father, Derrick N. Lehmer, was a number-theorist, known for his factor table up to 10,000,000 and his stencil sheets to find factors of large numbers. …

Throughout Lehmer’s papers one finds numerous statements about the experimental character of mathematics and more specifically number theory, which he regarded as a kind of observational science. It is exactly in this context that one should understand Lehmer’s interest in computers. He regarded them as instruments to experimentally study mathematics … Already as a young boy, Lehmer began to design and build small special-purpose machines, known as sieves, to assist him in his number-theoretical work.

When World War II began, Lehmer “got involved into war work mostly having to do with the analysis of bombing…”. He built a special-purpose machine, a “bombing analyzer [which] was a combination of the digital and the analog device. […] I demonstrated it in Washington one time at the Pentagon. […] This thing was Army Ordnance, I guess. …”. Just after the war Lehmer was called upon by the Ballistic Research Laboratories (Aberdeen Proving Ground) to become a member of the ‘Computations Committee’, which was assembled to prepare for utilizing the ENIAC after its completion …”

(“Doing Mathematics on the ENIAC. Von Neumann’s and Lehmer’s different visions”, by Liesbeth De Mol, in E. Wilhelmus, I. Witzke (eds.), Mathematical practice and development throughout History, 2008, Logos Verlag)

In short, Lehmer was an experimentally and computationally oriented pure mathematician, who also proved his abilities in his wartime work for the military.

De Mol wrote of John von Neumann the mathematician:

“John von Neumann is far more famous than D.H. Lehmer, not in the least because the hardware of computers nowadays is still referred to as ‘the von Neumann architecture’. He was a mathematician by education and made major contributions in many different fields, including: mathematical logic, set theory, economics and game theory, quantum mechanics, hydrodynamics, computer science,…

Von Neumann’s acquaintance with the field of mathematical logic had a major influence on his work on computers. …

It was not his interest in logic, however, that triggered his interest in the subject. … Ulam explains why von Neumann got interested in computers …:

It must have been in 1938 that I first had discussions with von Neumann about problems in mathematical physics, and the first I remember were when he was very curious about the problem of mathematical treatment of turbulence in hydrodynamics. […] He was fascinated by the role of Reynolds number, a dimensionless number, a pure number because it is the ratio of two forces, the inertial one and the viscous […] [von Neumann] […] wanted to find an explanation or at least a way to understand this very puzzling large number. […] I remember that in our discussions von Neumann realized that the known analytical methods, the method of mathematical analysis, even in their most advanced forms, were not powerful enough to give any hope of obtaining solutions in closed form. This was perhaps one of the origins of his desire to try to devise methods of very fast numerical computations, a more humble way of proceeding. Proceeding by “brute force” is considered by some to be more lowbrow. […] I remember also discussions about the possibilities of predicting the weather at first only locally, and soon after that, about how to calculate the circulation of meteorological phenomena around the globe.

Von Neumann got particularly interested in computers for doing numerical calculations in the context of theoretical physics and thus understood, quite early, that fast computing machines could be very useful in the context of applied mathematics.”

(Liesbeth De Mol, in E. Wilhelmus, I. Witzke (eds.), Mathematical practice and development throughout History, 2008, Logos Verlag)

As De Mol described, von Neumann was a pure mathematician, but more importantly an applied mathematician ambitious for real-world applications.

It was von Neumann’s applied-math research in fluid dynamics that led to his participation in the atomic bomb development, for which he began searching for available computing power, actively surveying the existing state-of-the-art calculating machines, as De Mol described:

“In 1943, during World War II, von Neumann was invited to join the Manhattan project – the project to develop the atomic bomb – because of his work on fluid dynamics. He soon realized that the problems he was working on involved a lot of computational work which might take years to complete. He submitted a request for help, and in 1944 he was presented a list of people he could visit. He visited Howard Aiken and saw his Harvard Mark I (ASCC) calculator. He knew about the electromechanical relay computers of George Stibitz, and about the work by Jan Schilt at the Watson Scientific Computing Laboratory at Columbia University. These machines however were still relatively slow to solve the problems von Neumann was working on. …”

(Liesbeth De Mol, in E. Wilhelmus, I. Witzke (eds.), Mathematical practice and development throughout History, 2008, Logos Verlag)

So, even taking into account that the INA’s 1954 closure had reduced UCLA’s strength in computing before von Neumann’s 1956 decision to go to UCLA or Berkeley, the two former INA computational mathematicians who returned or moved to Berkeley, Lehmer and Huskey, were not of a type compatible with von Neumann.

In the spirit of my review in Part 4 of some Berkeley mathematicians, I would like to view the contrast between Lehmer and von Neumann – articulated by Liesbeth De Mol – as an older-generation phenomenon prior to the contrast between Stephen Smale and Alexander Chorin in the 1970s and 1980s: Smale’s anti-war politics was more outspoken and higher-profile than Lehmer’s expression of objection to McCarthyism, whereas Chorin, instrumental in forming a faculty group in numerical analysis specializing in fluid dynamics, affiliated with the Lawrence Berkeley national lab and funded by military research agencies, was probably not quite von Neumann’s caliber.

Chorin’s former Ph.D. adviser Peter Lax, of New York University’s Courant Institute of Mathematical Sciences, had in fact become a protégé of von Neumann’s while still a teenager before university; later, during the Manhattan Project, Lax worked at the Los Alamos national lab, where, through von Neumann’s introduction, he got his start in the subject of fluid dynamic shock waves – the subject Andrew Majda introduced me to in the fall of 1983, as in Part 4.

(“NYU’s Peter Lax Wins ‘Nobel Prize of Mathematics’”, by Gary Shapiro, March 23, 2005, The New York Sun)

While the Polish-born Jewish Chorin as an incoming NYU Ph.D. student was initially mistaken by his adviser Peter Lax for a Hungarian compatriot, as in a tale told in Part 4, von Neumann had been the unmistakable Hungarian Jewish genius – the only Hungarian genius according to Nobel laureate Eugene Wigner:

“… Five of Hungary’s six Nobel Prize winners were Jews born between 1875 and 1905, and one was asked why Hungary in his generation had brought forth so many geniuses. Nobel laureate Wigner replied that he did not understand the question. Hungary in that time had produced only one genius, Johnny von Neumann.”

(Norman MacRae, John Von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, 1992, Pantheon Books)

As my review so far has shown, in 1956 when von Neumann planned to move to California, if his intent was to focus on computer research as described in Norman MacRae’s book, rather than on nuclear science, then the Los Angeles region around UCLA was stronger in that respect, and more conducive to his interests politically, industrially and academically, than UC Berkeley and the nascent Silicon Valley.

But the year 1956, tantalizingly, also was when some things began to happen in favor of the future Silicon Valley.

One of the happenings was that IBM established a research laboratory in San Jose, the future Silicon Valley’s largest city, as quoted earlier, and several Berkeley CALDIC computer project students started their industry-leading careers there, working on digital magnetic storage systems.

I understand that such computer peripherals might not have been much for a prominent computer pioneer and ambitious scientific leader like John von Neumann. But IBM had held, and would continue to hold, von Neumann in high regard.

Following von Neumann’s death, Herman Goldstine, his former collaborator on the ENIAC project and his deputy on the Princeton IAS computer project, became the founding director of the Mathematical Sciences Department at IBM’s central research arm, the Thomas J. Watson Research Center in New York state:

“… long before the Eniac was running it was obvious it had several major design defects. The gargantuan machine, weighing 30 tons and containing 18,000 electronic tubes, took several days to program and could store just 20 numbers. A study group for an improved machine, to be called the Edvac (Electronic Discrete Variable Automatic Computer), was established, consisting of Goldstine, Mauchly, J. Presper Eckert (Eniac’s principal engineer) and Arthur Burks (a mathematical logician). The group was shortly joined by John von Neumann.

In June 1945, von Neumann wrote the seminal Edvac Report, whose wide circulation established the new computing paradigm and ultimately the worldwide computer industry. Von Neumann’s sole authorship of the report, and his towering reputation as America’s leading mathematician, completely overshadowed the contributions of the others in the group, causing deep resentment in Eckert and Mauchly.

At the end of the war the group broke up because of these tensions. Eckert and Mauchly formed the computer company that eventually became today’s Unisys Corporation, while von Neumann, Goldstine and Burks went to the Institute for Advanced Study (IAS), Princeton University, to build a computer in an academic setting. Goldstine was appointed assistant director of the computer project, and director from 1954. In addition he co-wrote with von Neumann a set of reports, Planning and Coding of Problems for an Electronic Computing Instrument (1947) that established many early ideas in computer programming.

The IAS computer was an important design influence on the early computers of IBM, for whom von Neumann was a consultant. In 1958, following von Neumann’s death and the termination of the IAS computer project, Goldstine became the founding director of the Mathematical Sciences Department at IBM’s Watson Research Center in Yorktown Heights, New York.”

(“Herman Goldstine: Co-inventor of the modern computer and historian of its development”, by Martin Campbell-Kelly, July 4, 2004, The Independent)

As described, the leading architects of the original electronic computer ENIAC, Presper Eckert and John Mauchly, subsequently took an entrepreneurial route, forming a commercial company to further computer development, whereas von Neumann led Goldstine and several others in starting the IAS computer project at the Institute for Advanced Study in Princeton – a project that not only led to the proliferation of computer development in academic and scientific institutions, as discussed earlier, but also had an important design influence on IBM computers, with von Neumann himself a consultant for IBM.

Von Neumann had not been a founding member of the ENIAC project; it was Goldstine who had started the project on behalf of the U.S. Army, and then invited von Neumann’s participation in 1944:

“While there are challengers for the title of “first computer,” the dedication of ENIAC on Feb. 15, 1946, is widely accepted as the day the Information Age began. And like the Declaration of Independence in Philadelphia 170 years before, it declared a revolution.

Dr. Goldstine — now 82 and executive officer of the American Philosophical Society in Philadelphia — recalls arriving at Aberdeen in 1942 as a newly commissioned lieutenant in the Army Air Corps. He had just been pulled out of his squadron when the Army realized that it had better uses for a Ph.D. mathematician from the University of Chicago.

At Aberdeen, Lieutenant Goldstine was given the mission of speeding up the calculation of firing tables needed for accurate artillery and the charts needed for bombing runs. At the time, the necessary math was done by a group of young women using mechanical desk calculators. The system wasn’t working.

In the process of consulting with university experts, Lieutenant Goldstine met a 32-year-old physicist named John W. Mauchly, who outlined his idea for an all-electronic digital computer that could perform computations 1,000 times faster than a human.

Lieutenant Goldstine was intrigued, so he took the idea back to his boss, Lt. Col. Paul Gillon. He gave the project both his approval and its name — Electronic Numerical Integrator and Computer.

ENIAC was designed and built at the Moore School by a team led by Dr. Mauchly and J. Presper Eckert, an engineer in his early 20s. The newly promoted Captain Goldstine ran interference with the Army brass and contributed his own considerable expertise, says Paul Deitz, a civilian official at Aberdeen who is an unofficial historian of the ENIAC project.

In 1944, soon after the first part of ENIAC was completed, Dr. Goldstein had a chance meeting at the Aberdeen train station with John L. von Neumann, one of the leading mathematicians of his day and an adviser to the Ballistic Research Laboratory at the proving ground.

Dr. Goldstein recalls that when he told Dr. von Neumann about the ENIAC project, “he suddenly became galvanized.” It turned out that Dr. von Neumann had been working on a project in Los Alamos, N.M., that required high-power computing.”

(“Computer age had clumsy start Electronic era: Born 50 years ago, the ancestor of today’s PCs and calculators was slow, unreliable and weighed 30 tons”, by Michael Dresser, February 12, 1996, The Baltimore Sun)

Clearly, had von Neumann gone to the San Francisco Bay Area in the mid-late 1950s the newly founded IBM San Jose research laboratory would have been privileged to receive his advice.

The presence of national-level nuclear science, top-level West Coast universities with a growing interest in computers, and IBM’s arrival in the Bay Area could have given von Neumann another chance at pioneering computer research – as an alternative to the more active, military-funded industrial computer activities in the Los Angeles region, where von Neumann also had his RAND affiliation and the JOHNNIAC.

Twenty-one years later, views about von Neumann’s computer design began to change, and it was an IBM San Jose Research Laboratory scientist, the John Backus quoted earlier, who put forth the term “the von Neumann bottleneck” in his 1977 Turing Award lecture, which referenced von Neumann a whopping 90-plus times – love him or hate him!

(John Backus, 1977 ACM Turing Award Lecture, Association for Computing Machinery)

Within academia, the termination of the Institute for Numerical Analysis at UCLA in 1954, when the National Bureau of Standards gave up its management role due to the Pentagon’s objection, was a watershed event in the history of the computing field.

Harry Huskey, the SWAC computer project leader and computer training leader at INA who subsequently moved to Berkeley, later blamed the INA’s end on McCarthyism targeting the NBS:

“… some company made an additive to add to batteries that was supposed to extend their life, and the Bureau of Standards was given the job of testing it. So they tested it and decided that it didn’t do any good at all, and reported this. The guy that manufactured it contacted his congressman and said whatever, and that ended up with the Commerce Department appointing a committee, the Kelly Committee, I think it was, to review what the Bureau of Standards was doing, and this is also tied up with McCarthy. McCarthy was witch-hunting, you know, and I think the– well, they’re almost independent, but anyway, the McCarthy business caused the Bureau to fire a number of people, starting at the top. Ed Condon was fired. The next director, Alan Astin I think, was forced to resign or fired, or something. In the math division, John Curtiss was fired.

The whole Bureau operated with a good fraction of its budget coming from projects that were financed by other government agencies, and almost all of that was wiped out. If the Navy had a project going on, they would transfer it back to the Navy, and that sort of thing, so there was a real cutback in operation.

The fact that INA was a project under the Bureau of Standards caused it to be terminated as a Bureau project. …”

(interview by William Aspray, February 7, 2006, Computer History Museum)

I wouldn’t be surprised if the INA’s demise had to do with McCarthyism, given that in the summer of 1954 after its closure, John Nash was arrested for public homosexual activity in nearby Santa Monica and expelled from RAND.

From an opposite viewpoint, however, the end of the NBS’s broad management role in scientific research may have reflected a Pentagon objective to get ‘bang for their buck’, i.e., to focus funding on research directly relevant to the U.S. military.

Historically in the United States, substantial government support for scientific research had begun only with the coming of World War II:

“… During the Great Depression … a Science Advisory Board was created by executive order to advise the President. However, the board’s attempts to establish a basic research program in universities did not succeed.

The most significant step toward a durable relationship between government and science came in 1940. The war raging in Europe presented an opportunity for scientific work to affect a conflict. The leaders of the scientific community began to lobby for the creation of a government agency that would mobilize U.S. scientists for the country’s inevitable entry into the war. As a result, President Roosevelt created the National Defense Research Committee (NDRC) under the chairmanship of Dr. Vannevar Bush. Bush was a former Dean of Engineering at MIT and was later the president of the Carnegie Institution in Washington. …

A major landmark in the progress of governmental support for science in the United States turned out to be the creation of the expanded Office of Scientific Research and Development (OSRD), under Vannevar Bush. This initiated a structure under which U.S. scientists were brought into war efforts through a contract mechanism, while leaving them free to pursue their creative work. …”

(Jagdish Chandra and Stephen M. Robinson, An Uneasy Alliance: The Mathematics Research Center At the University of Wisconsin, 1956-1987, 2005, Society for Industrial and Applied Mathematics)

As in the above history, the U.S. government’s scientific research funding arose primarily from World War II preparation and came in the form of contracts, which did not prohibit the scientists from pursuing other scientific and creative interests: mobilization of the scientific community was led by the U.S. government’s National Defense Research Committee (NDRC), headed by former MIT Dean of Engineering Vannevar Bush, and later the expanded Office of Scientific Research and Development (OSRD) under Bush became a contracting agency for wartime scientific research.

The ENIAC discussed earlier was a prominent example of military research and development by academic scientists: the development of the first general-purpose electronic computer was directly initiated, funded and supervised by the Army, but was carried out at a university; after its completion, the leading developers were free to move on to start their own company, or to build computers in academia.

After World War II, the Navy’s Office of Naval Research became the primary science funding agency before the founding of the National Science Foundation – with the exceptions of medical research funded by the National Institutes of Health, and nuclear science research funded by the Atomic Energy Commission:

“In 1946, the Office of Naval Research (ONR) was created to plan, foster, and encourage scientific research and to provide within the Department of the Navy a single office which by contract or otherwise was able to sponsor, obtain, and coordinate innovative research of general interest to all sectors of the Navy. … By and large, the naval authorities believed that most of the basic research carried out under ONR’s auspices should be published in the normal way. This policy allayed many fears in the academic and scientific community. The office began to take on the role that was envisaged for the yet-to-be-established National Science Foundation (NSF).

The National Institute of Health (NIH), established in 1930 and generously funded by OSRD during the war, became a major focus of government support for medical research in the universities. The Atomic Energy Commission (AEC) was established in 1946, and this agency forged close links with universities by contracting research work to them and by building up the university-associated laboratories that it had inherited from the Manhattan Project. …”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

Despite the original recommendation by Vannevar Bush, the U.S. government’s leading science adviser, the NSF founded in 1950 did not include defense research in its charter; and the Army and Air Force proceeded to establish their own research agencies:

“When the NSF was eventually established in 1950, defense research was excluded from its terms of reference. In the initial recommendation, Dr. Bush had envisioned defense research as one of the organizational component of NSF’s charter. As a consequence, the Department of the Army, and subsequently the Air Force, established their own offices of research. The Department of the Army’s Office of Ordinance Research was established in June 1951 on the campus of Duke University.”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

As quoted above, Vannevar Bush, former MIT Dean of Engineering, had envisioned defense research as an organizational component of the National Science Foundation.

Bush had outlined his vision in a July 1945 report to President Harry Truman, in which the proposed “National Research Foundation” would include a “Division of National Defense” alongside other divisions such as a “Division of Medical Research” and a “Division of Natural Sciences”.

(“Science The Endless Frontier: A Report to the President by Vannevar Bush, Director of the Office of Scientific Research and Development, July 1945”, National Science Foundation)

The actual outcome, i.e., an NSF without a defense research branch, was positive for someone like Stephen Smale, who in the 1960s faced the unpleasant prospect, as in Part 2, that his anti-war activism risked his NSF grant eligibility – fortunately the NSF did not need to defer to the Pentagon.

From this angle, the National Bureau of Standards’ loss of its management role for defense-funded research projects was in line with the separation of the NSF from defense research, although some might reason that when it came to the administration of national standards for technology there should be as few exceptions as possible.

But as pointed out by Harry Huskey, quoted earlier, in 1953 the NBS’s loss came as a result of McCarthyism-type politics. There was a public scandal, Congressional hearings and the firing of key NBS leaders; at the recommendation of the Congressional Kelly Committee, the Pentagon transferred all weaponry research away from the NBS:

“… The Battery AD-X2 controversy, on the other hand, was serious indeed. It caused the firing of the Bureau’s director, followed eventually by full reinstatement; prompted the investigation of the Bureau by two high-level committees and brought about dramatic changes in its programs; provoked a furor in the whole scientific community and led a large number of the Bureau staff to threaten resignation; resulted in six days of hearings before a Senate select committee; made the Bureau and its director front-page news for months; brought about the resignation of an assistant secretary of commerce; and (in part) caused the transfer of 2000 persons from the Bureau to newly formed military laboratories.

It can be fairly said that no other single report has had as great an effect on the history of the Bureau as the “Kelly Committee Report,” as it is commonly known. …

… Hence it recommended the “transfer of weaponry projects to the Department of Defense,” but recommended “continued use of the Bureau by Department of Defense and Atomic Energy Commission for non-weaponry science and technical aid.” Following these recommendations, on September 27, 1953, four ordnance divisions, totaling 2000 persons—1600 in three divisions at the Harry Diamond Ordnance Laboratory in Washington, and 400 at the Missile Development Division in Corona, California—were transferred to Army Ordnance and Naval Ordnance respectively, although all operations remained at their respective sites. …”

(Elio Passaglia with Karma A. Beal, A Unique Institution: The National Bureau of Standards, 1950-1969, 1999, National Institute of Standards and Technology, U.S. Department of Commerce)

As the Kelly Committee stated in the quote above, research in “non-weaponry science and technical aid” for the Department of Defense could continue within the NBS.

Obviously, most of the mathematical research and computer training at the Institute for Numerical Analysis, funded by the Office of Naval Research as mentioned earlier, was “non-weaponry science” and so should have been able to continue. But as quoted earlier, Secretary of Defense Charles Wilson decided to end NBS management of all defense agency-funded projects, including the INA.

The U.S. Army understood the importance of academic scientific research, as seen in the fact that its Office of Ordnance Research was founded on the campus of Duke University, following the establishment of the NSF independent of the Pentagon, as quoted earlier.

The end of the INA thus marked a point after which the Army went directly into initiating and supervising university-based mathematical research.

Led by Lieutenant General James M. Gavin, Lieutenant General Arthur Trudeau and Brigadier General Chester Clark, the Army proceeded to form its own mathematics research center in academia, with a focus on relevance to the interests of the Army:

“Army general officers such as Lt. Gen. Arthur Trudeau, Lt. Gen. James M. Gavin, and Brig. Gen. Chester Clark, and other officers such as Lt. Col. Ivan R. Hershner, recognized early in the 1950s that the Army is a major user of the fruits of research in mathematics, no matter what the source is. … these enlightened officers and other members of the Army establishment were successful in convincing the Army to establish a center of mathematical expertise at an academic institution.

In preparation for this crucial decision, the Mathematics Advisory Panel of the Army, a precursor group to the Army Mathematics Advisory Group (AMAG) and the Army Mathematics Steering Committee (AMSC), conducted a survey of the uses of mathematics in Army activities and combined that with a census of its mathematically trained personnel and its expenditures for mathematical investigation. …

… The Advisory Panel made two recommendations. First, it advised that the Army establish for itself a mathematics research center at an academic institution. The key aspects of the work statement were to conduct basic research in selected areas of mathematics relevant to the interests of the Army, to provide educational and training courses to the Army on current mathematical methods, and to be available for consulting on mathematical problems encountered by Army scientists and engineers. It was to carry on research in four areas…:

  • Numerical analysis. This was broadly understood as the adaptation of mathematics to high-speed computation, to include the use of electronic computing machines, the formulation of mathematical problems for exploration by such computers, and hence the broadening of the field in which such computers could be used. This area was originally intended to include “the engineering physics of high-speed computers,” presumably what is now referred to as computer architecture and computer engineering, though unfortunately very little was in fact done at MRC in those areas.
  • Statistics and the theory of probability.
  • Applied mathematics, including ordinary and partial differential equations as well as physical mathematics with emphasis on fluid mechanics, elasticity, plasticity, electro-dynamics, electrical networks, wave guidance, and propagation. 
  • Operations research, including such subfields as linear and nonlinear programming, game theory, decision theory, information theory, and optimization.

Second, the Advisory Panel recommended that it be recognized and established as a continuing body, with the assignment to inform itself about new mathematical developments and to keep itself informed of the Army’s needs in and uses of mathematics, to supervise activities of this kind, and to facilitate the interchange of relevant information between activities. Initially, this was a committee of about twenty-five, including four from academic institutions. The rest represented various Army activities.”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

In the above historical account, the reflection of military interests in the founding of an Army mathematics research center can be seen in the overwhelming dominance of Army experts over academics on the advisory panel.

Army experts led by Lieutenant Colonel Ivan R. Hershner, the University of Vermont’s mathematics department chairman, visited 26 universities that showed some interest, including “Brown, Columbia, University of Chicago, Duke, California Institute of Technology, Harvard, the University of Illinois, the University of Michigan, MIT, New York University, the University of North Carolina, UCLA, UC-Berkeley, Stanford, the University of Wisconsin, and the University of Virginia”; out of 21 university proposals submitted, the University of Wisconsin was chosen:

“Towards the realization of the first recommendation, the chief of research and development of the Army appointed Ivan R. Hershner (then the chair of the Mathematics Department at the University of Vermont) to head an effort to explore with various universities and research groups their possible interest in this center. Letters were sent to over fifty U.S. institutions of higher learning. Based on the level of interest expressed, this small group of experts visited twenty-six universities …

This process resulted in twenty-one formal proposals. A technical advisory committee of Army scientists evaluated these proposals … The Army had offered to provide a state-of-the-art computer, but it expected that the selected university would supply suitable physical space to house the center. …

The decision to establish the Mathematics Research Center at the University of Wisconsin was announced on November 16, 1955, by Lt. Gen. James M. Gavin, chief of research and development of the U.S. Army. …

The first contract for MRC’s operation was signed on April 25, 1956, and the university designated Professor Rudolph E. Langer as MRC’s first director. ….”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

The Army Mathematics Research Center started in 1956, the same year IBM started its San Jose research lab.

It is interesting that following a nationwide search, the Army’s final choice of the academic host for its mathematics research center was the leading university in the home state of then Senator Joseph McCarthy.

That might be coincidental, but it wasn’t isolated. The Army’s Office of Ordnance Research had been founded in 1951, as quoted earlier, at Duke University, which happened to be the alma mater of the high-profile, staunchly anti-Communist Senator Richard Nixon – soon to be U.S. Vice President – from Southern California, whose political tie to North Carolina was intimate even within the Senate, as recalled by future Senator Jesse Helms, then an assistant to Senator Willis Smith of North Carolina:

“Serving as administrative assistant to a United States Senator is a true learning experience. …

In 1951 there were ninety-six U.S. Senators representing the then forty-eight states…

One of those ninety-six Senators back then was a delightful young Republican Senator from California named Richard M. Nixon. I was impressed by his intellect and his genuine interest in working with people who shared conservative principles without concern for their party tag. Senator Nixon had a solid North Carolina connection because he was a graduate of the law school at Duke University. As I mentioned, Senator Smith had been on the university’s board of trustees for some time … There were many visits to Senator Smith’s office by then President of Duke, Arthur Hollis Edens, and Senator Nixon often stopped by to greet Dr. Edens. The Duke connection as fellow alumni helped establish a solid friendship between Senator Nixon and Senator Smith.

The assignment of office space had put Mr. Nixon’s offices between the offices of Senator Smith and North Carolina’s senior Senator, Clyde R. Hoey, on one corner of the third floor of the Russell Senate Office Building. …”

(Jesse Helms, Here’s where I Stand: A Memoir, 2005, Random House)

As illustrated, much thought had been given to the assignment of office locations in a Congressional building – let alone the location of an Army central research center.

The Army Mathematics Research Center at the University of Wisconsin-Madison began its life less than 2 years after the termination of the Institute for Numerical Analysis at UCLA, and in the same year 1956 when John von Neumann, former president of the American Mathematical Society and a top adviser to the U.S. military, was hospitalized for cancer treatment and made the decision to move to the University of California.

Von Neumann soon died, in February 1957 at the age of 53.

Shortly afterwards in May 1957, McCarthy suddenly died at only 48.

Prior to that, in the early summer of 1954 – just as the INA was closing – McCarthy’s ongoing Senate committee hearings hunting for Communists in the U.S. government were foiled by the Army, after he tried to target former Army General Dwight D. Eisenhower:

“…Often, the information McCarthy used came from FBI files, which were full of rumor and third-hand accounts.

The McCarthy era began on February 9, 1950 when the obscure Republican senator from Wisconsin gave a speech to 275 members of the local Republican women’s club at the McClure Hotel in Wheeling, West Virginia.

“While I cannot take the time to name all the men in the State Department who have been named as members of the Communist Party and members of a spy ring, I have here in my hand a list of 205—a list of names that were known to the secretary of State and who, nevertheless, are still working and shaping policy of the State Department,” McCarthy said…

McCarthy eventually made the mistake of turning his sights on President Dwight D. Eisenhower. A former Army general who had led allied forces to victory during World War II, Eisenhower was as American as apple pie.

As McCarthy began accusing Eisenhower of being soft on Communists, Hoover realized he would have to distance himself from the senator. Just before what became known as the Army-McCarthy hearings started on April 22, 1954, Hoover ordered the bureau to cease helping him. …

During the hearings, McCarthy failed to substantiate his claims that the Communists had penetrated the Army, which had hired a shrewd Boston lawyer, Joseph Welch, to represent it. McCarthy noted that Fred Fischer, a young lawyer in Welch’s firm, had been a member while at Harvard Law School of the National Lawyers Guild, described by the attorney general as the “legal mouthpiece of the Communist Party.” Supreme Court Justice Arthur J. Goldberg had also been a member of the group.

Upon hearing this accusation, Welch responded, “Until this moment, senator, I think I never really gauged your cruelty or recklessness.” When McCarthy continued to hound Fischer, Welch said, “Have you no sense of decency, sir, at long last? Have you left no sense of decency?”

After two months, the hearings were over, and so was McCarthy’s career. Watching the hearings on television, millions of Americans had seen how he bullied witnesses and what an unsavory character he was. Behind the scenes, Eisenhower pushed fellow Republicans to censure McCarthy.

In August 1954, a Senate committee was formed to investigate the senator. …

On December 2, 1954, the Senate voted 67 to 22 to censure him. After that, when he rose to speak, senators left the Senate chamber. Reporters no longer attended his press conferences. On May 2, 1957, McCarthy died at the age of forty-eight of acute hepatitis, widely believed to be a result of his alcoholism…”

(“The Real Story on Joe McCarthy”, by Ronald Kessler, April 7, 2008, Newsmax)

Under the Army’s supervision the Mathematics Research Center at Wisconsin-Madison excelled. A clear sign that the MRC viewed itself as inheriting the mantle of the INA at UCLA was the fact that J. Barkley Rosser, an early director of the INA, became the second director of the MRC in 1963:

“In 1949 he was asked to become the Director of Research at a newly created Institute for Numerical Analysis, located at UCLA and sponsored by the National Bureau of Standards. At this early stage in modern electronic computing, Rosser was successful in drawing together a stellar group of mathematicians whose ultimate impact on the future of computing was memorable. He also saw that the computer held great promise for pure mathematics; one example was a project aimed at finding high precision values for the zeros of the Riemann zeta-function. While the final publication was delayed until 1969, this was among the earliest computational evidence supporting a famous conjecture of Riemann connected with properties of the prime numbers.

With the Institute functioning, Rosser returned to Cornell. In 1953-54 he received a joint Guggenheim-Fulbright fellowship which he spent in Europe, writing a book on modern logic. However, because able scientific administrators are rare, he also continued to receive requests to fill such posts, serving on many panels and committees connected with the Space Program and related projects, as well as other scientific organizations and research centers. Among these: Director of the Institute for Defense Analysis, Chairman of the Mathematics Division of the NRC, and Chairman of the Conference Board of the Mathematical Sciences.

In 1963 he moved permanently from Cornell to Wisconsin, to become the Director of the Mathematical Research Center, replacing the first Director, Rudolph Langer, who had chosen to retire. The presence of two longtime Princeton friends, Joe Hirshfelder and Steve Kleene, was an added incentive for Rosser. The MRC operated under a contract from the Department of the Army…”

(“Memorial Resolution of the Faculty of the University of Wisconsin-Madison: On the Death of Emeritus Professor J. Barkley Rosser”, March 5, 1990, University of Wisconsin Madison)

From INA directorship in 1949 to directorship of the Institute for Defense Analysis, chairmanship of the National Research Council’s mathematics division, and then directorship of the Army mathematics research center, the mathematician J. Barkley Rosser took on several important management positions affiliated with the U.S. government and the defense establishment. As a result, as told in the above quote, some of his own mathematical research did not reach publication for two decades, until 1969.

Interestingly, that particular piece of Rosser’s research was the use of the computer at INA – the SWAC computer mentioned earlier – to calculate the zeros of the Riemann zeta-function, i.e., to provide evidence for the Riemann Hypothesis, the famous pure mathematics problem whose unsuccessful pursuit by John Nash in 1958 contributed to his mental instability, as in Part 2.
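For readers unfamiliar with what such a computation checks: the Riemann Hypothesis asserts that the nontrivial zeros of the zeta function all lie on the critical line with real part 1/2, and each zero computed and found on that line is a confirming instance. The following is a modern desktop sketch using the third-party mpmath library – my illustration only, not Rosser’s SWAC method or code:

  # A modern sketch, not Rosser's SWAC method or code, using the mpmath library:
  # list the first few nontrivial zeros of zeta(s); the Riemann Hypothesis asserts
  # they all have real part 1/2.
  from mpmath import mp, zetazero, zeta

  mp.dps = 20                        # working precision in decimal digits

  for n in range(1, 6):
      rho = zetazero(n)              # the nth zero on the critical line
      print(n, rho, abs(zeta(rho)))  # |zeta(rho)| should be essentially 0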

Tackling such problems, I suspect, was the rite of manhood in Professor Rosser’s occupation – be it mathematics applied to the Army’s interests or mathematics as difficult as the Riemann Hypothesis.

The year after Rosser’s publication of his computing work on the Riemann Hypothesis, i.e., in 1970 as in Part 4, the MRC under his directorship was the target of the most powerful U.S. domestic terror bombing up to that point, which killed a physicist, Robert Fassnacht.

The bombing was a part of anti-war protests persisting over the years against the Army-affiliated math center, with some protestors advocating for “A People’s Math Research Center”:

“During the years of protest against the war and against MRC, many persons wrote documents, pro or con, about the center’s activities in support of the Army. Among all of these, the one that stands out as probably the most comprehensive single presentation of the case against the center is a booklet called THE AMRC Papers, produced in 1973 by a group calling itself the Madison Collective of Science for the People. …

The Booklet is organized in four parts, whose titles are

  • How AMRC Helps the Army
  • How AMRC Works
  • AMRC’s Relationship with the University of Wisconsin
  • An Alternative: A People’s Math Research Center

The part of most interest here is the first, which includes four chapters on specific areas in which it is alleged that MRC helped the Army. The titles of these chapters are Counterinsurgency, Chemical & Biological Warfare, Missile, and Conventional Weapons. … Indeed, many of the descriptions reported in these four chapters are taken directly from the reports of the center itself, and others from documents produced by military agencies. … this booklet was being sold in Madison at a time when some Army scientists responsible for oversight of the MRC contract were in town. Mindful of the difficulty they frequently encountered in persuading other Army officials that mathematical research was doing anything of real use to the Army, the scientists went out on the street and bought 40 copies of the booklet because it made such powerful arguments that MRC was in fact of great benefit in advancing the Army’s programs!”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

Mina Rees, a mathematician who had held management positions in U.S. government research agencies – including with the applied mathematics panel of the National Defense Research Committee and as head of the mathematics division at the Office of Naval Research – and who had played a key role in starting the NBS-sponsored INA at UCLA, expressed her strong opposition to the Army’s plan of direct involvement in an academic math research center:

“I think now rather with amusement of our feeling that the West Coast was somewhat underprivileged in — in this kind of development, but we did go to major universities all over the United States and it was after the visits to various places and an assessment of the degree of interest and the degree of involvement that the various universities were willing to undertake that we decided that the University of California at Los Angeles had the best chance of doing a – the kind of thing that we saw as needed, and I would think that we spent at least a year making that decision.

Yes. I think that was an outcome of discussions between John Curtiss and me, and one reason that we chose Southern California was that we thought that that was the place where we could get people to do that. Now the – what is it called – the Institute at Wisconsin – the Army Research – Mathematics Research Institute which had its troubles during the students’ uprisings some years later built on the same concept and tried to exploit the same attractiveness at having Army work done in a university. I was strongly opposed to that at that time, and I had no foresight – I don’t claim any foresight of what was going to happen later – but it just did not seem to me the right way to go about that problem, but it did seem to be the right way to go about the development of solid mathematics.”

(“Interviewee: Mina Rees (1902-1997) Interviewer: Henry Tropp”, September 14, 1972, Computer Oral History Collection, 1969-1973, 1977, Smithsonian National Museum of American History)

There was “solid mathematics” done at the Army MRC at Wisconsin-Madison despite her strong opposition to the setup, as Rees acknowledged in the above quote.

Moreover, the solid mathematics found application not only in the military’s interests but also in civilian industry.

Recall, as previously quoted in Part 4, the significant achievements of Wisconsin-Madison mathematics professor Carl de Boor – SIAM’s 1996 John von Neumann Lecturer, as cited earlier – in the development of the theory and applications of spline functions, which became “indispensible tools” in computer-aided design and in auto and airplane manufacturing, among other industrial fields:

“… Splines were introduced in the 40’s (by the late I.J. Schoenberg of Wisconsin) as a means for approximating discrete data by curves. Their practical application was delayed almost twenty years until computers became powerful enough to handle the requisite computations. Since then they have become indispensible tools in computer-aided design and manufacture (cars and airplanes, in particular), in the production of printer’s typesets, in automated cartography… Carl is the worldwide leader and authority in the theory and applications of spline functions. … Carl has made Wisconsin-Madison a major international center in approximation theory and numerical analysis…”

(“Van Vleck Notes: Dedications, Honors and Awards …”, Fall 1997, Department of Mathematics, University of Wisconsin)
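As a minimal illustration of what the quote means by approximating discrete data by curves – a sketch of my own using the scipy library, not de Boor’s own software – a cubic spline passes a smooth, twice-differentiable curve through a handful of data points, which is the basic operation underlying the computer-aided design uses mentioned above:

  # A minimal sketch of spline interpolation using scipy, my own illustration and
  # not de Boor's software: fit a smooth piecewise-cubic curve through data points.
  import numpy as np
  from scipy.interpolate import CubicSpline

  x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # sample sites
  y = np.array([0.0, 0.8, 0.9, 0.1, -0.8])      # measured values
  spline = CubicSpline(x, y)                    # twice-differentiable interpolant

  xs = np.linspace(0.0, 4.0, 9)
  print(np.round(spline(xs), 3))                # smooth values between the data points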

I. J. Schoenberg, mentioned above as the founder of the mathematical theory of spline functions, had done some of his early work at the INA amidst a host of other researchers – including Derrick Lehmer, J. Barkley Rosser and the David Saxon mentioned earlier – each pursuing subjects of their own interest:

“THE PERIOD SUMMER 1951 THROUGH SPRING 1952

Research in the Mathematical Theory of Program Planning was carried enthusiastically by Motzkin, Agmon, Blumenthal, Gaddum, Schoenberg, and Walsh. During July and August a joint seminar was held with Rand on “Linear inequalities and related topics.” Invited speakers from outside were: A. W. Tucker, R. W. Shepherd, J. M. Danskin, S. Karlin, and R. E. Bellman.

Studies in numerical integration of ordinary and partial differential equations were pursued vigorously by Agmon, Bers, Fichera, and Wasow. … Rosser investigated the problem of computing low moments of normal order statistics. …

… Schoenberg pursued his theory of splines, a theory that has many useful applications.

Lehmer developed a practical method for obtaining the so-called Kloosterman Sums and investigated their properties. A series of tests for primality of Mersenne numbers were made on the SWAC, using a code sent in by R. M. Robinson of UC-Berkeley. …

Studies in theoretical physics were carried out by Saxon in cooperation with members of the Physics Department and other departments at UCLA …”

(Magnus R. Hestenes and John Todd, 1991, National Institute of Standards and Technology, U.S. Department of Commerce)

But it wasn’t until the mid-1960s at the MRC at Wisconsin-Madison that research in spline functions theory, “a theory that has many useful applications” as quoted above, flourished:

“Work on splines at MRC started in 1965 under the leadership of two permanent members, I. J. Schoenberg and T. N. E. Greville. The work evolved into a separate area in 1966 and continued for years thereafter. In fact, it probably is the case that spline functions are one of the best recognized of the mathematical advances that MRC brought about. …

… The contrast between the sustained success of the spline function subarea (benefiting from the continuous attention and organizational work of Schoenberg and later of Carl de Boor) and the sporadic nature of the other numerical analysis work provides a striking example of the importance of influential continuing staff in the development and sustenance of a research area.”

(Jagdish Chandra and Stephen M. Robinson, 2005, Society for Industrial and Applied Mathematics)

As quoted, spline functions became one of MRC’s “best recognized” successful research advances, whereas other numerical analysis work was “sporadic” in nature.

The direct funding, by U.S. defense research agencies, of mathematical research applicable to their interests continued into and through the 1980s, as can be seen from my own experience applying for graduate study in the U.S. and then studying for my mathematics Ph.D. at Berkeley, here summarized from previous discussions in Part 4:

  • in 1982, when I graduated from Sun Yat-sen University in China, Prof. Yuesheng Li, who had supervised my undergraduate thesis in spline function theory, suggested that I go to the MRC at Wisconsin-Madison to study with Carl de Boor, whose industry-applied research had been funded by the U.S. Army;
  • when I chose UC Berkeley instead, Prof. Li suggested that I study with Alexander Chorin, whose ground-breaking research in computational fluid dynamics had been funded by the U.S. Navy;
  • partly on the advice of Tosio Kato at Berkeley, I chose Stephen Smale, a prominent pure mathematician and former anti-war movement leader whose research had been funded by the National Science Foundation, to be my Ph.D. adviser;
  • Smale’s ambitious work to develop mathematical theories for numerical analysis was consistently dismissed by Berkeley numerical analysts, especially by Chorin, and his claims that the work belonged to applied mathematics were not accepted by those aligned with the numerical analysts.

From an industry point of view, the dominance of military influences in the early development of computers could be partly due to the ineptitude, or ineffectiveness, of the civilian sector, as seen in Berkeley Ph.D. and Silicon Valley pioneer Douglas Engelbart’s experience in the mid-1950s with Hewlett-Packard, discussed earlier.

IBM, which in 1956 started a research laboratory in San Jose as discussed earlier, hadn’t done that well, either:

“… IBM’s president from 1914 to 1956, Thomas J. Watson, Sr., had failed to recognize growing scientific and engineering demand for high-speed computing and visualized only a small market for the new electronic machines. Only under the patriotic cover of IBM’s support for the Korean War effort and through the leadership of Thomas J. Watson, Jr., did the firm manufacture its first computer, the Defense Calculator—IBM Model 701. The eighteen machines produced were oriented toward scientific use, with limited input/output equipment, and were all placed at government installations or with defense contractors. …”

(David O. Whitten and Bessie E. Whitten, eds., Manufacturing: A Historiographical and Bibliographical Guide, 1990, Greenwood Press)

As with the invention of the first electronic computer ENIAC, war mobilization – this time during the Korean War era – played a key role in the start of IBM computer manufacturing, despite the International Business Machines Corporation’s decades-long history in a closely related industrial field.

It is also interesting that the IBM San Jose research lab’s start coincided with the end, in 1956 as quoted above, of Thomas J. Watson, Sr.’s reign of over four decades at IBM.

Watson, who had adopted for IBM the alluring slogan, “World peace through world trade”, passed the reins to his son Thomas J. Watson, Jr., a month before his death in June 1956.

(“Thomas J. Watson: CEO 1914 – 1956”, International Business Machines Corporation)

Another industrial company was more eager than IBM.

I have quoted in Part 4 from a 2011 blog post of mine, about Prof. Li in 1982 stressing to me the benefits of Carl de Boor’s General Motors connection:

“When I applied for graduate study in the United States Professor Li seriously recommended the U. S. Army Mathematics Research Center at the University of Wisconsin, Madison – Dr. Carl de Boor there and his General Motors connection were Professor Li’s favorite …”

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 3) – when violence and motive are subtle and pervasive”, March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

I had no knowledge of the specifics of Professor de Boor’s GM connection.

But there is something notable about Charles Wilson, President Eisenhower’s Secretary of Defense, who in 1953 dumped the National Bureau of Standards from the management of defense agency-funded projects.

Wilson had been promoted from the presidency of General Motors:

“Running on an anti-New Deal, pro-business, anti-corruption, anti-Communism platform, and featuring a pledge to end the Korean conflict, the Republican Eisenhower-Nixon ticket rode roughshod over the Stevenson-Sparkman Democrats, winning the White House as well as both houses of Congress. A changed philosophy of Government had been installed in Washington, one best exemplified by the nomination as secretary of defense of Charles (“Engine Charlie”) Wilson, president of General Motors, whose statement, “What’s good for the country is good for General Motors and vice versa,” was added to the lexicon of the Nation’s political history.”

(Elio Passaglia with Karma A. Beal, 1999, National Institute of Standards and Technology, U.S. Department of Commerce)

Yup, what was good for the Army’s interests was probably good for General Motors, and “Engine Charlie” Wilson had more of that drive than Thomas Watson, Sr.

By 1981-1982, when I was applying to U.S. graduate schools and having discussions with Prof. Li, there was a General Motors senior executive with a prominent mathematical computing link in her family.

Marina von Neumann Whitman, General Motors vice president and chief economist beginning in 1979, was the daughter of the late “father of computers” who had spread his computer-building ‘gospel’ around academia and scientific institutions; she had been the first woman ever to serve on the White House Council of Economic Advisers, appointed in 1972 by President Richard Nixon, Eisenhower’s former vice president:

“Whitman’s father, John von Neumann, is known for inventing Game Theory, pioneering developments in computer science and contributing to the Manhattan Project, among other achievements.

“This was a force to contend with,” Whitman said. “He was a wonderful father, but he put a lot of pressure on me to always be on the top of everything.”

Still, it’s safe to say she’s escaped her father’s shadow. She was the first woman to be appointed to the president’s Council of Economic Advisers in 1972, by President Richard Nixon. Whitman also served as vice president and chief economist of General Motors from 1979 to 1985 and group vice president for public affairs from 1985 to 1992.”

(“Marina von Neumann Whitman to read from new memoir ‘The Martian’s Daughter’”, by John Bohn, October 2, 2012, The Michigan Daily)

General Motors’ recognition of von Neumann Whitman’s talents was only logical, considering that in the 1950s Secretary of Defense Charles Wilson, the former GM president, had benefited greatly from her father’s advice, even at his hospital bedside in his last year of life:

“… At Walter Reed, where he was moved early last spring, an Air Force officer, Lieut. Colonel Vincent Ford, worked full time assisting him. Eight airmen, all cleared for top secret material, were assigned to help on a 24-hour basis. His work for the Air Force and other government departments continued. Cabinet members and military officials continually came for his advice, and on one occasion Secretary of Defense Charles Wilson, Air Force Secretary Donald Quarles and most of the top Air Force brass gathered in Von Neumann’s suite to consult his judgment while there was still time. …”

(Clay Blair, Jr., February 25, 1957, Volume 42, Number 8, Life)

Ironically, Marina von Neumann’s earlier experiences entering the real world included being turned down for a job prospect at IBM, where her late father had been a consultant, and being invited to apply for Ph.D. study at Princeton University, where her father had been famous, only to be rejected – for rather unusual personal reasons:

“She remembers one contentious exchange after Whitman told her father that she planned to get married upon graduating college.

“He had a fit,” Whitman said. “He thought that this would be the death knell for any professional ambitions I might have. And in the 1950s, he was statistically right, but he was wrong about me.”

Using two distinct anecdotes, Whitman’s second focus in the book is how society has changed during her lifetime. In the first, she describes how she was turned down for a prospective job opportunity at IBM because the recruiter saw she was engaged to be married.

The second anecdote discusses Whitman’s application to Princeton University for a Ph.D. in economics; the economics department invited her to apply, yet turned her down for a simple reason.

“I went to see the president (of Princeton). And what the conversation boiled down to was, ‘I’m so sorry, Mrs. Whitman, we can accept a student of your caliber, but we just don’t have enough ladies’ rooms.’ ”

(John Bohn, October 2, 2012, The Michigan Daily)

Marina Whitman is now a professor of business administration and public policy at the Gerald R. Ford School of Public Policy at the University of Michigan, Ann Arbor.

(““The Martian’s Daughter” by Marina von Neumann Whitman”, October 2, 2012, Gerald R. Ford School of Public Policy, University of Michigan)

(Part 5 continues in (ii))

Universal basic income – too good to be true, or simply not enough?

Good news travels fast.

Last December, a story appeared in the mainstream international media, e.g., Britain’s Independent newspaper, that Finland was planning to bring in a guaranteed basic income for all its citizens:

“Finland’s government is drawing up plans to give every one of its citizens a basic income of 800 euros (£576) a month and scrap benefits altogether.

A poll commissioned by the agency planning the proposal, the Finnish Social Insurance Institute, showed 69% supported the basic income plan.

Prime Minister Juha Sipila was quoted by QZ as backing the idea.

“For me, a basic income means simplifying the social security system,” he said.

The proposal would entitle each Finn to 800 euros tax free each month, which according to Bloomberg, would cost the government 52.2 billion euros a year.

The country’s government will make a final decision on the plan in November 2016.”

(“Finland plans to give every citizen 800 euros a month and scrap benefits”, by Will Grice, December 6, 2015, Independent)

In quoting its source, Bloomberg View columnist Leonid Bershidsky’s article a month earlier, the Independent skipped the fact that there would be a “pilot stage” first:

“… Full implementation would be preceded by a pilot stage, during which the basic income payout would be 550 euros and some benefits would remain.”

(“Finns May Get Paid for Being Finns”, by Leonid Bershidsky, November 3, 2015, Bloomberg View)

A few days later the news appeared on the World Economic Forum’s Global Agenda, which referred to the pilot stage as a “hybrid programme”:

“… The Finnish proposal, slated to be finalized in 2016, will first be rolled out as a hybrid programme in 2017, offering 550 euros a month while maintaining some social services. …”

(“Finland’s basic income experiment – can it work?”, by Donald Armbrecht, December 10, 2015, Global Agenda, World Economic Forum)

But it was “too good to be true”.

On the same day of the World Economic Forum article, Britain’s The Guardian dampened the euphoria by reporting that what Finland was planning was only a policy experiment – on a small scale and with no commitment to what would come next:

“Finland is not planning to scrap its existing benefit system and give everyone an unconditional grant of €800 a month – contrary to what some recent headlines may have told you.

What it is planning promises to be an interesting policy experiment involving a sample of the population, which may or may not include some form of basic income paid to all participants: which in turn may not be unconditional, and may be worth a lot less than €800. Still the general excitement was testimony to widespread interest in the basic income idea.”

(“Even in Finland, universal basic income is too good to be true”, by Declan Gaffney, December 10, 2015, The Guardian)

It “may be worth a lot less than €800”, because a guaranteed income for all at that level would cost more than the Finnish government’s entire annual revenue, as noted by Jim Edwards of Business Insider UK:

“In Finland, €800 a month will cost the government €52.2 billion a year. The government’s revenue for 2016 is €49.1 billion. In theory, the shortfall should not be a problem because not every Finn is an adult (only those of working age will receive it) and richer Finns’ payments will be taxed. In addition, the end of other social programmes should produce savings.”

(“Here’s how much we’d all get if the UK dumped its welfare state and introduced a universal basic income scheme instead”, by Jim Edwards, December 13, 2015, Business Insider UK)
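
As a quick check of the arithmetic behind the quoted figures, here is a minimal Python sketch; the comparison with Finland’s population of roughly 5.5 million is my own addition, not from the articles:

    # Back out how many recipients the quoted 52.2-billion-euro annual cost implies,
    # assuming 800 euros per person per month for 12 months.
    monthly_payment = 800            # euros per person per month (as quoted)
    annual_cost = 52.2e9             # euros per year (as quoted)
    government_revenue = 49.1e9      # Finland's 2016 revenue, euros (as quoted)

    implied_recipients = annual_cost / (monthly_payment * 12)
    shortfall = annual_cost - government_revenue
    print(f"implied recipients: {implied_recipients:,.0f}")           # about 5.4 million
    print(f"cost minus revenue: {shortfall / 1e9:.1f} billion euros")  # about 3.1 billion

Roughly 5.4 million implied recipients is close to Finland’s entire population, which is why, as the Business Insider article notes, restricting the payment to working-age adults and taxing it back from richer Finns is what would make the numbers workable.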

So what the Finnish government could afford might not be enough for a living income – unless, of course, the government found additional revenue, e.g., by increasing taxes as suggested by U.S. basic income advocate Scott Santens, who has proposed a basic annual income at a similar level, $12,000 per adult and $4,000 per child:

“Basic income is entirely affordable given all the current and hugely wasteful means-tested programs full of unnecessary bureaucracy that can be consolidated into it. And the cost also depends greatly on the chosen plan. A plan of $12,000 per U.S. citizen over 18, and $4,000 per citizen under 18 amounts to a revenue need of $2.98 trillion, which after all the programs that can be eliminated are rolled into it, requires an additional need of $1.5 trillion or so. So where do we come up with an additional $1.5 trillion?

• A land value tax has been estimated to be a source of revenue of about $1.7 trillion.

• A flat tax of around 40% would be sufficient. Due to the way such a tax works in combination with UBI, this would effectively be a reduction in taxes for about 80% of the population.

• A 10% value added tax (VAT) has been estimated to be a source of revenue of about $750 billion. That could be increased to reach $1.5 trillion or added to other sources of additional revenue.

• These other sources of revenue could be a carbon tax ($440 billion), a financial transaction tax ($350 billion), or taxing capital gains like ordinary income and creating new upper tax brackets ($160 billion). Did you know that for fifty years – between 1932 and 1982 – the top income tax rate averaged 82%? Our current highest rate is 39%.

…”

(“Why Should We Support the Idea of Universal Basic Income?”, by Scott Santens, June 26, 2015, Huffington Politics)

Ambitious taxation plans, but I am puzzled by some of Scott Santens’s numbers: the current highest U.S. income tax rate of 39% is pretty much the proposed flat-tax rate of 40% already; if that highest tax rate is currently paid by Americans with the highest incomes, then little additional money would be squeezed from them through the flat tax; and now where would the money for “reduction in taxes for about 80% of the population” come from?
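
Setting the flat-tax question aside, the other items on Santens’s menu can at least be tallied against his stated $1.5 trillion gap – a minimal Python sketch using only the figures quoted above:

    # Tally Santens's quoted revenue options against the stated $1.5 trillion gap
    # (figures in trillions of dollars, exactly as quoted above).
    gap = 1.5
    land_value_tax = 1.7
    vat_10_percent = 0.75
    carbon_tax = 0.44
    financial_transaction_tax = 0.35
    capital_gains_and_new_brackets = 0.16

    other_sources = carbon_tax + financial_transaction_tax + capital_gains_and_new_brackets
    print(f"land value tax alone vs gap: {land_value_tax} vs {gap}")            # covers it by itself
    print(f"10% VAT plus other sources: {vat_10_percent + other_sources:.2f}")  # about 1.70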

Oh well, big taxes tend to be less popular than their advocates claim. Santens obviously understood this, and went on to suggest other big sources of revenue:

“…

• From 2008 to 2014, we created about $5 trillion out of thin air, and handed it to banks in hopes they would lend it to people. It was called quantitative easing. The result was rich people got even richer. Why not skip the banks, and just hand debt-free money directly and equally to all citizens? Potentially, a quarter of basic income could require no taxes at all.

• There is a place in the world that already pays a regular dividend to everyone living there, universally to child and adult, through a wealth fund it has created through royalty fees paid by companies for the rights to profit from its natural resources. This place is Alaska, and the “Alaska Model” could be applied anywhere as a means of granting a basic income as the social dividend from a sovereign wealth fund of resource-based revenue.

…”

(Scott Santens, June 26, 2015, Huffington Politics)

Yeah, everyone would be rich if money could be made out of thin air, or could flow nonstop from oil.

As with the reported Finnish plan of €800/month per adult, Santens’s $2.98 trillion plan of $12,000 per adult and $4,000 per child would gobble up much of the government budget:

“The U.S. spent about $3.7 trillion in the fiscal year that just ended, about $12,000 for every American. …”

(“5 myths about the budget”, by Michael Grunwald, October 21, 2015, Politico)

The difference is that the 2015 U.S. government budget could also pay each child $12,000, not just $4,000.
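
A rough Python check of the two figures side by side – the per-capita division of the federal budget, and the head count implied by Santens’s $2.98 trillion price tag; the child-citizen count used here is my own ballpark assumption:

    # FY2015 federal spending spread evenly over the population (Politico's figures),
    # versus the head count implied by Santens's $2.98 trillion plan.
    federal_spending = 3.7e12           # dollars, fiscal year 2015 (as quoted)
    population = 308_745_538            # April 1, 2010 census count (cited later in this post)
    print(f"per person: ${federal_spending / population:,.0f}")   # about $12,000

    santens_total = 2.98e12
    per_adult, per_child = 12_000, 4_000
    child_citizens = 74e6               # rough assumption, not from the quoted sources
    implied_adult_citizens = (santens_total - per_child * child_citizens) / per_adult
    print(f"implied adult citizens: {implied_adult_citizens / 1e6:.0f} million")   # roughly 224 million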

It is hard to imagine the government spending the bulk of the budget on a universal basic income like that, rather than raising big taxes to pay for it.

In early 2014 when the Liberal Party of Canada adopted two resolutions advocating for a “Basic Annual Income”, Toronto Star columnist Carol Goar immediately reminded the public that this was an idea Liberal leader Justin Trudeau’s father, former Prime Minister Pierre Elliott Trudeau, had dodged decades earlier due to taxation concerns:

“The Liberal Party has handed Justin Trudeau a gift he dared not refuse, but will soon regret accepting.

One of the “priority resolutions” approved by delegates at their biennial convention in Montreal this past weekend calls for a Liberal government to “work with provinces and territories to design and implement a Basic Annual Income” for all Canadians.

The same gift was thrust into his father’s hands 42 years ago. Sen. David Croll, author of a groundbreaking parliamentary report, entreated Pierre Elliott Trudeau to introduce a Guaranteed Annual Income. “Let this be our priority project; a project that will stir the world’s imagination,” he urged Canada’s 15th prime minister. “We need search no further for a national purpose.”

Trudeau’s heart said yes. His head said no.

He chose reason over passion. “It’s a good theory,” he acknowledged. “But we cannot guarantee to bring everyone over the poverty line by giving them part of the taxpayers’ pocket.””

(“‘Basic annual income’ loaded with pitfalls: Goar”, by Carol Goar, February 25, 2014, Toronto Star)

But this time, Trudeau the son did not say “no” right away, perhaps because the party resolutions made no reference to taxes.

The first of the two February 2014 Canadian Liberal Party resolutions on Basic Annual Income, Policy Resolution 97, put forth by the National Women’s Liberal Commission, stated:

“…

BE IT RESOLVED that the Liberal Party of Canada advocate for a federal pilot of a basic income supplement in at least one Canadian town or city, in cooperation with the appropriate provincial and municipal government(s).”

(“Policy Resolution 97: Basic Income Supplement: Testing a Dignified Approach to Income Security for Working-age Canadians”, National Women’s Liberal Commission, Liberal Party of Canada)

“A federal pilot” involving “at least one Canadian town or city” would be like a minimal version of the Finnish “policy experiment” to be decided in November 2016. On this scale, obviously, no new tax would be needed.

For such a small-scale policy experiment, Finland and Canada would not be alone. Twenty municipalities in the Netherlands, Utrecht among them, are working to put a basic income to the test, though keeping a low profile about it:

“It’s an idea whose adherents over the centuries have ranged from socialists to libertarians to far-right mavericks. It was first proposed by Thomas Paine in his 1797 pamphlet, Agrarian Justice, as a system in which at the “age of majority” everyone would receive an equal capital grant, a “basic income” handed over by the state to each and all, no questions asked, to do with what they wanted.

… in Utrecht, one of the largest cities in the Netherlands, and 19 other Dutch municipalities, a tentative step towards realising the dream of many a marginal and disappointed political theorist is being made.

The politicians, well aware of a possible backlash, are rather shy of admitting it. “We had to delete mention of basic income from all the documents to get the policy signed off by the council,” confided Lisa Westerveld, a Green councillor for the city of Nijmegen, near the Dutch-German border.

“We don’t call it a basic income in Utrecht because people have an idea about it – that it is just free money and people will sit at home and watch TV,” said Heleen de Boer, a Green councillor in that city, which is half an hour south of Amsterdam.

Nevertheless, the municipalities are, in the words of de Boer, taking a “small step” towards a basic income for all by allowing small groups of benefit claimants to be paid £660 a month – and keep any earnings they make from work on top of that. Their monthly pay will not be means-tested. They will instead have the security of that cash every month, and the option to decide whether they want to add to that by finding work. The outcomes will be analysed by eminent economist Loek Groot, a professor at the University of Utrecht.

A start date for the scheme has yet to be settled – and only benefit claimants involved in the pilots will receive the cash – but there is no doubting the radical intent. The motivation behind the experiment in Utrecht, according to Nienke Horst, a senior policy adviser to the municipality’s Liberal Democrat leadership, is for claimants to avoid the “poverty trap” – the fact that if they earn, they will lose benefits, and potentially be worse off.”

(“Dutch city plans to pay citizens a ‘basic income’, and Greens say it could work in the UK”, by Daniel Boffey, December 26, 2015, The Guardian)

Elsewhere, a small-scale experiment was carried out in 2008-2009 in a village in Namibia, with considerable success according to the German aid organizations that conducted it:

“It sounds like a communist utopia, but a basic income program pioneered by German aid workers has helped alleviate poverty in a Nambian village. Crime is down and children can finally attend school. Only the local white farmers are unhappy.

The African continent receives roughly €30 billion in annual development aid, through charitable organizations, humanitarian assistance projects or direct payments to governments. The money flows into thousands of aid projects, into things like well-digging and malaria prevention, but some of it also ends up in the private bank accounts of corrupt statesmen or is spent on wars, and often never reaches its intended recipients. Indeed, the results of half a century of aid to the developing world are devastating: Out of the 40 nations that the International Monetary Fund (IMF) categorizes as “heavily indebted poor countries,” 33 are in Africa.

It seems that the financial assistance coming from donor nations is barely keeping the continent alive, which leads to two possible conclusions: Either development aid is not a solution, or Africa is beyond help.

In the small Namibian village of Otjivero, a coalition of aid organizations is attempting to prove that both conclusions are wrong. They insist that Africa can be helped — provided it gets the right kind of help, which requires a new and different approach to aid.

The idea is simple: The payment of a basic monthly income, funded with tax revenues, of 100 Namibia dollars, or about €9 ($13), for each citizen. There are no conditions, and nothing is expected in return. The money comes from various organizations, including AIDS foundations, the Friedrich Ebert Foundation and Protestant churches in Germany’s Rhineland and Westphalia regions.”

(“A New Approach to Aid: How a Basic Income Program Saved a Namibian Village”, by Dialika Krahe, August 10, 2009, Spiegel Online International)

But the Namibia experiment’s scientific validity has been questioned by social policy expert Rigmar Osterkamp:

“In January 2008, an innovative civil society initiative was started in the Namibian village of Otjivero. It paid a basic income grant to all residents with financial backing from Germany. The aim was to demonstrate how poverty and high levels of inequality can be reduced. The project was not evaluated diligently, however, so it did not serve as a valid pilot scheme. Its impacts remain unclear, but are certainly unsustainable.

…”

(“Lessons from failure”, by Rigmar Osterkamp, May 3, 2013, D+C Development and Cooperation)

The affordability of a broader basic income at this level is certainly doubtful: for Africa’s 1.1 billion people to each receive international aid of €108 annually – €9 a month, the Otjivero amount – would require nearly €120 billion, several times the €30 billion Africa received as per the 2009 Spiegel report.
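
The arithmetic behind that affordability point, as a one-off Python check using the figures quoted above:

    # Cost of extending the Otjivero-level grant (9 euros a month) to all of Africa.
    population = 1.1e9
    per_person_per_year = 9 * 12                  # 108 euros
    total = population * per_person_per_year      # about 119 billion euros
    current_aid = 30e9                            # annual development aid per the Spiegel report
    print(f"{total / 1e9:.0f} billion euros, about {total / current_aid:.1f} times current aid")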

As for the “communist utopia” that the Namibia project might sound like, an income without a work requirement would certainly be much ‘freer’ than the communist rule under which I grew up in China: there was no income without work, but the government aimed at full employment by assigning mandatory jobs to people of working age.

Also confusing is the notion of rations, related to a basic income experiment in India. Under the communist rule decades ago in Chinese cities, rations were quotas within the limits of which residents could purchase living necessities such as foodstuffs; that is quite different from the present rations in Delhi, provided free of charge to residents, as in the following story about basic income experiments in India:

“In 2011 two pilot schemes have started in India, one conducted by the Self-Employed Women’s Association (SEWA), a well-known trade union for women who earn a low income through their own labour or small businesses. The project was supported by UNICEF. In eight villages the pilot provided all adults for one year with an unconditional payment of 200 Rupees (about 3.75 US Dollars | 2.80 Euros) per month and each child under the age of 14 with 100 Rupees a month. These payments represented about 40% of the bare subsistence level.

The other pilot is supported by the Delhi Government. It gives households a choice between continuing to receive food rations in an existing scheme or taking a monthly cash transfer instead. Many have opted for the cash.”

(“GROWING SUPPORT FOR BI WORLWIDE”, December 2012, Global Basic Income Foundation)

In Canada in February 2014, the second Liberal Party resolution adopted on universal basic income, Policy Resolution 100 – a Priority Resolution mentioned in Carol Goar’s Toronto Star article – proposed by the party’s Prince Edward Island wing, called for the design and implementation of a Basic Annual Income:

“…

BE IT RESOLVED that a Federal Liberal Government work with the provinces and territories to design and implement a Basic Annual Income in such a way that differences are taken into consideration under the existing Canada Social Transfer System.”

(“Policy Resolution 100: Priority Resolution: Creating a Basic Annual Income to be Designed and Implemented for a Fair Economy”, Liberal Party of Canada (Prince Edward Island), Liberal Party of Canada)

The existing “Canada Social Transfer System”, referred to in this policy resolution, is defined as follows:

“The Canada Social Transfer (CST) is the primary federal contribution in Canada to provincial and territorial social programs related to post-secondary education (PSE), social assistance and social services, and programs for children.”

(“The Canada Social Transfer: Past, Present and Future Considerations”, by James Gauthier, September 13, 2012, Library of Parliament Canada)

This Liberal Party Priority Resolution is thus about turning the “existing” federal government fund transfer to the provinces and territories for post-secondary education, social assistance, social services and children’s programs, together with funds the provinces and territories already have, into a system of Basic Annual Income – whatever the income amount comes to, presumably.

The basic income normally does not include allowance for higher education:

“… Bettering oneself beyond basic needs – attaining higher education and pursuing a rewarding, long-term career – are aims that a basic income was never intended to replace.”

(“How can we not afford a ‘basic annual income’?”, by Rob Rainer and Kelly Ernst, February 27, 2014, Toronto Star)

Since government funds for post-secondary education are unlikely to be diverted to the basic income, the parts of the Canada Social Transfer that may be utilized would be funds for social assistance, social services and children’s programs.

The latest signal from Canadian Prime Minister Justin Trudeau’s government indicates that funds for children’s benefits would likely be separate from any basic income and, more importantly, that a basic income is currently not on the government’s agenda, although future discussion would be welcome:

“Veteran economist Jean-Yves Duclos, who is Minister of Families, Children and Social Development, told The Globe and Mail the concept has merit as a policy to consider after the government implements more immediate reforms promised during the election campaign.

Interest in the idea of a guaranteed income is heating up since the Finnish government announced last year that it will research and test the concept.

That has led to growing calls to explore the idea here. Former senator Hugh Segal and Conference Board of Canada chief economist Glen Hodgson are among those recommending pilot projects.

A guaranteed income was not part of the federal Liberal platform, and Mr. Duclos said it is not currently on the government’s agenda given the focus on delivering campaign commitments. However, the minister is clearly interested in exploring the idea over the longer term.

One of Mr. Duclos’s most pressing files is folding several existing benefits for parents into a single monthly payment that is geared to income. That has a target implementation date of July. The minister noted that elements of that plan are in line with a guaranteed national income.

“Most importantly, I think it’s the principles behind the idea [of a guaranteed income] that matter. These principles are greater simplicity for the government, greater transparency on the part of families and greater equity for everyone,” he said. “In fact, it’s the same principles that are behind the implementation of our Canadian child benefit in the next budget, so it’s great that different versions of different systems can achieve the same objectives of greater simplicity, transparency and equity.”

Conservative MP and finance critic Lisa Raitt said she would like the House of Commons finance committee to study the idea. She also said she raised the issue with Finance Minister Bill Morneau recently during a private pre-budget meeting.

“He seemed favourable,” she said. “I have an open mind on it. I know that there’s been progress made on it around the world in terms of how people are viewing it. I don’t know if it will work in Canada but the work of the committee will help us figure out whether or not it is something that is good or not good.””

(“Minister eyes guaranteed minimum income to tackle poverty”, by Bill Curry, February 5, 2016, The Globe and Mail)

Apparently, a party policy priority resolution does not necessarily make it into the party’s election platform – at least not in this case.

As Jean-Yves Duclos, the Canadian Minister of Families, Children and Social Development, explained, the Liberal government’s new Canadian child benefit is a different version of a different system but follows the same principles that would be for a “guaranteed national income”: “greater simplicity for the government, greater transparency on the part of families and greater equity for everyone”.

Whatever the principles, if the money has so far been for social assistance and social services, i.e., social welfare only, how much of a “guaranteed national income” for everyone can it be turned into?

Quite a lot more than what the relatively small number of welfare recipients in poverty get, because a large amount of the money is spent on others above the poverty line. For example, the U.S. data is staggering, although it includes some assistance for education (Pell grants):

“New data compiled by the Republican side of the Senate Budget Committee shows that, last year, the United States spent over $60,000 to support welfare programs per each household that is in poverty. The calculations are based on data from the Census, the Office of Management and Budget, and the Congressional Research Services.

“According to the Census’s American Community Survey, the number of households with incomes below the poverty line in 2011 was 16,807,795,” the Senate Budget Committee notes. “If you divide total federal and state spending by the number of households with incomes below the poverty line, the average spending per household in poverty was $61,194 in 2011.”

This dollar figure is almost three times the amount the average household on poverty lives on per year. “If the spending on these programs were converted into cash, and distributed exclusively to the nation’s households below the poverty line, this cash amount would be over 2.5 times the federal poverty threshold for a family of four, which in 2011 was $22,350 …” the Republicans on the Senate Budget Committee note.

To be clear, not all households living below the poverty line receive $61,194 worth of assistance per year. After all, many above the poverty line also receive benefits from social welfare programs (e.g. pell grants).

As for the welfare programs, the Republicans on the Senate Budget Committee note:

A congressional report from CRS recently revealed that the United States now spends more on means-tested welfare than any other item in the federal budget—including Social Security, Medicare, or national defense. Including state contributions to the roughly 80 federal poverty programs, the total amount spent in 2011 was approximately $1 trillion. Federal spending alone on these programs was up 32 percent since 2008.

The U.S. Census Bureau estimated that almost 110 million Americans received some form of means-tested welfare in 2011. These figures exclude entitlements like Medicare and Social Security to which people contribute, and they refer exclusively to low-income direct and indirect financial support—such as food stamps, public housing, child care, energy assistance, direct cash aid, etc. For instance, 47 million Americans currently receive food stamps…”

(“Over $60,000 in Welfare Spent Per Household in Poverty”, by Daniel Halper, October 26, 2012, The Weekly Standard)

Still, $1 trillion is just over 1/3 of the $2.98 trillion needed, according to basic income advocate Scott Santens quoted earlier, for a U.S. national basic income of $12,000 per adult and $4,000 per child per year.

But it is a lot of money that presumably can be consolidated into one basic income program to achieve “greater simplicity, transparency and equity”, as phrased by Canada’s Jean-Yves Duclos.

The British data is similar:

“So how might this work out in the UK?

We decided to use numbers for the 2013-14 financial year because those are the most-complete numbers provided by the Office of National Statistics and the Office for Budget Responsibility:

  • UK WELFARE BUDGET FOR 2013-14
  • Total welfare spending: £251 billion
  • Population: 64.5 million
  • Of which, children: 15 million

If that budget was recast as a universal basic income, this is what you would get:

  • UK BASIC INCOME BUDGET FOR 2013-14
  • Basic income per head for all residents, annually: £3,891
  • Basic income per head for all residents, monthly: £324
  • Basic income per head for adults only, annually: £5,081
  • Basic income per head for adults only, monthly: £423”

(Jim Edwards, December 13, 2015, Business Insider UK)

In 2013, £3,891 was over $5,800, and £5,081 was over $7,600.

(“Yearly Average Currency Exchange Rates: Translating foreign currency into U.S. dollars”, last updated January 15, 2016, U.S. Internal Revenue Service)

So the British government welfare spending is clearly more than 1/3 of what is needed for a basic income at the same level as the U.S. one advocated by Santens.
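
The UK division and the dollar conversion can be reproduced from the quoted inputs – a minimal Python sketch; the exchange rate below is a rough 2013 average standing in for the IRS table cited above:

    # Recast the 2013-14 UK welfare budget as a universal payment (Business Insider's exercise).
    welfare_budget = 251e9          # pounds
    population = 64.5e6
    children = 15e6
    adults = population - children  # 49.5 million

    per_head = welfare_budget / population    # about 3,891 pounds
    per_adult = welfare_budget / adults       # about 5,071 pounds (quoted as 5,081; rounding of the inputs)
    usd_per_gbp = 1.50                        # rough 2013 average, standing in for the IRS table cited above
    print(f"per head: {per_head:,.0f} GBP = {per_head * usd_per_gbp:,.0f} USD")
    print(f"per adult: {per_adult:,.0f} GBP = {per_adult * usd_per_gbp:,.0f} USD")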

The idea that a universal basic income would provide better “equity”, or more equality for the poor, appeals to the political left, while the notion that it provides “simplicity” in government management appeals to the political right:

“… To those on the left, a UBI would create greater equality by ending poverty and providing a minimum living standard. It would also increase bargaining power for workers, who could demand better working conditions with a safety cushion. …

Meanwhile, a few conservatives have advocated a form of basic income for a different set of reasons. The right likes basic income because it would allow for the removal of many overlapping and piecemeal government programs, such as food stamps and unemployment insurance, as well as programs the government directly runs. …”

(“Thinking Utopian: How about a universal basic income?”, by Mike Konczal, May 11, 2013, Wonkblog, The Washington Post)

The history of attempts at introducing a universal basic income by politicians and intellectual advocates in the Western world has been quite long, in the U.S. dating back to at least the Richard Nixon era:

“… Milton Friedman, the libertarian Nobel laureate economist, proposed a version of this idea called a “negative income tax,” in which every household would be given a check for a set amount, such that some people actually had a negative tax burden. That got picked up by the Nixon administration, in particular then-aide and future U.S. Sen. Daniel Patrick Moynihan, and Congress almost passed a version of the proposal. George McGovern proposed a $1,000 tax credit for every man woman in child during his 1972 run against Nixon, which he called a “demogrant”. …

Despite the current unpopularity of the idea — commonly known as a “basic income” when it takes the form of an unconditional payment to all citizens — among policymakers, it has some adherents among intellectuals, including Charles Murray of AEI – of The Bell Curve and Coming Apart fame – and the political philosopher Philippe van Parijs.”

(“Obama doesn’t want to just write welfare recipients checks. But what if we did?”, by Dylan Matthews, August 8, 2012, Wonkblog, The Washington Post)

I note that the U.S. politicians’ approaches were more pragmatic than universal: they simply proposed tax credits, which would be universal when not dependent on income level, and the amounts were not necessarily anywhere close to a basic living level.

One of the most recent such proposals was put forward by then U.S. Congressman Bob Filner:

“The most recent version of the idea introduced in Congress was California Rep. Bob Filner’s “A Tax Cut for the Rest of Us Act,” designed by basic income activists, which would have replaced the standard deduction of the income tax with a $2,000 credit per adult and $1,000 credit per child, both fully refundable. The bill, introduced in 2006, didn’t catch on, only gaining one other cosponsor, and is not large enough to eliminate poverty, but it did give tax analysts a chance to crunch the numbers on what a basic income would actually cost.

According to Citizens for Tax Justice, a left-leaning think tank and advocacy group, the Filner proposal came to … $186 billion annually, a figure made lower by the fact that the credit is optional, only applies to those who doesn’t itemize deductions and replaces rather than supplements the standard deduction. …”

(Dylan Matthews, August 8, 2012, The Washington Post)

I note that Bob Filner, the main subject of my February 9, 2015 blog post, was a high-profile and successful Democratic Congressman who went on to become Mayor of San Diego, only to resign in disgrace over a sexual-impropriety scandal while losing his campaign for a U.S.-Mexico bi-national Olympics for the year 2024.

(“Sexual complaints against a seasoned U.S. Democrat, and the end of a U.S.-Mexico bi-national Olympics dream”, February 9, 2015, Feng Gao’s Posts – Rites of Spring)

Filner’s tax credit plan was proposed in 2006. According to the estimation of the U.S. Bureau of Labor Statistics’ Consumer Price Index calculator, $1,000 U.S. in 1972, when then Democratic presidential candidate George McGovern proposed a universal tax credit, would equal $4,822.97 in 2006.

(“H.R. 5257 (109th): Tax Cut for the Rest of Us Act of 2006”, GOVTRACK; and,“CPI Inflation Calculator”, U.S. Bureau of Labor Statistics)

So Filner’s plan of $2,000 per adult and $1,000 per child would have given every American an amount just over 40% or just over 20%, respectively, of McGovern’s plan. Moreover, the tax credit would replace the standard deduction, and so the real gain would be even smaller.
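
The comparison follows directly from the CPI figure cited above – a minimal Python check:

    # Filner's 2006 credits as a share of McGovern's 1972 "demogrant" expressed in 2006 dollars.
    mcgovern_1972_in_2006_dollars = 4822.97     # BLS CPI calculator figure cited above
    filner_adult, filner_child = 2000, 1000
    print(f"adult credit: {filner_adult / mcgovern_1972_in_2006_dollars:.1%}")   # just over 40%
    print(f"child credit: {filner_child / mcgovern_1972_in_2006_dollars:.1%}")   # just over 20%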

Filner’s plan not only would have been very inadequate as a basic income, but also paled in comparison to the proposals of others, including conservative scholar Charles Murray’s, according to Dylan Matthews in his Washington Post Wonkblog article:

“Basic income activists have pegged the amount for a full basic income at $10,000 per adult and $2,000 per child. Here’s how much proposals between Filner’s and that plan would cost, given the current size of the population:

Filner’s proposal, not including any offsets from repealing the standard deduction, would cost a little over $500 billion a year. A plan with $5,000 grants for adults, or about half the $10,000 annual poverty line for adults, costs about $1.25 trillion a year, and Murray’s proposal to give $10,000 annually for every adult over 21 comes to about $2.25 trillion.”

(Dylan Matthews, August 8, 2012, The Washington Post)
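
The three cost figures in Matthews’s comparison can be roughly reproduced with ballpark population counts – a minimal Python sketch; the head counts below are my own approximations, not from the article:

    # Rough reproduction of the three annual cost figures quoted from Matthews's article.
    adults_18_plus = 240e6     # approximate U.S. adult population around 2012 (assumption)
    adults_21_plus = 225e6     # approximate population aged 21 and over (assumption)
    children = 74e6            # approximate population under 18 (assumption)

    filner = 2000 * adults_18_plus + 1000 * children    # a little over $0.5 trillion
    half_grant = 5000 * adults_18_plus                  # roughly $1.2 trillion
    murray = 10000 * adults_21_plus                     # about $2.25 trillion
    for name, cost in [("Filner", filner), ("$5,000 grants", half_grant), ("Murray", murray)]:
        print(f"{name}: ${cost / 1e12:.2f} trillion")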

While the basic income amounts in the proposals analyzed in Matthews’s article are all less than basic income advocate Scott Santens’s vision of $12,000 per adult and $4,000 per child, Murray’s is close to it, and is noteworthy given his fame as a conservative scholar.

Charles Murray, as reviewed in my January 23, 2015 blog post, has been highly controversial for his views on racial differences in IQ:

“Murray was one of the authors of the infamous 1994 book, The Bell Curve, whose claims about the genetic roots of the black/white IQ gap set off the most famous public intellectual debate over race and IQ.”

(“A Harvard Ph.D. thesis on “Hispanic IQ”, bad publicity even for the conservative Heritage Foundation in Washington, D.C.”, January 23, 2015, Feng Gao’s Posts – Rites of Spring)

Murray’s proposal was cheered on by British writer Tim Worstall, who emphasized in a Forbes magazine article on December 6, 2015 – the same day as the Independent story quoted at the start of this blog post – that it would cost only about as much as current U.S. spending on welfare:

“There’s rather a lot of discussion around these days about the merits of a universal basic income. We have, for example, those who tell us that the robots are about to steal all our jobs and therefore we need to tax the capital owners in order to provide that basic income for all. Well, maybe, but it’s not going to work out that way. However, that universal basic income is still a startlingly good idea simply because it’s better than any of the various welfare systems we have at present. But do note: It works by being universal and basic.

Charles Murray (in his book In Our Hands) did the math for the US: $10,000 a year to each adult over 21. It works. We spend about the same amount we currently do on welfare providing it. Chris Dillow, the thinking man’s Marxist, has pointed to similar studies for the UK suggesting £130 a week works.

This is a basic income. It is not a living wage, it doesn’t even reach the full year full time minimum wage. But you can, just about, in all the countries mentioned and with the sums for those countries, just about get by.”

(“Finally, Someone Does Something Sensible: Finland To Bring In A Universal Basic Income”, by Tim Worstall, December 6, 2015, Forbes)

But wait. Dylan Matthews’s August 2012 Washington Post article, quoted earlier, had stated that Murray’s proposal would cost $2.25 trillion a year, whereas Scott Santens’s $2.98 trillion annual estimate for his own proposal, cited earlier, assumed an additional $1.5 trillion on top of eliminating existing programs – a figure closer to the U.S. Congressional report cited earlier in Daniel Halper’s Weekly Standard article, according to which the entire U.S. government spending on poverty programs in 2011 was about $1 trillion.

So how could Murray’s math be so much smarter than even the U.S. Congress’s, such that his proposal’s estimated $2.25 trillion in spending was simply what the U.S. already spent on welfare, and thus would carry no additional tax burden?

Here is what Murray wrote in a 2008 article, referring to his 2006 book, In Our Hands:

“To frame the discussion, it is useful to think in terms of a specific proposal. The one I have proposed in a book entitled In Our Hands converts all transfer payments to a single cash payment for everyone aged twenty-one and older (Murray 2006). It would require an amendment to the American Constitution that I am not competent to frame in legal language, but its sense is easy to express: ‘Henceforth, federal, state, and local governments shall make no law nor establish any program that provides benefits to some citizens but not to others. All programs currently providing such benefits are to be terminated. The funds formerly allocated to them are to be used instead to provide every citizen with a cash grant beginning at age twenty-one and continuing until death. The annual value of the cash grant at the program’s outset is to be US$10,000.’

The GI [Guaranteed Income] eliminates programmes that are unambiguously transfers — Social Security, Medicare, Medicaid, welfare programmes, social service programmes, agricultural subsidies, and corporate welfare. It does not apply a strict libertarian definition of transfer, leaving activities such as state-funded education, and funding for transportation infrastructure and the Post Office in place. …

Once benefits replacement is used as the basis for financing a GI, the money problem becomes manageable. By about 2011, the GI will be cheaper than maintaining the system the United States has in place, and the cost savings will increase geometrically in the years to come.”

(“The Social Contract Revisited: Guaranteed Income as a Replacement for the Welfare State”, by Charles Murray, April 22, 2008, The Foundation for Law, Justice and Society)

Aha, although Murray’s definition of welfare does not include education, it includes “Social Security, Medicare, Medicaid”, in addition to the standard welfare and social services; it even includes agricultural subsidies. In this manner, Murray’s Guaranteed Income would actually save the U.S. government money.

No wonder this staunch conservative scholar is sure he can provide a $10,000 guaranteed income while balancing the government books: in giving every U.S. citizen that one check, he would take away all their current entitlements and benefits.

According to the Bureau of Labor Statistics CPI calculator, $10,000 in 2006 would equal $11,756.80 in 2015. So in terms of the size of the check, Murray’s proposal is as generous as basic income advocate Scott Santens’s $12,000. However, when a person gets sick and needs medical care, $12,000 can be very little!

Murray proposed a mandatory medical insurance requirement to accompany the guaranteed income:

“The GI requires that every recipient of the grant, beginning at age twenty-one, spends US$3000 of the US$10,000 grant on a health care insurance package that includes coverage for high-cost single events such as surgery and for catastrophic longterm illnesses or disability. The GI also requires that insurance companies treat the entire population as a single risk pool. Given that environment, health insurance companies can offer plans with excellent coverage for somewhere around US$3000. They can be so inexpensive for the same reason that life insurance companies can sell generous life insurance cheaply if people buy it when they are young.”

(Charles Murray, April 22, 2008, The Foundation for Law, Justice and Society)

With $3,000 going to health insurance, the actual amount for basic living would be no more than $7,000. By the estimation of the Bureau of Labor Statistics CPI calculator, that amount in 2006 would equal $8,229.76 in 2015.

Charles Murray’s proposed guaranteed income for basic living would thus be only $685.81/month in 2015 dollars.
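
The monthly figure follows from the BLS conversion cited above – a minimal Python check:

    # Convert Murray's 2006 amounts into 2015 dollars using the BLS CPI figure cited above.
    cpi_factor = 11756.80 / 10000       # 2006 -> 2015 per the BLS calculator
    living_2015 = 7000 * cpi_factor     # what is left after the mandatory $3,000 health premium
    print(f"basic-living portion: ${living_2015:,.2f} per year = ${living_2015 / 12:,.2f} per month")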

Tell that to the retirees.

According to the U.S. Social Security Administration, the basic Supplemental Security Income for 2016 is $733/month for one person and $1,100/month for a couple, but by December 2015 the average Social Security benefit had reached $1,228.12/month per beneficiary and $1,341.77/month for a retired worker – nearly twice the size of Murray’s proposed Guaranteed Income – and the retirees enjoy Medicare, most of them at a small premium of $104.90/month in 2015 and 2016.

(“It’s Official: Medicare Part B Premiums Will Rise 16% In 2016 For Some Seniors”, by Ashlea Ebeling, November 16, 2015, Forbes; and, “Monthly Statistical Snapshot, December 2015”, December 2015, and, “You May Be Able To Get Supplemental Security Income (SSI)”, January 2016, U.S. Social Security Administration)

The key here is that Social Security is a retirement savings mechanism financed through payroll taxes, and as a result many retirees receive substantially higher benefits than the basic amount. Likewise, Medicare is a health insurance program financed primarily through payroll taxes.

(“Social Security: Medicare”, October 2015, “How is Social Security Financed?”, and, “Social Security Benefit Amounts”, U.S. Social Security Administration)

So for many of the retirees, Charles Murray’s proposed Guaranteed Income – replacing not only the standard welfare but also the government-administered retirement savings and health insurance schemes – would mean a substantial loss of income.

I wonder: when Tim Worstall of the Adam Smith Institute in London leaped into the frenzy at the start of the latest media blitz on universal basic income, did he ever see ‘the devil in the details’?

To be fair, the eligibility requirements for the basic benefit cited above – Supplemental Security Income – include “limited income” and “limited resources”, i.e., limited personal assets, whereas Murray’s guaranteed income would be in addition to personal income and independent of personal assets:

“With regard to the elderly living in retirement, the first and largest advantage of the GI over the current system is that it is truly universal (American Social Security is not), and even in the worst case provides US$10,000 a year for every elderly person in the country. But the GI does more than give everyone a guaranteed floor income. The GI makes it easy for low-income people to have a comfortable retirement. Summarizing the more detailed discussion in the book, consider someone who puts US$2000 a year in an index-based stock fund every year from age twenty-one until he retires at sixty-six. If one applies a worst-case scenario, assuming a lower compound average growth rate (4%) than has actually occurred in any forty-five-year period in the history of the American stock market, that person will have about US$253,000 at age sixty-six, with which they could purchase an annuity worth about US$20,500 a year, on top of the US$10,000 continuing grant. What about people who don’t save any money or invest it unwisely? Everyone, including the improvident and incompetent who have squandered everything, still have US$10,000 a year each, US$20,000 for a couple, no matter what. …”

(Charles Murray, April 22, 2008, The Foundation for Law, Justice and Society)
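
Murray’s US$253,000 figure is consistent with straightforward compounding – a minimal Python sketch, assuming one US$2,000 deposit for each year from age twenty-one through sixty-six and the 4% growth rate he cites; the annuity conversion is taken as quoted:

    # Accumulate $2,000 a year at 4% compound growth, one deposit per year from age 21 through 66.
    balance = 0.0
    for age in range(21, 67):           # 46 deposits
        balance = balance * 1.04 + 2000
    print(f"balance at 66: ${balance:,.0f}")   # roughly $253,000, matching Murray's figure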

So the rich would be getting richer while retired workers with limited means would get poorer, under Murray’s proposal?

Well, not exactly. Murray’s plan is able to save the government welfare money also because the guaranteed income would be reduced through taxation if one’s earned income is higher, meaning that those who make high incomes might not actually get the guaranteed income:

“… Earned income has no effect on the grant until it reaches US$25,000. From US$25,000 to US$50,000, surtax is levied that reimburses the grant up to a maximum of US$5000. The surtax is 20 per cent of incremental earned income. The grant is administered for individuals without regard to earned income from other members of the household.”

(Charles Murray, April 22, 2008, The Foundation for Law, Justice and Society)

So when a person’s earned income reaches $50,000, the 20% surtax would have returned $5,000 of the Guaranteed Income to the government coffers.

Murray did not specify what would happen with incomes higher than $50,000. But if the 20% surtax were applied to all earned income over $25,000 until the grant is fully reimbursed, Murray’s Guaranteed Income would not be a basic income for all – even though no one would be barred from it – but a basic income for those making less than $75,000, the point at which the 20% surtax on the $50,000 of income above the threshold would claw back the entire $10,000 grant.
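
To make the phase-out concrete, here is a minimal Python sketch of the grant-and-surtax rule as quoted, with an optional flag for the extended reading just described; the extension beyond the US$5,000 cap is this post’s interpretation, not something Murray specified:

    def net_grant(earned_income, extended_clawback=False):
        """Murray's rule as quoted: a $10,000 grant, with a 20% surtax on earned
        income above $25,000 reimbursing the grant up to a $5,000 maximum.
        With extended_clawback=True, the surtax keeps going past $50,000, so the
        grant phases out entirely at $75,000 (this post's reading, not Murray's)."""
        grant = 10_000
        surtax = 0.20 * max(earned_income - 25_000, 0)
        cap = grant if extended_clawback else 5_000
        return grant - min(surtax, cap)

    print(net_grant(50_000))                            # 5000.0 -> $5,000 returned to the government
    print(net_grant(75_000, extended_clawback=True))    # 0.0 -> no grant left at $75,000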

The simplicity of the earned income surtax with which Murray would implement the guaranteed income is significant.

In this manner, I think, perhaps without drastically increasing taxes as advocated by Scott Santens or eliminating “Social Security, Medicare, Medicaid” as suggested by Charles Murray, a sizable budget from the existing government welfare-type programs could be rolled into a single basic income – full or partial – for all those in need, thus effectively eliminating poverty.

According to Daniel Halper of The Weekly Standard quoted earlier, in 2011 the amount spent by the 80 or so U.S. government poverty programs was around $1 trillion, and the number of households with incomes below the poverty line was 16,807,795.

According to the U.S. Census Bureau data, the number of households in the U.S. in 2010-2014 was 116,211,092, the average size of the households was 2.63 persons, and the total population per the April 1, 2010 census was 308,745,538.

(“QuickFacts United States”, U.S. Census Bureau)

The percentage of households in poverty in 2011 was thus just under 14.5%.

By household percentage: since $2.98 trillion is needed for Scott Santens’s basic income of $12,000 per adult and $4,000 per child, and 14.5% of $2.98 trillion is $432.1 billion, that would be the average amount needed to provide the basic income to the households living in poverty; the rest of the $1 trillion in government poverty program spending would then go to a ‘partial’ basic income for the households above the poverty line, e.g., receiving the basic income but with their higher earned income subject to a surtax as in Murray’s proposal.
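
The percentage and the $432.1 billion figure follow from the census counts cited above – a minimal Python check:

    # Share of U.S. households below the poverty line in 2011, and the matching
    # share of Santens's $2.98 trillion price tag.
    poor_households = 16_807_795
    all_households = 116_211_092
    share = poor_households / all_households
    print(f"households in poverty: {share:.1%}")                             # just under 14.5%
    print(f"share of $2.98 trillion: ${share * 2.98e12 / 1e9:.1f} billion")  # about $431 billion; $432.1 billion with the rounded 14.5%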

Unfortunately, like with the retirees’ Medicare benefits that would be taken away in Murray’s plan, the $1 trillion government spending on poverty programs included a substantial portion spent on medical care for the poor, primarily Medicaid:

“Although the public is aware that Social Security and Medicare are large expensive programs, few are aware that for every $1.00 spent on these two program, government spends 76 cents on assistance to the poor or means-tested welfare.

In FY2011, federal spending on means-tested welfare came to $717 billion. State contributions into federal programs added another $201 billion, and independent state programs contributed around $9 billion. Total spending from all sources reached $927 billion.

About half of means-tested spending is for medical care. Roughly 40 percent goes to cash, food, and housing aid. The remaining 10 to 12 percent goes what might be called “enabling” programs, programs that are intended to help poor individuals become more self-sufficient. These programs include child development, job training, targeted federal education aid and a few other minor functions.

The total of $927 billion per year in means-tested aid is an enormous sum of money. One way to think about this figure is that $927 billion amounts to $19,082 for each American defined as “poor” by the Census. However, since some means-tested assistance goes to individuals who are low income but not poor, a more meaningful figure is that total means-tested aid equals $9,040 for each lower income American (i.e., persons in the lowest income third of the population).”

(“Examining the Means-tested Welfare State: 79 Programs and $927 Billion in Annual Spending. Testimony before Committee on the Budget United States House of Representatives”, Robert Rector, April 17, 2012, The Heritage Foundation)

As in the above quote from the testimony of the Heritage Foundation’s Robert Rector before the U.S. House Committee on the Budget, when those not in poverty but of low-income are included, the average amount per person from the $927 billion total is not high, not to mention that about half of it went to cover medical care in 2011.

The actual Fiscal Year 2011 Medicaid spending figures cited in the above report were: $274.964 billion in federal spending and $157.600 billion in state spending, for a total of $432.564 billion.

In any case, consider the rough figures from the above quote: about half of the $927 billion went to medical care, and 10-12% to child development, job training and education aid and other minor functions; assuming these funds would be kept intact, it would leave only the 40% for cash, food and housing aid to be rolled into a basic income for the poor and low-income.

A more careful scrutiny of the Heritage Foundation report shows the amount for cash, food and housing aid as only around 38.1% in 2011: $182.12988 billion in cash aid – including Earned Income Tax Credit, Refundable Child Credit, Make Work Pay Tax Credit, and small amounts for refugee assistance and assistance to Indians, etc. – $109.41473 billion in food aid, and $56.143 billion in housing aid, for a total of $347.68761 billion of federal and state spending on cash, food and housing aid.

The availability of some of this $347.7 billion warrants further consideration: public housing has long been a useful subsidy for the needy and may deserve preservation; now if the $56.143 billion spending on housing aid is kept intact, the 2011 cash and food aid total would be under $292 billion – $140 billion short of the average $432.1 billion calculated earlier just for the households in poverty, for Santens’s proposal of basic income.
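
The 2011 totals and the resulting shortfall can be checked with a short calculation (a sketch; the component figures are taken directly from the Heritage Foundation report as quoted above):

    # 2011 federal plus state means-tested spending, in $ billions.
    cash_aid    = 182.12988   # incl. EITC, refundable child credit, Make Work Pay credit, etc.
    food_aid    = 109.41473
    housing_aid = 56.143

    cash_food_housing = cash_aid + food_aid + housing_aid
    print(round(cash_food_housing, 2))             # ~347.69
    cash_food = cash_food_housing - housing_aid    # keep housing aid intact
    print(round(cash_food, 1))                     # ~291.5, i.e. under $292 billion
    print(round(432.1 - cash_food, 1))             # ~140.6, the roughly $140 billion shortfall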

The 2012 U.S. federal budget showed a significant improvement over 2011, to around $325 billion in federal spending for cash aid and food aid, according to a U.S. House Committee on the Budget report in March 2014:

““Our aim is not only to relieve the symptom of poverty, but to cure it and, above all, to prevent it. No single piece of legislation, however, is going to suffice.”
– President Lyndon Johnson, 1964 State of the Union Address

Fifty years ago, President Lyndon Johnson declared war on poverty. Since then, Washington has created dozens of programs and spent trillions of dollars. But few people have stopped to ask, “Are they working?”

In “The War on Poverty: 50 Years Later,” the House Budget Committee majority staff starts to answer that question.

There are at least 92 federal programs designed to help lower-income Americans. For instance, there are dozens of education and job-training programs, 17 different food-aid programs, and over 20 housing programs. The federal government spent $799 billion on these programs in fiscal year 2012.

Program Area | # Of Federal Programs | Cost In FY2012
Cash aid | 5 | $220 billion
Education and job training | 28 | $94.4 billion
Energy | 2 | $3.9 billion
Food aid | 17 | $105 billion
Health care | 8 | $291.3 billion
Housing | 22 | $49.6 billion
Social Services | 8 | $13 billion
Veterans | 2 | $21.8 billion
TOTALS | 92 | $799 billion

But rather than provide a roadmap out of poverty, Washington has created a complex web of programs that are often difficult to navigate. Some programs provide critical aid to families in need. Others discourage families from getting ahead. And for many of these programs, we just don’t know. There’s little evidence either way.”

(“The War on Poverty: 50 Years Later: A House Budget Committee Report”, Committee on the Budget, Chairman Tom Price, M.D., March 3, 2014, U.S. House of Representatives)

The federal budget on poverty programs was substantially increased from $717 billion in 2011 to $799 billion in 2012. Within it, the federal cash aid and food aid figures alone, i.e., not counting funding from the states, add up to $325 billion – $328.9 billion if energy aid is also added – that could be converted to a basic income, with the other program expenditures left intact.

The U.S. Census Bureau has more detailed data on poverty that can help my analysis of how far $325 billion could go for a basic income:

“The nation’s official poverty rate in 2011 was 15.0 percent, with 46.2 million people in poverty. …

Thresholds

  • As defined by the Office of Management and Budget and updated for inflation using the Consumer Price Index, the weighted average poverty threshold for a family of four in 2011 was $23,021.

Age

  • In 2011, 13.7 percent of people 18 to 64 (26.5 million) were in poverty compared with 8.7 percent of people 65 and older (3.6 million) and 21.9 percent of children under 18 (16.1 million).
  • …”

(“Income, Poverty and Health Insurance Coverage in the United States: 2011”, September 12, 2012, U.S. Census Bureau)

A $10,000 – not $12,000 – basic annual income for each of the 30.1 million poor adults would come to $301 billion, and a $1,500 – not $4,000 – basic annual income for each of the 16.1 million children would come to another $24.15 billion, for a total of $325.15 billion, essentially the $325 billion in federal cash and food aid, and within the $328.9 billion if energy aid is counted – without cutting funding for medical care, housing, child development, job training and education.
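
The totals work out as follows, in a quick Python check using the Census figures just quoted:

    # Reduced basic income amounts applied to the 2011 Census poverty counts.
    poor_adults   = 26.5e6 + 3.6e6    # ages 18-64 plus 65 and older
    poor_children = 16.1e6

    total = poor_adults * 10_000 + poor_children * 1_500
    print(round(total / 1e9, 2))      # 325.15 ($ billions)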

I guess one can say that it is simply not enough, that while the adults may get by the children would have a hard time surviving.

A little more thought reveals that the adults could be squeezed further, because the current norm of both tax filing and social benefit allocation is not based on the number of individuals, but on the size of the household as a whole. For instance, the Social Security basic benefit for 2012, cited earlier, was $733/month for one person but $1,100 for a couple – the second adult in the household received only $367, i.e., half of the first adult’s amount.

Underlying such benefit rules is an official measure: the poverty guidelines.

Below are the 2011 poverty guidelines of the U.S. Department of Health and Human Services:

“The following guideline figures represent annual income.

The 2011 Poverty Guidelines for the 48 Contiguous States and the District of Columbia

PERSONS IN FAMILY | POVERTY GUIDELINE
1 | $10,890
2 | $14,710
3 | $18,530
4 | $22,350
5 | $26,170
6 | $29,990
7 | $33,810
8 | $37,630

For families with more than 8 persons, add $3,820 for each additional person.”

(“FEDERAL REGISTER: JANUARY 20, 2011 (VOLUME 76, NUMBER 13)”, U.S. Department of Health and Human Services)

Note that in the earlier-quoted Census Bureau document published in September 2012, the “weighted average poverty threshold” for a 4-person household was $23,021, slightly higher than in the “poverty guideline” above.

As shown above, the poverty guideline for 1 person was just under $11,000, but for 2 persons was under $15,000; thus, if giving one person a basic annual income of $12,000 makes sense, giving a couple $16,000 instead of $24,000 would not be unreasonable.

Hence, I devise a different but reasonable notion of basic income:

  • basic income is granted to each household rather than each individual;
  • to eliminate poverty, the granted amount is the official poverty guideline for the household size plus $1 (i.e., poverty guideline+$1).

Now I ask the question: using the existing government budget figures reviewed earlier, i.e., 2011’s federal and state cash and food aid budget total of $292 billion, or 2012’s federal cash and food aid budget of $325 billion, would I be able to lift all people out of official poverty – without cutting funding for medical care, housing, child development, job training and education?

Simple estimations indicate that the answer is likely affirmative for the improved 2012 figure: the Census Bureau’s count of Americans in poverty in 2011 was 46.2 million, and The Weekly Standard’s figure (reported from Census Bureau data) for the number of households in poverty in 2011 was 16,807,795, which comes to an average of just under 2.75 persons per household, consistent with the Census Bureau’s 2.63 persons for all households; using the Department of Health and Human Services’ 3-person household poverty guideline for 2011, granting $18,530+$1 to each of the 16,807,795 poverty households would require a total of $311,465,249,145, i.e., under $312 billion – the 2011 federal and state cash and food aid total of $292 billion wasn’t enough, but the 2012 federal cash and food aid budget of $325 billion would be.
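
This first estimate can be reproduced directly, as in the Python sketch below, using the numbers quoted above:

    # Estimate 1: every poverty household treated as a 3-person household and
    # granted the 2011 3-person poverty guideline plus $1.
    poverty_households = 16_807_795
    persons_in_poverty = 46.2e6
    guideline_3_person = 18_530

    print(round(persons_in_poverty / poverty_households, 2))   # ~2.75 persons per household
    print(poverty_households * (guideline_3_person + 1))       # 311465249145, i.e. under $312 billion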

Another method of estimation yields a figure lower than $312 billion to lift all out of poverty. For this lower estimate, I first convert the 2011 poverty household number 16,807,795 to 16.808 million, and cite a more precise number of Americans in poverty in 2011: 46.247 million.

(“Income, Poverty, and Health Insurance Coverage in the United States: 2011”, by Carmen DeNavas-Walt, Bernadette D. Proctor and Jessica C. Smith, September 2012, U.S. Census Bureau)

Now in the 2011 poverty guidelines quoted above, I note that the guideline amount for a family or household of any size is 1x$10,890+(size-1)x$3,820, i.e., the first household member gets $10,890 and each of the rest gets $3,820.

(“2011 POVERTY GUIDELINE COMPUTATIONS”, December 31, 1969, U.S. Department of Health and Human Services)

Therefore, for the 16.808 million poverty households with 46.247 million members, the total cost of poverty guideline+$1 per household would be: 16.808 million x ($10,890+$1) + (46.247 million – 16.808 million) x $3,820, which comes to $295.512908 billion, i.e., under $296 billion – lower than the $312 billion estimate based on average 3-person households, and well under the 2012 federal cash and food aid budget of $325 billion, though still exceeding the 2011 federal and state cash and food aid budget total of $292 billion.
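
The lower estimate simply follows the per-member structure of the guidelines, again as a sketch:

    # Estimate 2: guideline + $1 per household, built up per member as
    # $10,890 for the first member and $3,820 for each additional member.
    households = 16.808e6
    persons    = 46.247e6

    total = households * (10_890 + 1) + (persons - households) * 3_820
    print(round(total / 1e9, 3))     # 295.513 ($ billions), i.e. under $296 billion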

Nonetheless, the higher $312 billion is a safer estimate than the lower $296 billion, because the poverty guidelines my estimations utilize are themselves a simplification of the much more detailed “poverty thresholds”.

(“U.S. FEDERAL POVERTY GUIDELINES USED TO DETERMINE FINANCIAL ELIGIBILITY FOR CERTAIN FEDERAL PROGRAMS”, January 25, 2016, U.S. Department of Health and Human Services)

Note that my poverty guideline-based basic income scheme so far has taken care of all those living below the poverty line, but not of those living slightly above it, who should be partially subsidized as well; the most obvious illustration is to compare a 3-person household with an earned income of $18,530-$1, which would qualify for a basic income of $18,530+$1 – for a total actual income of 2x$18,530 – with a household with an earned income of $18,530+$1, which would not qualify for any basic income at all.

A more generic reason that households above the poverty line should also benefit from some basic income is that the U.S. cash aid budget already included tax credits such as earned income tax credit, refundable child credit and making work pay tax credit – they can be claimed by households with modest income above the poverty line.

(“Taxation and the Family: What is the Earned Income Tax Credit?”, by Elaine Maag and Adam Carasso, updated February 12, 2014, Tax Policy Center; and, “Withholding of Income Taxes and the Making Work Pay Tax Credit”, by John J. Topoleski, January 30, 2013, and, “The Child Tax Credit: Current Law and Legislative History”, by Margot L. Crandall-Hollick, January 19, 2016, Congressional Research Service)

A reasonable modification to my revised basic income scheme is to adopt Charles Murray’s idea of a surtax on earned income (or on income other than the basic income), but apply it to the entire earned income, not just the portion above $25,000: allow all households, in poverty or not, to be eligible for a basic income of poverty guideline+$1, subject to a 50% surtax on earned income.

Now, in the earlier example, a household with earned income of poverty guideline-$1 and another with earned income of poverty guideline+$1, each can receive the basic income of poverty guideline+$1, and the difference of their incomes after the 50% surtax would be only $1. The benefit of the basic income ends when a household’s earned income reaches twice the poverty guideline, above which claiming the basic income becomes disadvantageous.

With this surtax, households under the poverty line most likely would not use up the full without-surtax estimate of $312 billion (or the lower estimate of $296 billion), because many of them have some earned income. The question is how far the $312 billion can stretch to cover the households with modest income above the poverty line.

Consider an example where a hypothetical 3-person Household #1 has an earned income of $10,530, i.e., $8,000 under the poverty guideline. With the basic income and after the surtax, this household’s actual income should be $18,530+$1+$10,530/2 = $23,796. The surtax amount of $10,530/2, i.e., $5,265, saved from the $312 billion government spending, can cover the basic income subsidy for a hypothetical 3-person Household #2 with an earned income of $26,530, i.e., $8,000 above the poverty guideline, as follows: Household #2 should get the actual income of $18,530+$1+$26,530/2 = $31,796; so the actual basic income amount from the government should be $31,796-$26,530 = $5,266.
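
The scheme is simple enough to write down as a short function; the sketch below, under the assumptions just stated (a grant of poverty guideline+$1 per household, a 50% surtax on earned income, and no claim once claiming becomes disadvantageous), reproduces the two hypothetical households:

    # Sketch of the proposed household scheme: grant = poverty guideline + $1,
    # clawed back at 50 cents per dollar of earned income.
    def net_payment(earned, guideline):
        """What the government actually pays a household that claims the basic income."""
        return max(0, (guideline + 1) - 0.5 * earned)

    def actual_income(earned, guideline):
        return earned + net_payment(earned, guideline)

    G = 18_530                                # 2011 3-person poverty guideline
    print(actual_income(10_530, G))           # 23796.0 (Household #1)
    print(actual_income(26_530, G))           # 31796.0 (Household #2)
    print(net_payment(26_530, G))             # 5266.0  (government cost for Household #2)
    print(net_payment(2 * G, G))              # 1.0     (the benefit is nearly gone at twice the guideline)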

In a hypothetical scenario where the earned incomes of the households earning between $0 and twice the poverty guideline are evenly spread, the $312 billion for the 46.2 million Americans living under the poverty line can, with the help of the surtax, also cover another 46.2 million Americans not in poverty but living under twice the poverty line, for a total of 92.4 million Americans.
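
That evenly spread scenario can be checked with a small simulation, again only a sketch with hypothetical uniformly spread incomes rather than real survey data; the surtax collected below the poverty line roughly offsets the partial payments above it, so the total cost stays close to the $312 billion computed for the poverty households alone:

    # Twice as many households as the 2011 poverty count, with earned incomes
    # spread evenly between $0 and twice the guideline (a hypothetical spread).
    G = 18_530
    H = 16_807_795                      # households below the poverty line, 2011
    N = 100_000                         # simulated households per half (arbitrary)

    step = 2 * G / (2 * N)
    incomes = [i * step for i in range(2 * N)]
    avg_payment = sum(max(0, (G + 1) - 0.5 * e) for e in incomes) / (2 * N)

    total_cost  = avg_payment * 2 * H   # both halves: in poverty and low-income
    full_grants = H * (G + 1)           # the "under $312 billion" figure from earlier
    print(round(total_cost / 1e9))      # ~311 ($ billions)
    print(round(full_grants / 1e9))     # ~311 ($ billions), essentially the same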

A U.S. household with income below twice the poverty line is called a “low-income” household:

“Yet for a growing number of working families, economic security is out of reach. Between 2007 and 2011, the share of working families that are low-income—below 200 percent of the official poverty threshold—increased annually and rose from 28 percent to 32 percent nationally (see figure 1)”

(“LOW-INCOME WORKING FAMILIES: THE GROWING ECONOMIC GAP”, by Brandon Roberts, Deborah Povich and Mark Mather, Winter 2012-2013, Working Poor Families Project)

The share of low-income working families in America thus increased from 28% to 32% between 2007 and 2011.

The number of Americans in low-income households in 2011 can be found in a Census Bureau report: it was 106.011 million, 34.4% of the 308.456 million population.

(“Table 5. People With Income Below Specified Ratios of Their Poverty Thresholds by Selected Characteristics: 2011”, U.S. Census Bureau)

The hypothetical scenario, in which a basic income budget of $312 billion – not exceeding the 2012 federal cash aid and food aid budget of $325 billion – is sufficient for 92.4 million low-income Americans, i.e., twice the 46.2 million Americans in poverty in 2011, likely would not cover all of the 106 million real-life low-income people; but that budget amount should come fairly close to meeting that goal.

Note that the 2012 federal cash and food aid budget of $325 billion has not included budgets of the states.

Thus, the conclusion from my simple estimations – based on U.S. government data – for a proposed household poverty guideline-based national basic income scheme incorporating an earned income surtax, to keep all Americans above the poverty line, is that the size of the existing U.S. government poverty budget should be able to cover most of the costs – without cutting the budget’s funding for medical care, housing, child development, job training and education.

The merit of this revised basic income scheme is its “simplicity”, solely characterized by two officially defined and practical parameters, poverty guideline and surtax rate, that also reflect “transparency” and “equity” – as Canadian government minister Jean-Yves Duclos would like to see.

The scheme can be implemented within the existing income tax system, with discrepancies taken care of through tax filing and assessment calculations; the only urgent adjustment mechanism needed is when a household experiences a sudden drop in income and produces proof such as job loss, whereby an increased basic income amount is dispensed by the government.

Looking forward with this basic income scheme, one can imagine that when the economy is strong, the surtax rate can be reduced so that all lower-income households get to keep more of their earned income, and more households of modest earned income can benefit from the basic income; one can also imagine that as the general living standards improve, the official annual poverty guidelines will also improve so that all people can enjoy a better basic living.

Observing that in 2011 the percentage of poverty and low-income Americans was 34.4%, i.e., just over 1/3 of the total population, and taking my conclusion that a conversion of the standard welfare budget into a reasonably devised basic income scheme can lift all out of poverty and also better the lot of most of the low-income, I now examine whether the budget numbers for Britain tell a similar story.

As quoted earlier from Jim Edwards of Business Insider UK, Britain’s total welfare spending for the year 2013-14 was £251 billion. The population was 64.5 million, including 15 million children; if that budget were converted to a basic income, it would come to £3,891/year for every person, or £5,081/year if paid to adults only.

£3,891 is over $5,500 US, and £5,081 is over $7,200 US. So, while the current British welfare budget is not sufficient to cover a full basic income for all, at the level of Scott Santens’s proposal, it would be enough for about half that amount for everyone, or for a full basic income for about half of the population.

But like with the U.S. budgets, the question is how much of the British total can be realistically converted to a national basic income.

According to the U.K. Office for National Statistics, the British welfare budget of £223 billion for 2009/10 consists of 6 parts:

  1. Pensions (state & public service), £104,442m, 42% of total;
  2. Incapacity, disability & injury benefits, £37,537m, 15% of total;
  3. Unemployment benefits, £4,945m, 2% of total;
  4. Housing benefits, £26,386m, 11% of total;
  5. Family benefits, income support & tax credits, £44,934m, 18% of total;
  6. Personal social services and other benefits, £33,028m, 13% of total.

(“How is the welfare budget spent?”, July 7, 2015, U.K. Office for National Statistics)

The figures appear better than with the U.S. budget.

Firstly, although the £223 billion spending included retirement pensions and various service programs, it did not need to cover medical care, and thus more of it could be converted into a basic income.

And secondly, the British state pension benefits, in part #1 above, do not pay out high amounts like the U.S. Social Security benefits do, and thus could be consolidated into a basic income:

“The full new State Pension will be £155.65 per week.

Your National Insurance record is used to calculate your new State Pension.”

(“The new State Pension”, Government of U.K.)

£155.65 per week comes to £8093.8 per year (52 weeks), or about $11,885 US in 2015 – close to but not exceeding the amount of $12,000 per adult in Scott Santens’s proposal for a U.S. basic income.

(last updated January 15, 2016, U.S. Internal Revenue Service)

The personal social services portion, i.e., part #6 of the Office for National Statistics’ breakdown, did not consist of monetary benefits, according to the Institute for Fiscal Studies:

“… Total spending on social protection comes in at £251 billion in 2013-14, which is about 37% of total public spending of £686 billion (before accounting adjustments). Take off £83 billion of spending on state pensions and you get to £168 billion on “welfare” – very nearly a quarter of total spending.

What is included in that “welfare” total?

It includes £28.5 billion on “personal social services”. This is a number that in many analyses one would want to report separately from other welfare spending. It includes spending on a range of things, such as looked-after children and long term care for the elderly, the sick and disabled. Unlike other elements of “social protection” it is not a cash transfer payment and in many ways has more in common with spending on health than spending on social security benefits.

Another £20 billion of the spending counted under welfare is pensions to older people other than state pensions. That includes spending on public service pensions – to retired nurses, soldiers and so on[1]. This is not spending that would normally be classed as “welfare”. …”

(“What is welfare spending?”, November 4, 2014, by Andrew Hood and Paul Johnson, Institute for Fiscal Studies)

As stated in the above quote, £20 billion of part #1 in 2013-14 was public service pensions, not typical welfare. Still, £83 billion of the pensions was the state pension, which as discussed earlier has a maximum payout similar to a basic income amount, and thus could be consolidated into a basic income – with special age consideration if necessary.

Part #4, housing benefits, should probably be left intact as with the analysis of the U.S. budget.

The under £45 billion of part #5 was similar to the cash aid in the US budget.

The under £5 billion unemployment benefits of part #3 can probably be added to part #5 for a total of just under £50 billion. 

The under £38 billion of part #2, i.e., incapacity, disability & injury benefits, if consolidated into the basic income may require special consideration for the disabled:

“About £38 billion goes on benefits for people who are ill or disabled… Disabled people are more likely to live in deprived areas and work in routine occupations. In the 2011 Census, 18% of people (10 million) reported some form of disability.”

(July 7, 2015, U.K. Office for National Statistics)

So, if I only consider those amounts – as in the Office for National Statistics’ 2009/10 welfare spending figures – that can be easily consolidated into a basic income, namely the just under £50 billion in parts #3 and #5, it would come to about $74 billion US in 2010.

(last updated January 15, 2016, U.S. Internal Revenue Service)

Without going further into estimations that would be dependent on the British living standards and poverty data, I observe that just the under £50 billion, about $74 billion US in 2010, for Britain’s 64.5 million people is proportional to about $353 billion for the United States’ 308.5 million people – substantially more than the 2012 U.S. federal cash and food aid budget of $325 billion, or the $312 billion in 2011 in my estimation for a poverty guideline-based basic income scheme to lift all out of poverty and improve the lots of many of the low-income.
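
The population-proportional comparison is just a ratio; in the sketch below, the conversion rate of roughly US$1.48 per pound is an approximation on my part, chosen to be consistent with the $74 billion figure above:

    # Scale the readily convertible UK benefits (parts #3 and #5) to the US population.
    uk_cash_benefits_gbp = 44.934e9 + 4.945e9             # family benefits/tax credits + unemployment
    uk_cash_benefits_usd = uk_cash_benefits_gbp * 1.48    # approximate 2010 rate (assumption)

    uk_population = 64.5e6
    us_population = 308.5e6

    print(round(uk_cash_benefits_usd / 1e9))                                  # ~74 ($ billions)
    print(round(uk_cash_benefits_usd * us_population / uk_population / 1e9))  # ~353 ($ billions)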

Hence, the fiscal picture for a national basic income for Britain should be even more optimistic than for the United States.

However, the official American and British poverty lines may be quite different.

The U.S. poverty thresholds, from which the guidelines are derived, were originally calculated in 1963-1964 on the basis of costs of living, and have been updated annually for inflation using the Consumer Price Index.

(“How the Census Bureau Measures Poverty”, U.S. Census Bureau)

The British poverty line can be higher, because it is calculated as 60% of the median household income in Britain:

“Each year, the Government publishes a survey of income poverty in the UK called Households Below Average income (HBAI).

This survey sets the poverty line in the UK at 60 per cent of the median UK household income. In other words, if a household’s income is less than 60 per cent of this average, HBAI considers them to be living in poverty.

The table below shows the HBAI poverty line for 2009 to 2012.1

Family Composition | Per month | Per year
Lone parent, 1 child (under 14) | £957 | £11,484
Lone parent, 2 children (1 under 14, 1 over 14) | £1,178 | £14,136
Couple, 1 child (under 14) | £1,326 | £15,912
Couple, 2 children (1 under 14, 1 over 14) | £1,547 | £18,564

1. Child poverty transitions: exploring the routes into and out of poverty 2009 to 2012, Department for Work and Pensions, 2015.”

(“The UK poverty line”, Child Poverty Action Group)

For a 3-person household of lone parent and 2 children, the British poverty line was £14,136, or about $21,781 US in 2011, and for one of 2 parents and one child, it was £15,912, or about $24,518 US.

(last updated January 15, 2016, U.S. Internal Revenue Service)

At 60% of the median household income, the British 3-person household poverty line can be as high as the 4-person U.S. household poverty guideline listed earlier.

The U.S. poverty guidelines are substantially less than 60% of the median household income. For example, in 2013 the median 3-person household income in U.S. states ranged from a low of $46,062 in Mississippi to a high of $87,206 in Maryland – and the 3-person household poverty guideline was only $19,530, i.e., about 42.4% of the Mississippi median and only about 22.4% of the Maryland median.
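
Those percentages are a straightforward check, sketched below with the figures just cited:

    # 2013 3-person poverty guideline as a share of the lowest and highest
    # state median 3-person household incomes.
    guideline_2013     = 19_530
    mississippi_median = 46_062
    maryland_median    = 87_206

    print(round(100 * guideline_2013 / mississippi_median, 1))   # 42.4 (per cent)
    print(round(100 * guideline_2013 / maryland_median, 1))      # 22.4 (per cent)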

(“Census Bureau Median Family Income By Family Size (Cases Filed Between May 1, 2013, and November 14, 2013, Inclusive)”, U.S. Department of Justice; and, “2013 POVERTY GUIDELINES”, December 1, 2013, U.S. Department of Health and Human Services)

For Canada, the fiscal picture appears less clear.

When compared to the $325 billion US cash aid and food aid in the 2012 U.S. federal budget for a total population of around 308.5 million, and the £50 billion British cash and tax credit benefits in 2009/10 for a total population of 64.5 million, the figure of $32 billion Canadian dollars, mentioned by Toronto Star columnist Carol Goar in her 2014 article on basic income, seems easily attainable, even a tad conservative:

“…

  • Where do they set the income floor? A Senate committee seeking solutions to urban poverty did some rudimentary calculations six years ago. It found that bringing everyone up to 70 per cent of Statistics Canada’s low-income cut-off would cost roughly $20 billion. Using that as a yardstick — and taking inflation into account — it would cost about $32 billion to set the income floor at the poverty line.
  • What programs would be collapsed into the new benefit? The wider the net is cast, the lower the cost would be. Welfare and disability support and employment insurance are the obvious candidates. Beyond those three, tensions arise. Old age security is a possibility. But very few seniors live in poverty. The national child benefit could be included. But it, too, keeps thousands of low-income youngsters out of poverty. What about war veterans’ allowances, the universal child care benefit, funding to aboriginal organizations, support for agencies that serve the poor, the mentally ill, the homeless and hungry, new immigrants and racial minorities? What about the all the tax breaks targeted at low-income Canadians? The longer the list grows, the more potential losers there are.
  • …”

(Carol Goar, February 25, 2014, Toronto Star)

Canada’s population in 2014 was 35.5437 million, according to Statistics Canada.

(“Population by year, by province and territory (Number)”, September 29, 2015, Statistics Canada)

$32 billion Canadian, normally less than $32 billion US, for a 35.5 million Canadian population is proportional to about $278 billion US for a 308.5 million American population – much lower than my estimate of $312 billion US, or the lower $296 billion US, for a poverty guideline-based basic income scheme in the U.S.
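
The same proportional scaling, sketched below, treats the Canadian dollar as roughly at par with the US dollar, an approximation on my part; in most recent years C$32 billion has been somewhat less than US$32 billion, which only strengthens the point:

    # Scale Carol Goar's C$32 billion figure to the US population, at rough par.
    goar_budget_cad   = 32e9
    canada_population = 35.5e6
    us_population     = 308.5e6

    print(round(goar_budget_cad * us_population / canada_population / 1e9))   # ~278 ($ billions at par)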

But Goar based her number on a Canadian Senate committee estimate. So let’s accept it for now.

What, then, has the recent Canadian government budget fiscal picture been like, when compared to the American and the British?

Like with the states in the U.S., there was spending by the Canadian provincial governments; also like the U.S., there was Canadian federal government spending.

A Statistics Canada document on government spending on social services in 2007 – the most recent comprehensive survey I could find online – showed that provincial spending on “social assistance”, i.e., cash benefits for the poor, was quite small; moreover, there was no mention of federal funds going to regular social assistance:

“In the fiscal year ending March 2007, total social services spending in Canada amounted to $172.4 billion, compared with $79.5 billion in 1989.

Of the $172.4 billion, federal government spending on social services, including transfer payments to other levels of government, accounted for roughly 49% of expenditures in 2007, compared with 59% in 1989.

In 2007, the provincial, territorial and local governments’ share was 33% (34% in 1989) and the Canada and Quebec Pension Plans’ (CPP/QPP) was 20% (14% in 1989).

Federal government spending: Old Age Security and Employment Insurance are major components

The federal government is responsible for Old Age Security and Employment Insurance. Total spending for these two programs alone amounted to $44 billion, or 52% of gross federal spending on social services in 2007.

The other 48% was spent on a number of programs, including vocational rehabilitation for disabled persons, veteran’s benefits, day care assistance and social services for First Nations, as well as on contributions as an employer to workers’ compensation plans and to the CPP/QPP.

In 2007, the federal government spent $12.8 billion on Employment Insurance, representing 6.2% of program expenditures. …

Old Age Security, the other big component of social services spending at the federal level, amounted to $31.4 billion in 2007, or 15.1% of program expenditures. …

Provincial, territorial and local government spending more than doubles

Between 1989 and 2007, social services spending at the provincial, territorial and local levels of government more than doubled to $56.3 billion. This is the third largest component of spending after health and education.

Among social services expenditures, spending on social assistance, which consists of transfer payments to help individuals and families maintain a socially acceptable level of earnings, represented 33% of expenditures on social services in 2007.”

(“Government spending on social services”, June 22, 2007, Statistics Canada)

In short, total government spending in Canada on social services was $172.4 billion in 2007:

  • About 49% came from the federal government, 52% of it spent on Old Age Security and Employment Insurance benefits, and the other 48% on a number of programs including: vocational rehabilitation for disabled persons, veteran’s benefits, day care assistance and social services for First Nations, as well as on contributions as an employer to workers’ compensation plans and to the CPP/QPP;
  • Employment Insurance expenditure was $12.8 billion;
  • Old Age Security expenditure was $31.4 billion;
  • About 20% came from Canada and Quebec Pension Plans;
  • About 33% came from provincial, territorial and local governments, totalling $56.3 billion, of which 33% was spent on social assistance.

1/3 of $56.3 billion was about $18.77 billion. According to the Bank of Canada, $18.77 billion in 2007 would be worth $21.12 billion in 2014.

(“Inflation Calculator”, Bank of Canada)

By itself, $21 billion is far short of the $32 billion needed in Goar’s estimation for the basic income. However, Goar mentioned a Senate committee estimate from six years earlier, i.e., around 2007-2008, when $20 billion would have been enough:

“… A Senate committee seeking solutions to urban poverty did some rudimentary calculations six years ago. It found that bringing everyone up to 70 per cent of Statistics Canada’s low-income cut-off would cost roughly $20 billion. Using that as a yardstick — and taking inflation into account — it would cost about $32 billion to set the income floor at the poverty line.”

(Carol Goar, February 25, 2014, Toronto Star)

My sense is that, besides inflation, the Senate committee’s notion of “bringing everyone up to …” may not be the same as Goar’s “set the income floor at …”. In my earlier estimations for the U.S. poverty guideline-based basic income, not only would the poverty households be lifted above the poverty line, but the low-income households would also receive a partial basic income.

In the earlier quote from her article, Goar referred to several types of social spending, in addition to welfare.

Goar mentioned the Canadian Employment Insurance, which had a $12.8 billion budget in 2007. This is not an entitlement benefit, but a government-administered insurance scheme with premiums paid by both employees and employers, and thus, contrary to Goar’s opinion, cannot be easily integrated into a basic income.

(“Employment insurance (EI)”, modified December 17, 2015, Canada Revenue Agency)

On the other hand, the Old Age Security mentioned by Goar, on which the federal government spent $31.4 billion in 2007, is an entitlement benefit:

“Apart from private money squirreled away in an RRSP or other savings vehicles, the OAS and complementary Canada Pension Plan are key components in the retirement planning of many Canadians. …

The Old Age Security pension is a monthly payment available to Canadians age 65 and older who apply and meet certain requirements. Unlike CPP, it is not dependent on a person’s employment history and a person does not need to be retired from a job to qualify for it.

The government adjusts the OAS payment every three months to account for increases in the cost of living according to the Consumer Price Index. The average monthly amount as of October 2012 was $514.56. The maximum payout for the first quarter of 2013 is $546.07, according to Service Canada.

There are also supplementary programs, including the Guaranteed Income Supplement, which provide additional income to low-income seniors.

The government claws back OAS payments from high-income Canadians. In 2013, if you are retired but have an income of more than $70,954 (from things like pensions and personal investments), the government will reclaim part of your OAS payment — 15 cents for every dollar of income above the $70,954 threshold, which is adjusted annually for inflation.”

(“Canada Pension Plan vs. Old Age Security – the differences explained”, February 1, 2012, CBC News)

However, because this benefit goes to seniors across the income spectrum, short of the high-income claw-back, probably only a small portion of 2007’s $31.4 billion went to seniors in poverty or of low income, as Goar noted:

“Old age security is a possibility. But very few seniors live in poverty.”

(Carol Goar, February 25, 2014, Toronto Star)

Goar’s article mentioned the national child benefit and the universal child care benefit. The Canada Child Tax Benefit and the Universal Child Care Benefit are benefits delivered through the tax system, and they involve sizable government spending: $11.2 billion in the fiscal year 2006-2007, and $13.1 billion in the fiscal year 2013-2014.

(“Archived – Where Your Tax Dollar Goes”, modified September 19, 2008, and, “Your Tax Dollar: 2013–2014 Fiscal Year”, modified December 19, 2014, Department of Finance Canada)

But an integration of the child benefits with the basic income appears unlikely, as indicated by Jean-Yves Duclos, the Canadian Minister of Families, Children and Social Development cited earlier.

Goar’s article also mentioned “all the tax breaks targeted at low-income Canadians”. But the most obvious of them, the Goods and Services Tax/Harmonized Sales Tax Credit and the Working Income Tax Benefit, are both small in size.

(“Child and family benefits”, modified January 4, 2016, Canada Revenue Agency; “GST/HST Credit”, Nisga’a Nation Knowledge Network; and, “Enhancing the Working Income Tax Benefit”, Economic Action Plan 2015, Canada’s Economic Action Plan)

Perhaps my interpretation of the Statistics Canada data for 2007 isn’t accurate.

For instance, what about the “Canada Social Transfer”, the framework touted in the 2014 Liberal Party Priority Resolution for designing and implementing a Basic Annual Income?

The Canadian Department of Finance’s definition of Canada Social Transfer does mention “social assistance”:

“What is the Canada Social Transfer (CST)?

  • The CST is a federal block transfer to provinces and territories in support of post-secondary education, social assistance and social services, and early childhood development and early learning and childcare.
  • …”

(“Canada Social Transfer”, modified December 19, 2011, Department of Finance Canada)

But despite its grandiose name, the transfer amount on social assistance was likely very small, because the Canada Social Transfer totalled only $8.5 billion in 2007 and was spread among post-secondary education, children’s programs and social programs – social assistance wasn’t even explicitly mentioned in this Department of Finance report:

“…

  • The Canada Social Transfer (CST)—to support post-secondary education, social programs, and programs for children—gave provinces and territories cash funding representing over 3½ cents of each federal tax dollar ($8.5 billion). ”

(modified September 19, 2008, Department of Finance Canada)

In the fiscal year 2013-2014, the Canada Social Transfer had increased to $12.2 billion, but social assistance still wasn’t mentioned in the online Department of Finance report:

“The Canada Social Transfer provided $12.2 billion for post-secondary education, social programs and programs for children, representing close to 5 cents of each tax dollar spent.”

(modified December 19, 2014, Department of Finance Canada)

A fraction of $8.5 billion, or of the increased $12.2 billion, would amount to only a few billion dollars at best for “social assistance”. Adding it to the $18.77 billion provincial spending on social assistance in 2007, worth about $21.12 billion in 2014, would still fall far short of Carol Goar’s estimated need of $32 billion.

Hopefully, in the huge pool of Canada’s government spending on social services – $172.4 billion in 2007 – here and there money can be found to integrate with the provincial social assistance funds, to make up the $32 billion needed for the basic income according to Carol Goar’s 2014 analysis based on a Senate committee estimate.

But again, in my view, Goar’s figure seems low when compared to my estimations with the U.S. and U.K. budgets: $32 billion Canadian, less than $32 billion US, for a 35.5 million population, compared to the $325 billion US in the 2012 U.S. cash aid and food aid budget for a 308.5 million population, or to the £50 billion in the 2009/10 British cash benefits budget for a 64.5 million population, that can be converted to basic income.

Nonetheless, the Canadian Senate committee estimate has been aimed at “bringing everyone up to 70 per cent of Statistics Canada’s low-income cut-off”, as quoted earlier from Goar’s article.

Canada does not have official poverty guidelines or thresholds, only the Low-Income Cut-Offs as measured by Statistics Canada, which have been calculated from the Family Expenditure Survey done in a base year and then updated using the Consumer Price Index – the oldest survey base year was 1959, and the most recent was 1992.

(“Low income cut-offs”, modified November 27, 2015, Statistics Canada)

While in the U.S. the low-income line is 200% of the poverty guideline, i.e., the poverty guideline sits at 50% of the low-income line, the Canadian Senate committee in this case used a more generous 70% of the LICO as its poverty line.

For 2011, the LICO amount was $30,487 for a family of 4:

“…

Thus for 2011, the 1992 based after-tax LICO for a family of four living in an community with a population between 30,000 and 99,999 is $30,487, expressed in current dollars.”

(modified November 27, 2015, Statistics Canada)

70% of $30,487 was $21,340.90 Canadian for a family of 4 in 2011 – even before currency conversion, that is already less than the 2011 U.S. Department of Health and Human Services poverty guideline quoted earlier, $22,350 US for a family of 4.

I think the lower Canadian poverty line partially explains why Carol Goar’s $32 billion in 2014 would be enough to lift all out of poverty in a basic income for Canada’s 35.5 million people, whereas in my estimation $312 billion US would be needed for the 308.5 million Americans in 2011.

The above comparison leads to the question: why is it that the estimations based on government data do not show Canada, a country of a long social welfare tradition and reputation, to be clearly more generous than the United States in this regard?

I do not have an obvious answer. But my observation, derived from my analysis, is apparently not an isolated one: in 2012 the U.S. surpassed Canada in total public social expenditures as a share of GDP:

“America spends a bigger share of its national paycheck on social services than its health care-providing neighbor to the north.

New numbers published by the Organization for Economic Cooperation and Development (OECD) show that public “social expenditures” in the US have overtaken those of Canada. The report defines “social expenditures” as essentially spending by public institutions aimed at households or people to support their welfare, such as jobless aid, healthcare and pension benefits. US social spending as a share of GDP hit 19.7% in 2012, compared to Canada’s 18.3%. In 2013, the US is expected to spend about 20%, compared to 18.3% for Canada.”

(“The US spends more on social services than its health care-loving neighbor Canada”, by Matt Phillips, July 25, 2013, Quartz)

This fiscally relatively conservative course on social spending is not an exclusive trademark of the recent Conservative government under former Prime Minister Stephen Harper from 2006 to 2015, but has also been observed by the Liberal Party and the government it now leads when it comes to basic income; as discussed earlier, the Liberal Party’s resolutions on basic income in 2014 did not become a part of the party’s 2015 election platform, and are not on the Liberal government’s current official agenda.

One of the most often cited concerns about socialist economic policies, the basic income being one, is that they would lead to a loss of productivity. Jim Edwards of Business Insider UK opined in regard to the basic income numbers he analyzed, cited earlier:

“One of the criticisms of basic income is that it would kill off the desire to work. Few studies have been done of this, but those that have indicate that people only reduce their work hours by a small amount on average.

The fact that a fiscally neutral basic income scheme would pay out only £423 per month (€585 or $644) means almost everyone receiving it would still need a job. £423 a month is simply not enough to survive — or even pay rent — in most areas of Britain.”

(Jim Edwards, December 13, 2015, Business Insider UK)

Edwards’s logic is that a basic income that is simply not enough for basic living is actually good, as it would keep people working, if at reduced hours.

I can imagine this argument appealing to the conservative scholar Charles Murray, whose proposed basic income would be only $7,000/year after the mandatory health insurance cost is deducted.

In the 2008-2009 German aid experiment in a village in Namibia, discussed earlier, the aid organizations reported that the basic income led to improved economic activity and improved savings, although these positive conclusions have been questioned by Rigmar Osterkamp as exaggerated:

“… The number of underweight children was said to have fallen considerably because of the basic income grant. According to the figures, school attendance rose, and so did attendance at the local health clinic. The BIG Coalition reported that a number of people living in Otjivero, encouraged by the cash transfer, had started small businesses as bakers, tailors and brick layers.

According to the BIG promoters’ calculations, the per capita income (minus the BIG payments) increased by 29% in 2008, and a sample of households supposedly revealed a private savings rate of 38%. The BIG supporters concluded that popular prejudices against the basic income grant were proven wrong, at least in Namibia, since the BIG has not made people “lazy”, but rather had led to socially desirable behaviour and even motivated recipients to become economically active.

However, the question arises whether these claims are plausible. In a single year, an economic growth of 29% would be three times the rate of China (and is supposed to be sustainable on top of that). According to the World Bank, Namibia’s national growth rate averaged a mere 4.4% from 2000 to 2008, and GDP even declined slightly in 2009. The extremely high savings rate that was indicated is similarly hard to believe, given that the households concerned are quite poor.

The figures on the reduction of hunger and the improvements in children’s weight are also astonishing in a just six months period. …”

(Rigmar Osterkamp, May 3, 2013, D+C Development and Cooperation)

Less controversial benefits of the basic income, such as improvements in nutrition, health and education, were also reported for a UNICEF-supported experiment in India cited earlier:

“The preliminary findings of the SEWA project have been published, and the results are extremely encouraging. The project was accompagnied by a study conducted in 20 villages including the eight villages where the unconditional cash transfer was provided. Residents of the other 12 villages were observed as a control group. Residents could do whatever they wanted with the money.

Positive results were found in terms of nutrition, health, education, housing and infrastructure, and economic activity. Researchers found a positive impact on health and access to medical treatment. The most visible impact however was on educational attainment. School attendance in the cash transfer villages shot up, three times the level of the control villages.”

(December 2012, Global Basic Income Foundation)

Pascal-Emmanuel Gobry, a proponent-turned opponent of the universal basic income, argued that a full universal basic income would lead to a vicious cycle of economic decline:

“… Many conservatives like the idea of a simple welfare system that would replace arcane programs and nosy bureaucracies.

And indeed, right-winger that I am, I was for a very long time a strong proponent of a UBI. But now I oppose it.

What happened? I looked at the best science and changed my mind.

Here I must make a slight detour into epistemology. Most social “science” research is actually not science, technically speaking. … Most published social science studies rely on modeling and statistical analysis to try to formulate theories as to what is going on. Most studies are really elaborate thought experiments that, until they are or can be validated by experiment, are not scientific results, properly speaking. …

There is, however, one way to gain relatively reliable social-scientific evidence: randomized field trials (RFTs). …

What does that have to do with the UBI? Well, it just so happens that the UBI is one of the very few, if not the only, domains of social science policy where we have exactly that: extensive, long-term, repeated RFTs, which are the gold standard of evidence in social science.

… more than 30 experiments were done in the U.S. from the ’60s to the ’90s and there was another set of experiments done in Canada in the ’90s. The universal basic income is one of the few areas of social policy where we can say with some confidence “Science says…”

And science says the UBI doesn’t work.

… All the evidence strongly suggests that if you have a UBI, the outcome is exactly what many conservatives fear will happen: Millions of people who could work won’t, just listing away in socially destructive idleness (with the consequences of this lost productivity reverberating throughout the society in lower growth and, probably, lower employment, in a UBI-enabled vicious cycle).”

(“Progressives’ hot new poverty-fighting idea has just one basic problem: Science”, Pascal-Emmanuel Gobry, July 21, 2014, The Week)

Gobry’s article referred to more than 30 experiments on the basic income in the United States from the 1960s to the 1990s, and another set of experiments in Canada in the 1990s.

While I am unfamiliar with the Canadian experiments Gobry mentioned, another Canadian experiment during the 1970s – the first of its kind in North America in terms of coverage – in the small city of Dauphin in Manitoba province has been ‘rediscovered’ by social science researchers:

“Between 1974 and 1979, residents of a small Manitoba city were selected to be subjects in a project that ensured basic annual incomes for everyone. For five years, monthly cheques were delivered to the poorest residents of Dauphin, Man. – no strings attached.

And for five years, poverty was completely eliminated.

The program was dubbed “Mincome” – a neologism of “minimum income” – and it was the first of its kind in North America. It stood out from similar American projects at the time because it didn’t shut out seniors and the disabled from qualification.

The project’s original intent was to evaluate if giving cheques to the working poor, enough to top-up their incomes to a living wage, would kill people’s motivation to work. It didn’t.

But the Conservative government that took power provincially in 1977 – and federally in 1979 – had no interest in implementing the project more widely. Researchers were told to pack up the project’s records into 1,800 boxes and place them in storage.

A final report was never released.

Why Dauphin? How did a farming community play host to such a landmark social assistance program?

Good political timing didn’t hurt.

In 1969, the left-leaning provincial NDP led by Edward Schreyer swept into power for the first time. The transition injected new rural sensitivities and democratic socialist influences into politics.

On the federal level, Pierre Elliott Trudeau was prime minister. The two men worked swiftly to set up conditions for a basic income experiment.

In 1973, Manitoba and the federal government signed a cost-sharing agreement: 75 per cent of the $17-million budget would be paid for by the feds; the rest by the province.

The project rolled out the next year.”

(“A Canadian City Once Eliminated Poverty And Nearly Everyone Forgot About It”, by Zi-Ann Lum, December 23, 2014 (updated December 19, 2015), The Huffington Post Canada)

So although in the early 1970s then Prime Minister Pierre Trudeau dodged the idea of introducing a universal basic income, as recalled in Carol Goar’s article discussed earlier, his Liberal government did fund and lead a pilot experiment that lasted 5 years, ending due to changes of the provincial and federal governments.

About 1/3 of the Dauphin residents received the basic income, with the income amount set at 60% of Statistics Canada’s low-income cut-off:

“All Dauphinites were automatically considered for benefits. One-third of residents qualified for Mincome cheques. 

How Mincome cheques were calculated:

1. Everyone was given the same base amount: 60 per cent of Statistics Canada’s low-income cut-off. The cut-off varied, depending on family size and where they lived. But in 1975, a single Canadian who was considered low-income earned $3,386 on average.

  | 1975 | In 2014 dollars
Individual | $3,386 | $16,094
Family of two | $4,907 | $20,443

2. Base amount was modified: 50 cents was subtracted from every dollar earned from other income sources

“It was sort of something new and utopian. It was completely different,” said Dauphin’s current mayor Eric Irwin. “It was an attempt to define social services in a different way.””

(by Zi-Ann Lum, December 23, 2014 (updated December 19, 2015), The Huffington Post Canada)

Note that the recent Canadian Senate committee basic income idea cited by Goar in her Toronto Star article considered 70% of LICO as the income amount, which would be an improvement over the Dauphin “Mincome” level.

In 2005, the archived documents of the Dauphin pilot experiment were ‘rediscovered’ by researcher Evelyn Forget:

“Dr. Evelyn Forget is the researcher at University of Manitoba credited for tracking down those 1,800 dusty boxes of Mincome raw data that sat forgotten for 30 years.

She first heard about the project in an undergraduate economics class at the University of Toronto in the ’70s. Mincome cheques were still being delivered when her professors praised the experiment as “really important,” saying it was going to “revolutionize” the delivery of social programs. It stuck with her.

In 2005, she began looking for the Mincome data. After a strenuous search, she located the records at the provincial archives in Winnipeg. She was the first to look at them since they were packed up in 1979.”

(by Zi-Ann Lum, December 23, 2014 (updated December 19, 2015), The Huffington Post Canada)

The ‘rediscovery’ of the Dauphin “Mincome” experiment led to more interest in similar ideas, such as the one advocated by Canadian Conservative Senator Hugh Segal:

“Former Conservative senator Hugh Segal is a longtime proponent of a guaranteed annual income policy. He believes the program could save provinces millions in social assistance spending on programs like welfare.

Instead of being forced through the welfare system, people’s eligibility would be assessed and reassessed with every income tax filing. Those who don’t make above the low-income cut-off in their area would be automatically topped up, similar to Mincome in Dauphin.

How guaranteed annual income could work today:

• Distributed as a federal Negative Income Tax
• Top-ups are calculated automatically and delivered after income tax filings
• Top-ups would render people ineligible for provincial welfare
• Provincial welfare money gets reallocated to other priorities (i.e. elder care, expanded early childcare programs)”

(by Zi-Ann Lum, December 23, 2014 (updated December 19, 2015), The Huffington Post Canada)

In reference to Pascal-Emmanuel Gobry’s opposing view, I point out that Hugh Segal’s notion of an earned income top-up would discourage work, because everyone who earns below the low-income cut-off would be topped up to the same level.

The approach of the 1970s Dauphin “Mincome” experiment, on the other hand, was remarkably similar to my universal basic income scheme adapted and modified from Charles Murray’s: the household was the basic unit, each granted an amount according to the official line of poverty or low income; other income was taxed at 50%, so the more a household earned the more it would keep; and, similar to the U.S. case where in 2011 just over 1/3 of the population was low-income, in Dauphin 1/3 of the residents qualified for the “Mincome”.

The main difference between my scheme and the Dauphin “Mincome” is that I use the U.S. official poverty guideline which is 50% of the low-income line, whereas “Mincome” used 60% of Statistics Canada’s low-income cut-off.

Evelyn Forget, the leading expert on the Dauphin experiment, has also pointed out that “Mincome” did not have the effect of discouraging work:

“The American experiments had the same results. Few people stopped working and hardly anyone with a full time job reduced the hours they worked at all. This is because a well designed guaranteed income (and this one was designed like a refundable tax credit) creates incentives for people to work. it does a much better job of supplementing the incomes of the working poor than does other kinds of social assistance.”

(“A Way to Get Healthy: Basic Income Experiments in Canada”, August 7, 2013, Basic Income UK)

Despite being a Conservative Senator, Segal criticized the Conservative government of then Prime Minister Stephen Harper for showing no interest in universal basic income:

“But the idea never took off in Canada. The lessons of Mincome never spread. Simply put: The Mincome experiment discontinued because the governments changed.

Segal says what happened in Dauphin was a “classic Ottawa initiative,” with a lot of money spent putting a program in place, but without adequate investment to evaluate if it was effective or not.

During his nine years in the Senate, Segal advocated strongly for basic income for Canadians. But in his time as a member of the Conservative caucus, he “didn’t see the tiniest indication of interest on the part of the government” in another test site or implementation.

That’s because the current government shares the Mulroney administration view that “the best social policy is a job,” he said.

The one exception was late finance minister Jim Flaherty who established the working income tax benefit to aid working Canadians living in poverty. He was the only one to engage constructively, Segal says.

Segal said he doesn’t expect the concept to gain traction again among the Harper Conservatives.

In Canada, the idea of an universal basic income was first presented at a Progressive Conservative policy convention in October of 1969. Then-leader Robert Stanfield argued the idea would consolidate overlapping security programs and reduce bureaucracy.”

(by Zi-Ann Lum, December 23, 2014 (updated December 19, 2015), The Huffington Post Canada)

Like the Mulroney Progressive Conservatives in the 1980s, the Harper Conservatives showed no interest. But now, after Stephen Harper’s departure from the party leadership, Conservative MP and finance critic Lisa Raitt has expressed interest in having the Canadian House of Commons finance committee study the idea, as quoted earlier from a story reporting the view of Liberal government minister Jean-Yves Duclos.

I wonder whether the Canadian Liberals’ knowledge of the Dauphin “Mincome” experiment, conducted in the era of party leader Justin Trudeau’s father though still unfamiliar to most Canadians, had to do with the party’s recent adoption of the proposal for “a federal pilot of a basic income supplement” only as an ordinary resolution, while adopting the proposal “to design and implement a Basic Annual Income” “under the existing Canada Social Transfer System” as a “Priority Resolution”.

If so, the son should pick up where the father had left off.

Canada can quickly overtake Finland and the Netherlands, where, for all the latest international media publicity, only pilot experiments are being planned.

But even if that happens, Canada would still be behind one country in the world, Brazil, which got there first.

In 2004, i.e., a year before Canadian researcher Evelyn Forget searched for and found the archived documents of the 1970s Dauphin “Mincome” experiment, Brazil enacted a law for a universal basic income:

“While Alaska is the only place in the world with an ongoing basic income program, they are not the only jurisdiction to have shown interest. Brazil actually has a law mandating the progressive institution of a basic income program.

The law was introduced by Senator Eduardo Suplicy of the Brazilian Workers’ Party in 2001. He had previously introduced a bill to create a Negative Income Tax model of a guaranteed livable income, but that bill failed to pass. This second bill called for a universal basic income program to be progressively instituted, beginning with those most in need.

The bill was approved by the Senate in 2002 and by the Chamber of Deputies in 2003. It was signed into law by President Lula da Silva in 2004. The bill leaves implementation in the hands of the President.

No progress has been made toward implementing a basic income since then.

However, Brazil does have an interesting, albeit conditional, income security program for the poorest Brazilians. The Bolsa Familia, or Family Grant, was created in 2003 by merging 4 existing cash transfer programs. It could be used as a stepping stone to a basic income program, even though it was not created with that intention.

The Bolsa Familia is paid to 11 million of Brazil’s poorest families, which means that the money reaches 46 million people. It has contributed to a reduction in inequality, although it is not the only factor. Brazil, one of the most unequal countries in the world, has made astonishing progress in reducing inequality since 2001. In the last five years, the incomes of the poorest Brazilians have risen 22%, compared to only 4.9% for the richest Brazilians.”

(“Basic income in Brazil”, by Chandra Pasma, July 14, 2009, Citizens for Public Justice)

So Brazil, an ambitious developing country where progress has recently been made in improving the livelihood of poor families, has had a universal basic income law for over a decade now, even though the government does not have the money for a full implementation.

Brazilian Senator Eduardo Matarazzo Suplicy’s bill that turned the “Citizen’s Basic Income” into law had been in the works since 1991.

(“Brazil: Imagine a World Free of Hunger and Need”, Rema Nagarajan, September 6, 2012, Pulitzer Center on Crisis Reporting)

Meanwhile, in the wealthy developed countries, where such a notion had been banished until the 1960s and 1970s, and is still treated with trepidation, some have come to wonder, ponder, and dither.

Or as Tim Worstall suggests, wait until the coming of the robots and see what happens then.


A review of postings about scientific integrity and intellectual honesty, with observations regarding elite centrism – Part 4: academics in the kingdom of intellectual legacies and traditions

(Continued from Part 3)

In 1988 when Maria Klawe moved from IBM in California to the University of British Columbia in Vancouver, along with her theoretical computer scientist husband Nick Pippenger, to become UBC computer science department head, I had received a computer science postdoctoral fellowship offer from the University of Toronto, a fixed-term assistant professorship offer from UBC, and a tenure-track assistant professorship offer from Simon Fraser University, also in the Vancouver region of Canada. My decision to go to UBC led to my subsequent experience of an academic political dispute with my boss Klawe, described in earlier Parts, with an employment aspect involving senior colleagues David Kirkpatrick and Alain Fournier.

As in Part 3, had Klawe and Pippenger chosen not to go to UBC, my job offer there would have been upgraded to tenure-track. Thus, my dilemma in choosing a faculty job between UBC and SFU in 1988 had a Klawe-related context.

Then in 1990 when I was again applying for a tenure-track position, Klawe chose not to inform me that 2 of the 5 recommendation letters I had requested did not arrive; one of the two was to have been from her close friend, computer science professor Richard “Dick” Karp at the University of California, Berkeley, who had promised to affirm that my recent research was in theoretical computer science – my 1988 Berkeley Ph.D. had been in mathematics.

The deception involving both Klawe and Karp, deployed to my detriment, had started in 1988 at Berkeley when Karp suggested that I choose UBC because of Klawe and Pippenger, as I recalled in a July 2012 blog post:

“The arrival of Maria Klawe and her husband Nicholas Pippenger made UBC Computer Science more important but reduced my prospect of a longer employment, so between ‘fate’ and ‘luck’ – SFU professor Wo-Shun Luk had been particularly interested in my going there – I had chosen fate.

Worse yet, it was a ‘double’ fate: while still at UC Berkeley in April-May 1988 it had been Klawe’s good friend Professor Richard Karp – today the founding director of Simons Institute for the Theory of Computing – informing me of the couple’s decision to go to UBC – thus no immediate chance of tenure-track for me – and also advising me to choose UBC over Simon Fraser because of the new strength; then in 1990 when I applied for a tenure-track position under Head Klawe, “Dick” Karp’s promised reference letter was a no show without my knowledge …”

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 8) — when political power games rule”, July 6, 2012, Feng Gao’s Blog – Reflections on Events of Interest)

It was in May 1988 that I went to Karp’s office to seek his advice, telling him of Toronto’s postdoc offer, SFU’s tenure-track offer, and UBC’s fixed-term offer that would become tenure-track if Klawe and Pippenger decided not to go there.

Karp expressed surprise that I had a Toronto postdoc offer, i.e., without his help, and then informed me that Maria and Nick had just told him they would be going to UBC. Karp then suggested that I accept the UBC offer because the arrival of Maria and Nick would make it a much stronger department, adding, “Vancouver is a very liveable city. David Kirkpatrick is a very smart guy. Jim Varah is a distinguished numerical analyst; I met him when I visited UBC and he was the department head”.

But Karp also cautioned, “after one year you can go anywhere in Canada” – I would be applying for the Canadian immigrant status – and at some point advised, indirectly, that Pippenger was not to be contradicted, as previously quoted in Part 3:

“Whatever Nick says must be right.”

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 4) — when power and control are the agenda”, May 24, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

Karp gave me the phone number of Klawe, then manager of the mathematics and related computer science department at IBM Almaden Research Center, telling me that she would like me to contact her. I was disappointed that their going to UBC took away my prospect of a tenure-track job, but felt privileged to get acquainted with the distinguished couple in theoretical computer science prior to her becoming my boss. On the phone I accepted Klawe’s invitation to give a seminar presentation at IBM Almaden, and met her, Nick and a few others there.

My Berkeley Ph.D. was only in mathematics, but my bachelor’s degree from Sun Yat-sen University in Guangzhou, China, had been in computer science, with a concentration in computational mathematics. Computer science had just begun in China in the late 1970s-early 1980s, and Chinese computers of that time were mainframes comparable only to American computers of the 1960s.

(Marshall C. Yovits, ed., Advances in Computers, Volume 27, 1988, Academic Press)

In the academic year 1987-1988, my last at Berkeley, I audited graduate courses taught by computer science professors, such as computer vision by Jitendra Malik, computer architecture by Alvin Despain, parallel processing and pipelining by David Messerschmitt, operating systems by John Ousterhout, and two courses by Richard Karp, on the analysis of probabilistic and random algorithms and on the analysis of parallel algorithms.

Courses in mathematical theories usually did not attract large audiences but Karp’s were exceptional, with his sharp command of mathematical concepts and algorithmic techniques, and his logical organization and articulate presentation of the course materials.

Karp assigned every student, including me, who was only auditing, the task of taking notes for at least one lecture, notes that he would review, compile and distribute so that everyone would have a full copy of his course as recorded by the students – reflecting Karp’s confidence in the clarity and coherence of his lectures – in addition to his concise course notes. Later at UBC, I taught these two graduate courses and drew on Karp’s lectures for 2/3 of my course materials.

So at that point in the summer of 1988, I felt confident of integrating into the computer science teaching and research profession in due course.

It had always been in my intellectually youthful personality not to settle for the safest route to personal progress, but to seek some degree of safety from which I could explore, and try to make better, the unknown.

In the case of going to Canada, Karp’s advice made sense to me, as there should be enough personal security to focus on the scientific prospect: the UBC fixed-term job offer included help with Canadian immigration, something taken for granted in the SFU tenure-track offer but absent from the U of T postdoc offer; and UBC had a stronger academic research reputation, a longer history and a more scenic campus than SFU, although neither was at the overall level of U of T, Canada’s leading university.

At the previous stage of my career, in 1981-1982 in China applying for Ph.D. study in the United States, I had done similarly with my choices.

I applied to 3 graduate programs: the scientific computing and computational mathematics program in Stanford University’s computer science department, the mathematics department of UC Berkeley, and the applied mathematics department at State University of New York at Stony Brook. Stanford computer science was ranked the best in the world, Berkeley’s mathematics department was the largest – with over 70 faculty members – and one of the very best in the world, while Stony Brook’s applied mathematics was good but did not enjoy a top ranking.

I soon received Stony Brook’s admission, partly because Professor Yung Ming Chen (陳永明), its applied math department chairman at the time, had in 1981 visited our department at Sun Yat-sen University, with me assigned as his tour guide; during his short visit I learned that his father had been an army general and deputy governor of Guangdong province, of which Guangzhou is the capital, during the Nationalist government era, i.e., before the 1949 Communist takeover of China.

My bachelor’s thesis adviser was Professor Yuesheng Li (李岳生), chairman of the computer science department, founded after our computational mathematics major class had finished its first year in the mathematics department. Understanding my ambition, Li suggested, as I was already thinking, that I wait for responses from the other two schools. But having done his own graduate study at Moscow State University in the 1950s, after working as a student interpreter for a Soviet mathematics professor fostering research at Jilin University in China’s Northeast, Li also tried to talk me into applying to the University of Wisconsin-Madison, where Professor Carl de Boor at the U.S. Army Mathematics Research Center was a peer of his. I recalled the anecdote in a March 2011 blog post:

“When I applied for graduate study in the United States Professor Li seriously recommended the U. S. Army Mathematics Research Center at the University of Wisconsin, Madison – Dr. Carl de Boor there and his General Motors connection were Professor Li’s favorite and it also had Dr. Grace Wahba – but I preferred the Math program at the University of California, Berkeley, although Computer Science at Stanford was my first choice.”

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 3) – when violence and motive are subtle and pervasive”, March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

But Berkeley’s politically liberal reputation, its mathematics department’s world-leading status and the U.S. West Coast’s mild climate – I had grown up in Guangzhou without any snowy winter – were to my liking, so I took Li’s respect for Carl de Boor seriously but not Wisconsin-Madison as my place of Ph.D. study.

I was accepted by Berkeley but not by Stanford. In a context resembling 6 years later choosing UBC over SFU, I then chose Berkeley over Stony Brook: a more leading Ph.D. program, a university of longer history, and a locality of easier life adjustment.

Berkeley was the original campus of the University of California, founded in 1868, some 80 years before the State University of New York in the 1940s, and nearly 90 years before SUNY Stony Brook in the 1950s.

(“A brief history of the University of California”, Office of the President, University of California; “History of SUNY”, The State University of New York; and, “Fast Facts”, Stony Brook University)

I had understood that Stanford computer science was hard to get into, its brochure stating that it annually admitted only a small class and few foreign students; also, at the time I wasn’t familiar with the research of its faculty, except some of Prof. James Wilkinson’s in numerical analysis, although I was familiar with some of Stanford math professor Samuel Karlin’s research – more so than with the Berkeley mathematicians’ – as a result of my undergraduate study influenced by Prof. Li.

But the bottom line might be that I wasn’t that strong an applicant; and there was an interesting, related background story.

I had 3 SYSU professors agreeing to provide letters of recommendation: Prof. Li, Prof. Mingjun Chen (陳銘俊), Li’s close associate and former colleague at Jilin University, and Prof. Youqian Huang (黃友謙).

My entrance class of 1977, the first admitted through a nationwide exam after the 1966-1976 Cultural Revolution, was composed mostly of students with mixed school-and-work backgrounds – I had worked as a factory apprentice for over a year – and our performance wasn’t rated too highly by Chen, whose favorites were the entrance class of 1979, the first composed primarily of fresh middle school graduates.

The Stanford application form asked the professor to rate the applicant as being in the top 2% of the class, the top 10%, or lower, whereas the Berkeley form had a top 5% category. I recall that on a classroom-cleaning day, as I was cleaning a window, Chen was present and told me that he could not rate me as in the top 2% – I had no argument, since at least several girls in our graduating class were ahead of me in grade-point average – but that he would rate me as in the top 5% for Berkeley, given that it was the school I liked most.

Obviously there was no chance that the world’s top computer science department would accept a not-at-the-top graduate from the developing world.

Prof. Huang was more reassuring. An original SYSU graduate and an articulate teacher, Huang was friendly partly because he and my maternal family were from the same Shantou (Swatow) region of Guangdong.

Born and raised in Cantonese-speaking Guangzhou, I could also speak the Swatow dialect because my maternal grandparents lived with us and helped my parents raise me and my sister, as I have told in a 2010 Chinese blog post.

(“忆往昔,学历史智慧 (Reminiscing the past, learning history’s wisdom)”, April 10, 2010, FengGao.ORG)

Born in Indonesia, Prof. Mingjun Chen died of cancer on July 20, 2008, at the age of 74.

(“讣告”, by 程月华, and, “我们敬爱的陈老师——一路走好”, by Elsa, July 21, 2008, 中山大学, Yat-sen Channel)

A former student reminisced about what a conscientious and demanding teacher Chen had been:

“… 老师上课总是几十年如一日那样从来不迟到!实际上每次八点钟的课他都是七点半就到了!每次七点起身时就看到老师在慢慢向教室走去了!我用十分钟走的路程,想必老师要用半个钟才到吧!从他家里一直走到那,估计至少老师会提前一个小时出发!因为老师有癌症,化疗过,但是身体一直不是很好!每走几分钟就要停下来休息。在中大校道上你常会见到一个满脸皱纹的老头,大清晨带着个装着前晚写的讲义的塑料袋,蹲在路旁休息!每次经过他身边都想上前跟老师说一声:老师您辛苦了,但是一直没有,现在我再也没机会了!一次听同学说他常常五点就起了,或者是为了批改我们的作业会熬通宵!每当老师在评讲作业时说他前一晚的事情,我们都会觉得很心痛!老师看到我们不认真的字迹,错误的运用符号,乱增加言语,就会暴跳如雷,彻夜未眠,第二天仍要赶早去上课!其实如果是助教上课的话他完全可以不来的,因为他没讲一句话,但是他还是静静地做在最左边!不知道大家是不是习惯了,哪天老师不来,我们就会不专心,但是只要老师坐在那,大家就会觉得很安心,才会很认真的听课!”

(Elsa, July 21, 2008, 中山大学, Yat-sen Channel)

English translation of the above quote:

… For decades, day in and day out, Teacher was never late for class! In fact, for every 8 o’clock class he arrived by 7:30! Every time I got up at 7 I would see Teacher slowly walking toward the classroom! A walk that took me ten minutes probably took Teacher half an hour! Walking all the way from his home, Teacher must have set out at least an hour ahead of time! Because Teacher had cancer and had undergone chemotherapy, his health was never very good! Every few minutes of walking he had to stop and rest. On the campus roads of Sun Yat-sen University you would often see a wrinkled old man, early in the morning, carrying a plastic bag holding the lecture notes written the night before, squatting by the roadside to rest! Every time I passed by him I wanted to go up and say, Teacher, your hard work is appreciated, but I never did, and now I will never have the chance! I once heard from a classmate that he often got up at 5 o’clock, or stayed up all night marking our homework! Whenever Teacher, while going over our homework, mentioned what had happened the night before, we would all feel heartache! When Teacher saw our careless handwriting, wrongly used notation and arbitrarily added words, he would fly into a rage and lose a whole night’s sleep, yet still hurry to class early the next morning! Actually, if it was a teaching assistant’s class he did not have to come at all, because he did not say a word, but he still sat quietly at the far left! I do not know whether we had simply become used to it: on any day Teacher did not come, we would not concentrate, but as long as Teacher sat there, everyone felt reassured and would listen to the lecture seriously!

I remember well that in teaching our courses, Chen’s favorite mathematical subject was “inequalities” – a fondness also professed by Louis Nirenberg of New York University’s Courant Institute of Mathematical Sciences, John Nash’s co-recipient of the 2015 Abel Prize, as in Part 2.

I have noticed the coincidence that in 1982 Chen’s evaluation ruled out any chance of my attending Stanford, and that 8 years later, in 1990, the plan of Alain Fournier, who had moved from U of T to UBC, to hire the new Stanford Ph.D. Jack Snoeyink, as in Part 3, ended my chance of a tenure-track position in theoretical computer science at UBC – and both Chen and Fournier later died of cancer, Chen nearly 8 years after Fournier.

In 1982 at Sun Yat-sen University a more senior, technically solid and practically experienced student, with a master’s degree earned under Prof. Li and a close association with Prof. Chen, was also going to the U.S. for Ph.D. study. Guanrong Chen (陳關榮) would soon go by his English name of Ron Chen.

During the Cultural Revolution, like many youths of his age, Ron was sent to work on a farm, in his case on the big island of Hainan. Like my later Berkeley roommate Kezheng Li (李克正), mentioned in Part 2, Ron studied mathematics on his own; later, through a family connection, Ron got Mingjun Chen’s tutoring, and after the Cultural Revolution joined Sun Yat-sen University’s first class of graduate students.

Ron was very good at applying what he had mastered. I recall that by the time of his master’s thesis he had found applications for spline functions – Prof. Li’s specialty – in control theory, and soon linked up with Prof. Charles K. Chui of Texas A&M University, College Station, where both Chui and Prof. Larry Schumaker, director of Texas A&M’s Center for Approximation Theory, were peers of Li’s.

(“Vita (August 6, 2015): Larry L. Schumaker”, Department of Mathematics, Vanderbilt University)

Along with Texas A&M’s admission, Ron secured a teaching assistantship – most of us at SYSU going to U.S. graduate schools in 1982 couldn’t get one before arriving – and in his Ph.D. study under Chui he continued to apply approximation theory techniques to control theory. In 1987, one year before I earned mine, Ron received his Ph.D. and co-authored a book on Kalman filters – a mathematical estimation tool in control theory – with his adviser Chui.

(Charles K. Chui and Guanrong Chen, Kalman Filtering with Real-Time Applications, 1987, Springer-Verlag)

After teaching in Houston at Rice University and then at the University of Houston, Ron became a tenured professor at the latter. In 1996 he became a fellow of the Institute of Electrical and Electronics Engineers – an honor Nick Pippenger has also held, as in Part 3 – and the next year he co-authored a book with his former mentor Mingjun Chen and his former SYSU fellow graduate student Zhongying Chen (陳仲英) – three Chens – on approximate solutions of operator equations, a subject I had once audited in Prof. Chen’s SYSU graduate course.

(Mingjun Chen, Zhongying Chen and Guanrong Chen, Approximate Solutions of Operator Equations, 1997, World Scientific; and, “GUANRONG CHEN”, Department of Electronic Engineering, City University of Hong Kong)

Since 2000, Guanrong Ron Chen has been a professor at the City University of Hong Kong, and is an honorary professor at around 30 universities internationally.

(“Guanrong (Ron) Chen, City University of Hong Kong, Hong Kong”, Hindawi Publishing Corporation; and, Department of Electronic Engineering, City University of Hong Kong)

In 2008, Ron missed delivering a keynote speech at a July 18-19 international workshop in Austria, due to attending the “final interview” in China for the 2008 “National Natural Science Award of China”:

“… As the date of final interview coincides with the workshop, he will not be able to attend and give his interesting talk. We wish Professor Chen all the best …”

(“Professor Guanrong Ron Chen, City University of Hong Kong, Hong Kong”, First International Workshop on Nonlinear Dynamics and Synchronization, July 18-19, 2008, Klagenfurt, Austria)

A day later Ron’s former mentor Mingjun Chen died at SYSU in Guangzhou, and a week later on July 25 Ron gave a eulogy at the memorial service.

(“许罗丹教授主持陈铭俊告别仪式”, guosheng.sunbo9.net)

In his tearful reminiscences, Ron told of his own mother’s passing half a year earlier, of his last meeting with his mentor on Father’s Day, and of how he had been informed of receiving the National Natural Science Award but had been waiting until after the October State Council signing event to give his mentor a surprise – but it was all too late:

陈铭俊老师:今天我也是60 岁的人了,这几年送走了好几位亲戚朋友,半年前才送走了我母亲,也在这里 ,但都没有觉得像今天那样伤心 。。。(哭泣-编者注) 这里的年轻同学们可能不知道,我和陈铭俊老师的关系非常特殊。简单说来,没有陈铭俊老师的过去,也就没有我陈关荣的今天。。。(哭泣–编者注) 陈铭俊老师:本来,我今年获得了国家自然科学奖,打算到十月份国务院签字以后再告诉您,给您一个惊喜,但都来不及了 。。。(又是哭泣–编者注) 幸好今年父亲节的时候,还见到过您最后一面。当时您对我说的最后一句话是:“陈关荣,只要您还有一口气,就不要停止(工作)。” 陈铭俊老师:我不会停止的……”

(“现场播报:陈关荣致陈铭俊的悼词”, guosheng.sunbo9.net)

English translation of the above quote:

Teacher Chen Mingjun: Today I am also a 60-year-old person; in the last few years I have bidden farewell to quite a few relatives and friends, and only six months ago to my mother, also here [at the cemetery’s farewell hall], but none of it felt as sad as today. . . (crying – editor’s note) The young students here may not know that my relationship with Teacher Chen Mingjun was very special. Simply put, without what Teacher Chen Mingjun did in the past, there would be no today for me, Chen Guanrong. . . (crying – editor’s note) Teacher Chen Mingjun: Originally, having received the National Natural Science Award this year, I intended to wait until after the October State Council signing before telling you, to give you a surprise, but it is all too late. . . (again crying – editor’s note) Fortunately, on Father’s Day this year, I got to see you one last time. The last sentence you said to me then was: “Chen Guanrong, as long as you still have a breath, do not stop (working).” Teacher Chen Mingjun: I will not stop……

Ron won one of the second prizes, i.e., second-class awards, in 2008 and went on to win another second-class award in 2012.

(“IAS Benjamin Meaker Visiting Professor: Guanrong Chen, June-August 2013”, Institute for Advanced Studies, University of Bristol; and, “Staff Achievements”, Department of Electronic Engineering, City University of Hong Kong)

Ron has dedicated a 2010 book he co-edited, Evolutionary Algorithms and Chaotic Systems, “to the memory of his mentor Professor Mingjun Chen (1934-2008).”

(“Evolutionary Algorithms and Chaotic Systems, Editors: Prof. Ivan Zelinka, Prof. Sergej Celikovsky, Prof. Hendrik Richter, Prof. Guanrong Chen”, Springer)

After his UC Berkeley retirement, in 1995 my former Ph.D. adviser Stephen Smale became a University Distinguished Professor – a type of prominent professorship former U.S. President Jimmy Carter holds at Emory University, as in Part 3 – at City University of Hong Kong, before Ron Chen moved there in 2000 and founded the Center for Chaos and Complex Networks – Smale had been a founder of the mathematical theory of chaos in the 1960s.

(“Finding a Horseshoe on the Beaches of Rio”, by Steve Smale, 1998, Volume 20, Number 1, The Mathematical Intelligencer; “Prof. Stephen SMALE (史梅爾)”, City University of Hong Kong; and, “Guanrong (Ron) Chen, Director”, Center for Chaos and Complex Networks, City University of Hong Kong)

In 2003, an international symposium held in Shanghai, China, listed Smale as an honorary chairman of its organizing committee and Guanrong Chen as the vice chairman, and assured international scientists that China, and Shanghai especially, was by then safe from the SARS epidemic:

“Now SARS is under well control in China. WHO announced on June 24 that WHO had canceled its warning for traveling to Beijing – the last city in the mainland of China, and Beijing now is also not in the list of SARS infected areas again. Thus, people can travel all over China safely now. Even in the worst days, Shanghai has always been lucky. No people were really infected in Shanghai (except one who is the father of a patient coming back from southern China and was infected there). No doctors and nurses were infected in Shanghai. The total number of SARS patients were 8. There is no identified patient or suspect now. …

Honorary Chairs:

Chaohao Gu, Fudan University, China

Stephen Smale, University of California at Berkeley, USA

General Chairs:

Gaolian Liu, Shanghai University, China

A. Jameson, Stanford University, USA

Vice Chairman:

Guanrong Chen, City University of Hong Kong, China”

(“SHANGHAI INTERNATIONAL SYMPOSIUM ON NONLINEAR SCIENCE AND APPLICATIONS (Shanghai NSA’03, November 9 – 13, 2003): Call for Paper”)

SARS was of course scarier and deadlier than cancer.

Chen and Smale coming together was a tale of convergence of separate figures from my SYSU days and Berkeley days – not known to have been connected back then.

Back in August 1982 when Ron and I went to the United States, Smale was not yet in the picture. Unlike Ron, I was going to a school where no math professor was an expert in spline function theory or the more general approximation theory, in which Ron had done his master’s thesis and I my bachelor’s thesis under Yuesheng Li.

I had to start afresh.

In the spring of 1982, Prof. Li had seriously advised that when I got to Berkeley I should pursue my Ph.D. study under Professor Alexandre Chorin, a leading expert on the computation of fluid dynamics, whose “random vortex methods” Li had taught us during his precious teaching time outside of his department chair duties. Influenced by his Soviet math training, Li deemed fluid dynamics very important. He also told me that Chorin was of a Soviet Union-related background and received substantial research funding.

To earn a Ph.D. at a world-leading mathematics program I needed to learn the comprehensive basics in order to have a solid foundation. The mathematics of fluid flows centered on the subject of “partial differential equations”, i.e., equations involving derivatives – rates of change of mathematical functions – not in a single variable but in 3 space dimensions plus a time dimension. I had learned some at the advanced undergraduate level, but the research specialties of Yuesheng Li and Mingjun Chen had been in “ordinary differential equations”, i.e., equations with derivatives in only one independent variable.
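As a textbook contrast – my own illustration, not taken from the sources discussed here – an ordinary differential equation involves derivatives in a single variable, while a partial differential equation such as the heat equation involves derivatives in the three space variables and in time:

```latex
% Ordinary differential equation: the unknown u depends on one variable t only.
\[
\frac{du}{dt} = f\bigl(t, u(t)\bigr)
\]
% Partial differential equation (the heat equation): the unknown u depends on
% three space variables x, y, z and on time t, so partial derivatives appear.
\[
\frac{\partial u}{\partial t}
  = \frac{\partial^{2} u}{\partial x^{2}}
  + \frac{\partial^{2} u}{\partial y^{2}}
  + \frac{\partial^{2} u}{\partial z^{2}}
\]
```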

However, at Berkeley I soon found out that Chorin did not really do the mathematics of partial differential equations: he led a large group of students and researchers doing experimental computing of fluid dynamics equations.

Even more so than Ron Chen, who had been using his specialty since his graduate school days, Chorin had practiced his specialty of fluid-dynamics computing since before graduate school, and his focus was not very mathematical, as he has admitted in a recent interview:

“… Born in Poland only a few years before Hitler’s invasion, Chorin is no stranger to a different sort of flight: his family fled through Lithuania and Russia before spending 10 years in Israel and 11 years in Switzerland. By the time Chorin came to the United States for graduate study at the age of 23, he had already started working on the front line of computational mathematics, programming algorithms to solve equations describing ocean tides. Over the last half century, Chorin has tackled questions relating to the motion of fluids—some of the most challenging problems in applied mathematics.

Chorin developed computational methods that are used not only to study the flow of air around aircraft wings, but also the innards of combustion engines, the movement of blood through the heart, and the formation of stars. …

AL: When did you decide to study math?

AC: When I was a small kid, some distant relative of ours (whom I’ve never been able to find as a grown-up) used to ask me math questions, and decided I was good at math. If you asked me when I was seven or eight what I wanted to do, I would have said I wanted to be a mathematician. In Israel I’d been in the gifted program. In Switzerland, I was a fairly mediocre student. My grades did not qualify me to study math. So I studied engineering instead. In college, I did very well in math and got lots of encouragement, so I went back to it.

AL: Do you think that detour into engineering drew you to be in applied math rather than pure math?

AC: Oh, it’s very likely. Actually, at the end of engineering school I intended to be in pure math… but in Israel I was a programmer for someone who did numerical analysis for physics problems, and I got enamored with it.

AL: I’m sure you’ve seen the field of applied math change over the course of your career.

AC: It’s changed tremendously! The words “applied math” are too vague—I do mostly computational mathematics, and that has changed a lot. In fact, it is getting more distant from math. When I was a student, the computational issue was “How do you approximate [this] partial differential equation?” That’s a math question. Nowadays, we’ve been successful, so you can ask much more specific questions which are less mathematical. …”

(“ALEXANDRE CHORIN”, by Anna Lieb, April 29, 2015, Berkeley Science Review)

As quoted above, Chorin wasn’t from the Soviet Union but was a Polish Jew who had escaped the Nazi occupation by way of the Soviet Union. Nevertheless, I wasn’t the only one with a misconception about his origin, as even his former Ph.D. adviser, Professor Peter Lax – a colleague of Louis Nirenberg’s – at NYU’s Courant Institute, had mistaken Chorin for a Hungarian compatriot:

“… In a fateful chance encounter on the street, he met his former teacher, de Rham, who advised him to pick a school in the United States and wrote letters on his behalf. De Rham counseled him, in particular, to study with another famous mathematician, Peter Lax, at New York University’s Courant Institute. Courant was then what Chorin calls the “mother ship” of computational and applied math. …

Lax, now 87 and a professor emeritus at Courant, recollects being initially drawn to Chorin because of a misunderstanding. Lax thought that Chorin was a fellow Hungarian emigré because there was a very prominent family named Chorin in Hungary. But Chorin’s original Polish surname was Choroszczański; the family had changed it in Israel. (In contrast, he never Anglicized his French first name because he wanted to avoid being confused with another fluid-mechanics researcher whose first name was “Alexander” and whose surname was similar to Chorin.)”

(“Science Lives: Alexandre Chorin”, by Douglas Steinberg, May 8, 2014, Simons Foundation)

In contrast to Stephen Smale, whose anti-war history had risked his eligibility for U.S. National Science Foundation grants as in Part 2, Chorin’s research received substantial U.S. military funding, as acknowledged in his 1973 papers, partly in collaboration with his Berkeley student Peter S. Bernard, that established his “vortex methods”:

“… the Office of Naval Research under Contract no. N00014-69-A-0200-1052.”

(“Numerical study of slightly viscous flow”, by Alexandre Joel Chorin, 1973, Volume 57, Part 4, Journal of Fluid Mechanics, and, “Discretization of a vortex sheet, with an example of roll-up”, by Alexandre Joel Chorin and Peter S. Bernard, November 1973, Volume 13, Issue 3, Journal of Computational Physics, in Alexandre Joel Chorin, Computational Fluid Mechanics: Selected Papers, 1989, Academic Press)

That was a navy grant, a type mentioned in Part 2 about John Nash’s former MIT senior colleague, former Communist party member Norman Levinson, who in the 1960s held both NSF and navy grants.

Still, Lax described Chorin as “very independent”:

“Even though Chorin turned out not to be Hungarian, Lax recounts that once he got to know him, “I thought highly of him. He had a very lively mind.” Lax also notes that Chorin “was always very independent.” …”

In my impression Chorin was a proud loner, often strolling alone with his German Shepherd or a similar pet. His “lively” comment about the United States, “as if I belonged”, drew remarks from President Barack Obama on November 20, 2014, when awarding the National Medal of Science to Chorin and others:

“After he came here as a foreign student from Israel, Eli Harari co-founded SanDisk with two colleagues, one from India, another from China. Alexandre Chorin, whose accomplishments led to a sea change in the way a generations of mathematicians use computers, sums up his experience this way:  “I came here as a foreigner on an American fellowship, received the opportunity to study at great schools and work at great universities, and have been treated as if I belonged.”

Treated as if I belonged. You do belong — because this is America and we welcome people from all around the world who have that same striving spirit. We’re not defined by tribe or bloodlines. We’re defined by a creed, idea. And we want that tradition to continue. But too often, we’re losing talent because — after the enormous investment we make in students and young researchers — we tell them to go home after they graduate. We tell them, take your talents and potential someplace else.”

(“Remarks by the President at National Medals of Science and National Medals of Technology and Innovation Award Ceremony”, November 20, 2014, The White House)

Look, “a foreigner on an American fellowship” was clearly treated better than someone like Ron Chen, arriving on a teaching assistantship, let alone others like me.

In fact, receiving a National Medal at 76 while active as one of only 24 University Professors in the entire University of California system – spanning UC Berkeley, UCLA, UC San Diego and UC Riverside, mentioned in earlier Parts, and the other campuses – Chorin has belonged to the U.S. more than Smale has.

Smale received a National Medal of Science after he had retired from Berkeley and gone to CityU of Hong Kong – in 1996 along with Richard Karp and 6 others nationally, from President Bill Clinton:

“Also receiving a medal was Stephen Smale, 66, a professor, emeritus, of mathematics at the University of California, Berkeley. Smale, who now conducts research and teaches at the City University of Hong Kong, was cited “for four decades of pioneering work on basic research questions which have led to major advances in pure and applied mathematics. He is responsible for formulating key definitions, proofs, and conjectures which have energized an ever-growing number of mathematicians and scientists.””

(“Eight Researchers Accept The National Medal Of Science For 1996”, by Thomas Durso, August 19, 1996, The Scientist)

Nonetheless, Obama chose a well-suited example to make his point, which must have felt personal because his own father, Barack Obama, Sr., had studied at the University of Hawaii in Honolulu and then earned a graduate degree at Harvard University but eventually returned to Kenya, where his intellectual goals were never realized. I wrote about this history in a November 2009 blog post:

“Barack Obama, Sr. had left Kenya at age 23 to study at the University of Hawaii (the same age also for me when I went to study in the U.S., but I later taught at the University of Hawaii…), where he met and married President Obama’s mother and produced a future U.S. president – the only one from Hawaii; he then went to study at Harvard, met Mark Ndesandjo’s mother and returned to Kenya with her, where Mark was born.

In 1963 when his father left him and his mother Stanley Ann Dunham, a white student at the University of Hawaii born in Kansas and grown up in various parts of the continental United States, Barack Obama was only two-years-old; neither side of his parents’ families had approved of the interracial marriage, which at the time was legally permitted in Hawaii but not in 22 other states in the United States. Barack’s mother later remarried an Indonesian student and took Barack to that country after her husband was summoned back to his home country following the rise to power of military strongman Suharto, sending Barack back to Hawaii to live with the grandparents when she wanted a good education for her son at the prestigious Punahou School in Honolulu, which Barack liked better than schooling in Indonesia.

It could be Kenyan presidential politics as Barack Obama described in his 1995 book, or the economy of life in Africa, or still something more, that his ambitious, U.S.-educated economist father later suffered career setback in Kenya and became an alcoholic and physical abuser…”

(““Nairobi to Shenzhen”, and on to Guangzhou (Part 1)”, November 22, 2009, Feng Gao’s Blog – Reflections on Events of Interest)

Another reason Obama’s remarks on Chorin’s lively comment were well suited, from my vantage point, is that Chorin had been the Ph.D. adviser of Nathaniel Whitaker, whom I knew at Berkeley, one of the few African-American mathematics Ph.D. students in the U.S., as I noted in a February 2013 blog post:

““Whitaker” is reputed to be one of the most ancient Anglo-Saxon names, traceable in the written records to the 11th-12th centuries or earlier, with “white acre” and “wheat acre” being its origin…

Back in my UC Berkeley days there was a fellow graduate student one year ahead of me by the name of Nathaniel Whitaker, except that he was African American so I don’t know how his name had been inherited. …

Nate and my classmate friend Paul Wright … were among a very small number of African American Mathematics Ph.D. students in the United States.”

(“Guinevere and Lancelot – a metaphor of comedy or tragedy, without Shakespeare but with shocking ends to wonderful lives (Part 2)”, February 28, 2013, Feng Gao’s Posts – Rites of Spring)

At Berkeley I continued to think of Chorin as being of Russian Jewish background; the only Polish mathematician I knew of in the early 1980s was a visiting assistant or associate professor, whose name I have not been able to recall, who taught the first-semester graduate course on set theory and topology, was very friendly, and wrote a strong letter of recommendation for my application for the foreign-student tuition waiver and teaching assistantship.

Another basic course, on measure theory and functional analysis, was its second-semester continuation and was taught by the strict Prof. William Bade, the department’s vice chair of graduate affairs, from whom it was not easy for me to obtain the tuition waiver and teaching assistantship.

My Berkeley roommate Kezheng Li, as in Part 2 a senior math graduate student who gave me considerable help, studied algebraic geometry for his Ph.D., a specialty field within algebra – the latter an important basic subject in which I took another 2-semester course.

The first semester of algebra was taught by a retired professor, Gerhard Hochschild, an easygoing teacher from whom most students expected good grades.

The second semester was taught by another senior professor, Abraham Seidenberg, a demanding teacher who gave a final exam so long that it came as a shock; learning Galois theory for the first time, I could not finish the exam, but the final grade wasn’t too bad, as many students didn’t finish.

An interesting episode about Seidenberg, a narcissistic version of Albert Einstein in his looks, happened near the end of the last class when students were filling out teaching evaluation forms, a time when the professor was supposed to be absent and a student would bring the evaluations to the department: Seidenberg not only stayed in the classroom but took peeks at the filled-out forms on the lectern, sometimes as a form was being placed there by a student.

I would encounter this kind of intimidating antic only one more time, as in Part 1 by UBC computer science department head Maria Klawe in February 1992, when she insisted on staying in the room for a faculty meeting that would hear my grievance about her management.

The first time was easier and the second time harder – that seemed to be a pattern in my experience, not just for these two 2-semester courses.

The other 2-semester course I took was partial differential equations – as mentioned, important for my anticipated future study – taught by Professor Tosio Kato.

An exemplary scholar, Kato gave presentations that were detailed and thorough yet covered a broad range of topics of interest – especially impressive given that his own education had been in physics. S. T. Kuroda, a former physics student of Kato’s at the University of Tokyo, had had a similar experience:

“My recollection of Kato goes back to my younger days when I attended his course on mathematical physics at the Department of Physics, University of Tokyo. It was 1953–54. The course covered, thoroughly but efficiently, most of the standard material from the theory of functions through partial differential equations. The style of his lecture never gave an impression that he went quickly, but at the end of each class I was surprised by how much he had covered within one session.”

(“Tosio Kato (1917–1999)”, by Heinz Cordes, Arne Jensen, S. T. Kuroda, Gustavo Ponce, Barry Simon, and Michael Taylor, June/July 2000, Volume 47, Number 6, Notices of the American Mathematical Society)

Recall, as in Part 2, the tale of John Nash’s Princeton Ph.D. work in game theory in the early 1950s, which later won him the Nobel Prize in Economics: Nash’s original idea was dismissed by John von Neumann, a powerful mathematician who had founded game theory but become preoccupied with nuclear bomb development and advising U.S. military leaders; David Gale, later a UC Berkeley math professor for whom I worked as a teaching assistant, then encouraged Nash to pursue the idea for his Ph.D. thesis.

Half a world away in Japan, Kato’s entry into advanced mathematical research also had a von Neumann factor – in this case von Neumann’s acceptance of Kato’s work on Hamiltonian operators – and a World War II factor. The story was told by the mathematical physicist Barry Simon in 2000, in memory of Kato, who had passed away in October 1999:

“Kato’s most celebrated result is undoubtedly his proof, published in 1951 [K51a], of the essential self-adjointness of atomic Hamiltonians:

In a case of scientific serendipity, J. von Neumann concluded his basic work on the theory of unbounded self-adjoint operators just as quantum theory was being invented, and he had realized by 1928 that the critical question was to define the Hamiltonian as a self-adjoint operator. Kato proved that the operator … defined initially on smooth functions of compact support, has a unique self-adjoint extension (and he was even able to describe that extension).

I have often wondered why it took so long for this fundamental question to be answered. As Kato remarks in his Wiener Prize acceptance [K80], the proof is “rather easy.” … I would have expected Rellich or K. O. Friedrichs to have found the result by the late 1930s.

One factor could have been von Neumann’s attitude. V. Bargmann told me of a conversation he had with von Neumann in 1948 in which von Neumann asserted that the multiparticle result was an impossibly hard problem and even the case of hydrogen was a difficult open problem (even though the hydrogen case can be solved by separation of variables and the use of H. Weyl’s 1912 theory, which von Neumann certainly knew!). Perhaps this is a case like the existence of the Haar integral, in which von Neumann’s opinion stopped work by the establishment, leaving the important discovery to the isolated researcher unaware of von Neumann’s opinion.

Another factor surely was the Second World War. … In [K80] Kato remarks dryly: “During World War II, I was working in the countryside of Japan.” In fact, from a conversation I had with Kato one evening at a conference, it was clear that his experiences while evacuated to the countryside and in the chaos immediately after the war were horrific. He barely escaped death several times, and he caught tuberculosis. …

Formally trained as a physicist, Kato submitted his paper to Physical Review, which could not figure out how and who should referee it, and that journal eventually transferred it to the Transactions of the American Mathematical Society. Along the way the paper was lost several times, but it finally reached von Neumann, who recommended its acceptance. The refereeing process took over three years.”

(Heinz Cordes, Arne Jensen, S. T. Kuroda, Gustavo Ponce, Barry Simon, and Michael Taylor, June/July 2000, Notices of the American Mathematical Society)

I can understand the “von Neumann’s attitude” factor leading to a long delay of a mathematical discovery, as can be glimpsed also in Nash’s case. But I can’t quite agree with Simon’s take on the “Second World War” factor: Kato’s difficult life during that time no doubt delayed Japanese research in mathematical physics, but wouldn’t that have allowed researchers in the West to achieve Kato’s “most celebrated result” first – had it not been for von Neumann’s oppressive or authoritarian influence?

A different biography of Kato seemed to de-emphasize the devastation World War II caused to his research career:

“… In 1941 he earned his bachelor’s degree from the University of Tokyo, commencing a relationship that would last two more decades. After a year away Kato rejoined the University of Tokyo in 1943 to start his teaching career while also pursuing a doctorate there. In 1944 he married Mizue Suzuki. In 1951 he became a doctor of science, and the same year he became a full professor. In 1962 he immigrated to the United States to accept a professorship at the University of California at Berkeley from which he retired in 1989.

In 1949, while still working on his doctorate, Kato published one of his early important papers, “On the Convergence of the Pertubation Method, I, II” in Progressive Theories of Physics. …”

(Elizabeth H. Oakes, Encyclopedia of World Scientists, 2007, Infobase Publishing)

In 1942, the year of the Battle of Midway and the year Kato was away from his university, there was no real war in Japan proper, whatever the reason for his evacuation to the countryside; and in 1944, when the war was closing in on Tokyo, Kato got married.

Having studied with Prof. Kato, I would not make the kind of leap of logic Prof. Barry Simon did.

Well, unless there was no choice, I suppose. In keeping with the pattern of the second being harder, near the end of the second semester Kato gave the students a list of – I have forgotten how many, probably 7 or 8 – problems, and several weeks of time for each of us to independently solve 3 of them. One of them, in topology, had been posed in class weeks earlier, and a few days later I had gone to his office and communicated a solution to him; another I soon solved; but for a third problem, I went through the rest, worked on several of them in some detail, and did not obtain any complete solution with full confidence – in the end I also handed in what I got for one of the others, no doubt with a leap of logic somewhere.

As the first year ended, I liked partial differential equations well enough that I was contemplating Ph.D. study under Kato.

In the summer of 1983, on July 11-29 at UC Berkeley there was a major international conference partially funded by the NSF, Summer Research Institute on Nonlinear Functional Analysis and Its Applications, on a subject with deep connections to partial differential equations. The organizing committee consisted of Kato, Felix Browder and Louis Nirenberg – 2 senior mathematicians mentioned in Part 2 in relation to John Nash – French mathematicians Haim Brezis and Jacques-Louis Lions, and Paul Rabinowitz of the University of Wisconsin.

I attended the plenary presentations and paid special attention to one by Paul Rabinowitz, partly because he was a Wisconsin-Madison colleague of Carl de Boor, whom my undergraduate adviser Yuesheng Li had highly recommended, and because he had the same last name as Philip Rabinowitz in the numerical analysis field of my undergraduate study. Later, in 1997, when de Boor was elected to the National Academy of Sciences and Rabinowitz was awarded the 1998 George David Birkhoff Prize in Applied Mathematics, their department’s newsletter gave informative introductions of the two:

“Carl de Boor, Professor of Mathematics and Computer Sciences, was among the 60 scholars elected this year to the National Academy of Sciences. Carl grew up in East Germany and received the PhD from the University of Michigan in 1966. …

… Splines were introduced in the 40’s (by the late I.J. Schoenberg of Wisconsin) as a means for approximating discrete data by curves. Their practical application was delayed almost twenty years until computers became powerful enough to handle the requisite computations. Since then they have become indispensible tools in computer-aided design and manufacture (cars and airplanes, in particular), in the production of printer’s typesets, in automated cartography… Carl is the worldwide leader and authority in the theory and applications of spline functions. His contributions have been more fundamental and numerous than any other researcher in this field, ranging from rigorous theories through highly efficient and reliable algorithms to complete software packages. Carl has made Wisconsin-Madison a major international center in approximation theory and numerical analysis…

Professor Paul Rabinowitz has been awarded by the American Mathematical Society (AMS) and the Society for Industrial and Applied Mathematics (SIAM), the 1998 G.D. Birkhoff Prize for his outstanding contributions to mathematics. … Paul received the PhD in Mathematics from New York University in 1966…

The citation for the Prize reads in part:

“Perhaps more than anyone else Paul Rabinowitz has deeply influenced the field of non linear analysis. His methods for the analysis of nonlinear systems has changed the way we think of them. … Paul Rabinowitz broke new ground to invent general mini-max methods for problems not necessarily satisfying the Palais-Smale [compactness] condition and that are indefinite. …

He has also introduced the use of sophisticated topological tools to obtain multiple solutions of nonlinear problems. Rabinowitz is a powerful mathematician who combines abstract mathematics with concrete applications to problems arising in various fields.””

(“Van Vleck Notes: Dedications, Honors and Awards …”, Fall 1997, Department of Mathematics, University of Wisconsin)

At the summer 1983 Berkeley conference, Rabinowitz’s powerful mathematical theorem, the “Mountain-Pass Lemma”, was a focus of interest, even though the conference proceedings’ titles showed that term only in another researcher’s paper, “The topological degree at a critical point of mountain-pass type” by Helmut Hofer; the titles also referred to the “Palais-Smale condition” bearing the name of Berkeley’s Stephen Smale, in one paper, “A generalized Palais-Smale condition and applications” by Michael Struwe.

(Felix E. Browder, ed., Nonlinear Functional Analysis and Its Applications, 1986, Volume 45, Part 1, Proceedings of Symposia in Pure Mathematics, American Mathematical Society)

As I recall, Peking University professor and Berkeley visiting scholar Gongqing Zhang (張恭慶) attended the conference. Several Chinese mathematicians presented papers, prominent among them Jilin University professor Zhuoqun Wu (伍卓群) – a 1950s classmate of Yuesheng Li’s.

(Felix E. Browder, ed., 1986, American Mathematical Society)

Just prior to the Fall 1983 semester I passed the Preliminary Examination, the required written exam for the Ph.D. program. Kezheng had helped me over the summer by having me study previous prelim exam problems, discuss them with him, and hear his insights on the fine points. Kato happened to be the examiner, and though I did not fully solve all the problems on a long list, I did pretty well.

The oral Qualifying Examination remained, to be taken in front of a committee after a Ph.D. adviser was chosen.

Prof. Kato was generous. When I went to his office, likely shortly after passing the Prelim Exam, to ask him to be my Ph.D. adviser, he told me that he was retiring, but would give me a research assistantship for one semester while I looked for a younger, more research-active Ph.D. adviser – it was very helpful as my teaching assistantship did not start until the spring semester of the 1983-1984 academic year – and he also advised that, given my interest in the mathematics of computation more than in computing itself, Andrew Majda or Stephen Smale would be the choice.

Except for the more junior Polish visiting professor, it had been a distinguished group of senior professors teaching these basic graduate courses in 1982-1983: Tosio Kato was a 1980 winner of the Norbert Wiener Prize, a leading prize in applied mathematics awarded jointly by the American Mathematical Society and the Society for Industrial and Applied Mathematics – Alexandre Chorin later received it in 2000 – while Gerhard Hochschild was a 1980 winner of the Leroy P. Steele Prize for his research, a leading prize awarded by the AMS – John Nash later received it in 1999.

(“The Leroy P Steele Prize of the AMS”, MacTutor History of Mathematics archive; and, “The Norbert Wiener Prize”, Society for Industrial and Applied Mathematics)

So 5 years before taking Karp’s advice regarding a faculty job at UBC or SFU, I followed Kato’s advice in finding a Ph.D. adviser.

Kato’s advice was sensible. With his background in partial differential equations and computation, Chorin would have been the choice had I liked programming and computing more than the mathematics.

The next professor who did research in mathematics related to fluid dynamics computation was Ole Hald. In 1978, Hald and collaborator Vincenza Mauceri Del Prête were the first to prove a desirable convergence property – that the computed approximate solutions converge to the true solution – for Chorin’s vortex methods:

“Introduction. In this paper we will prove the convergence of Chorin’s vortex method for the flow of a two dimensional, inviscid fluid. …

The convergence of Chorin’s method has already been considered by Dushane [4]. However, his proof is incorrect… Our proof follows the general outline of Dushane but introduces two new ideas. …

In one respect our result is less than satisfying. It can be shown that the solution of the Euler equations for a two dimensional flow exists for all time (see Wolibner [14], McGrath [7] and Kato [6]). However, we have only been able to prove the convergence of Chorin’s method for a small time interval. …”

(“Convergence of Vortex Methods for Euler’s Equations”, by Ole Hald and Vincenza Mauceri Del Prête, July 1978, Volume 32, Number 143, Mathematics of Computation)

Look, someone else did the mathematics for Alexandre Chorin.
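For readers curious what such a vortex computation actually involves, here is a minimal Python sketch of the basic two-dimensional vortex blob idea that Hald and Del Prête analyzed: the vorticity is carried by N blobs, each blob’s velocity is the sum of a smoothed (desingularized) Biot-Savart kernel over all the blobs, and the blob positions are advanced in time. The smoothing, the time integrator and the parameters below are my own simple choices for illustration, not the specific ones in Chorin’s or Hald’s papers.

```python
# A minimal sketch (my own simplified version) of a 2-D vortex blob method:
# point vortices with a smoothed (desingularized) Biot-Savart kernel,
# advanced in time with forward Euler.
import numpy as np

def velocities(pos, gamma, delta):
    """Velocity induced at each blob by all the blobs.

    pos:   (N, 2) blob positions
    gamma: (N,)   blob circulations
    delta: blob smoothing (core) radius
    """
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]   # y_i - y_j
    r2 = dx**2 + dy**2 + delta**2                  # smoothed squared distance
    # 2-D Biot-Savart kernel K(x, y) = (-y, x) / (2*pi*|x|^2), desingularized by delta.
    u = np.sum(-dy / (2.0 * np.pi * r2) * gamma[None, :], axis=1)
    v = np.sum( dx / (2.0 * np.pi * r2) * gamma[None, :], axis=1)
    return np.stack([u, v], axis=1)

# Example: a crude vortex patch represented by blobs on a small grid.
g = np.linspace(-0.5, 0.5, 6)
X, Y = np.meshgrid(g, g)
pos = np.stack([X.ravel(), Y.ravel()], axis=1)
gamma = np.full(pos.shape[0], 1.0 / pos.shape[0])   # total circulation 1

dt, delta = 0.01, 0.1
for step in range(200):                             # advance the blobs in time
    pos = pos + dt * velocities(pos, gamma, delta)
```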

I recall it was also in the Fall of 1983 that I took a numerical matrix computation course taught by Hald, performed erratically on the large amount of hand calculation, and received a B-level grade – any C-level grade would disqualify a student from the Ph.D. program.

Something I heard might give insight into Ole Hald’s conscientiously demanding attitude about subtle details. My classmate Mei Kobayashi, daughter of Prof. Shoshichi Kobayashi – who, as cited in Part 2, had in the 1960s applied for NSF grants together with Smale – told me that Hald’s wife was a former fashion model and that their car was not only vacuumed very clean but delicately treated with fragrance.

I don’t know if Mei’s observation was accurate but like Chorin, Hald had earned his Ph.D. from NYU’s Courant Institute; also, Catherine Willis, Hald’s Ph.D. graduate prior to Mei, became a financial analyst at the Wall Street investment firm Kidder-Peabody after working for the U.S. Geological Survey.

(“Catherine Willis: Modeling the World of High Finance”, 1991, Association for Women in Mathematics; Anna Lieb, April 29, 2015, Berkeley Science Review; and, “Ole Hansen Hald”, Mathematics Genealogy Project)

Mei’s Ph.D. thesis, which I helped proofread, was titled “Discontinuous Inverse Sturm-Liouville Problems with Symmetric Potentials”. I had first encountered inverse problems for partial differential equations in Stony Brook professor Yung Ming Chen’s lecture at SYSU; at the time, Ron’s master’s classmate Luoluo Li (黎羅羅) worked on trigonometric spline functions, but my bachelor’s thesis turned out to do better in that subject, and Li then switched to inverse problems.
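To give a sense of the subject of Mei’s thesis, in my own notation rather than hers: the classical Sturm-Liouville eigenvalue problem on an interval is

$$ -y''(x) + q(x)\,y(x) = \lambda\, y(x), \quad 0 \le x \le \pi, \qquad y(0) = y(\pi) = 0, $$

and the inverse problem asks for the recovery of the potential $q$ from spectral data such as the eigenvalues $\lambda_1 < \lambda_2 < \cdots$. A classical result going back to Borg is that in general two spectra, for two different sets of boundary conditions, are needed to determine $q$, whereas for a symmetric potential, $q(\pi - x) = q(x)$, a single spectrum suffices; the “discontinuous” in the thesis title refers, as I understand it, to allowing jumps in the problem’s data.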

Mei was a Princeton graduate and liked to boast she had been a Princeton cheerleader, adding, “actually I was a pom pom” – I obviously knew about cheerleaders, but Mei would explain that “pom poms” were ones lifted up by the cheerleaders.

After Hald, the next professor who came to Berkeley and did research in the mathematics of fluid dynamics equations was Andrew Majda.

Like Hald, Majda did research related to Chorin’s vortex methods; in the early 1980s he and collaborator J. Thomas Beale introduced a new class of vortex methods that improved on Chorin’s, and proved their excellent convergence properties:

“… The methods of Leonard and Del Prête require a large amount of detailed information … On the other hand, the three-dimensional vortex blob method recently introduced by Chorin [4], [5] is more flexible and requires less information. … Can such a “crude” three-dimensional vortex algorithm accurately represent fluid flows?

In the work presented here, we answer this question affirmatively. We formulate below a new class of three-dimensional vortex methods and then prove that these 3-D vortex methods are stable and convergent with arbitrarily high order accuracy. In these new algorithms, we update the velocity crudely in a fashion completely analogous to the 3-D vortex blob method of Chorin; however, unlike the algorithm in [4], we incorporate the vortex stretching through a Lagrangian update. …”

(“Vortex Methods. I: Convergence in Three Dimensions”, and, “Vortex Methods. II: Higher Order Accuracy in Two and Three Dimensions”, by J. Thomas Beale and Andrew Majda, July 1982, Volume 39, Number 159, Mathematics of Computation)

Majda’s sense of ‘smarterness’ – than Chorin, that is – was brimming in the above quote.
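The “Lagrangian update” of vortex stretching that Beale and Majda mention rests on a classical identity for three-dimensional incompressible Euler flow; in my rough summary (not a quote from their paper), if $X(\alpha, t)$ is the position at time $t$ of the fluid particle that starts at $\alpha$, then

$$ \frac{D\omega}{Dt} = (\omega \cdot \nabla)\, u, \qquad \omega\big(X(\alpha, t), t\big) = \nabla_\alpha X(\alpha, t)\; \omega_0(\alpha), $$

so that a particle method can update each blob’s vorticity vector by applying a numerical approximation of the flow-map Jacobian $\nabla_\alpha X$ to the initial vorticity, rather than by differentiating the computed velocity field.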

As in Part 2, in the fall of 1983 I asked Smale to be my Ph.D. adviser and he said yes. At the time I was taking Smale’s graduate topics course as well as Majda’s seminar course; Majda said he was in the process of moving to Princeton and would not accept new students – though he might take me to Princeton along with his current Ph.D. students.

I told Smale that I would opt for Princeton if I could but otherwise at Berkeley he would be my adviser, and it was okay with him.

In an October 2010 blog post I recalled Majda’s strong temperament:

“Prof. Andrew Majda was known for his strong temperament, and the year I was auditing a graduate class from him he was in the process of moving to Princeton which had made him an important job offer. Near the Berkeley classroom there was construction work going on at the time and the noise sometimes got really loud, and Majda would burst into tantrums like, “It’s driving me crazy, they are driving me out of Berkeley.””

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 1) — when democracy can be trumped by issue-based politics”, October 8, 2010, Feng Gao’s Blog – Reflections on Events of Interest)

The show of temper cited above was during a graduate class on solutions of partial differential equations, not the seminar course.

The seminar course saw Majda present the basics of the equations for fluid flows with shock waves, pose a long list of open problems, and then schedule the students to give presentations, either on their own research or surveying other researchers’ work.
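As a reference point for the seminar’s subject matter – my own standard summary, not Majda’s notes – the simplest setting for fluid flow with shock waves is a one-dimensional conservation law, with the speed of a shock constrained by the Rankine-Hugoniot jump condition:

$$ u_t + f(u)_x = 0, \qquad s = \frac{f(u_R) - f(u_L)}{u_R - u_L}, $$

where $u_L$ and $u_R$ are the states on the two sides of the discontinuity and $s$ is its propagation speed; for the compressible Euler equations the same jump relation is imposed componentwise on the conserved mass, momentum and energy.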

To me, who had not studied shock waves before, Majda handed a paper by Tai-Ping Liu, a University of Maryland mathematician, pointed to the last paragraph, which claimed a result without giving the proof, and asked me to present the proof.

Prof. Tai-Ping Liu had done significant pioneering work on shock waves, such as in a 1982 paper titled, “Nonlinear Stability and Instability of Transonic Flows Through a Nozzle”, that also referenced past key papers in the field.

(“Nonlinear Stability and Instability of Transonic Flows Through a Nozzle”, T.-P. Liu, 1982, Volume 83, Communications in Mathematical Physics)

I went through the thrust of the paper Majda provided, the relevant references listed in it, and what I thought were relevant sections of a book Majda lent me, Shock Waves and Reaction-Diffusion Equations by Joel Smoller, in search of mathematical lemmas and theorems that might serve as a basis for proving the claim. I did not find any obviously useful ones, and settled on trying to tackle it through some mathematical techniques I had learned.

What I came up with was not a full proof, but I decided to present it in the seminar course as my result. As scheduled, my presentation was the last of the semester. I started with an overview of the physics background, the differential equations, the problem and the claim, and was only halfway into my proof when the class time was up; the others left, Majda asked me what the rest was, I described the techniques to him, and he said, “That’s an original idea, but the proof is harder than that” – I knew, but just hadn’t reached the end to show the deficiency.

Majda then said, “since you want to study with Smale, I am not going to take you to Princeton”.

Something about “inequalities”, perhaps? It might have been Majda’s polite way of saying that I wasn’t technically strong enough for his liking.

Nevertheless, earlier, whenever I mentioned to him that if he did not take me to Princeton I would study with Smale, Majda would become visibly uncomfortable; he once stated, “Steve Smale is a great mathematician, but he knows nothing about numerical analysis”.

Majda might not care much about Smale, but what choices did I have at Berkeley?

At the time, other than Chorin, Hald and Majda, there were only two Berkeley mathematics professors with significant research in computational mathematics of differential equations: Paul Concus and Keith Miller, both with Ph.D.s earlier than Chorin’s.

Paul Concus was not active at all when it came to producing Ph.D. students. A senior scientist at Lawrence Berkeley National Laboratory but only an adjunct professor at Berkeley, Concus, in his entire academic career since receiving his Harvard Ph.D. in 1959 under adviser George Francis Carrier, has produced only one Ph.D. graduate: Anne Greenbaum in 1981.

(“Capillary Surfaces in Microgravity”, by Paul Concus and Robert Finn, in Jean N. Koster and Robert L Sani, eds., Low-Gravity Fluid Dynamics and Transport Phenomena, 1990, American Institute of Aeronautics and Astronautics, Inc.; and, “Paul Concus”, Mathematics Genealogy Project)

In contrast, from 1986 to 1995 Chorin served as head of the LBNL mathematics department where Concus was a senior scientist, and in that period, from 1987 to 1995, at least 11 Ph.D. students graduated under his Berkeley professorship.

(“Alexandre Chorin”, Lawrence Berkeley National Laboratory; and, “Alexandre Joel Chorin”, Mathematics Genealogy Project)

Keith Miller was better than Paul Concus at producing Ph.D. students, graduating 4 of them in the 1970s after receiving his own Ph.D. in 1963 from Rice University. But during the entire 1980s Miller produced only one Ph.D. student: Steve Oslon in 1986.

(“C. Keith Miller”, Mathematics Genealogy Project)

So shouldn’t Miller have had time to take in another Ph.D. student or two? Not necessarily. The personal experience my classmate Robert Rainsberger related to me – Miller’s rejection of his request to become Miller’s Ph.D. student – revealed something stern yet preferential.

Miller specialized in the finite element method for solving partial differential equations, fluid dynamics in particular, and was the only Berkeley math professor with finite elements as his primary expertise.

Robert was a returning student, with prior U.S. Air Force experience, a 1979 University of Illinois degree, and a background in computational work at the Lawrence Livermore National Laboratory, about 40 miles from Berkeley. He enrolled in the Berkeley mathematics department’s Ph.D. program while continuing to work part-time at LLNL, and we took Kato’s course together; he told me he was interacting with Miller, possibly doing some project if I recall correctly.

Then at some point after I had become Smale’s Ph.D. student, Robert told me that Miller had stated that, if Robert continued to work at LLNL, Miller would not be his Ph.D. adviser, because the national lab was involved in weapons research.

Disappointed, Robert eventually took as his adviser Heinz Cordes, who like Kato was an older professor, with expertise in differential equations and operators but not in computation. A 1987 book by Cordes on differential operators acknowledged Rainsberger’s help in proofreading; and Rainsberger’s thesis was titled, “On L2 Boundedness of Pseudo-Differential Operators”.

(Heinz Otto Cordes, Spectral Theory of Linear Differential Operators and Comparison Algebras, 1987, Cambridge University Press; and, “Robert Bell Rainsberger”, Mathematics Genealogy Project)

Robert’s Ph.D. study turned out to be mostly pure-math labor. Before and after, Robert’s work was in computation, particularly mesh generation applicable to CAD in general, according to his current biography at Stanford:

“Robert Rainsberger spent his earliest 4 years in the U.S. Air Force before entering the University of Illinois to earn his B.S. in mathematics in 1979. He was then employed by Lawrence Livermore National Laboratory for two years as a computer scientist before entered the University of California, Berkeley. In 1988 he received his Ph.D in mathematics. After completing his Ph.D., Dr. Rainsberger returned to Lawrence Livermore National Laboratory to continue working on 3D hexa mesh generation. During this same period, Dr. Rainsberger contracted to Control Data Corporation to develop the first versions of ICEM CFD. In 1991, Dr. Rainsberger founded XYZ Scientific Applications, Inc. where he remains the principle code developer of TrueGrid, President, and CEO of the corporation.”

(“Our People: Robert Rainsberger”, Stanford Composites Manufacturing Innovation Center)

A 2003 presentation he gave at the U.S. National Institute of Standards and Technology showed that one use of Rainsberger’s mesh generation was in the finite element method for fluid dynamics:

“The first part of this expository talk is an introduction on some of the elementary and advanced techniques of mesh generation for finite element analysis. The second part describes a technique to form nearly orthogonal meshes based on the solution to various systems of elliptic partial differential equations in fluid dynamics, hydrodynamics, heat transfer, solid and structural mechanics in order to minimize lower order error terms. …

… He is currently a consultant to NIST Mathematical and Computational Sciences Division on developing finite element analysis codes for applications in the NIST World Trade Center (WTC) investigation project.”

(“MESH GENERATION FOR NON-LINEAR FINITE ELEMENT ANALYSIS”, by Robert Rainsberger, December 4, 2003, Information Technology Laboratory, National Institute of Standards and Technology)
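The “elliptic” technique described in the abstract can be illustrated, in a much-simplified form that is my own toy example and not Rainsberger’s TrueGrid algorithm, by solving discrete Laplace equations for the node coordinates of a structured grid: holding the boundary nodes fixed and repeatedly averaging each interior node with its neighbors tends to smooth and regularize the interior mesh.

```python
# A toy illustration (not TrueGrid's algorithm) of elliptic mesh generation:
# solve discrete Laplace equations for the node coordinates of a structured
# grid by Jacobi iteration, which smooths and regularizes the interior mesh.
import numpy as np

def elliptic_smooth(x, y, iterations=500):
    """Jacobi-style smoothing: each interior node moves toward the average of
    its four neighbours, with the boundary nodes held fixed."""
    for _ in range(iterations):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# Example: start from a deliberately distorted grid on the unit square;
# the distortion vanishes on the boundary, so only interior nodes move.
n = 21
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
x = s + 0.1 * np.sin(2 * np.pi * s) * np.sin(np.pi * t)
y = t + 0.1 * np.sin(np.pi * s) * np.sin(2 * np.pi * t)
x, y = elliptic_smooth(x, y)
```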

The fall of the World Trade Center towers, which Robert studied in 2003, was not like fluid flow, though, but like crumbling rocks.

Some may comment that, given UC Berkeley’s strong anti-war tradition, Keith Miller was probably like Stephen Smale, in this case taking a stand against weapons research.

It appears that way, until one learns about Miller’s next Ph.D. student, Andrew Kuprat, whose 1992 Ph.D. thesis was titled, “Creation and Annihilation of Nodes for the Moving Finite Element Method” – the kind of topic Rainsberger had wanted for his Ph.D. study.

(“Andrew Paul Kuprat”, Mathematics Genealogy Project)

What about Kuprat? Right after receiving his Ph.D. he became a computational scientist at Los Alamos National Laboratory in New Mexico, working extensively in computational physics:

“Dr. Kuprat is an expert at computational geometry, mesh generation, and the solution of partial differential equations using finite elements. His current interests include generation of optimized meshes for computational fluid dynamics simulations on human and animal airway and cardiovascular geometries, efficient schemes for conservatively mapping quantities in multiphysics simulations, and moving finite element methods for materials microstructure modeling. Dr. Kuprat was a primary developer of LaGriT (Los Alamos Grid Toolbox) at Los Alamos between 1994 and 2005. …

Education

  • B.Sc., Mathematics and Physics – University of Victoria, Victoria Canada – 1984
  • Ph.D., Mathematics – University of California, Berkeley – 1992
  • Post-Doc, Plasma Physics – Los Alamos National Laboratory – 1992–1995
  • Post-Doc, Mechanics of Materials – Los Alamos National Laboratory – 1995–1996

Positions and Employment

  • 1996–2005 – Scientist, Los Alamos National Laboratory, Los Alamos, NM
  • 2005–Present – Senior Scientist, Battelle/Pacific Northwest Division (PNNL), Richland, WA”

(“Andrew Kuprat, Ph.D., Adjunct Professor”, The Gene & Linda Voiland School of Chemical Engineering and Bioengineering, Washington State University, Pullman)

Much more involved than LLNL in weapons research, LANL was the original birthplace of the atomic bomb, including the ones exploded over Hiroshima and Nagasaki, Japan, in 1945, and is the world’s leading nuclear weapons lab.

(“The U.S. Nuclear Weapons Complex: Major Facilities”, Union of Concerned Scientists)

Still, some may argue that Kuprat was a math and physics graduate from the University of Victoria, Canada, and so might not have had any prior weapons background – unlike Rainsberger – and that if, with his Berkeley Ph.D., he chose to work at LANL, his adviser Miller might not have expected that.

Well, Miller greeted his new Ph.D. graduate’s LANL job with acceptance and further collaboration, as indicated in a newsletter announcement of a “physics computing” conference in Albuquerque, New Mexico, in 1993:

“Physics Computing ’93

The Division of Computational Physics will host Physics Computing’93 in Albuquerque, New Mexico, May 31 – June 4, 1993. Co-sponsors of the meeting, which will also be known as The 5th International Conference on Computational Physics, are the AIP journal Computers in Physics and the European Physical Society. The venue will be the Albuquerque Convention Center.

Tutorial Subjects

2D Moving Finite Elements: An Adaptive Grid Method for Computational Fluid Dynamics, Alan H. Glasser, Los Alamos National Laboratory, C. Keith Miller, University of California, Berkeley, and Andrew P. Kuprat, Los Alamos National Laboratory”

(“PHYSICS COMPUTING NEWS – SPRING 1993, Newsletter of the Division of Computational Physics”, American Physical Society)

See, Keith Miller went all the way to LANL’s heartland to participate in a tutorial on his specialty, surrounded by the nuclear weapons lab experts.

Even if Andrew Kuprat’s academic origin from Canada’s Victoria carried a magical aura of ‘peace’ – in contrast to Rainsberger’s U.S. air force and Illinois backgrounds – it wouldn’t be sufficient ground to justify his going all the way to become an expert at the world’s leading nuclear weapons lab and still enjoying (his Berkeley Ph.D. adviser’s) blessing of ‘peace’!

Perhaps not coincidentally, UC Berkeley had had a historical role in the invention of nuclear weapons, when the physicist Robert Oppenheimer founded LANL and led the atomic bomb development, as noted in my February 2013 blog post:

“The physicist Robert Oppenheimer, the director of IAS at Princeton with whom von Neumann discussed his pending move in 1956, had hailed from UC Berkeley to become “father of the atomic bomb”, leading the development of nuclear bombs at Los Alamos National Lab founded by him in northern New Mexico; Oppenheimer later also died of cancer, at the age of 63.”

(“Guinevere and Lancelot – a metaphor of comedy or tragedy, without Shakespeare but with shocking ends to wonderful lives (Part 2)”, February 28, 2013, Feng Gao’s Posts – Rites of Spring)

This was the Robert Oppenheimer who, as in Part 2, brought the Chinese mathematician Shiing-shen Chern and his family to the IAS in Princeton in December 1948 when China was on the verge of being taken over by the Communists.

The former Berkeley physicist who had led the development of the atomic bomb later opposed the development of the hydrogen bomb, and presumably for that reason became a victim of McCarthyism:

“In April of 1954, Robert Oppenheimer, the former head of the Manhattan Project, the director of the Institute for Advanced Study, and the most famous scientist in America, was declared a security risk by Eisenhower and stripped of his security clearance in the full glare of national publicity. The ostensible reason was Oppenheimer’s youthful left-wing association, but the real reason, as von Neumann and most scientists testified at the time, was Oppenheimer’s refusal to support the development of the H-bomb.”

(Sylvia Nasar, A Beautiful Mind, 1998, Simon & Schuster)

Hmm, “the most famous scientist in America” wasn’t Albert Einstein? Oh well, at least it wasn’t Norman Levinson, like in Nasar’s book as discussed in Part 2.

J. Robert Oppenheimer’s brother Frank, also a physicist involved in the original atomic bomb development, was also a victim of McCarthyism and became an activist for “nuclear disarmament and peace”:

“Here’s a look at the past. Items have been culled from The Chronicle’s archives of 25, 50, 75 and 100 years ago.

1985

Feb. 3: Frank Oppenheimer, the distinguished physicist who founded San Francisco’s famed Exploratorium, died in his Sausalito home Feb. 2 after a long illness. He was 72. In his varied career, Dr. Oppenheimer pioneered research in radiation and cosmic rays, conducted secret studies on uranium isotope separation during the Manhattan Project of World War II, spent years in university teaching and finally created and directed the unique museum whose imaginative presentation of science has earned it worldwide renown. He was the younger brother of J. Robert Oppenheimer, who won fame as the director of the Los Alamos laboratory, which designed and built America’s first atomic bombs. Frank Oppenheimer was passionately committed to the cause of nuclear disarmament and peace. As a graduate student during the Depression, he briefly joined the Communist Party, and that short-lived membership cost him dearly later in life. In 1949, he was summoned before the House Un-American Activities Committee, then forced to resign his assistant professorship at the University of Minnesota. …”

(“Exploratorium founder Frank Oppenheimer dies”, by Johnny Miller, January 31, 2010, SFGate.com)

So one shouldn’t take the weapons-or-peace logic too orthodoxly, or even too seriously, but should take it with a grain of salt when one realizes that famous activists opposing weapons of mass destruction, like the Oppenheimer brothers, could well be the ones who had invented them in the first place.

On the other hand, the scientific institutions created and/or shaped by these talented physicists could be treasures, like the Exploratorium founded by Frank Oppenheimer at San Francisco’s Palace of Fine Arts:

“After the war, Frank became a physics professor at the University of Minnesota. But in 1949, he was forced to resign as a result of harassment by the House Un-American Activities Committee. Blackballed by McCarthy-era paranoia, Frank was unable to continue his physics research, and spent the next ten years as a cattle rancher in Pagosa Springs, Colorado.

With improvement in the political climate, Frank was offered an appointment at the University of Colorado in 1959. There, he revamped the teaching laboratory, creating a “library of experiments” that was in many ways a prototype for the Exploratorium.

In 1965, while in Europe on a Guggenheim fellowship, Frank explored and studied European museums and became convinced of the need for science museums in the United States that could supplement the science taught in schools. When he returned home, Frank was invited to plan a new branch of the Smithsonian, but he declined, preferring instead to work on what he called his “San Francisco project”— a museum of his own.

Frank proposed to house his new museum in the vacant Palace of Fine Arts in the Marina district of San Francisco. The proposal was accepted by the city, and in 1969, with no publicity or fanfare, the doors opened to Frank’s Exploratorium. Frank nurtured and shaped the growing museum until 1985, when he died from lung cancer.”

(“Dr. Frank Oppenheimer”, The Exploratorium)

The Palace of Fine Arts is the only surviving structure from the 1915 Panama-Pacific International Exposition, hosted in a grandiose and palatial group of buildings made from temporary materials.

(“His Castles Outlive Their Kings: How Cal’s Architect Shaped and Scraped the Skyline”, by Cirrus Robert Wood, November 2, 2015, California Magazine, shared on Arts and the Community, and Fashion Statements)

So getting the famous mathematician Steve Smale, instead of one of the numerical analysts, as my Ph.D. adviser did not feel bad. In fact, in the early 1960s, after he had moved from Berkeley to Columbia and solved the Generalized Poincare Conjecture, bettering the work of Princeton professor John Milnor, Smale was offered a Princeton professorship, with Milnor himself bringing him the offer. But Smale was more interested in negotiating a higher salary at Berkeley in order to return to California.

(Steve Batterson, Stephen Smale: The Mathematician Who Broke the Dimension Barrier, January 2000, American Mathematical Society)

Smale also had fine tastes beyond mathematics, assembling and owning one of the world’s best private collections of mineral specimens.

(“The Very Model of a Modern Mineral Dealer”, May 20, 2015, Priceonomics)

My fellow math Ph.D. student friend William Geller cautioned me that Smale might not have much time for his students; but I felt that I was intellectually independent, anyway.

Smale was to go on sabbatical for the academic year 1984-1985, which he would spend in Paris, France, and so he scheduled my Qualifying Exam to be in the late spring, May 1984, not long before his departure.

In around February-March 1984 I moved out of the apartment shared with Kezheng Li and into an apartment rented by computer science Ph.D. student David Chin, mentioned in my March 2011 blog post:

“By the spring of 1984 I had moved to an apartment in Richmond near Albany and my new roommate David (Ngi) Chin was a Computer Science Ph.D. student specializing in a subfield of Artificial Intelligence – focusing on the role of intelligent agents in natural language systems. An MIT grad from Boston, born in Hong Kong, David played some chess and softball but I only played tennis and volleyball regularly with him. His previous roommate and fellow Computer Science student Vincent Lau had left school early for the computer industry.

By late spring I also passed the Ph.D. qualifying exam, marking the start of research-oriented Ph.D. study under my thesis adviser.”

(March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

The move was related to the upcoming qualifying exam. Kezheng was an active leader in the community of the Berkeley graduate students and visiting scholars from China, and so his place was busy, like an informal social activity center.

Kezheng had kindly offered to share his apartment with me before I had even left China:

“… a computer systems technician at Sun Yat-sen university who was also a friend and South China Teacher’s College’s affiliated middle school alumnus of my roommate “Jie Wang”, was from the circle of kids at South China Institute of Technology and knew a professor there whose son was a fellow Berkeley Math graduate student and former roommate of “Li”’s, and so before my journey a new connection was already made.”

(March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

The apartment was in a gated complex on Durant Avenue, where a small but representative group of math Ph.D. students lived, along with a larger number of Chinese graduate students and some visiting scholars.

Kezheng’s former roommate who had arranged for my sharing was Guojun Liao (廖國鈞), studying for his Ph.D. in the pure-math field of differential geometry. Guojun’s late grandfather had in the 1910s been the president of Guangdong Higher Normal College, one of the colleges that were combined in 1924 to constitute National Guangdong University – I noted in a November 2010 blog post that it was soon renamed Sun Yat-sen University in honor of the father of the Chinese Republican Revolution, whose government founded the university at the provisional national capital Guangzhou, and who had been born in a village only 60 miles away.

(“Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 2) – when violence is politically organized”, November 22, 2010, Feng Gao’s Blog – Reflections on Events of Interest; and, “寻访梅州籍大学校长廖道传 梁启超称他为“嘉应健生””, July 15, 2015, 梅州日报)

In the 1990s, as a professor at the University of Texas at Arlington, Guojun shifted his research focus to mesh generation related to the finite element method, i.e., to the same field as Keith Miller’s. A SIAM 45th anniversary meeting session at Stanford in July 1997, which Guojun co-organized with Paul Zegeling of Utrecht University – where the new Stanford Ph.D. Jack Snoeyink spent a postdoc year in 1990 after getting a UBC tenure-track position I unsuccessfully sought, as in Part 3 – listed 4 presentations: one by R. D. Russell, W. Huang, and Weiming Cao of Simon Fraser University in Canada, one by Keith Miller of UC Berkeley, one by Andrew Kuprat of LANL, and one by Feng Liu of UC Irvine and Guojun G. Liao.

(“Moving-Grid Methods for Partial Differential Equations (Part II of II)”, July 17, 1997, SIAM’s 45th Anniversary Meeting, Stanford University) 

After Guojun, Kezheng’s roommate in the 1981-1982 academic year was math Ph.D. student Xiaolu Wang (王曉麓), who then moved to another unit in the same complex to share with a Caucasian graduate student. After his Ph.D., Xiaolu went to work on the U.S. East Coast and eventually ended up on Wall Street.

After Xiaolu, in the summer before my late August 1982 arrival, Kezheng’s roommate was Zhaowei Meng (孟昭偉) – as spelled by the Chinese Pinyin – who was on his way to attend Stanford’s business school, as mentioned and referred to as “ZWM” in my November 2009 blog post.

(November 22, 2009, Feng Gao’s Blog – Reflections on Events of Interest)

A couple of white math Ph.D. students, my classmate Peter Detre and his wife Catherine Carroll, lived in the apartment complex. In late 1999, when I was looking for work in Silicon Valley, I ran into Catherine, originally from Britain as I recall; she was teaching at Texas A&M University at Kingsville – a different campus of Ron Chen’s alma mater – and said that she and Peter had parted ways, and that Peter had gotten a Yale law degree and was a “practicing lawyer somewhere”. Peter was Canadian and William Bade’s Ph.D. graduate, but I would not have expected to see a New York Times wedding announcement of his – without Catherine.

(“WEDDINGS; Claire O. Finkelstein, Peter A. Detre”, June 5, 1994, The New York Times; and, “ccarroll”, wikidot)

In comparison to where Zhaowei, Xiaolu and Peter later got to, my 1984 move ended only in sharing with a Ph.D. student in computer science – my undergraduate discipline, even if AI was a newer field – and a Chinese American who could speak his family’s native tongue, Taishanese (Toisan), a variant of Cantonese.

Steve Smale was quite pleased to accept me as his first Ph.D. student from the People’s Republic of China – as noted in Part 2 – and one with a prior background in numerical analysis, a field he had recently moved into.

As in Part 2, Smale was a prominent mathematician who had received the Fields Medal, the highest honor of the mathematical community, in 1966 at the International Congress of Mathematicians held that year in Moscow’s Kremlin Palace.

Smale had moved from one area of mathematics to another, discovering and proving original results and founding new theories. In the topics courses I took from him over several years, Smale would present most of his core achievements, in fields ranging from differential manifolds to dynamical systems, from mathematical economics to models of population growth, and from the simplex method in linear programming to Newton’s method for solving equations – with the exception of his early work in topology and the generalized Poincare conjecture.

But from time to time criticisms could be heard from some experts, especially ones in more applied fields, as to whether Smale really understood their issues when doing mathematics there.

For one thing, Smale concentrated on the mathematics, and the experts he conversed with tended to be the highly mathematical types. For example, in 1981 he published an important paper studying some mathematics related to numerical analysis, and acknowledged several persons:

“1. The main goal of this account is to show that a classical algorithm, Newton’s method, with a standard modification, is a tractable method for finding a zero of a complex polynomial. Here, by “tractable” I mean that the cost of finding a zero doesn’t grow exponentially with the degree, in a certain statistical sense. …

Before stating the main result, we note that the practice of numerical analysis for solving nonlinear equations, or systems of such, is intimately connected to variants of Newton’s method; these are iterative methods and are called fast methods and generally speaking, they are fast in practice. The theory of these methods has a couple of components; one, proof of convergence and two, asymptotically, the speed of convergence. But, not usually included is the total cost of convergence.

There is a final comment on the spirit of the paper. I feel one problem of mathematics today is the division into the two separate disciplines, pure and applied mathematics. Oftentimes it is taken for granted that mathematical work should fall into one category or the other. This paper was not written to do so.

I would like to acknowledge useful conversations, with a number of mathematicians including L. Blum, S. S. Chern, G. Debreu, D. Fowler, W. Kahan, R. Osserman, R. Palais, G. Schober and H. Wu.

Special thanks are due Moe Hirsch and Mike Shub.”

(“The fundamental theorem of algebra and complexity theory”, by Steve Smale, 1981, Volume 4, Number 1, Bulletin of the American Mathematical Society)
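To make the subject of the quote concrete, here is a small Python sketch of Newton’s method for a complex polynomial, with the “standard modification” represented, purely for illustration, by a damped step whose size is halved until the residual decreases; this damping rule is my own simple stand-in, not the specific modification analyzed in Smale’s 1981 paper.

```python
# A small sketch of Newton's method for a complex polynomial, with a simple
# damping ("modified Newton") safeguard. The damping rule is my own choice
# for illustration, not the specific modification in Smale's 1981 paper.
import numpy as np

def modified_newton(coeffs, z0, tol=1e-12, max_iter=200):
    """Find a zero of the polynomial with the given coefficients
    (highest degree first), starting from the complex initial guess z0."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = complex(z0)
    for _ in range(max_iter):
        fz = p(z)
        if abs(fz) < tol:
            return z
        step = fz / dp(z)
        h = 1.0
        # Damp the step until the residual decreases (a crude safeguard).
        while abs(p(z - h * step)) >= abs(fz) and h > 1e-8:
            h *= 0.5
        z = z - h * step
    return z

# Example: a zero of z^3 - 1 from a generic starting point.
root = modified_newton([1, 0, 0, -1], z0=0.4 + 0.9j)
print(root, abs(root**3 - 1))
```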

For the quote above I have selected several points Smale made in the paper: with a standard modification, Newton’s method is a “tractable” algorithm in a statistical sense; the “total cost of convergence” had not been well addressed by the theory of numerical analysis; and a problem of mathematics today was “the division into the two separate disciplines, pure and applied mathematics”, a division his paper was not written to fall into.

These were good points and worthwhile goals to pursue. But looking at the names Smale acknowledged, 9 for “useful conversations” plus 2 with special thanks to, a total of 11, only William Kahan was in the field of numerical analysis, even though other established Berkeley professors were acknowledged, such as mathematicians Shiing-shen Chern and Hung-Hsi Wu, and mathematical economist Gérard Debreu.

If Smale had not actually interacted with numerical analysts much, how could he be sure, given that his earlier background had not been in numerical analysis, that the progress he made would be useful for that field?

But the separation may have been the other way around; in other words, when Majda asserted that Smale knew nothing about numerical analysis, it wasn’t due to Smale’s lack of trying but to the numerical analysts’ dismissive attitudes toward his efforts.

The Ph.D. oral qualifying exam committee would consist of several professors within the department, including the adviser, and one from outside the department. For my committee, Smale suggested: Prof. Charles Pugh, a close colleague in his former field of dynamical systems, whose graduate course on that subject I was taking; Alexandre Chorin, for my background in numerical analysis; and an engineering professor; and I suggested Andrew Majda.

The two numerical analysts took a little persuading.

I went to Chorin’s office to ask and his reaction, which I can only recall vaguely, was like, “You want to study with Smale; then why do you ask me to be on your committee?” I explained that my undergraduate background had been in numerical analysis and I would be doing related research, and Chorin responded with something like, “Smale doesn’t understand numerical analysis”, but grudgingly agreed.

This would become a pattern in my remaining years at UC Berkeley when interacting with the professors in numerical analysis: I had to emphasize my prior background for them to take me more seriously, and also had to ignore their comments about Smale.

I asked Majda to be on my committee and also to write a support letter for my application for a graduate fellowship, and his response was like, “since you want to study with Smale, you should not ask me for either”. After my explanation that his seminar course had been important and my research would be related to numerical analysis, Majda grudgingly agreed, wrote a letter on the spot, handed it to me – it was supposed to be confidential – and said, “this is the last I can do for you; from now on you should not ask me for more”.

With Majda’s letter, and confidential letters from Kato and Smale if I remember correctly, I later received my only fellowship, the Earl C. Anthony graduate fellowship, for the academic year 1984-1985.

It was a fellowship, even if not as prestigious or as large as the one Kezheng received – either a UC Regents or a UC Berkeley Chancellor’s graduate fellowship – not to mention the multi-year National Science Foundation fellowship my friend Will Geller received. But being sensitive, I sometimes wondered about the “C.” in my fellowship’s name.

In the spring of 1984, my former undergraduate adviser Yuesheng Li came by Berkeley on his way back to Sun Yat-sen University to assume its presidency. Prof. Li had spent some time at Texas A&M where his former master’s student Ron Chen was and where Prof. C. K. Chui and Prof. Larry Schumaker were peers of his in spline function theory and approximation theory as mentioned earlier.

Li told me he had also visited the mathematician Garrett Birkhoff at Harvard – son of the mathematician George David Birkhoff, for whom the Birkhoff Prize for applied mathematics, which Wisconsin’s Paul Rabinowitz would later win as noted above, had been named.

My Qualifying Exam was a near disaster. Since childhood I had had problems with shyness and nervousness, although I was okay with teaching, when preparation ahead of time enabled me to present the content in a structured and orderly fashion. When caught unprepared and nervous, I could for the moment be at a loss. An oral exam in front of professors who could determine my fate added to my anxiety.

A part of the exam time was spent on answers of mine that went in wrong directions. In one instance, Chorin asked a basic question but I misunderstood it as a question about something less well-defined and went into a long discussion, arguing with him before realizing it was my misinterpretation. In another instance, Pugh asked about the proof of a theorem from his dynamical systems course; I got halfway into something before realizing it was the proof of another theorem, and said, oh, I should do it over; Pugh smiled, and the committee deemed it enough time spent on that question.

After the exam I waited in the hallway for the committee’s decision; eventually Majda emerged first from the room, said with a stern look, “congratulations you passed”, and walked away; then the others came out and congratulated me. A few days later I met Majda, thanked him, and he said, “what you did was crazy. Your adviser was the only one who insisted on letting you pass”.

The Prelim Exam had been easier for me, but the Qualifying Exam was harder, partly through my own fault, and it became etched in my mind that I passed because Smale was about to go to Paris for his sabbatical and told the other committee members he had no time for a second try at the exam – normally several months down the road.

For the Ph.D. degree two foreign languages were required, satisfied by passing exams in translating mathematical literature. I chose French first, passing easily as most French math terms have similar English versions. Then I took the German exam and failed, and subsequently spent much more time learning the language before passing on the second try. I later wondered if I could have gotten my Ph.D. had I chosen the third foreign-language option, Russian, with its alien alphabet – I had looked at it back in my teenage days, as my father knew some, and of course my undergraduate adviser had studied at Moscow State University.

After passing these exams a Ph.D. student would file for Ph.D. candidacy, in Mathematics or Applied Mathematics depending on the research field. Smale could be viewed as in math or applied math – an earlier quote from his 1981 paper showed he did not consider his work separated by the two disciplines – and the department’s graduate secretary, Janet Yonan, also said that as Smale’s student I could choose one or the other; so I filed for candidacy in “Applied Mathematics”. But when the certificate was issued to me it was in “Mathematics”; I went to Janet, and she said, more or less, that it had just happened this way and that if I wanted a change I would need to write a request letter to the department.

Given the hostility toward Smale I had encountered from the numerical analysts, I had the sense that candidacy in the applied mathematics category would need to be approved by someone close to them, and especially to Chorin – someone like Prof. Alberto Grünbaum, soon to become director of the Center for Pure and Applied Mathematics, 1985-1989, and later department chairman, 1989-1992. Originally from Argentina and with a Rockefeller University Ph.D., Grünbaum had taught at NYU’s Courant Institute – the “mother ship” Chorin, Hald and Majda had all gone through, either for Ph.D. study or for teaching.

(“Francisco Alberto Grünbaum”, Department of Mathematics, University of California, Berkeley)

I decided it wasn’t worth the trouble, that mathematics was fine.

My classmate Mei Kobayashi had Ole Hald as her Ph.D. adviser, as described earlier, and her Ph.D. candidacy and later Ph.D. degree were in applied mathematics. At Berkeley most math graduate students went for their Ph.D. directly but Mei, whose Princeton bachelor’s degree had been in chemistry, felt the need to also get a master’s degree in mathematics prior to her Ph.D. in applied mathematics.

(“Class Notes”, November 11, 1987, Volume 88, Princeton Alumni Weekly; and, “Mei Kobayashi”, prabook)

Another classmate, Paul Wright, previously cited in a quote in Part 3, was one of two African-American math Ph.D. students I knew at Berkeley – Chorin’s student Nate Whitaker being the other – and he also earned a master’s degree in mathematics, which he told me was useful because, being originally from Jamaica, he had only U.S. permanent residency at the time. Paul did his master’s thesis under Smale, as I recall, and then chose Grünbaum as his Ph.D. adviser; so like Mei’s, Paul’s Ph.D. was most likely in applied mathematics.

(“Paul Emerson Wright”, Mathematicians of the African Diaspora, Department of Mathematics, University at Buffalo; and, Nathaniel Dean, ed., African Americans in Mathematics: DIMACS Workshop, June 26-28, 1996, 1997, American Mathematical Society)

Grünbaum’s field, partial differential equations with soliton solutions, was among the trendiest in the department, like Chorin’s, with a large number of graduate student followers. A more recent presentation by Grünbaum described it as follows:

“The study of nonlinear partial differential equations of mathematical physics such as those of Korteweg-deVries, Toda, nonlinear Schroedinger, etc starting around 1970 has given a unifying push to several parts of mathematics … All of these equations exhibit solitons, a nonlinear version of the superposition principle going back at least to Fourier in the case of linear equations. I run myself into this enchanted land while studying a concrete problem in medical imaging: X-ray tomography with a limited angle of views, and I am definitely not an expert on this grand scheme.”

(“Colloquium – F. Alberto Grünbaum – Soliton mathematics as a unifying force”, April 11, 2012, Department of Mathematics, Boğaziçi University)
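As a concrete specimen of the “enchanted land” Grünbaum describes – a standard textbook example of my choosing, not taken from his talk – the Korteweg-de Vries equation and its one-soliton solution are

$$ u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u(x, t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\right), $$

a solitary wave of speed $c > 0$ whose amplitude is proportional to its speed; multi-soliton solutions pass through one another and re-emerge with their shapes intact, the “nonlinear superposition” alluded to in the quote.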

Once, a student who regularly attended Chorin’s seminars commented to me that several famous computational fluid dynamics methods, including Chorin’s, were run on the same equations and same data, and their resulting flow graphs were drastically different from one another; “but it was the same flow so at most one of them was correct”, he said.

Chorin’s Ph.D. student Jim Shearer told me he found the soliton tunneling effect, presented in Grünbaum’s graduate course on nonlinear partial differential equations, quite hard to believe – that the same soliton could disappear and reappear on different sides of a barrier.

Having come from Communist China, during my Berkeley years I was keen on familiarizing myself with the broader modern intellectual curriculum. The periods after the Preliminary Exam and after the Qualifying Exam were times when I could afford more time for studying other subjects.

As mentioned in Part 2, besides the ideological indoctrination of the Cultural Revolution, my father was a university philosophy teacher, and so I had read much of Mao Zedong’s published works, a selection of which I more recently surveyed in a blog post; I had also read some of the works of Vladimir Lenin, Karl Marx, Friedrich Engels and Joseph Stalin.

(“Power, avengement, ideological cementation — Mao Zedong’s class politics in great forward leaps, tactical concessions”, April 6, 2015, Feng Gao’s Posts – Rites of Spring)

In another recent blog post, originally posted on my website in 2011, I mentioned that I got to read some Chinese classics starting at the elementary-school age, during the Cultural Revolution time.

(“Some Chinese Cultural Revolution politics and life in the eyes of a youth”, November 7, 2015, Feng Gao’s Posts – Rites of Spring)

In fact, Sun Yat-sen University’s library collections were among the best in China; as I remember my father saying, its several million volumes might be fewer than those of the National Library and of the Peking University Library, but probably of no others. In my November 2010 blog post, I mentioned that some Western reports of the burning of library books during the Cultural Revolution were inaccurate.

(November 22, 2010, Feng Gao’s Blog – Reflections on Events of Interest)

In my middle-school days in the early to mid-1970s, using my father’s library card I read a good selection of Chinese editions of Western classics, including Greek philosophy, Roman history, and the Renaissance and Enlightenment thinkers. What the Chinese publications really lacked was modern Western thought, most of it not available until the 1980s.

So at Berkeley, sometime during 1983-1985, I audited several upper-division undergraduate courses in the philosophy department, covering the philosophy of mind, meaning, language, etc., one probably taught by Prof. John Searle. I recall writing home to tell my father that the UC Berkeley philosophy department’s graduate courses were mostly seminars, and that the Ph.D. curriculum used the upper-division courses as requirements.

One philosophy course I audited was taught by a visiting lecturer from Harvard University, on the Austrian philosopher Ludwig Wittgenstein. The lecturer was unusually energetic – reminding me a little of Lenin – and I sometimes saw him take fast strides between campus locations, cutting a figure that stood out from the culturally more reserved West Coast professors. I forgot his name at some point but believe it was Warren Goldfarb, today the Walter Beverly Pearson Professor of Modern Mathematics and Mathematical Logic and a founder of Harvard’s gay and lesbian movement in 1984, although I have not found public information about his visiting Berkeley – an internationally leading city of gay and lesbian movements – around that time.

(“Stories Transform Goldfarb Into Activist”, by Anna D. Wilde, June 10, 1993, and, “Nine Secondary Fields Approved”, by Peter R. Raymond, November 17, 2006, The Harvard Crimson; and, “Who’s Who: Members of the Board and leadership team of the HGSC”, Harvard Gender and Sexuality Caucus)

One of my keenest interests in my senior middle-school and undergraduate university years was the history of science and the philosophy of science. Unfortunately, neither seemed to be among the research focuses at the world-famous UC Berkeley, but I continued reading without auditing courses, including books by Karl Popper and the radical Berkeley philosopher Paul Feyerabend, and attended the occasional seminar talk of relevance.

On one occasion, the MIT philosopher of science Thomas Kuhn returned to give a lecture at Berkeley, where he had been a professor when he published his most famous book, The Structure of Scientific Revolutions, before moving to Princeton and then MIT. So I got to hear the author of one of my two favorite books from my undergraduate years – the other being Mathematical Thought from Ancient to Modern Times by Morris Kline.

Kuhn stated in the 1962 book:

“If science is the constellation of facts, theories, and methods collected in current texts, then scientists are the men who, successfully or not, have striven to contribute one or another element to that particular constellation. Scientific development becomes the piecemeal process by which these items have been added, singly and in combination, to the ever growing stockpile that constitutes scientific technique and knowledge. And history of science becomes the discipline that chronicles both these successive increments and the obstacles that have inhibited their accumulation. …”

(Thomas S. Kuhn, The Structure of Scientific Revolutions, 1962, Volume II, Number 2, International Encyclopedia of Unified Science, The University of Chicago Press)

The cumulative growth of scientific knowledge described above is probably how the public typically thinks of scientific research. But according to Kuhn, some historians of science found it untrue:

“In recent years, however, a few historians of science have been finding it more and more difficult to fulfil the functions that the concept of development-by-accumulation assigns to them. As chroniclers of an incremental process, they discover that additional research makes it harder, not easier, to answer questions like: When was oxygen discovered? Who first conceived of energy conservation? Increasingly, a few of them suspect that these are simply the wrong sorts of questions to ask. Perhaps science does not develop by the accumulation of individual discoveries and inventions. Simultaneously, these same historians confront growing difficulties in distinguishing the “scientific” component of past observation and belief from what their predecessors had readily labeled “error” and “superstition.” … If these out-of-date beliefs are to be called myths, then myths can be produced by the same sorts of methods and held for the same sorts of reasons that now lead to scientific knowledge. If, on the other hand, they are to be called science, then science has included bodies of belief quite incompatible with the ones we hold today.”

(Thomas S. Kuhn, 1962, The University of Chicago Press)

Indeed, as I have described in the case of the Berkeley math professors’ research in numerical analysis, which I was learning at the time, they could be dismissive of one another, and, though aware that their work was not always scientific, each of them and their followers marched on.

Kuhn wrote that historians of science began to pay attention to such scientific research:

“… Gradually, and often without entirely realizing they are doing so, historians of science have begun to ask new sorts of questions and to trace different, and often less than cumulative, developmental lines for the sciences. Rather than seeking the permanent contributions of an older science to our present vantage, they attempt to display the historical integrity of that science in its own time. They ask, for example, not about the relation of Galileo’s views to those of modern science, but rather about the relationship between his views and those of his group, i.e., his teachers, contemporaries, and immediate successors in the sciences. Furthermore, they insist upon studying the opinions of that group and other similar ones from the viewpoint—usually very different from that of modern science—that gives those opinions the maximum internal coherence and the closest possible fit to nature. …”

(Thomas S. Kuhn, 1962, The University of Chicago Press)

Kuhn referred to this type of scientific research as “normal science”, i.e., it was actually the norm:

“… Normal science, the activity in which most scientists inevitably spend almost all their time, is predicated on the assumption that the scientific community knows what the world is like. Much of the success of the enterprise derives from the community’s willingness to defend that assumption, if necessary at considerable cost.”

(Thomas S. Kuhn, 1962, The University of Chicago Press)

“Normal science” tended to suppress other things that contradicted it:

“Normal science, for example, often suppresses fundamental novelties because they are necessarily subversive of its basic commitments.”

(Thomas S. Kuhn, 1962, The University of Chicago Press)

I can agree with that, given my experience discussed in earlier Parts: at the University of British Columbia even the faculty association would take part in suppressing issues that might unravel the politically correct appearance of academia.

But Kuhn argued that such suppression would not last very long:

“Nevertheless, so long as those commitments retain an element of the arbitrary, the very nature of normal research ensures that novelty shall not be suppressed for very long. … The extraordinary episodes in which that shift of professional commitments occurs are the ones known in this essay as scientific revolutions. They are the tradition-shattering complements to the tradition-bound activity of normal science.”

(Thomas S. Kuhn, 1962, The University of Chicago Press)

“So long as those commitments retain an element of the arbitrary” – that was a big “so long as”.

Prof. Kuhn might not have expected that contemporary political correctness could easily give a “revolutionary” label to a vested power status, thus usurping the notion of “scientific revolutions”. In my case, as in Part 1, the faculty association was official, not “arbitrary”, and a stereotypically anti-Reagan and anti-Thatcher posture by its president William Bruneau was, by political correctness, “revolutionary” enough to deny me any political merit.

Thomas Kuhn’s The Structure of Scientific Revolutions has been purchased or read by more people in the world than any other book on the history and philosophy of science – books by Aristotle and René Descartes included:

“Thomas S. Kuhn was the most important, and the most famous, historian and philosopher of science within living memory. The Structure of Scientific Revolutions has been read, or purchased, by more people than any book on either subject ever written—the closest competitors in philosophy must be the Posterior Analytics and the Discourse on Method…”

(“Thomas S. Kuhn 1922-1996”, by N. M. Swerdlow, 2013, U.S. National Academy of Sciences)

But in his Berkeley talk that I attended, sometime in 1986-1988, probably 1987, Kuhn did not speak on scientific revolutions. In the room, packed to its full seating and standing capacity, I noticed a familiar figure standing not far from Prof. Kuhn at the lectern: Betul Tanbay, a fellow math Ph.D. student from Istanbul, Turkey, and an officemate of my friends Will Geller and Samy Zafrany, both Jewish and all three studying mathematical logic at the time.

Of the three, Samy, an Ethiopian Israeli Jew with a congenial personality, had been my officemate during my first period of TA stints. But by no later than 1985 my assigned officemates had become the more senior Ph.D. students Jeff McIver and Steve Pomerantz, both interested in financial banking; Steve studied under Prof. Murray Protter, a colleague of Kato’s in partial differential equations, and Jeff under Prof. Jack Wagoner, who would later also be Will’s adviser and the department chairman after Grünbaum, though in 1987 Will was, as I recall, probably studying mathematical logic under Prof. Leo Harrington.

Will was the fellow math graduate student I knew best outside of the Chinese student circle, as I recalled in a January 2013 blog post:

“… In UC Berkeley student days a classmate and good friend of mine had been William Geller, whose fiancee and later wife, Stephanie Montague, received her Ph.D. in 1989 from California School of Professional Psychology…

From an Italian American garment business family in New York City, Stephanie had liked to say, “My family name is that of Romeo’s in Romeo and Juliet”.”

(“Guinevere and Lancelot – a metaphor of comedy or tragedy, without Shakespeare but with shocking ends to wonderful lives (Part 1)”, January 29, 2013, Feng Gao’s Posts – Rites of Spring)

Prof. Alfred Tarski, founder of UC Berkeley’s group in mathematical logic and the methodology of science, who had died in 1983, had been on the advisory committee that published Kuhn’s book in 1962. Tarski and Kurt Gödel have sometimes been regarded as the two most influential mathematical logicians of the 20th century.

(Thomas S. Kuhn, 1962, The University of Chicago Press; and, “Book Review: Alfred Tarski. Life and Logic”, by Hourya Benis Sinaceur, August 2007, Volume 54, Number 8, Notices of the American Mathematical Society)

I did not really study mathematical logic at Berkeley. As an undergraduate I had acquainted myself with some basics of Gödel’s work, which shed light on mathematics as a discipline by fundamentally clarifying the scope of formal mathematical systems; but I had done so through independent reading, such as a book by NYU Courant Institute’s Prof. Martin Davis. My SYSU roommate Jie Wang (王潔) and his master’s adviser, the Soviet Union-educated professor Guangkun Hou (侯廣坤), “Teacher Hou” as in my March 2011 blog post, did some research in that field.

(Martin Davis, Computability and Unsolvability, 1958, McGraw-Hill; and, March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

Another intellectual subject of my interest, which I became fascinated with at Berkeley, was cultural anthropology, especially structuralism. I spent time reading literature available at the anthropology department library.

In the late 19th century and early 20th century, the Columbia University anthropologist Franz Boas was the champion of cultural relativism:

“Boas outlined his views denying the importance of nature and emphasizing the influence of culture in several books, including The Mind of Primitive Man, published in 1911 and now considered a classic. Boas argued that the development of Western civilization was not the consequence of intrinsic genius on the part of the white race but simply owing to favorable circumstances. Other cultures could not be termed inferior, he declared, only different. …

Boas’s arguments were greatly strengthened because of a kind of linguistic revolution he launched in American social science by his novel use of the term “culture.” Before the nineteenth century, European writers spoke only of “civilization.” The term civilization comes from the Latin word civis meaning citizen. Used in the singular, civilization implies a certain form of government and a certain level of achievement. In the nineteenth century, Matthew Arnold used the term “culture” in contrast with “civilization.” … The word culture comes from the Latin term cultura meaning to grow or cultivate. “Culture” arises out of “agriculture.” Arnold used culture in an elevated sense, however; culture was represented by the high traditions of Athens and Jerusalem. …

Boas, however, used a new definition, one that was initiated by the English anthropologist Edward Tylor, who defined culture as the “knowledge, belief, arts, morals, customs and any other capabilities and habits acquired by man as a member of society.” … Only some people may be considered to have civilization, but all people in the Boasian sense have culture, in that they have customs and beliefs. … Through the democratization of the term, Boas found it much easier to suggest the essential relativity of cultures… Over time, even Americans who did not espouse relativism began to speak of “culture” in the Boas sense of the term, which is the way we commonly use the term today.

Historically, one of the strongest arguments for Western civilizational superiority has been the spectacular political and economic successes of modern industrial society. According to George Stocking in The Ethnographer’s Magic, Boas qualified but never abandoned his belief in the concepts of modernity, technology, rationality, and civilization. … Boas was not a strict cultural relativist, although he moved increasingly in that direction over the years. He emphasized that his main argument with the scientific racists was not that they were wrong, but that their case was unproven.”

(Dinesh D’Souza, The End of Racism: Finding Values In An Age Of Technoaffluence, 1995, Simon & Shuster)

UC Berkeley’s anthropology department was founded at the beginning of the 20th century by Prof. Alfred Kroeber, and strongly influenced by Kroeber and Prof. Robert Lowie:

“Lowie was attracted to anthropology because it represented intellectual fulfillment without the difficulties of physical manipulation of objects. He was also no doubt attracted to it because Boas represented a liberal point of view and had devoted himself to fighting the prejudices directed toward Jews and other ethnic and racial minorities as well as toward the teaching of anthropology. Lowie never became a political activist but his sympathies were definitely on the liberal side and he wrote extensively on racist problems.

During 1917—1918 Lowie was invited to become visiting lecturer in anthropology at the University of California at Berkeley by A. L. Kroeber, who had founded the department fifteen years earlier. In 1921, Lowie was appointed a permanent member of the staff at Berkeley and remained such until his retirement, although he held many visiting professorships and lectureships.”

(“Robert Harry Lowie 1883—1957”, by Julian H. Steward, 1974, U.S. National Academy of Sciences)

Kroeber and Lowie were former Ph.D. students of Franz Boas, and carried cultural relativism further, to a level free of racism:

“As often happens with an influential teacher, however, Boas’s students extended his principles to construct a radically new framework for understanding race in the modern world. The names of Boas’s students at Columbia University read like a Who’s Who of early American anthropology: Margaret Mead, Ruth Benedict, … Alfred Kroeber, Robert Lowie, …

  • Alfred Kroeber and Robert Lowie insisted that culture should be studied entirely independent of biology or heredity, which went far beyond anything Boas wrote.
  • Margaret Mead virtually denied that human nature had anything to do with heredity: “Human nature is almost unbelievably malleable, responding . . . to contrasting cultural conditions.”
  • …”

(Dinesh D’Souza, 1995, Simon & Shuster)

Also influenced by Franz Boas, the French anthropologist Claude Lévi-Strauss took a different intellectual route that he deemed more scientific: discovering and analyzing hidden social relational patterns in human cultures, even the most primitive ones. In his influential book that I read, Structural Anthropology, Lévi-Strauss compared structural anthropology to the study of language grammars, crediting the motivation and inspiration to Franz Boas but asserting that the anthropological study was vastly underdeveloped:

“… We propose to show that the fundamental difference between the two disciplines is not one of subject, of goal, or of method. They share the same subject, which is social life; the same goal, which is a better understanding of man; and, in fact, the same method, in which only the proportion of research techniques varies. They differ, principally, in their choice of complementary perspectives: History organizes its data in relation to conscious expressions of social life, while anthropology proceeds by examining its unconscious foundations.

Boas must be given credit for defining the unconscious nature of cultural phenomena with admirable lucidity. By comparing cultural phenomena to language from this point of view, he anticipated both the subsequent development of linguistic theory and a future for anthropology whose rich promise we are just beginning to perceive. He showed that the structure of a language remains unknown to the speaker until the introduction of a scientific grammar. Even then the language continues to mold discourse beyond the consciousness of the individual, imposing on his thought conceptual schemes which are taken as objective categories. Boas added that “the essential difference between linguistic phenomena and other ethnological phenomena is, that the linguistic classifications never rise to consciousness, while in other ethnological phenomena, although the same unconscious origin prevails, these often rise into consciousness, and thus give rise to secondary reasoning and to reinterpretations.” …

In the light of modern phonemics we can appreciate the immense scope of these propositions, which were formulated eight years before the publication of Cours de linguistique générale by Ferdinand de Saussure, which marked the advent of structural linguistics. But anthropologists have not yet applied these propositions to their field. …”

(Claude Lévi-Strauss, trans. by Claire Jacobson and Brooke Grundfest Schoepf, Structural Anthropology, 1963, Basic Books, Inc.)

Referring to the social relational patterns as “social structure”, Lévi-Strauss ambitiously declared that understanding a society’s social structure would lead to, in particular, comprehension of the culture and prediction of the results of changes:

“The term “social structure” has nothing to do with empirical reality but with models which are built up after it. This should help one to clarify the difference between two concepts which are so close to each other that they have often been confused, namely, those of social structure and of social relations. It will be enough to state at this time that social relations consist of the raw materials out of which the models making up the social structure are built, while social structure can, by no means, be reduced to the ensemble of the social relations to be described in a given society. …

The question then becomes that of ascertaining what kind of model deserves the name “structure.” This is not an anthropological question, but one which belongs to the methodology of science in general. Keeping this in mind, we can say that a structure consists of a model meeting with several requirements.

First, the structure exhibits the characteristics of a system. It is made up of several elements, none of which can undergo a change without effecting changes in all the other elements.

Second, for any given model there should be a possibility of ordering a series of transformations resulting in a group of models of the same type.

Third, the above properties make it possible to predict how the model will react if one or more of its elements are submitted to certain modifications.

Finally, the model should be constituted so as to make immediately
intelligible all the observed facts.”

(Claude Lévi-Strauss, 1963, Basic Books, Inc.)

Lévi-Strauss gave credit to Kroeber and Lowie, as well as to other cultural anthropologists, for the structuralist aspects of their studies.

Credit was given to Kroeber for his studies of women’s fashion:

“Some of the researches of Kroeber appear to be of the greatest importance in suggesting approaches to our problem, particularly his work on changes in the styles of women’s dress. Fashion actually is, in the highest degree, a phenomenon that depends on the unconscious activity of the mind. We rarely take note of why a particular style pleases us or falls into disuse. Kroeber has demonstrated that this seemingly arbitrary evolution follows definite laws. These laws cannot be reached by purely empirical observation, or by intuitive consideration of phenomena, but result from measuring some basic relationships between the various elements of costume. The relationship thus obtained can be expressed in terms of mathematical functions, whose values, calculated at a given moment, make prediction possible.”

(Claude Lévi-Strauss, 1963, Basic Books, Inc.)

And credit was given to Lowie for his kinship studies:

“… When he became active in research as well as in theoretical ethnology, the latter field was fraught with philosophical prejudices and an aura of sociological mysticism; therefore, his paramount contribution toward assessing the subject matter of social anthropology has sometimes been misunderstood and thought of as wholly negative. … However, it is Lowie who, as early as 1915, stated in modern terms the role of kinship studies in relation to social behavior and organization: “Sometimes the very essence of social fabric may be demonstrably connected with the mode of classifying kin.” …

… Thus he was laying the foundations for a structural analysis of kinship on two different levels: that of the terminological system, on the one hand, and, on the other, that of the correlation between the system of attitudes and terminology, thus revealing which later on was to be followed by others.”

(Claude Lévi-Strauss, 1963, Basic Books, Inc.)

Lévi-Strauss compared his structuralist views of anthropology to Karl Marx’s views on history, on the analysis of societies and on primitive societies:

“… the famous statement by Marx, “Men make their own history, but they do not know that they are making it,” justifies, first, history and, second, anthropology. At the same time, it shows that the two approaches are inseparable.

Marx himself, therefore, suggests that we uncover the symbolic systems which underlie both language and man’s relationship with the universe. …

If we grant, following Marxian thought, that infrastructures and superstructures are made up of multiple levels and that there are various types of transformations from one level to another, it becomes possible—in the final analysis, and on the condition that we disregard content—to characterize different types of societies in terms of the types of transformations which occur within them. …

Actually, Marx and Engels frequently express the idea that primitive, or allegedly primitive, societies are governed by “blood ties” (which, today, we call kinship systems) and not by economic relationships. If these societies were not destroyed from without, they might endure indefinitely. …”

(Claude Lévi-Strauss, 1963, Basic Books, Inc.)

Professor Claude Lévi-Strauss sounded more Marxist than my father later did; my father, as mentioned in Part 2, started as a Chinese literature student and ended as a professor in the history of Marxist philosophy.

(November 7, 2015, Feng Gao’s Posts – Rites of Spring; and, “High [Gao] Qiyun Self Selection (Chinese Edition)”, by Gao Qiyun, 2000, Abe Books)

Lévi-Strauss also credited the mathematician John von Neumann’s game theory with bringing the economist and the anthropologist closer together, and closer to Marxian thought:

“The complete upheaval of economic studies resulting from the publication of Von Neumann and Morgenstern’s book ushers in an era of closer cooperation between the economist and the anthropologist, and for two reasons. First—though economics achieves here a rigorous approach—this book deals not with abstractions such as those just mentioned but with concrete individuals and groups which are represented in their actual and empirical relations of cooperation and competition. Surprising though the parallel may seem, this formalism converges with certain aspects of Marxian thought.”

(Claude Lévi-Strauss, 1963, Basic Books, Inc.)

Reading cultural anthropology on my own without auditing courses meant that I missed two public lectures given in September 1984 by Lévi-Strauss, who was at Berkeley for the 1984-1985 academic year as the Charles M. and Martha Hitchcock Visiting Professor.

(“UCSF News”, 1984, The University of California, San Francisco; and, “Charles M. and Martha Hitchcock Lectures”, Berkeley Graduate Lectures)

But I did not miss out completely, because later, sometime in 1986-1988, Prof. Lévi-Strauss gave a huge public lecture at Zellerbach Hall, the leading performing-arts venue at UC Berkeley, and I attended.

I have found one online reference to that lecture, albeit an anecdotal one by a former Berkeley student:

“The first time I set foot into this huge hall was in my freshman year of college, to hear a talk by Claude Levi-Strauss.  Sure, I thought, I’d love to hear the founder of Levi’s!  I wear Levi’s after all!  Actually, I went b/c I had met a hot anthro major in a miniskirt who invited me to the talk, for, truth be told, I don’t give a whit about the jeans industry.

I found soon that Claude Levi-Strauss, among the last great intellectuals of his generation, right up there with Michel Foucault (who did NOT write Foucault’s Pendulum, apparently), has nothing to do with denim.

That didn’t keep the 5 level hall from being completely full that warm afternoon.  And I don’t quite know whatever happened to the lovely anthropologist in the miniskirt.”

(“Cal Performances: Recensioni Consigliate”, by T. W., January 25, 2007, yelp)

Michel Foucault, mentioned above, was a famous French philosopher who became a visiting professor at UC Berkeley in the 1980s, contracted HIV/AIDS in the San Francisco Bay Area and died in France in June 1984:

“… The original reports about the cause of death were ambiguous and contradictory. … Foucault’s fondness for San Francisco bath houses was widely discussed at the time. What we knew about AIDS, however, was not entirely clear. Nor was it clear what exactly was at stake in thinking AIDS was the cause of Foucault’s death. In 1984, it was possible (it still seems plausible, today) to believe in an AIDS conspiracy, in campaigns of disinformation disseminating news about the “gay plague,” and in a practiced neglect of AIDS cases because they were reported by homosexuals. …”

(“Fact and Fiction: Writing the Difference Between Suicide and Death”, by John Carvalho, 2006, Volume 4, Contemporary Aesthetics)

That was the same year in which philosophy professor Warren Goldfarb co-founded Harvard’s gay and lesbian movement; if I am not mistaken, sometime in that period Goldfarb was a visiting lecturer at Berkeley whose course I audited.

My experience with that Lévi-Strauss lecture had a similarity to the Cal freshman’s quoted above, which helps me remember when it took place: waiting in line at Zellerbach Hall, I saw and went in together with “Karen”, an undergraduate student who had been in my calculus TA class; Karen, a returning white student, her young Chinese American friend Kevin Mah, and their classmates had been supportive of my teaching, and I received an Outstanding TA Award for the academic year that I taught them, 1985-1986; so it could not have been the September 1984 Lévi-Strauss lectures.

As for the namesake Levi Strauss jeans, one of San Francisco’s claims to fame, they were too pricey for me then. Living in America I soon liked wearing jeans, but on my budget they started out as Wranglers, and later were mostly of the inexpensive start-up brand Gap, from the Levi Strauss retailer The Gap.

Across Durant Avenue from the anthropology building was Cafe Roma, whose patio was the most popular hangout around campus. I often saw Alexandre Chorin, with his air of authority, walk in for a cup. Some math graduate students liked to hang around their bikes in the open space across the street, near the anthropology building; among them was Eric Kostlan, more senior than me and the only Ph.D. student of Smale’s who was also in Chorin’s circle.

The anthropology building was named after Alfred Kroeber, and the impressive anthropology museum after Robert Lowie, two late professors who had given prominence to UC Berkeley’s cultural anthropology:

“When a new anthropology building, which had been Kroeber’s lifelong ambition, was finally built at the University of California at Berkeley, it was officially named the Robert H. Lowie Museum of Anthropology. This museum, together with the Museum of Art, was part of the A. L. Kroeber Hall, but the honor paid Lowie was especially significant in that Lowie was never identified with or personally attracted to museum work. His early connections with the American Museum of Natural History were mainly a means whereby he had the opportunity to do fieldwork under the direction of Clark Wissler, and he relinquished this job in 1921 to accept the more congenial role of Professor of Anthropology at the University of California.”

(Julian H. Steward, 1974, U.S. National Academy of Sciences)

Today the anthropology museum is no longer the Robert H. Lowie Museum. After my departure from Berkeley, in the early 1990s it was renamed Phoebe A. Hearst Museum of Anthropology in honor of the museum’s original founder, according to the museum’s official history:

“Museum Founding and Growth

The Phoebe A. Hearst Museum of Anthropology, formerly the Lowie Museum of Anthropology, was founded in 1901. Its major patron, Phoebe Apperson Hearst, supported systematic collecting efforts by both archaeologists and ethnographers to provide the University of California with the materials for a museum to support a department of anthropology. …

… The Museum’s collections have grown from an initial nucleus of approximately 230,000 objects gathered under the patronage of Phoebe Hearst to an estimated 3.8 million items. The Museum was accredited by the American Association of Museums in 1973, and re-accredited in 1990.

Museum Locations

The Museum was physically housed from 1903 to 1931 in San Francisco, where exhibits opened to the public in October, 1911. A key figure during these years was Ishi, a Yahi Indian who lived at the Museum from 1911 until his death in 1916 and worked with the anthropologists to document the ways of his people. When the Museum moved back to the Berkeley campus in 1931, there was no space for public exhibitions, and the Museum focused on research and teaching. With the construction of a new building housing the Museum and anthropology department in 1959, space for exhibition again became available. The building, which the Museum continues to occupy, was named Kroeber Hall, and the Museum was named in honor of Robert H. Lowie, a pioneer in the Berkeley anthropology department. In 1991, the Museum’s name was changed to recognize the crucial role of Phoebe A. Hearst as founder and patron.”

(“History of the Museum”, Phoebe A. Hearst Museum of Anthropology)

Who said that “economic successes”, or wealth – in this case that of the Hearst family, whose fame was discussed in Part 2 – and love of the museum did not matter? In the end, the millions of items in the museum’s collections and the thousands of visitors meant a more important law for the UC Berkeley anthropology museum than the laws of cultural anthropology Professor Robert Lowie may have discovered.

The Charles M. and Martha Hitchcock Lectures, which had featured Claude Lévi-Strauss in 1984, in 1988 featured the physicist Stephen Hawking, the Lucasian Professor of Mathematics at Cambridge University, a title once held by Isaac Newton.

I attended one of these March-April 1988 lectures by Hawking, and this time the familiar person I saw there, and chatted with, was my adviser Steve Smale.

During the span of his Berkeley lectures, on April Fool’s Day, Prof. Hawking’s bestselling popular book, A Brief History of Time, was published. 8 years earlier, in 1980, Smale had published a collection of articles on dynamical systems and mathematical economics under an interestingly similar title, The Mathematics of Time. And 5 years before that, in 1975, Hawking and co-author G. F. R. (George) Ellis had published an astrophysics textbook – his first book, while the 1988 one was his second – under a related but more advanced title, The Large Scale Structure of Space-Time.

(S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time, 1975, Cambridge University Press; Steve Smale, The Mathematics of Time, 1980, Springer-Verlag; “Origin of the Universe”, March 21, 1988, “Black Holes, White Holes, and Worm Holes”, April 5, 1988, and, “Direction of Time”, April 7, 1988, by Stephen Hawking, Berkeley Graduate Lectures; and, “Excerpt from ‘A Brief History of Time’”, by Stephen Hawking, April 5, 2007, USA Today)

Theories of physics had been among my interests since my middle-school days, when, browsing through the Sun Yat-sen University central library bookshelves, I liked to read Scientific American magazine. But in the 1977 national university entrance exams I didn’t do as well on the physics exam as on the math exam, and my hands-on experimental ability was rather poor; so the prospect of studying physics did not become a reality.

A reason I focused on partial differential equations in my first year at Berkeley and wanted to do my Ph.D. study under Tosio Kato was that these equations were a major mathematical foundation of physics. I recall Prof. David Gale, whose math specialty was in ordinary differential equations, making an even stronger statement in a colloquium talk, asserting that the universe was governed by differential equations – as in Part 2 Gale had helped John Nash start Ph.D. study in game theory despite von Neumann’s dismissal of Nash’s idea.

So in 1984-1985 while Smale was away on sabbatical, I made an unsuccessful attempt at starting some research in mathematical physics.

In 1983 Professor Robert Anderson, originally Canadian and a U of T graduate, had moved from Princeton to Berkeley, and my first TA job was for his introductory calculus course. A professor of both mathematics and economics, Bob was nonetheless not a particularly electrifying speaker and would soon focus his teaching at the graduate level.

During the 1984-1985 academic year Anderson gave a graduate course on nonstandard analysis, a specialty of his since his Yale Ph.D. study, and Will Geller and I were two loyal students among the small class, Will being especially impressed that Bob was an openly gay faculty member active in community politics.

Formulated using mathematical logic, nonstandard analysis is a mathematical system that includes infinitesimals, i.e., infinitely small numbers, as well as infinitely large numbers; it was constructed to be an equivalent alternative to standard calculus-based mathematical analysis, which handles the infinitely small and the infinitely large via the notion of limits.
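
A minimal illustration of the contrast, in my own schematic notation rather than anything taken from Robinson’s work or Anderson’s course: in standard analysis the derivative is defined by a limit, while in nonstandard analysis it is obtained by taking the “standard part” of a quotient formed with a nonzero infinitesimal ε,

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
    \qquad \text{versus} \qquad
    f'(x) = \operatorname{st}\!\left( \frac{f(x+\varepsilon) - f(x)}{\varepsilon} \right).

For f(x) = x^2, the nonstandard quotient is ((x+\varepsilon)^2 - x^2)/\varepsilon = 2x + \varepsilon, whose standard part is exactly 2x – the infinitesimal ε is simply discarded at the end, much as Leibniz did informally.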

Historically, the need for the infinitesimal calculus had arisen from physics and astronomy, and the infinitesimals were used intuitively, without sufficient mathematical rigor:

“The concept of infinitesimal was beset by controversy from its beginnings. The idea makes an early appearance in the mathematics of the Greek atomist philosopher Democritus c. 450 B.C.E., only to be banished c. 350 B.C.E. by Eudoxus in what was to become official “Euclidean” mathematics. We have noted their reappearance as indivisibles in the sixteenth and seventeenth centuries: in this form they were systematically employed by Kepler, Galileo’s student Cavalieri, the Bernoulli clan…

However useful it may have been in practice, the concept of infinitesimal could scarcely withstand logical scrutiny. Derided by Berkeley in the 18th century as “ghosts of departed quantities”, in the 19th century execrated by Cantor as “cholera-bacilli” infecting mathematics, and in the 20th roundly condemned by Bertrand Russell as “unnecessary, erroneous, and self-contradictory”, these useful, but logically dubious entities were believed to have been finally supplanted in the foundations of analysis by the limit concept which took rigorous and final form in the latter half of the 19th century. …”

(“Continuity and Infinitesimals”, by John L. Bell, July 27, 2005 (revised September 6, 2013), Stanford Encyclopedia of Philosophy)

The German philosopher and mathematician Gottfried Wilhelm Leibniz developed a special liking for the infinitesimals, but their mathematical use was later supplanted by the theory of limits, begun by Leibniz’s contemporary, the physicist Isaac Newton, until the 1960s when nonstandard analysis was invented by the mathematician Abraham Robinson:

“Newton developed three approaches for his calculus, all of which he regarded as leading to equivalent results, but which varied in their degree of rigour. The first employed infinitesimal quantities which, while not finite, are at the same time not exactly zero. Finding that these eluded precise formulation, Newton focussed instead on their ratio, which is in general a finite number. If this ratio is known, the infinitesimal quantities forming it may be replaced by any suitable finite magnitudes—such as velocities or fluxions—having the same ratio. This is the method of fluxions. Recognizing that this method itself required a foundation, Newton supplied it with one in the form of the doctrine of prime and ultimate ratios, a kinematic form of the theory of limits.

Among the best known of Leibniz’s doctrines is the Principle or Law of Continuity. In a somewhat nebulous form this principle had been employed on occasion by a number of Leibniz’s predecessors, including Cusanus and Kepler, but it was Leibniz who gave to the principle “a clarity of formulation which had previously been lacking and perhaps for this reason regarded it as his own discovery” …

The Principle of Continuity also played an important underlying role in Leibniz’s mathematical work, especially in his development of the infinitesimal calculus. … Given a curve determined by correlated variables x, y, he wrote dx and dy for infinitesimal differences, or differentials, between the values x and y: and dy/dx for the ratio of the two, which he then took to represent the slope of the curve at the corresponding point. …

… The first signs of a revival of the infinitesimal approach to analysis surfaced in 1958 with a paper by A. H. Laugwitz and C. Schmieden. But the major breakthrough came in 1960 when it occurred to the mathematical logician Abraham Robinson (1918–1974) that “the concepts and methods of contemporary Mathematical Logic are capable of providing a suitable framework for the development of the Differential and Integral Calculus by means of infinitely small and infinitely large numbers.” This insight led to the creation of nonstandard analysis, which Robinson regarded as realizing Leibniz’s conception of infinitesimals and infinities as ideal numbers possessing the same properties as ordinary real numbers.”

(John L. Bell, July 27, 2005 (revised September 6, 2013), Stanford Encyclopedia of Philosophy)

In studying nonstandard analysis I was also intrigued by two facts: that the axiomatic framework can lead to different mathematical systems containing infinitesimals, and that in mathematical physics certain conventional uses of infinity were quite common.

A well-known example of the use of infinity has been the Dirac delta function, which originated in physics, where a charge concentrated at an infinitely small point has an infinitely large density at that point:

“DIRAC DELTA FUNCTION

In electromagnetic field analysis we come across the source density and the point source. Take the situation of a point charge q and the corresponding charge density ρv. Obviously the charge density must be zero everywhere in space and become infinite at the location …”

(Devendra K. Misra, Practical Electromagnetics: From Biomedical Sciences to Wireless Communication, 2007, John Wiley & sons)

In standard mathematical analysis such a one-point distribution is defined as a “generalized function”, and it is a useful tool in the approximation and solution of differential equations.

(Philip J. Davis, Interpolation and Approximation, 1975, Dover Publications, Inc.; and, Sadri Hassani, Mathematical Physics: A Modern Introduction to Its Foundations, 2013, Springer)
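
In standard notation – a textbook summary here, not a quotation from the books just cited – the delta function is characterized by

    \delta(x) = 0 \;\text{ for } x \neq 0, \qquad
    \int_{-\infty}^{\infty} \delta(x)\,dx = 1, \qquad
    \int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a);

no ordinary function can satisfy the first two conditions simultaneously, which is why, rigorously, δ is treated as a distribution: a continuous linear functional acting on smooth test functions, with \delta[\varphi] = \varphi(0).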

I found two classical books, one by Paul Adrien Maurice Dirac on quantum mechanics and one by Robert Davis Richtmyer on mathematical physics, to be well written and suitable reading for someone like me who had studied some of the relevant mathematics through Kato’s course on partial differential equations.

(P. A. M. Dirac, The Principles Of Quantum Mechanics, 1958, Oxford University Press; and, Robert D. Richtmyer, Principles of Advanced Mathematical Physics, 1978, Springer-Verlag)

The mathematics of quantum mechanics gave a central role to differential operators, which are closely related to differential equations. Recall that my classmate Robert Rainsberger’s Ph.D. thesis under Heinz Cordes was related to differential operators; in fact, Cordes has written a book on quantum theory in the spirit and approach of Dirac.

(Heinz Otto Cordes, Precisely Predictable Dirac Observables, 2007, Springer)

But neither Dirac’s book nor Richtmyer’s book really covered the type of infinity I was most interested in: the “divergence” occurring in calculations of physical quantities in particle physics – related to the Dirac delta function – and “regularization” and “renormalization”, the procedures to remove the infinity, which involve ad hoc rules lacking a rigorous and comprehensive mathematical foundation.

(“Ultraviolet divergence”, Wikipedia)
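
To give a schematic, textbook-style picture of what I mean – my own illustration, not drawn from either book – a typical one-loop quantity in quantum field theory involves an integral over all momenta, such as

    I = \int \frac{d^4 k}{(2\pi)^4}\, \frac{1}{(k^2 + m^2)^2},

which diverges logarithmically as the integration extends to arbitrarily large |k|; “regularization” caps the integration at |k| \le \Lambda, after which I grows like \ln(\Lambda/m), and “renormalization” absorbs the \Lambda-dependent part into redefined physical parameters so that measurable predictions remain finite as the cutoff is removed. It was the ad hoc flavor of these steps, from a mathematician’s point of view, that intrigued me.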

I needed directly relevant references, but in the 1980s there were few published resources on quantum divergence accessible to non-specialists. As I recall, I settled on trying to understand the book Quantum Mechanics and Path Integrals by Richard Feynman and Albert Hibbs.

(R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, 1965, McGraw-Hill)

I found Feynman’s mathematics very different from that of Dirac and Richtmyer, not to mention his particle physics-centered presentation. I gained some sense of where divergence occurred, but not a thorough enough understanding of either the physics or the particular mathematics to proceed with my attempt.

I spoke with a Berkeley biophysics Ph.D. student friend, Dar-Chone Chow (周達仲), who gave me an informal interpretation of quantum states and state transitions, as in ‘quantum leaps’, drew my attention to Feynman’s notion of path integrals, where divergence could occur, and otherwise said it was “too fundamental” to try.

I realized that on my own I was unable to really proceed, and soon put the subject on the back burner of my studies.

Later, in 1987, Berkeley physics professor Eyvind Wichmann gave a mathematics graduate course on von Neumann algebras. I audited it, and at some point went to his office to try to gain a better understanding of divergence. I recall Prof. Wichmann drawing an unbounded stationary potential and a bounded moving potential, stating that their interaction was where divergence would occur. Somehow I felt that, even if so, the real physics was not truly captured at this level of mathematical modeling.

Dodging my interest in learning more about the physics, Wichmann said instead, quite seriously, “John von Neumann understood quantum physics, better than the physicists did”.

Given my limited knowledge I could not argue with Prof. Wichmann. I had read some literature on von Neumann’s mathematical formulation of quantum mechanics: he supposedly proved that his probabilistic model was complete for the physics and that “hidden parameters” could not exist, so that in this sense “causality in nature” was highly unlikely – a claim I found very questionable – and the highly abstract, algebraic approach of his that Wichmann espoused was mathematically elegant but of little pragmatic interest to me.

(Miklós Rédei and Michael Stöltzner, eds., John von Neumann and the Foundations of Quantum Physics, 2001, Springer Science and Business Media; and, “Quantum Theory: von Neumann vs. Dirac”, by Fred Kronz, 2012, Stanford Encyclopedia of Philosophy)

In contrast to me, the more senior Xiaolu Wang, who later went to Wall Street as mentioned earlier, did his math Ph.D. thesis in the field of C*-algebras, related to von Neumann algebras, with the title “On the C*-algebras of a family of Solvable Lie Groups and Foliations”.

(“Xiaolu Wang”, Mathematics Genealogy Project)

So in 1984-1985 my months of efforts, on and off, did not lead to any progress in starting research in mathematical quantum physics.

I also needed to find Ph.D. research topics that would be of interest to my adviser Steve Smale.

Recall Smale’s comment in his 1981 paper that the theory of numerical-analysis methods did not usually include the total cost of convergence:

“The theory of these methods has a couple of components; one, proof of convergence and two, asymptotically, the speed of convergence. But, not usually included is the total cost of convergence.”

(Steve Smale, 1981, Bulletin of the American Mathematical Society)

I felt that an important component of the total cost was due to arithmetic round-off errors in the numerical computation. UC Berkeley had an expert in floating-point arithmetic and round-off error analysis, Professor William Kahan, a professor of both mathematics and computer science; he had been the only numerical analyst acknowledged in Smale’s 1981 paper, as quoted earlier.

I approached Kahan, who gave me a few papers to read, and I began to attend seminars he led. I became aware of the research of Kahan’s former Ph.D. student James Demmel, an earlier part of which employed interval analysis to estimate fixed-point arithmetic round-off errors in solving systems of linear equations.

(“An interval algorithm for solving systems of linear equations to prespecified accuracy”, by James W. Demmel and Fritz Krückeberg, July 6, 1983, Computer Science Division (EECS), University of California, Berkeley)

However, for practical efficiency floating-point arithmetic was the arithmetic of choice in numerical computation. I looked into whether interval analysis – in which error intervals for the operands of an arithmetic operation lead to an error interval for the result – was applicable to floating-point error analysis, and concluded that it did not seem useful for floating-point error estimation in a general worst-case model, i.e., without more concrete, problem-dependent assumptions about the possible distribution of the actual errors.
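
To make the bookkeeping concrete, here is a minimal sketch of interval arithmetic in Python; the function names are my own and purely illustrative, not taken from Demmel’s algorithm or from any interval library:

    # Toy interval arithmetic: each uncertain quantity is carried as a pair
    # (lo, hi), and every operation returns an interval guaranteed to contain
    # the exact result for any points inside the input intervals.

    def i_add(a, b):
        # sum of two intervals
        return (a[0] + b[0], a[1] + b[1])

    def i_mul(a, b):
        # product of two intervals: extremes occur among the endpoint products
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    # x and y each known only to within +/- 1e-6
    x = (2.0 - 1e-6, 2.0 + 1e-6)
    y = (3.0 - 1e-6, 3.0 + 1e-6)
    z = i_mul(i_add(x, y), y)   # enclosure of (x + y) * y; the true value 15 lies inside
    print(z)

A real implementation would also round each computed endpoint outward to account for the rounding of the endpoint arithmetic itself; the resulting worst-case widths tend to grow quickly along a computation, which is essentially why I concluded the approach was of limited use for floating-point error estimation without problem-dependent assumptions.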

Kahan was popular with students wanting to get a master’s degree in computer science before going into the computer industry; his Ph.D. student from Hong Kong, Ping Tak Peter Tang, was in the mathematics department, and Kahan asked me to help read Peter’s thesis before he graduated to join Intel Corporation; and Demmel, a computer science professor at NYU’s Courant Institute before returning to Berkeley, was of course Kahan’s protégé.

Kahan’s office in the electrical engineering and computer science department’s computer science division, located in the same Evans Hall as the mathematics department, was next door to the office of Prof. Lotfi Zadeh, famous for his theory of fuzzy sets and fuzzy logic, which he applied to artificial intelligence. So I also did some reading on Zadeh’s theory, in relation to my interest in arithmetic round-off errors, probabilistic analysis and interval analysis.

(“Fuzzy Probabilities”, by Lotfi A. Zadeh, 1984, Volume 20, Number 3, Information Processing & Management)

As mentioned earlier, by this time my roommate was computer science Ph.D. student David Chin, who specialized in artificial intelligence.

As with quantum physics, I continued to maintain an interest in arithmetic round-off error issues, which were relevant to my research in numerical analysis. In the early 1990s, when I met NASA scientist David Bailey, who had been pioneering multiprecision arithmetic, I felt it was a promising direction toward variable-precision arithmetic in general, one that would bridge fixed-point arithmetic and floating-point arithmetic; so I requested, and eagerly went over, a preprint of his paper.

(“A Portable High Performance Multiprecision Package”, by David H. Bailey, (revised) May 18, 1993, RNR Technical Report RNR-90-022)
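
For a sense of what variable-precision arithmetic looks like in practice, here is a minimal sketch using Python’s standard decimal module – my own choice of illustration; Bailey’s package was a high-performance multiprecision library, not this:

    # Variable-precision arithmetic: the working precision is a runtime
    # parameter rather than a fixed hardware format.
    from decimal import Decimal, getcontext

    for digits in (16, 32, 64):
        getcontext().prec = digits              # working precision in decimal digits
        one_third = Decimal(1) / Decimal(3)     # rounded to 'digits' digits
        residual = Decimal(1) - Decimal(3) * one_third
        print(digits, residual)                 # residual shrinks as precision grows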

When Smale notified me that he would be back during the Christmas season in 1984 and would like to learn about my progress, I had little to show but a formulation of the observation that, as a numerical algorithm’s accuracy of approximation increases, floating-point arithmetic errors become dominant and limit the total accuracy – thus raising the issue of the optimal choice of approximation relative to the precision of the floating-point arithmetic.
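
The phenomenon can be seen in a standard textbook-style example – my illustration here, not the problem in the note I gave Smale: approximating a derivative by a forward difference, the discretization error shrinks as the step h decreases, while the floating-point rounding error grows roughly like machine epsilon divided by h, so the total error is smallest at an intermediate h rather than at the tiniest one.

    # Forward-difference approximation of the derivative of exp at x = 1.
    # Discretization error ~ (h/2)*e; rounding error ~ (eps/h)*e; the total
    # error bottoms out near h ~ sqrt(eps) instead of improving indefinitely.
    import math

    exact = math.exp(1.0)
    for k in range(1, 17):
        h = 10.0 ** (-k)
        approx = (math.exp(1.0 + h) - math.exp(1.0)) / h
        print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.3e}")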

The note of several pages that I sketched for a problem illustrating that observation involved no advanced mathematics. When I met him, Smale was unhappy with what I had submitted, and asked me to look into a different matter instead: he had drafted a paper that included some average analysis of the cost of approximation of integrals, and had encountered rebuttals from some researchers.

I studied it, and communicated my findings to Smale in the spring of 1985. When he returned in the summer, Smale was more encouraging: he had made appropriate changes, and the paper would be published in October 1985 and would be the main basis of his graduate topics course in Fall 1985.

(“On the efficiency of algorithms of analysis”, by Steve Smale, October 1985, Volume 13, Number 2, Bulletin of the American Mathematical Society)

In this paper, Smale acknowledged conversations with, among others, A. Grunbaum – as discussed earlier, the powerful new director of the Center for Pure and Applied Mathematics, who was close to Alexandre Chorin and the numerical analysts:

“ACKNOWLEDGMENT. Conversations with L. Blum, A. Grunbaum, E. Kostlan, and A. Ocneanu were helpful to me in developing the ideas in this section.”

(Steve Smale, October 1985, Bulletin of the American Mathematical Society)

Unlike in his 1981 paper, this time Grünbaum was the only established Berkeley professor acknowledged by Smale.

Apparently during his sabbatical Steve did not spend time only in Paris, as in this paper he mentioned that in July 1984 he was in Caracas, Venezuela:

“Besides the help of Traub and Wozniakowski, conversations with A. Calderon, P. Collet, J. Franks, M. Shub, and especially David Elworthy in Caracas, July 1984 (where I found these results) were important for me. …”

(Steve Smale, October 1985, Bulletin of the American Mathematical Society)

After the mention of Caracas in July 1984, Steve acknowledged conversations with me and his son Nat, also a Berkeley math Ph.D. student:

“… So also were conversations with Feng Gao and Nat Smale.”

(Steve Smale, October 1985, Bulletin of the American Mathematical Society)

In the section of the paper where I was mentioned, Smale deployed Gaussian measure as the setting for his average analysis of approximation of integrals, mentioning the related Wiener measure.

Gaussian measure is named after the 19th-century German mathematician Carl Friedrich Gauss, whose name is also associated with the Fundamental Theorem of Algebra in the title of Smale’s 1981 paper; Wiener measure is named after the late MIT mathematician and founder of cybernetics, Norbert Wiener, as is the Wiener Prize in Applied Mathematics Tosio Kato had received in 1980. These probability measures later became the settings for my Ph.D. thesis as well.

Later, in 1988, this paper of Smale’s in the Bulletin of the American Mathematical Society – the U.S. mathematical research community’s leading journal, which had also published Smale’s 1981 paper – was awarded the Chauvenet Prize by the Mathematical Association of America for its outstanding exposition.

(“Chauvenet Prizes”, Mathematical Association of America)

During his Fall 1985 course, Smale also presented his α-theory, to be published in 1986, on computable convergence estimates for his modified Newton’s method-type algorithm for finding zeros of complex polynomials. The inspiration I got from Smale’s α-theory later led to the main body of my Ph.D. thesis.
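
As I recall it – a paraphrase from memory rather than Smale’s exact statement – the α-theory attaches to a polynomial f and a point z the quantities

    \beta(f,z) = \left|\frac{f(z)}{f'(z)}\right|, \qquad
    \gamma(f,z) = \sup_{k \ge 2} \left|\frac{f^{(k)}(z)}{k!\,f'(z)}\right|^{1/(k-1)}, \qquad
    \alpha(f,z) = \beta(f,z)\,\gamma(f,z),

and asserts that when α(f,z) falls below a universal numerical threshold, z is an “approximate zero”: Newton’s iteration started from z converges to an actual zero of f, with the number of correct digits roughly doubling at each step. What struck me was that β and γ are computable from data at the single point z, so the convergence guarantee itself is computable.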

Smale was quite pleased with my work in progress; in the paper for his plenary address at the 1986 International Congress of Mathematicians, he also acknowledged me when he acknowledged Lenore Blum and Jim Curry:

“… Lenore Blum’s MSRI talk on a condition number for linear programming via the LCP helped put the LCP back in my mind. Her comments and those of Jim Curry and Feng Gao have been generally useful to me.”

(“Algorithms for Solving Equations”, by Steve Smale, 1986, Proceedings of the International Congress of Mathematicians, Berkeley, California)

Steve’s collaborator Lenore has previously been mentioned in Parts 2 & 3.

Jim Curry, who preferred to be referred to as “James”, not “Jim”, was a rare African-American math Ph.D. graduate of Berkeley originally from Oakland, a neighboring city mentioned in Part 2. James was an associate chairman of the mathematics department at the University of Colorado, Boulder, on sabbatical at Berkeley’s Mathematical Sciences Research Institute; he had done computational research in dynamical systems using a Cray supercomputer, and was doing research related to Smale’s analysis of Newton’s method.

(“On zero finding methods of higher order from data at one point”, by James H. Curry, June 1989, Volume 5, Issue 2, Journal of Complexity; “James Howard Curry”, Mathematical Association of America; and, “James H. Curry”, Department of Applied Mathematics, University of Colorado, Boulder)

The International Congress of Mathematicians, as in Part 2, was the international gathering of the mathematical community held once every 4 years; in 1966 it was held at the Kremlin Palace in Moscow, where Smale and 3 others were awarded the Fields Medal.

20 years later in 1986, the ICM was held at UC Berkeley, and Smale led its 15 plenary speakers:

“Since the “World Congress of Mathematicians,” held in Chicago in 1893, the mathematicians of the world – as urged then by Felix Klein – have gone far in forming unions and holding international congresses. In the summer of 1986 the twentieth such congress took place at the University of California (Berkeley).

The city of Berkeley from which the University takes its designation was named for the Anglican bishop, George Berkeley (1685-1753), not for his perceptive comments regarding the newly invented calculus, but for another perceptive comment – “Westward the course of empire takes its way” – which occurs in a work entitled “On the Prospect of Planting Arts and Learning in America.”

The Congress itself was distinguished by an increasing emphasis on computer science. The New York Times headlined MATHEMATICIANS FINALLY LOG ON. Steve Smale, Berkeley’s own Fields Medalist (Moscow 1966), led off the stellar lineup of fifteen plenary speakers with a lecture on “Complexity aspects of numerical analysis” —a far cry from his Moscow lecture on “Differentiable dynamical systems.” …”

(Donald J. Albers, G. L. Alexanderson and Constance Reid, International Mathematical Congresses: An Illustrated History 1893-1986, 1987, Springer-Verlag)

Actually, the 1986 ICM Proceedings listed 16 plenary speakers; it was only that the speech of one of them, Jürg Fröhlich, supposed to be on mathematical approaches to quantum field theory and statistical mechanics, was listed as a no-show:

“FRÖHLICH, JÜRG  (Paper not read at the Congress. No manuscript received.), Analytical approaches to quantum field theory and statistical mechanics”.

(“PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS AUGUST 3-11, 1986”, 1987, Berkeley, California, American Mathematical Society)

Haha! I wasn’t the only one who had spent time on quantum mechanics but come up with nothing to show.

It’s unclear why Fröhlich’s talk was a no-show. The Swiss mathematical physicist’s official resume lists an invited address at the 1978 ICM in Helsinki and a plenary address at the 1994 ICM in Zurich, with no mention of the 1986 Berkeley ICM halfway in between.

(“Mathematical Physics — Prof. Jürg Fröhlich: CURRICULUM VITAE”, Institute for Theoretical Physics, Swiss Federal Institute of Technology Zurich)

In 1990, in my paper contributed to the Smalefest conference celebrating Smale’s 60th birthday – as in Part 2, at that conference Smale gave a speech on his former Communist history – I recalled the inspiration that Smale’s α-theory gave to my Ph.D. research:

“In fall 1985 at Berkeley, when Steve Smale was disseminating this α-theory in his graduate course and also in one Mathematics Department colloquium talk, I was at the preliminary stage of my Ph.D. research under his supervision. Smale’s observation on computable error estimates struck me as pointing out a potentially important direction for the analysis of numerical approximation algorithms. … ”

(“On the Role of Computable Error Estimates in the Analysis of Numerical Approximation Algorithms”, by Feng Gao, in Morris W. Hirsch, Jerrold E. Marsden and Michael Shub, eds., From Topology to Computation: Proceedings of the Smalefest, 1993, Springer-Verlag)

Smale was pleased with the progress of my Ph.D. research. In the fall of 1986 when I asked him to be a reference for my academic job search, Smale responded, “Apply to the top-20 or so math departments, tenure-track positions, and I will write a letter for you”.

I applied to well over 20 departments, in mathematics as well as computer science. Then, interestingly, all the job interviews – a total of 3 – came from computer science departments: at Princeton University, the State University of New York at Buffalo, and the University of Maryland at College Park. It resembled the 3 schools – Stanford, SUNY Stony Brook and UC Berkeley – of 5 years earlier when I applied for Ph.D. study, as well as the 3 Canadian schools – U of T, SFU and UBC – that offered me jobs in the following year.

An obvious reason for their all being in computer science in 1988 was that many of the university CS departments were much newer than their math counterparts and the needs for computer science teaching and research were fast growing.

A second possible reason, pointed out by Smale’s collaborator Lenore Blum, a professor at the all-women Mills College in Oakland – mentioned in Part 3 in the context of 5 new college graduates featured in Life magazine in 1969 – was that Smale’s research was not universally accepted by mathematicians and that none of his Ph.D. graduates had become professors at leading math departments in the United States.

I had heard something like it, from none other than Professor Shiing-shen Chern, founding director of Berkeley’s MSRI and a mathematical patriarch who, as in Part 2, had co-organized research activities with Smale. On one private occasion after I had become Smale’s Ph.D. student, Chern cautioned me that mathematicians might not always view Smale’s work as genuine mathematics.

Yet another possible reason, mentioned by Will Geller as discussed earlier, was that Smale might not have given a lot of attention to his students. Smale’s Ph.D. graduate James Renegar – who had taught mathematics for several years in the remote Colorado town of Fort Collins, then returned for postdoctoral research at MSRI and at Stanford before getting a job in the operations research department of Cornell University, a leading U.S. university that, as in Part 3, has named its medical college after businessman Sanford Weill – lamented to me that Smale did not necessarily write strong support letters for his students, and that Smale’s collaborator Michael Shub wasn’t easy to get to know: Jim had lobbied Mike for a job at the IBM Thomas J. Watson Research Center in New York State, where Mike was a scientist, but got an offer only after Cornell had offered.

As cited in Part 2, Mike Shub had been Smale’s Ph.D. student in the 1960s following Smale into the Berkeley anti-war movement. In his 1986 ICM paper, right after mentioning Lenore, Jim Curry and me as quoted earlier, Smale gave special thanks to Renegar and, especially, Shub:

“Especially important through all of this has been the work of, and conversations with, Jim Renegar and Mike Shub. That contribution from Mike Shub, to me, has persisted over many years indeed.”

(Steve Smale, 1986, Proceedings of the International Congress of Mathematicians, Berkeley, California)

Jim was very bright. During his postdoctoral time at Berkeley’s MSRI and at Stanford, he combined inspiration from Smale’s focus on Newton’s method with the Stanford operations research department’s focus on linear programming, and came up with an innovative Newton’s method-type algorithm for the latter.

(“LINEAR PROGRAMMING (1986)”, by Nimrod Megiddo, 1987, Volume 2, Annual Review of Computer Science)

Still, as Smale said, Mike Shub’s contribution had persisted for much longer. Shub had started following Smale in the early 1960s as a Columbia University undergraduate student. An anecdote discussed in Part 2 – that, fearing a nuclear war and angry with President John Kennedy during the October 1962 Cuban Missile Crisis, Smale abandoned his Columbia class and headed with his family towards Mexico – happened when Shub was taking Smale’s graduate course.

(Steve Batterson, January 2000, American Mathematical Society)

But even Lenore expressed to me sentiments similar to Jim’s, that Steve was a great mathematician but he did not provide much help for his students. In that respect, Lenore filled much of that void for Steve’s younger followers.

James Curry also told me of an experience similar to Jim Renegar’s, though not about Steve. After his Berkeley Ph.D., James went to a tenure-track assistant professorship at Howard University, a leading private black U.S. university located in Washington, D.C., only to find that faculty members there were not into math research; so after 2 years James left for postdoctoral research at MIT, starting over in looking for a faculty job.

James was very driven. I recall his reminiscence that as a graduate student he had seen familiar persons chatting all the time in the department’s common coffee room, and that after becoming a faculty member elsewhere he returned to visit and saw the same persons still hanging out in the coffee room – “they did not accomplish much”, James said.

That might be the case, but the department coffee room was popular, especially at 3-4 in the afternoon on weekdays when free refreshments, like cookies, were available.

It’s a matter of one’s ambitions and standards, I guess. I remember David Witte, a tall, handsome and popular postdoctoral fellow who brought his bicycle everywhere, including to the coffee room, and who in around 1987-1988 told me he had a tenure-track assistant professorship offer from the University of Wisconsin-Madison but chose to go to MIT next as a short-term instructor; David said that in a few years some senior professors at the top universities would retire and positions would be open, but if one settled into a university at the next level it would be harder to move up.

Wow! For me, had there not been a famously liberal UC Berkeley with nice weather on the U.S. West Coast, I would have been happy to get into Wisconsin-Madison for graduate study as advised by my undergraduate adviser in China.

In the case of my seeking an academic job, Smale phoned his friend James Yorke, director of the Institute for Physical Science and Technology at the University of Maryland, before the institute contacted me for a joint-candidacy interview with the computer science department at College Park. So Steve did provide real help.

Steve’s phone lobbying for me happened after my February 1987 interviews at Princeton and Buffalo, likely in March, as the Maryland interview took place in April; he dampened my enthusiasm about the Princeton interview experience – in 1984 Andrew Majda had talked about bringing me there as a Ph.D. student – by informing me that his other graduating Ph.D. student, Joel Friedman, was also being interviewed by Princeton.

Joel had solved a conjecture of Smale’s, a feat few graduate students achieved. He had hailed from Harvard in 1984 with a multitude of fellowships, and was someone Will raved about: Joel was also doing research at the IBM Almaden Research Center with the prominent theoretical computer scientist Nick Pippenger – in comparison, at the time I didn’t even know about Pippenger’s wife Maria Klawe, the manager of the IBM research group.

(“On the convergence of Newton’s method”, by Joel Friedman, March 1989, Volume 5, Issue 1, Journal of Complexity)

I was also informed that Joel was the son of the mathematician Avner Friedman, who was well known and in that year, 1987, became the new director of the Institute for Mathematics and its Applications at the University of Minnesota – Berkeley’s MSRI and Minnesota’s IMA were then the only U.S. math research institutes financed by the National Science Foundation.

(“Avner Friedman: Mathematician in Control”, Interview of Avner Friedman by Y.K. Leong, 2007, Issue 10, Newsletter of Institute for Mathematical Sciences, National University of Singapore; and, “DMS Mathematical Sciences Research Institutes Update”, November 1, 2015, Amstat News)

So in 1987 Jim Renegar landed a tenure-track job in operations research at Cornell, and Joel Friedman landed a tenure-track job in computer science at Princeton – Smale’s students were getting tenure-track positions at leading U.S. universities in various departments, just not mathematics.

As for me, 3 interviews were not bad; that none led to a tenure-track job offer probably had to do with the fact that I was in the U.S. on a foreign student-visiting scholar type visa and the extra requirements for such a hiring were not something academic institutions would easily undertake.

At College Park, I met a mathematician and numerical analyst I had great respect for. In a February 2013 blog post, I wrote about Professor Ivo Babuska when comparing pure mathematicians and applied mathematicians:

“Politically, the personalities in the fields of pure mathematics and those in the fields of applied mathematics were quite different. A large number of academicians in pure math were known to be political left-wings, active to various degrees as political dissidents, international human-rights activists, or in the case of Americans, anti-war activists.

But despite my Ph.D. adviser’s pure math pedigrees, my professional interests and studies were more in applied and computational mathematics, in fields where politics was usually not openly discussed as much, not the least because research was substantially funded by industry and by military-related sources.

Babuska’s background had a link to a major political event… he came to the United States from Czechoslovakia in 1968. But here is one biography that offered more details about his 1968 move:

“… The Communists had seized control of the country in 1948 and it was under strong Soviet influence over the following years. Mathematics was allowed to develop without interference, however, and the applied and computational methods developed by Babuska found favour. Beginning in 1964 reformers had won many concessions which became more clear-cut in early 1968 when the country began to implement “socialism with a human face”. The reforms came to a sudden end, however, in August 1968 when Soviet tanks rolled into Prague. Babuska had just been appointed as a professor at the Charles University of Prague but, given the political situation, he travelled with his family to the United States where he spent a year as a visiting professor at the Institute for Fluid Dynamics and Applied Mathematics at the University of Maryland at College Park. He was given a permanent appointment as a professor at the University of Maryland in the following year and he held this position until 1995. He was then appointed Professor of Aerospace Engineering and Engineering Mechanics, Professor of Mathematics, and appointed to the Robert Trull Chair in Engineering at the University of Texas at Austin. …

After coming to the United States, Babuska became the world-leading expert in finite element analysis.””

(“Guinevere and Lancelot – a metaphor of comedy or tragedy, without Shakespeare but with shocking ends to wonderful lives (Part 2)”, February 28, 2013, Feng Gao’s Posts – Rites of Spring)

Just like Carl de Boor being “the worldwide leader and authority in the theory and applications of spline functions”, quoted earlier, Ivo Babuska was “the world-leading expert in finite element analysis”, or at least in my understanding a leading expert of great depth in both the mathematics and the applications of finite elements. My interest in a potential extension of my Ph.D. research direction to finite elements had led me to read widely in the literature on that subject, including publications by prominent numerical analysts such as, in addition to Babuska: Babuska’s former Maryland colleague, Werner Rheinboldt of the University of Pittsburgh; Babuska’s collaborator, Olgierd Zienkiewicz of the University College of Swansea in Wales who – like Babuska going to the University of Texas at Austin in 1995 – became UT Austin’s Joe C. Walter Chair of Engineering in 1989; Richard Varga of Kent State University; and Gilbert Strang of MIT.

(“The Finite Element Method—Linear and Nonlinear Applications”, by Gilbert Strang, 1974, Proceedings of the International Congress of Mathematicians, Vancouver; “Celebration of a Wide-ranging Community at Kent State”, by Daniel B. Szyld, July 23, 1999, Society for Industrial and Applied Mathematics; “OBITUARY: OLGIERD C. ZIENKIEWICZ (18 May 1921–2 January 2009)”, September 1, 2009, International Journal for Numerical Methods in Engineering; and, “Prof. Dr. Werner Rheinboldt: Honorary Professor at TUM since 2007”, Technische Universität München)

Note that finite element analysis was also the field of Berkeley professor Keith Miller who, as discussed earlier, denied my classmate Robert Rainsberger the chance to do Ph.D. study in it.

Babuska’s accomplishments have given him recognition beyond mathematics and engineering, as I noted:

“Ivo Babuska is one such fine example, an applied mathematician originally from Prague, capital of the Czech Republic, someone so accomplished that his birthday has been hailed as among:

“Prague’s “top 11 historical events” between 1197 and 1966 a.d. compiled by Mlada Fronta Dnes, the Czech Republic’s largest newspaper”.”

(February 28, 2013, Feng Gao’s Posts – Rites of Spring)

Of a similarly broad reach was the distinction Zienkiewicz received in 1989 when, besides becoming UT Austin’s Joe C. Walter Chair of Engineering, he was appointed the UNESCO Chair of Numerical Methods in Engineering at the Universitat Politècnica de Catalunya in Barcelona, Spain – the first UNESCO chair in the world, and hence an honor also for the field of numerical analysis:

“After retirement as Head of Civil Engineering at Swansea in 1987, Olek spent two months each year at the International Center for Numerical Methods in Engineering (CIMNE) at Universitat Politécnica de Catalunya (UPC) in Barcelona, Spain. In 1989 Olek was appointed as the UNESCO Chair of Numerical Methods in Engineering at UPC. This was the very first UNESCO chair in the world and arose from interactions with Geoff Holister who was working at UNESCO developing support to technology and engineering. The idea of such a position arose from an idea in the book “Small World” by David Lodge. In the book professors of English imagine a UNESCO Chair that will allow them to retire into a world of continuous travel with no lecture obligations at an extravagant salary – mostly things Olek already had achieved!”

(“Some Recollections of O.C. Zienkiewicz”, by R.L. Taylor, in “IACM Expressions”, Number 25, July 2009, Bulletin for The International Association for Computational Mechanics)

In 1987 at College Park, Maryland, Babuska was a professor in both the math department and the institute headed by Smale’s peer Jim Yorke. With adaptive finite elements and computable a posteriori error estimates among his areas of expertise, Babuska showed considerable interest in my computable average error estimates for numerical integration and their close links to spline functions in approximation theory.

After the computer science department’s decision not to offer me a job, Babuska forwarded my file to his peer, Prof. Thomas Seidman at Maryland’s Baltimore County campus, who on behalf of the math department there offered me a 2-year visiting assistant professorship.

As a foreign student, I held a U.S. study visa that would expire after my Ph.D. graduation or after another 18 months of postdoctoral research, unless a job led to a work visa. The visiting position would not include that; so after some thought I decided to stay in the Ph.D. program for one more year, and thanked Prof. Seidman for his help.

At this point, in the early summer of 1987, Babuska agreed to forward my paper based on my upcoming Ph.D. thesis to a leading numerical analysis journal in Vienna, Austria, of which he was an editorial board member. He also advised that for the next year, 1988, I should focus on applying to Canadian universities. During that phone conversation, Babuska said something that made a lasting impression, something like, “It is possible. I believe everything is possible”.

Acting on the advice of Babuska and also that of Lenore Blum, whom I respected as a mentor and for whose Mills College math class I had once substituted for a week, in my last year at Berkeley I spent most of my study time on computer science, which had been my undergraduate major.

The fact that, with Ph.D. research done under a mathematician whose current interest was in numerical computation but who was not a numerical analyst, I was able to become a computer science faculty candidate at three very good U.S. universities certainly boosted my confidence.

As discussed, my Ph.D. research was in the average analysis of numerical computation algorithms, a subject brought to my attention by Smale, and my focus on computable error estimates was inspired by Smale’s α-theory. In my Smalefest paper, after crediting Smale’s inspiration as quoted earlier I further elaborated on my rationale:

“Most numerical approximation methods simply do not have guaranteed and yet computable error bounds under a weak differentiability assumption; whereas a strong differentiability assumption is one way to obtain computable error estimates, estimates applicable under a weak differentiability assumption are also important, for this assumption usually captures the generality of a method and is the starting point of many algorithms in practice; most practical algorithms, thus, use computable but not guaranteed error estimates; they are heuristics that may be incorrect some of the time but prove to be generally useful in practice.”

(Feng Gao, in Morris W. Hirsch, Jerrold E. Marsden and Michael Shub, eds., 1993, Springer-Verlag)

As quoted, my goal was to use average analysis to bridge mathematical analysis and practical computable error estimates.
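To make that distinction concrete for the reader, here is a minimal sketch in Python – my own generic illustration, not the method of my thesis – of a computable but not guaranteed error estimate in numerical integration: the composite trapezoidal value is compared against a Simpson value on the same grid, and their difference serves as a heuristic estimate of the trapezoidal error; the function names, grid size and test integrand are hypothetical choices.

import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals.
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def simpson(f, a, b, n):
    # Composite Simpson's rule on the same n subintervals (n must be even).
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h * (f(a) + f(b) + 4 * odd + 2 * even) / 3

def integrate_with_heuristic_estimate(f, a, b, n=64):
    # The higher-order Simpson value serves as a reference, so the
    # difference |T - S| is a computable estimate of the trapezoidal
    # error -- a useful heuristic, but not a guaranteed bound.
    t = trapezoid(f, a, b, n)
    s = simpson(f, a, b, n)
    return t, abs(t - s)

value, estimate = integrate_with_heuristic_estimate(math.sin, 0.0, math.pi)
print(value, estimate)   # value is close to 2, estimate is a few times 1e-4

On a smooth integrand such as sine over [0, π] the heuristic tracks the true error well; on a merely continuous integrand it may not, which is the sort of gap that an average-case analysis attempts to quantify.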

In 1987, Smale was turning in a different direction, a much more mathematical one.

In his 1981 and 1985 papers cited earlier, Smale had extensively used mathematics related to the Bieberbach conjecture, a major math problem since the 1910s, first solved by Louis de Branges in 1985, who like Smale became a plenary speaker at the 1986 International Congress of Mathematicians at Berkeley.

(1987, Berkeley, California, American Mathematical Society)

So in his 1987 graduate course, Smale spent much time developing some mathematics he hoped might lead to an alternate proof of the conjecture, i.e., by this time de Branges’s theorem; I followed it with considerable interest.

In contrast to his 1985 computable error estimates for his modified Newton’s method-type algorithm for finding zeros of complex polynomials, namely his α-theory, in 1986-1988 Smale spent much time presenting another theory that he and Mike Shub, in relation to the work of Curtis McMullen, developed regarding algorithms that would not need computable error estimates at all: a “purely iterative” algorithm, run as an endless cycle of one fixed iteration, is “generally convergent” if for almost every input its iterates approach a solution.

In his 1985 paper, Smale had conjectured that for the class of complex polynomials of a general fixed degree – one of the mathematically nicest classes of functions – no purely iterative algorithm is generally convergent.

(Steve Smale, October 1985, Volume 13, Number 2, Bulletin of the American Mathematical Society)

In his Harvard math Ph.D. thesis and a 1988 paper, Curtis McMullen proved Smale’s negative conjecture. On the positive side, Smale and Shub had shown in a 1986 paper that if – in addition to the standard arithmetic of addition, subtraction, multiplication and division – the operation of complex conjugation is also used, then there are purely iterative algorithms that are generally convergent.

In his paper contributed to the 1990 Smalefest conference, Mike Shub summarised it:

“… Newton’s method is an example of a purely iterative algorithm for solving polynomial equations. … A purely iterative algorithm is generally convergent if for almost all (f, x) iterating the algorithm on x, the iterates converge to a root of f. Smale [1986] conjectures that there are no purely iterative generally convergent algorithms for general d. McMullen [1988] proved this for d ≥ 4 and produced a generally convergent iterative algorithm for d = 3. For d = 2, Newton’s method is generally convergent. Doyle and McMullen [1989] have gone on to add to this examining d = 5 in terms of a Galois theory of purely iterative algorithms. In contrast, Steve and I showed in [Shub-Smale, 1986b] that if complex conjugation is allowed, then there are generally convergent purely iterative algorithms even for systems of n complex polynomials of fixed degree in n variables.”

(“On the work of Steve Smale on the theory of computation”, by Michael Shub, in Morris W. Hirsch, Jerrold E. Marsden and Michael Shub, eds., 1993, Springer-Verlag)
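As a concrete aside – my own toy illustration, not a construction of Smale, Shub or McMullen – the Python sketch below treats Newton’s method as a purely iterative algorithm and probes general convergence empirically: the same fixed iteration is applied from many random starting points, and the fraction of runs ending essentially at a root is counted; the example polynomials, trial counts and tolerance are hypothetical choices.

import random

def newton_step(coeffs, z):
    # One Newton iteration for p(z) with coefficients listed from the
    # leading term down: p(z) = coeffs[0]*z^d + ... + coeffs[d].
    d = len(coeffs) - 1
    p = 0.0
    dp = 0.0
    for i, c in enumerate(coeffs):
        if i < d:
            dp = dp * z + c * (d - i)   # Horner evaluation of p'(z)
        p = p * z + c                   # Horner evaluation of p(z)
    return z if dp == 0 else z - p / dp

def fraction_converging(coeffs, trials=2000, iters=200, tol=1e-8):
    # Empirical stand-in for "general convergence": apply the same fixed
    # iteration from random starting points and count how often the
    # iterates end up essentially at a root, detected by |p(z)| < tol.
    hits = 0
    for _ in range(trials):
        z = complex(random.uniform(-3, 3), random.uniform(-3, 3))
        for _ in range(iters):
            z = newton_step(coeffs, z)
        p = 0.0
        for c in coeffs:
            p = p * z + c
        if abs(p) < tol:
            hits += 1
    return hits / trials

# Degree 2: the fraction comes out essentially 1.
print(fraction_converging([1, 0, -1]))       # p(z) = z^2 - 1
# Degree 3: for z^3 - 2z + 2 an attracting 2-cycle (0 <-> 1) traps an open
# set of starting points, so the fraction falls visibly below 1.
print(fraction_converging([1, 0, -2, 2]))

For a quadratic the reported fraction is essentially 1, while for a cubic such as z^3 − 2z + 2 an attracting 2-cycle traps an open set of starting points and the fraction falls visibly below 1 – consistent with the remark in the quote that for d = 2 Newton’s method is generally convergent.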

I recall that the positive result when complex conjugation is permitted was presented by Smale in his graduate course; McMullen’s proof of Smale’s conjecture was presented by Smale at a department colloquium talk, but it came late as I was about to graduate and preoccupied with studying computer science. My lack of time and lack of specialization in the algebraic topology and Galois theory-related topics meant that I did not acquire a full technical understanding of it.

But I remember thinking that there was a Chris Mullin and now came a Curtis McMullen: Oakland NBA basketball team Golden State Warriors’ biggest star player was Chris Mullin, a former U.S. Olympic Dream Team gold medalist from St. John’s University in Queens, New York, while Curtis McMullen’s Ph.D. adviser Dennis Sullivan was the Einstein Chair of Science at Queens College of the City University of New York.

(“MAYOR GIULIANI HONORS FOUR NEW YORKERS FOR EXCELLENCE IN SCIENCE AND TECHNOLOGY”, February 24, 1998, New York City Mayor’s Press Office; and, “Chris Mullin Set for Appearance at Citi Field on Wednesday”, September 2, 2015, St. John’s University)

Subsequently in the 1990s McMullen became a UC Berkeley professor, but in 1997 returned to his alma mater Harvard before he was awarded the Fields Medal at the 1998 ICM, mainly for his research in chaos theory – Smale had been a founder of that field – and in that same year 1997 Mullin left the Warriors for the Indiana Pacers.

(September 2, 2015, St. John’s University; and, “1998 Fields Medalist Curtis T. McMullen”, American Mathematical Society)

McMullen was no doubt very bright. But the results in this direction by him, Smale and Shub also illustrated that, even for computational issues, focusing on a more pure-mathematics direction can lead to more advanced mathematical achievements.

More generally in 1987-1988, Smale and his collaborators took a more pure-mathematics direction by moving from analysis of algorithms to complexity theory.

There was an interesting title discrepancy for Smale’s plenary address at the 1986 ICM at Berkeley: the title was “Algorithms for Solving Equations” as in the ICM proceedings, but “Complexity aspects of numerical analysis” according to a 1987 book on the history of ICMs, quoting The New York Times, both cited earlier.

The New York Times report in August 1986 quoted Smale on the difference between algorithms and complexity:

“… At the International Congress of Mathematicians this month in Berkeley, Calif., signs of the computer were everywhere. The opening plenary speaker, Stephen Smale of the University of California at Berkeley – a pure mathematician with a record of bringing fellow mathematicians into new areas – focused on the developing theory of complexity, which addresses questions of what sorts of problems can and cannot be solved on computers.

The problem-solving abilities of computers, Dr. Smale said, have created a challenge that is philosophical, logical and mathematical. “This subject is now likely to change mathematics itself,” he said. “Algorithms become an object of study, not just a means of solving problems.””

(“MATHEMATICIANS FINALLY LOG ON”, by James Gleick, August 24, 1986, The New York Times)

A computer algorithm is a means of solving problems, and so practical considerations would have to be important. Complexity is about what sorts of problems can or cannot be solved on computers, and so it may involve more fundamental mathematics about the problems and the algorithms – but understandably, studying if something “can” be done does not necessarily require all the practicalities of doing it.

Besides its dismissive title, “Mathematicians finally log on” – reminiscent of the UC Berkeley numerical analysts’ attitudes I have described – The New York Times article referred to Smale as a “pure mathematician with a record of bringing fellow mathematicians into new areas” – obviously, if they had only just gotten into the computer, how could what they did be “applied”?

When Smale first entered the numerical analysis field, his 1981 paper was titled “The fundamental theorem of algebra and complexity theory”, concerned with fundamental mathematics and complexity theory. A few years later, his October 1985 paper was titled “On the efficiency of algorithms of analysis”, concerning algorithms and thus more practically oriented. Now in 1986 his plenary address at the world’s leading gathering of mathematicians had both an “algorithm” title – for the printed proceedings – and a “complexity” title – for the actual talk at the congress.

More modestly on my part, in 1986 I derived computable average error estimates for a type of error estimation in numerical integration, and presented them for the first time in around May at an MSRI seminar – fulfilling a seminar-presentation requirement for the Ph.D. degree. That was also when my adviser first learned of the details of what would become my Ph.D. thesis.

But Smale was, after all, more of a “pure mathematician”: in 1987-1988 he was returning, along with his collaborators Lenore Blum and Michael Shub, to a complexity focus; they embarked on an ambitious project to develop an algebra-oriented, comprehensive complexity theory that would rival, and even encompass, the existing complexity theories in computer science.

Recall as in Part 3 that Lenore’s husband Manuel Blum had been a pioneer in complexity theory and for his achievements would be the 1995 winner of the A. M. Turing Award, computer science’s highest honor. Thus Lenore’s summary of their work was succinct, informative and telling:

“In 1989, Mike Shub, Steve Smale and I introduced a theory of computation and complexity over an arbitrary ring or field R [BSS89]. If R is Z2 = ({0, 1}, +, ⋅), the classical computer science theory is recovered. If R is the field of real numbers, Newton’s algorithm, the paradigm algorithm of numerical analysis, fits naturally into our model of computation.

Complexity classes P, NP and the fundamental question “Does P = NP?” can be formulated naturally over an arbitrary ring R. The answer to the fundamental question depends in general on the complexity of deciding feasibility of polynomial systems over R. When R is Z2, this becomes the classical satisfiability problem of Cook-Levin [Cook71, Levin73]. When R is the field of complex numbers, the answer depends on the complexity of Hilbert’s Nullstellensatz.

The notion of reduction between problems (e.g. between traveling salesman and satisfiability) has been a powerful tool in classical complexity theory. But now, in addition, the transfer of complexity results from one domain to another becomes a real possibility. …”

(“Computing over the Reals: Where Turing Meets Newton”, by Lenore Blum, October 2004, Volume 51, Number 9, Notices of the American Mathematical Society)

As Lenore put it in 2004, their theory cut across classical complexity theory of computer science, Newton’s method in numerical analysis, and the complexity aspect of classical mathematics – David Hilbert’s algebraic geometry a century earlier.

Lenore also pointed out that once a certain “reduction” relationship between two problems is established, a rich body of significant mathematical results in one domain can be transferred to the other.
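For a more tangible picture of “computation over a ring”, here is a toy sketch of my own in Python – only a cartoon of the Blum-Shub-Smale idea, not the formal definition – in which a program’s registers hold ring elements exactly, the computation nodes apply the ring operations, and a branch node tests a sign condition; the instruction names and the example program are hypothetical.

# A toy "machine over a ring": a program is a list of instructions whose
# registers hold ring elements exactly (Python floats stand in for exact
# real numbers here); computation nodes apply +, -, *, /, and a branch
# node tests a sign condition.  This is only a cartoon of the BSS model.

def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":
            i, j, k = args
            registers[i] = registers[j] + registers[k]
        elif op == "sub":
            i, j, k = args
            registers[i] = registers[j] - registers[k]
        elif op == "mul":
            i, j, k = args
            registers[i] = registers[j] * registers[k]
        elif op == "div":
            i, j, k = args
            registers[i] = registers[j] / registers[k]
        elif op == "jump_if_nonneg":        # exact sign test, the branch node
            i, target = args
            if registers[i] >= 0:
                pc = target
                continue
        elif op == "halt":
            break
        pc += 1
    return registers

# Example over the reals: one Newton step for f(x) = x*x - a, i.e.
# x <- x - (x*x - a)/(2x), written out as machine instructions.
program = [
    ("mul", 2, 0, 0),    # r2 = x*x
    ("sub", 2, 2, 1),    # r2 = x*x - a
    ("add", 3, 0, 0),    # r3 = 2x
    ("div", 2, 2, 3),    # r2 = (x*x - a)/(2x)
    ("sub", 0, 0, 2),    # x  = x - r2
    ("halt",),
]
registers = run(program, [1.5, 2.0, 0.0, 0.0])   # r0 = x = 1.5, r1 = a = 2
print(registers[0])      # about 1.4167, one Newton step toward sqrt(2)

Taking the ring to be {0, 1} with arithmetic mod 2 would collapse this to a Boolean machine, which is the sense in which, as Lenore wrote, the classical computer science theory is recovered.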

By 2004, Lenore was the Distinguished Career Professor of Computer Science at Carnegie Mellon University, where Manuel and their son Avrim were also computer science professors:

“Professor Mom takes the office on one side of him, Professor Dad takes an office on the other. Son is caught up in parents’ well-meaning meddling both at work and at home. Hilarity ensues.

Think “Everybody Loves Avrim.”

The recruitment of Manuel and Lenore Blum, once happily ensconced in Berkeley, Calif., to the Carnegie Mellon computer science school two years ago was considered a coup, capping almost 20 years of effort.

A measure of their impact came last month, when Carnegie Mellon garnered a $24 million share of the National Science Foundation’s $156 million Information Technology Research program. Of the 14 Carnegie Mellon projects to receive funding, the largest was a $5.5 million award for Aladdin, which includes Manuel, Lenore and Avrim Blum among its investigators.”

(“Dad, mom join son to form a potent computer science team at CMU”, by Byron Spice, October 21, 2001, Pittsburgh Post-Gazette)

Such a happy family finale would be incomplete without a story about the humbler beginnings:

“The romance began in Caracas, Venezuela, where Manuel and Lenore grew up.

Manuel’s parents had fled Europe ahead of the Holocaust and, unable to get into the United States, settled in Caracas.

His father, a jeweler and expert watchmaker, came from the town of Chernovtsy, then part of Romania, but in a border region that often changed hands (it is now part of Ukraine). The joke was that the people of Chernovtsy spoke the language of whoever won the last war. Manuel grew up in Caracas speaking German.

“I thought if I could only understand how the brain works, I could be smarter,” he said. It set him on a course that would lead him to the field of computer science. But when he left Caracas for the United States in the mid-1950s to attend Massachusetts Institute of Technology, no such field existed. So he pursued his interest in the electrical activity of the brain by studying electrical engineering, then switched to neurobiology.

Lenore didn’t reach Caracas until she was 9, moving with her family from New York City in the 1950s. Unfamiliar with Spanish and initially unhappy in school, she persuaded her parents to let her take a year off. When she returned to school, her class was being taught long division. She found it fascinating, beginning her lifelong love of mathematics.

Following the advice of a high school teacher, she decided to study architecture, not mathematics, in college. Unable to gain admission to MIT, where Manuel was a graduate student, Lenore headed for the Carnegie Institute of Technology.

But after her first year at Carnegie Tech, she realized that math remained her true interest. She switched majors and ended up in an experimental mathematics class taught by Alan Perlis, a pioneer who would establish the computer science department at Carnegie Tech and, later, at Yale University.

…”

(Byron Spice, October 21, 2001, Pittsburgh Post-Gazette)

So it had begun in Caracas, Venezuela, where Smale later began his one-year sabbatical in July 1984, and now it has been happiness ever after for the Blums. I note that from Venezuela, Manuel went to a U.S. school where John Nash happened to teach, though in a different field, and Lenore went to Nash’s undergraduate alma mater, likewise in a different field.

But regarding me, as in Part 1, by 1999 Lenore refused to help my academic job prospects, stating:

“If you can program, you don’t need to be in the academia”.

(January 29, 2013, Feng Gao’s Posts – Rites of Spring)

Back in 1987-1988, my auditing of computer science graduate courses was more practically oriented, and did not include the complexity theory course taught by none other than Manuel Blum. Fortunately, the two algorithm courses taught by Richard Karp did provide reasonable coverage of the basic framework and concepts of complexity. As mentioned earlier, I then taught the corresponding graduate courses at UBC in 1988-1992. At UBC, the computer science complexity theory courses were the prerogative of our boss Maria Klawe’s husband, the prominent theoretical computer scientist Nick Pippenger.

By the early fall of 1987 I had also produced my first results in computer science research, on the analysis of intrinsic communication costs in parallel computation for some numerical computation algorithms, and so from that point on had a second topic for seminar presentation.

With Lenore’s networking help and arrangement, in October 1987 I gave seminar presentations at CUNY Queens College, where Mike Shub had taught for many years, at the IBM Thomas J. Watson Research Center, where Mike had by then been a scientist for about 2 years, and at the University of Toronto’s computer science department, which then offered me a 2-year postdoctoral fellowship in affiliation with both the theoretical computer science group and the numerical analysis group.

The Toronto postdoc offer was the second job offer for me, after Baltimore County’s earlier in 1987, but the first from a major center of academic research. Berkeley math and CS professor Beresford Parlett, whom I respected as a mentor like Lenore, had suggested, after listening to my presentation at his numerical analysis seminar, that I do 18 months of postdoctoral research at MSRI, whereupon I told him of my considerations concerning my visa problem.

In December 1987 or January 1988, at James Curry’s invitation I gave seminar presentations at Boulder’s math and CS departments, and met also with computational scientists at the university’s computing center and at the Colorado School of Mines.

My interest in quantum physics and divergence had not vanished, and so on my own initiative I visited the retired physicist Professor Robert Richtmyer in his office; his book, cited earlier, had been a major reference when I explored quantum mechanics in 1984-1985. Not unlike Berkeley’s Eyvind Wichmann, Richtmyer did not give me any real lead on the physics; instead he pointed to a book on his desk, “Difference Methods for Initial-Value Problems”, and asked if I knew it.

(Robert D. Richtmyer, Difference Methods for Initial-Value Problems, 1957, Interscience Publishers; and, “Difference Methods for Initial-Value Problems (Robert D. Richtmyer and K. W. Morton)”, by Burton Wendroff, July 1968, Volume 10, Number 3, SIAM Review)

I had noticed Richtmyer’s numerical analysis book in my past library literature searches but had not really studied it, and so answered that my Ph.D. work also involved the use of divided differences – for computable average error estimates.

So when the UBC computer science department’s acting head, the numerical analyst Uri Ascher mentioned in Part 3, brought me there for a formal interview around mid-February, he emphasized that he had heard about me both from a recent trip of his to Boulder and from his contacts at Toronto. The other UBC numerical analyst, the former computer science head Jim Varah cited in Part 3, arranged with David Kirkpatrick to secure funding for the fixed-term position offered to me by Uri.

While visiting UBC, I was also brought over by the University of Victoria’s computer science department for an unofficial interview there.

In around early May, when I was formally interviewed by Simon Fraser University’s school of computing science, which had a strong theoretical computer science group but did not have any numerical analyst, among the professors attending my seminar presentation was Bob Russell, a numerical analyst in the mathematics department.

Chatting at the lectern after my talk, Bob pointed to a spot in my resume where my paper, based on my Ph.D. thesis and submitted through Babuska to a numerical analysis journal, was listed as “to appear”, and asked when it would appear. I replied that I had not heard from the journal after the submission. Bob said then it should not be described as “to appear”; I acknowledged he was right.

I was offered a tenure-track position, and was expected to open numerical analysis courses within the school of computing science. After returning to Berkeley, I phoned school director James Delgrande – previously cited in Part 3 – to apologize for the infraction in my resume, and he replied that they liked what they saw so it did not matter.

Fortunately, my paper was later accepted and was published in October 1989.

Like Babuska’s, Bob Russell’s expertise included the finite element method. He was often cited as R. D. Russell, including in a SIAM 45th-anniversary meeting session in July 1997 at Stanford, mentioned earlier, where presentations were given by Bob, Berkeley’s Keith Miller, Miller’s former Ph.D. student and a UVic graduate, LANL’s Andrew Kuprat, and my former fellow Berkeley math Ph.D. student Guojun Liao.

After my arrival in Vancouver, Bob, who had a University of New Mexico math Ph.D., told me he was the son of a LANL scientist – a physicist if I am not mistaken.

Locally in the Vancouver region, Bob had often co-authored papers with UBC’s Uri Ascher – as in Part 3, Uri had once been a scientist at the Army Mathematical Research Center at Wisconsin-Madison, where Carl de Boor was.

So, all 3 Canadian job offers were in relation to both theoretical computer science and numerical analysis.

In Summer 1988 I got to be the teaching assistant for a 3rd-year numerical analysis course, after years of TA work for introductory calculus classes, including under professors Bob Anderson, David Gale and Ken Ribet. The instructor was Nate Whitaker, Alexandre Chorin’s African-American Ph.D. student, who had graduated and, as I recall, was doing postdoctoral research at Lawrence Berkeley National Laboratory while getting ready to become a tenure-track assistant professor at the University of Massachusetts Amherst.

It was a last-minute practice for me, as my scheduled UBC teaching duty in Fall 1988 would include a graduate course on numerical linear algebra (matrix computation), the specialty of former department head Jim Varah who by this time was the director of CICSR, the university-level Centre for Integrated Computer Systems Research.

During that summer Prof. Beresford Parlett, Berkeley’s expert on numerical linear algebra, gave me a month’s “postdoc” support partly because I was writing a short research paper with him.

But when I went to Bernice Gangale, who, assisted by Jeanne Coffee, handled administrative support for the Center for Pure and Applied Mathematics, to finalize the paperwork for the financial support, I was told that the center director, Alberto Grünbaum, had said my work should be categorized as “postgraduate” and the job description as “postgraduate researcher”.

It was only a month, so the title mattered little, and I understood that it wasn’t a serious postdoctoral fellowship in duration, work or pay. But when I had just received a doctoral degree, wouldn’t “postdoctoral” make sense? My classmate friend Mei Kobayashi had worked for a year at Harvard after her 1981 Princeton bachelor’s degree, and that, I knew, was “postgraduate” work.

Clearly, to the critically demanding I was short of the postdoctoral research and visiting faculty experiences.

Months before the May graduation commencement, Smale told me that he had nominated me to the department for the Bernard Friedman Memorial Prize in Applied Mathematics for outstanding graduate student research.

I did not pin much hope on that, knowing the numerical analysts’ dismissiveness of Smale’s work and Grünbaum’s close relationship with them. As discussed, back in 1984 I had not bothered to apply to change my Ph.D. candidacy to my original declaration of “applied mathematics”, after it had been categorized as “mathematics”. Also, for the committee to approve my thesis, consisting of two professors in addition to the adviser, I had sought the consent of Bob Anderson and of Morris Hirsch, Smale’s long-time friend since the anti-war days cited in Part 2, rather than of the numerical analysts.

The winner of the Bernard Friedman Prize that year was Yong-Geun Oh, a classmate from South Korea, with a Ph.D. thesis titled “Nonlinear Schrödinger Equations with Potentials: Evolution, Existence, and Stability of Semiclassical Bound States”. The title indicated relevance to quantum mechanics, while the type of issues studied was similar to what I had learned under Kato. I thought it must be close to Grünbaum’s interest, although Oh’s adviser was Prof. Alan Weinstein, a former Ph.D. student of Shiing-shen Chern’s, and Oh’s collaborators included Prof. Jerrold Marsden – later an editor, along with Hirsch and Shub, of the proceedings of the Smalefest conference cited earlier.

(“Stability of Semiclassical Bound States of Nonlinear Schrödinger Equations with Potentials”, by Yong-Geun Oh, 1989, Volume 121, Number 1, Communications in Mathematical Physics; and, “Yong-Geun Oh”, Mathematics Genealogy Project)

It wasn’t the first time Smale had recommended me for an honor, though each time it was not to be.

Back in 1986, when I went to ask if he could nominate me for the Sloan doctoral dissertation fellowship, Smale responded that he had already nominated Joel Friedman. I said that I should have been considered if only because of my seniority as his student, but that since he had decided, it was okay with me. A day later Smale left a note in my mailbox asking me to come to his office; there he said that, for technical reasons only and not as a precedent, he would also nominate me. In the end neither of us received it, but Joel soon graduated, in 1987, for the Princeton job.

Unlike me or Joel Friedman, but like David Witte, the Bernard Friedman Prize winner Yong-Geun Oh started his academic career with years of postdoctoral research and short-term teaching at top universities, including a postdoctoral year at MSRI, 2 years of instructorship at NYU’s Courant Institute and a year of membership at Princeton’s Institute for Advanced Study, before starting as a tenure-track assistant professor at Wisconsin-Madison – where Witte could have settled had it not been for a higher ambition, as discussed – and where he became a full professor in 2001.

(Fall 1997, Department of Mathematics, University of Wisconsin; and, “Yong-Geun Oh: Brief Narrative Research Resume”, Center for Geometry and Physics, Institute for Basic Science, Pohang University of Science and Technology)

My choice of an immediate faculty job leaned toward an independent academic career but was a compromise: goal-oriented in choosing a fixed-term position at the academically stronger UBC partly because of the arrival of Klawe and Pippenger, as suggested by Karp, over a tenure-track one at SFU, yet with some disappointment that it wasn’t a scientifically more advanced route like at Canada’s academically leading University of Toronto.

The scientific prominence of the Berkeley professors I studied under over 6 years certainly reaffirmed and strengthened my sense of optimism about possibilities and potentials: Tosio Kato was a recent winner of the Wiener Prize in Applied Mathematics, Andrew Majda was in the process of taking up a professorship at the prestigious Princeton University, and of course Steve Smale was a Fields Medalist; as mentioned earlier, among my first-year professors was also the recent Steele Prize winner Gerhard Hochschild, besides Kato.

That prize-winning prominence was all the more groundbreaking in that both Smale and Kato were the first Berkeley winners of their respective prizes, which were among the leading prizes in mathematics.

In the 1980s, the leading mathematical prizes awarded by the mathematical community were the Fields Medal of the International Mathematical Union, the Steele Prizes of the American Mathematical Society, and the Wiener Prize and Birkhoff Prize awarded jointly by AMS and the Society for Industrial and Applied Mathematics.

The International Mathematical Union, which organizes the International Congress of Mathematicians every 4 years with awarding of prizes, describes its current set of prizes as follows:

“… The Fields Medal recognizes outstanding mathematical achievement. The Rolf Nevanlinna Prize honors distinguished achievements in mathematical aspects of information science. The Carl Friedrich Gauss Prize is awarded for outstanding mathematical contributions that have found significant applications outside of mathematics. The Chern Medal is awarded to an individual whose accomplishments warrant the highest level of recognition for outstanding achievements in the field of mathematics.

The Fields Medal was first awarded in 1936, the Rolf Nevanlinna Prize in 1982, and the Carl Friedrich Gauss Prize in 2006. The Chern Medal was awarded for the first time in 2010.”

(“IMU Awards and Prizes”, International Mathematical Union)

As described, in 1936 the Fields Medal became the first and only IMU prize, until 1982 when the Rolf Nevanlinna Prize was added for mathematical aspects of information science; more recently the Gauss Prize was added in 2006 in relation to applying mathematics to other subjects, and the Chern Medal in 2010 as “the highest level of recognition” in mathematics – without the Fields Medal’s age requirement of not exceeding 40 in the year of the award.

The Fields Medal was established through the efforts of the Canadian mathematician J. C. Fields, who had organized the 1924 ICM:

“At the 1924 International Congress of Mathematicians in Toronto, a resolution was adopted that at each ICM, two gold medals should be awarded to recognize outstanding mathematical achievement. Professor J. C. Fields, a Canadian mathematician who was Secretary of the 1924 Congress, later donated funds establishing the medals, which were named in his honor. In 1966 it was agreed that, in light of the great expansion of mathematical research, up to four medals could be awarded at each Congress.”

(“Fields Medal Details”, International Mathematical Union)

The IMU list of Fields Medal recipients shows that Stephen Smale, one of the four 1966 recipients, was the first for UC Berkeley. Per the IMU list, each of Smale’s co-recipients was also the first for his university: Paul Joseph Cohen of Stanford University, Michael Francis Atiyah of Oxford University and Alexander Grothendieck of the University of Paris; but as detailed in Part 2, Smale had stood out as a left-wing student activist during the McCarthy era of the 1950s at the University of Michigan, and then as a Berkeley anti-Vietnam War movement leader in 1965.

Tosio Kato enjoyed a similar status as Berkeley’s first with the Wiener Prize in Applied Mathematics.

According to SIAM, the Norbert Wiener Prize and the George David Birkhoff Prize are the only two “highest” prizes, awarded for:

“an outstanding contribution to applied mathematics in the highest and broadest sense”.

(“Prizes, Awards and Lectures Sponsored by SIAM”, Society for Industrial and Applied Mathematics)

Both prizes have been jointly awarded by SIAM and AMS.

The SIAM list of recipients shows the Wiener Prize was first awarded in 1970, and has been awarded every several years; its second winner, in 1975, was Peter Lax of New York University – Chorin’s former Ph.D. adviser mentioned earlier.

Kato and co-recipient Gerald Whitham were winners of the Wiener Prize’s third awarding, in 1980, and Kato was the first UC Berkeley recipient.

The other leading prizes awarded by the mathematical community were the Leroy P. Steele Prizes. Per AMS’s descriptions, among its prizes the Steele Prizes have been the only ones for mathematics in general and open to all mathematicians, awarded for lifetime achievement, research, or exposition; the other prizes are specialized ones, such as the Oswald Veblen Prize in Geometry that Smale had received in 1964 as in Part 2, the Delbert Ray Fulkerson Prize “in the area of discrete mathematics”, and the Ruth Lyttle Satter Prize in Mathematics for “mathematics research by a woman in the previous six years”. Beginning in 1988 there has also been a National Academy of Sciences Award in Mathematics.

(“Prizes and Awards”, American Mathematical Society)

My first-year graduate algebra professor Gerhard Hochschild was a Steele Prize winner in 1980 for his research, but unlike Kato and Smale, Hochschild was not the first UC Berkeley recipient of that prize.

The Steele Prizes were first awarded in 1970. The second winner, in 1971 for a 1970 paper, was Phillip Griffiths of Princeton University, who had formerly been a UC Berkeley fellow and faculty member in 1962-67.

(“Phillip A. Griffiths: Curriculum Vitae”, School of Mathematics, Institute for Advanced Study)

The honor of the Steele Prizes’ first UC Berkeley recipient went to Professor Hans Lewy in 1979 for his research, one year before Hochschild.

Lewy’s award was like Smale’s, politically speaking. Lewy had been one of 3 tenured math professors among 29 Berkeley professors and 2 UCLA professors fired in the early 1950s in the McCarthy era due to their refusal to sign the University of California’s Loyalty Oath to declare that they were not and had never been Communists:

“Fear of communism was being developed and fanned for political purposes by a junior senator from Wisconsin, Joseph McCarthy, in the late 1940s. McCarthy eventually formed a committee that went to universities to question professors concerning their connection to the Communist Party. More widely known are the inquisitions of Hollywood actors, but it extended to all levels of public influence. McCarthy was spreading fear of educators as well.

Wanting to show proof of loyalty, Robert Gordon Sproul, then President of the University of California, proposed the Loyalty Oath which would have all professors declare they were not and never had been communists.

Some 29 tenured professors from UC Berkeley and two from UCLA (one of whom later became a UC President) refused to sign. They declared that political affiliation should not be required to be made public, and the Communist Party was a legal party in the US. It was a matter of principle.

The Regents of the time mandated that all professors had to sign, or be fired. In the Mathematics Department, three professors refused: John Kelley, Hans Lewy, and Pauline Sperry. Another professor, D.H. Lehmer, attempted to avoid signing by taking a leave of absence to take a federal job at UCLA as Director at the Institute for Numerical Analysis. However, he was told he needed to sign before he could go on the payroll. With five children to support, he eventually signed but with objection.

Finally, in 1952-53 the California Supreme Court ruled the Loyalty Oath to be unconstitutional. …”

(“Loyalty Oath Controversy: Interview with Leon Henkin”, Fall 2000, Vol. VII, No. 1, Berkeley Mathematics Newsletter)

Lewy’s prior experience as a Jewish mathematician fleeing the rise of Nazism in Germany played a role in his resolve to refuse what he viewed as a totalitarian tendency in the Loyalty Oath requirement:

“Hans Lewy was born on October 20, 1904 in Breslau, Germany (now Wroclaw, Poland)…

Göttingen, in 1922 when Hans Lewy matriculated, was among the premier mathematical establishments in the world. With Klein and Hilbert still in residence, as well as Runge, Prandtl, Landau, and Emmy Noether, Courant, of course, and many others {Reid, 1970} it radiated irresistible scientific excitement. …

Friedrichs also arrived in 1922 and their lifelong friendship took hold immediately. …

He completed his thesis in 1926 with Courant and became, together with Friedrichs, Courant’s Assistant and a Privatdozent. …

Lewy left Germany quite soon after Hitler assumed power in 1933. He went first to Italy and then to Paris where Hadamard had managed to obtain for him a year’s support. … In Paris, on the recommendation of Hadamard, he was offered a one to two year position at Brown, funded by the Duggan Foundation. In the fall of 1933, Lewy was in Providence. …

On the invitation of G.C. Evans, Lewy went to Berkeley in 1935. Courant, visiting Berkeley in 1932, had spoken enthusiastically of Lewy’s work on that occasion. Courant himself was, in fact, offered a position at Berkeley. …

With the outbreak of the war, Lewy took flying lessons and obtained a solo license in the hope of offering his services, but was soon called to Aberdeen Proving Grounds as part of the University of California contingent. He also worked half-time with the Office of Naval Research in New York. Here he became interested in water waves and the Dock problem, resuming his collaboration with Friedrichs…

During the Loyalty Oath controversy in the state of California, Lewy was part of the group dismissed for refusal to sign the oath. Having seen Fascism at first hand in Italy, and watched its rise in Germany, he was wary of cooperating with any totalitarian tendencies in his new country. … He was on the faculty at Harvard in the fall of 1952 and then at Stanford in 1952 and 1953. … In the settlement of this dispute by the courts, the professors were reinstated and Lewy returned to Berkeley. …”

(“Hans Lewy: A Brief Biographical Sketch”, by D. Kinderlehrer, in David Kinderlehrer ed., Hans Lewy Selecta, Volume 1, 2002, Birkhäuser)

Lewy’s stand against McCarthyism led not only to his firing by UC but also to the German consul general’s refusal to extend his passport:

“A new exhibition at UC Berkeley’s Magnes Collection of Jewish Art and Life tells the stories of more than 70 scholars, writers and artists — many of them Jewish, related to Jews or political dissidents — who escaped the rise of Nazism and fascism in Europe in the 1930s and ‘40s and brought their talents and dreams with them to UC Berkeley.

During an opening reception, University of California President Janet Napolitano praised the exhibition’s remarkable story and called the collaboration “a priceless learning opportunity.”

… The exhibit features, for example, war-rations books and anonymous hate mail sent to musicologist Alfred Einstein. Visitors can also see a German consul general’s rejection of a request to extend a passport, in response to math professor Hans Lewy’s three-year suspension when he refused to sign the campus Loyalty Oath. Family photos and a 1985 certificate, awarding Austria’s Medal of Honor to Max Knight, are also on view.”

(“Intellectual migration from fascist Europe to Berkeley”, by Kathleen Maclay, February 4, 2014, University of California)

So it was fitting that a Jewish mathematician with a math Ph.D. from the world-renowned University of Göttingen under the mathematical patriarch Richard Courant, having escaped Nazi Germany and participated in U.S. military research during World War II, took a stand against McCarthyism and later, in 1979, became the first Berkeley winner of the American Mathematical Society’s top general prize – in line with Steve Smale’s honor as Berkeley’s first Fields Medal winner in 1966.

As in Part 2, Courant, founder of NYU’s Courant Institute, and another mathematical patriarch, Oswald Veblen of Princeton’s Institute for Advanced Study, were once featured with the young mathematician John Nash in a Fortune magazine article in the summer of 1958 – only months before Nash’s miserable attempt to start a world peace movement at MIT that would end with Nash’s psychiatric committal.

As quoted above, in the early 1930s UC Berkeley had tried to recruit Courant, who did not move there but recommended his student Hans Lewy enthusiastically. Lewy was only one of many young mathematicians who had worked under Courant at Göttingen, in a manner that later drew criticisms about Courant:

“As a Privatdozent, whose meager fees were paid by his students, usually few in number, Lewy jokingly reported in 1928 a “lucrative” semester: “Now when people ask why I went into mathematics I can answer, ‘For the money!’”

In addition to teaching, he served with Friedrichs as one of Courant’s many assistants. Although in the future Courant was often to be criticized for exploiting such younger men, Lewy always considered the time he spent as assistant exceedingly valuable.”

(“Hans Lewy, 1904-1988”, by Constance Reid, in David Kinderlehrer ed., Volume 1, 2002, Birkhäuser)

It was scientific prestige, but meager pay, to be a Privatdozent and Prof. Richard Courant’s assistant at the University of Göttingen.

Lewy’s specialty was in partial differential equations, the field in which I studied under Tosio Kato and in which my officemate Steve Pomerantz received his Ph.D. under Murray Protter, and related to which my classmate Robert Rainsberger received his Ph.D. under Heinz Cordes, as mentioned earlier.

After his passing in 1988, Lewy’s colleagues and peers Protter and Kato, along with colleagues John Kelley and Derrick Lehmer who had in 1950 shared his opposition to McCarthyism, wrote about his important contributions:

“Professor Lewy was known as a person of integrity and strong moral principles. In 1950, he refused to sign a special loyalty oath imposed on the faculty by the University of California’s Board of Regents; for this reason he and a number of other professors were fired. They were later vindicated and reinstated when the courts determined that taking the oath would have violated their civil rights.

Hans Lewy was one of the great mathematicians of the twentieth century; he showed unparalleled originality in his work, which was characterized by the unexpected. In 1957 he startled the mathematical world by exhibiting a simple partial differential equation which has no solution at all, thus changing the thinking of the experts in the field.

In another of his best known works, written in 1928 with Courant and Friedrichs, he developed criteria for determining conditions which guarantee the stability of numerical solutions of certain classes of differential equations. This work turned out to be crucial later for the use of high speed computers in solving such equations; thousands of research articles have been written on numerical solutions of differential equations based on his pioneering work.

While still in Göttingen, he published a series of fundamental papers on partial differential equations and the calculus of variations. He solved completely the initial value problem for general nonlinear hyperbolic equations in two independent variables. … He proved the well-posedness of the initial value problem for wave equations in what is now called Sobolev spaces two decades before these spaces became a common tool for specialists. …”

(“Hans Lewy, Mathematics: Berkeley: 1904-1988 Professor Emeritus”, by M. Protter, J.L. Kelley, T. Kato and D.H. Lehmer, 1988, University of California)
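The 1928 stability criterion mentioned in the obituary is what numerical analysts now call the Courant-Friedrichs-Lewy (CFL) condition. As a hedged illustration of my own – the model equation and the scheme are my choices, not anything taken from the obituary – the Python sketch below solves the one-dimensional advection equation u_t + a·u_x = 0 with an explicit upwind difference scheme, and refuses to take a step unless the CFL number |a|·Δt/Δx is at most 1.

def upwind_advection(u0, a, dx, dt, steps):
    # Explicit first-order upwind scheme for u_t + a*u_x = 0 with a > 0,
    # on a periodic grid.  The scheme is stable only when the CFL number
    # |a|*dt/dx is at most 1 -- the Courant-Friedrichs-Lewy condition.
    cfl = abs(a) * dt / dx
    if cfl > 1.0:
        raise ValueError("CFL number %.2f exceeds 1: the scheme would be unstable" % cfl)
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]   # u[-1] wraps around (periodic)
    return u

# A hat-shaped profile advected to the right; CFL number = 1.0*0.005/0.01 = 0.5.
dx, dt, a = 0.01, 0.005, 1.0
u0 = [max(0.0, 1.0 - abs(i * dx - 0.3) / 0.1) for i in range(100)]
u = upwind_advection(u0, a, dx, dt, steps=40)
print(max(u))   # the peak has moved right and been smeared slightly by numerical diffusion

Choosing Δt larger than Δx/|a| violates the condition, and without the check the computed values would blow up within a few dozen steps, which is the practical face of the stability theory referred to in the obituary.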

While studying under Kato I regularly, and later from time to time, attended the weekly seminar on partial differential equations and became familiar with this group of professors who were quite senior, and very genial and affable.

Quite a few times I mentioned Hans’s research to my roommate Kezheng. Sometimes when Kezheng mentioned the name “Hans” I wasn’t sure which one he was referring to, because I also knew another: Kezheng’s auto insurance agent, who became my agent as well; soon after my arrival at Berkeley, Kezheng taught me driving and added my name as a driver to the Farmers Insurance auto policy for his old Ford, and his agent Hans was a very friendly man.

In the summer of 1983, with Kezheng and me as the drivers, we and several Chinese visiting scholars took a tour of the Western U.S., sightseeing at places like Salt Lake City, Yellowstone, the Grand Canyon, and Death Valley, where our rental car had a flat tire as recounted in my February 2013 blog post.

(February 28, 2013, Feng Gao’s Posts – Rites of Spring)

One of those visiting scholars, Zhujia Lu (陸柱家), by the early 2000s when I visited Beijing was the director of scientific research at the Academy of Mathematics and Systems Science in the Chinese Academy of Sciences.

Another of the visiting scholars, Zeke Wang (王則柯), had just finished a visiting scholarship at Princeton where he did research with Prof. Harold Kuhn on a topic inspired by Smale’s work on finding zeros of complex polynomials. I hadn’t chosen Smale as my Ph.D. adviser at that point but Kuhn – the same last name as the philosopher Thomas Kuhn, once a Princeton professor also – was well-known for the mathematical work he had done with another Princeton professor, Albert Tucker – as in Part 2 John Nash’s Ph.D. adviser.

(“On the cost of computing roots of polynomials”, by Harold W. Kuhn, Zeke Wang and Senlin Xu, February 1984, Volume 28, Issue 2, Mathematical Programming; and, “Karush–Kuhn–Tucker conditions”, Wikipedia)

At the time I did not know about Tucker’s former Ph.D. student John Nash, but my father was a former graduate student of Zeke’s father, SYSU Chinese literature Professor Wang Qi (王起), mentioned in my Chinese blog posts in February and June 2011. At Berkeley was when I met Zeke, as he had gone to study and then teach at Peking University since the 1960s, and was only recently moving to Sun Yat-sen University – known in China as Zhongshan University for Sun Yat-sen’s more common name, Sun Zhongshan. 

(November 22, 2010, Feng Gao’s Blog – Reflections on Events of Interest; “忆往昔,学历史智慧(三)——文革“破旧立新”开始的记忆”, February 20, 2011, and, “忆往昔,学历史智慧(四)——青少年时代的部分文化熏陶”, June 22, 2011, 高峰的博客 – A refreshed feeling)

Though the Farmers Insurance agent Hans was of a considerably larger build and a more authoritative personality than Hans Lewy, their friendliness reminded me of old-time farmers, or perhaps, in the case of Lewy as described by the mathematical biographer Constance Reid, a gardener:

“The Lewys had one son, Michael, who is now also a mathematician.

Once he became a family man and a householder, Lewy discovered the joys of home repair, woodwork and gardening. He took pride in the fact that, unlike professors he had known in his youth, he could – and would – do things with his hands. … Because of his propensity for seeing both sides of a question, he could not make life and death decisions even for plants and left undisturbed “volunteers” that other gardeners would have eradicated. …

He was often joyously unrestrained and enthusiastic. …

The most dramatic event of his more than fifty years at Berkeley occurred during the McCarthy era when the Regents of the University decided that members of the faculty should sign a loyalty oath … Lewy recognized the threat to academic freedom and refused to sign… He was not, however, doctrinaire on the subject and counseled others, especially younger colleagues, that they must look at their weapons before they decided to fight. He himself was well armed, having resolved when he had had to leave Göttingen that he would save a year’s salary as soon as possible in case he ever had to leave another job. … he was one of the few who managed to remain on friendly terms with those who had different views. Ultimately the California Supreme Court declared the oath unconstitutional and ordered the University to reinstate the non-signers with back pay and privileges. …”

(Constance Reid, in David Kinderlehrer ed., Volume 1, 2002, Birkhäuser)

That sounds like Hans the professor I knew, joyful, intellectual, hardworking, principled yet flexible, and willing to counsel others with his wisdom – except that back at Göttingen his meager income as a Privatdozent and Courant’s assistant probably wouldn’t have let him save much.

I wonder whether, and what, Lewy may have “counselled” the young John Nash in 1957, when Nash seriously questioned quantum mechanics and had an argument with the physicist Robert Oppenheimer at the IAS in Princeton – according to Sylvia Nasar’s book, A Beautiful Mind, Lewy might have been among the persons Nash had discussions with:

“Nash left the Institute for Advanced Study on a fractious note. In early July he apparently had a serious argument with Oppenheimer about quantum theory – serious enough, at any rate, to warrant a lengthy letter of apology from Nash to Oppenheimer written around July 10, 1957… After calling his own behavior unjustified, Nash nonetheless immediately justified it by calling “most physicists (also some mathematicians who have studied Quantum Theory) . . . quite too dogmatic in their attitudes,” complaining of their tendency to treat “anyone with any sort of questioning attitude or a belief in “hidden parameters” . . . as stupid or at best a quite ignorant person.”

“I embarked on [a project] to revise quantum theory,” Nash said in his 1996 Madrid lecture. “It was not a priori absurd for a non-physicist. Einstein had criticized the indeterminacy of the quantum mechanics of Heisenberg.”

He apparently had devoted what little time he spent at the Institute for Advanced Study that year talking with physicists and mathematicians about quantum theory. Whose brains he was picking is not clear. Freeman Dyson, Hans Lewy, and Abraham Pais were in residence at least one of the terms. …

It was this attempt that Nash would blame, decades later in a lecture to psychiatrists, for triggering his mental illness – calling his attempt to resolve the contradictions in quantum theory, on which he embarked in the summer of 1957, “possibly overreaching and psychologically destabilizing.””

(Sylvia Nasar, 1998, Simon & Schuster)

You see, according to Nash, “most physicists” believed quite dogmatically that quantum mechanics as the theory of particle physics did not need other “hidden parameters”; “some mathematicians” like John von Neumann then catered to these physicists’ worldview by making efforts to prove that no “hidden parameters” exist – a claim I had serious doubts about as discussed earlier.

As in Part 2, while formally at the Institute for Advanced Study in the academic year 1956-1957, Nash spent most of his time at the NYU Courant Institute instead; his looking into quantum theory was noted in the 1958 Fortune article cited earlier, along with his stock-market prediction hobby:

“He is now an associate professor at M.I.T. and is looking into quantum theory. He also applies mathematics to one of his hobbies: stock-market predictions.”

(“This 1958 Fortune article introduced the world to John Nash and his math”, by Stephen Gandel, May 30, 2015, Fortune)

Perhaps I should have brought my questions about the mathematics of quantum physics to Prof. Kato and his Berkeley peers in partial differential equations: their mathematics was close to physics and Kato himself had a physics Ph.D.

Or perhaps it was lucky that I didn't. As Nash later insinuated in 1997, quoted earlier from Nasar's book, even without getting into politics as he did in 1958 – and I did in 1992-1993 – that kind of scientific ambition could already be grounds for mental-health concern!

In Nash’s mind maybe, but not in mine.

But the professors who taught me, or whom I got to know, in my first year at Berkeley were overwhelmingly of the senior generation. While the computer science professors whose graduate courses I audited in 1987-1988, namely Jitendra Malik, Alvin Despain, David Messerschmitt, John Ousterhout, and Richard Karp, are all living today, my first-year graduate math course professors – except the younger Polish visiting professor I have not re-identified – namely William Bade, Gerhard Hochschild, Abraham Seidenberg, and Tosio Kato, have all passed away.

The first to go, at an intriguing time from my vantage point, was Seidenberg, in Milan, Italy, on May 3, 1988, only weeks short of his 72nd birthday on June 2:

“The distinguished mathematician and historian of mathematics Abraham Seidenberg, who taught at Berkeley for 42 years, died in Milan, Italy, on May 3, 1988. He had been born in Washington, D.C., on June 2, 1916 and received his B.A. degree at the University of Maryland in 1937 and his Ph.D. at Johns Hopkins in 1943 before joining the Department of Mathematics at Berkeley as Instructor in 1945. He became Professor in 1958 and Professor, Emeritus in 1987. His career included a Guggenheim Fellowship, Visiting Professorships at Harvard and at the University of Milan, and numerous invited addresses, including several series of lectures at the University of Milan, the National University of Mexico, and at the Accademia Nazionale dei Lincei in Rome. At the time of his death, he was in the midst of another series of lectures at the University of Milan.”

(“Abraham Seidenberg, Mathematics: Berkeley: 1916-1988 Professor Emeritus”, by M. A. Rosenlicht, G. P. Hochschild and P. Lieber, 1989, University of California)

May 3, 1988, only days before my graduation commencement in mid-May.

This wasn't the first such timing for me. As in Part 2, 10 years earlier I entered the Sun Yat-sen University mathematics department as a freshman in February 1978, and around that time Professor Lifu Jiang (姜立夫) passed away; he was the most senior professor at SYSU and a former teacher of Berkeley MSRI founding director Shiing-shen Chern.

Jiang was in a sense the founder of modern mathematics in China, as the founding director of the Institute of Mathematics at Academia Sinica – the predecessor of today's Chinese Academy of Sciences – in the 1940s, though with Chern soon in actual charge:

“… The Institute had been in preparation since 1942 in wartime Kunming, but all its members had full-time jobs at universities, and sometimes even abroad. The director Chiang Li-fu (Jiang Lifu 姜立夫) left China for the USA in May 1946, and actual leadership passed into the hands of S.S. Chern, then a professor at Tsinghua University, who had become a leading expert on differential geometry during his studies in Hamburg and Paris in the mid-1930s. S.S. Chern had spent the years 1943-5 at the Institute for Advanced Studies in Princeton and arrived back in Shanghai in April 1946. He turned the Preparatory Office into a kind of graduate school …”

(Jiri Hudecek, Reviving Ancient Chinese Mathematics: Mathematics, History and Politics in the Work of Wu Wen-Tsun, 2014, Routledge)

Prof. Jiang lived to 87 years of age, passing on February 3, 1978:

“Lifu Jiang, original name Jiang Jiangzuo (born July 4, 1890, Zhejiang—died February 3, 1978, Guangzhou), mathematician, educator, founder of Department of Mathematics of Nankai University, and once was the director of Institute of Mathematics at the Academia Sinica.”

(“Lifu Jiang”, November 10, 2015, School of Mathematical Sciences, Nankai University)

Jiang died on the 3rd day of the month in which I became a mathematics freshman. Then, 10 years later, Seidenberg died on the 3rd day of the month of my mathematics Ph.D. graduation.

Quite an eerie coincidence, given that I was the only person who was a member of the SYSU mathematics department at the former time and of the UC Berkeley mathematics department at the latter.

Maybe it wasn’t a coincidence, or it was a more serious one, as Jiang had been a University of California graduate:

“Jiang graduated from University of California with a B.S. degree in 1915. In 1919, Jiang received a doctor of sciences degree from Harvard University.”

(November 10, 2015, School of Mathematical Sciences, Nankai University)

Was that at Berkeley, and in math? In 1915 Berkeley was still the only UC campus; what would later become UCLA was then still the Los Angeles State Normal School.

(“University of California”, Wikipedia)

Jiang's original name was Jiang Jiangzuo (姜蔣佐); according to Chinese media sources his undergraduate education was at Berkeley, and his English name was Chan-Chan Tsoo, as in his Harvard doctoral thesis.

(“数学家姜立夫” (Mathematician Jiang Lifu), August 13, 2002, Xinhuanet)

Now I can find “Chan Chan Tsoo” in the UC official record, published in 1916 by the University of California Press, Berkeley. Chan, i.e., Jiang, was the only obvious Asian on the graduation honor roll of December 20, 1915:

“Of the 165 who received the bachelor’s degree on December 20, 1915, twenty-six received “Honors” as follows: Anatomy, Alverda Elva Reische; Astronomy, Charles Donald Shane; Drawing, Elva Britomarte Spencer; English, Samuel Francis Batdorf, Sidney Coe Howard, Isabelle Elizabeth de Meyer, Neil Louise Long; French, Belle Elliott Bickford; German, Jennie Schwab; Hygiene, Florence Harriett Cadman; Latin, Mildred Goyette; Mathematics, Maryly Ida Krusi, Chan Chan Tsoo; Philosophy, Ruth Eloise Beckwith, Ada Rebecca Bray Fike; Physical Education, Frederick Warren Cozens; Zoology, Ebba Olga Hilda Braese, Pirie Davidson, Dorothy Sherman Rogers, Katherine Badeau Rogers, Frances Ansley Torrey; College of Mining, Omar Allen Cavins; College of Agriculture, Laurence Wood Fowler, Amram Khazanoff, William E. Gilfillan, Edith Henrietta Phillips.”

(“GRADUATED WITH HONORS”, 1916, Page 272, The University of California Chronicle An Official Record, Volume XVIII, University of California Press, Berkeley)

It looked like in those days the subjects the university students were best at were Zoology, English and Agriculture!

And maybe a little Mathematics and Philosophy?

Jiang had been a professor at SYSU since 1952:

“… Jiang helped found and was the director of Institute of Mathematics at the Academia Sinica. Jiang founded department of Mathematics at Lingnan University in 1949, he taught there and Zhongshan University in 1952.”

(November 10, 2015, School of Mathematical Sciences, Nankai University)

1952 was when the private Christian Lingnan University was taken over by Sun Yat-sen (Zhongshan) University, on the decision of the Chinese Communist government as noted in my November 2010 blog post.

(November 22, 2010, Feng Gao’s Blog – Reflections on Events of Interest)

While Abraham Seidenberg at UC Berkeley may not have been as prominent as Lifu Jiang was at Sun Yat-sen University, he was a brilliant research mathematician:

“Seidenberg’s writings, as were his lectures, are noted for their meticulous clarity of expression. His publications in pure mathematics include some very influential work in commutative algebra, notably his joint paper with I.S. Cohen that greatly simplified the existing proofs of the so-called going-up and going-down theorems of ideal theory… His papers on differential algebra include several on … the so-called Lefschetz-Seidenberg principle of differential algebra, an analog of the Lefschetz principle for algebraic geometry, which says, very roughly, that algebraic geometry of characteristic zero is the same as algebraic geometry over the field of complex numbers. Another famous result is the Tarski-Seidenberg theorem, to the effect that there is a decision procedure for algebra over the real number field and for elementary geometry, first proved by Tarski using complicated logical machinery, then restated more simply by Seidenberg and given a much simpler mathematical proof.

Among Seidenberg’s publications are a large number of articles on the mathematics of primitive peoples and on the history of mathematics, in particular on ancient mathematics. Most of these articles are in support of his thesis that both arithmetic and geometry have their origins in ritual. His sources are the anthropological literature and Egyptian, Babylonian, Greek, Indian and Chinese documents. Although this work was a center of some controversy, in large part because of the general aversion among anthropologists to diffusion theories of culture, important aspects of it have received striking vindication. …”

(M. A. Rosenlicht, G. P. Hochschild and P. Lieber, 1989, University of California)
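To give a rough illustration of the Tarski-Seidenberg decision procedure mentioned above – my own example, not one from the memorial article – the quantified statement "there exists a real number x with x^2 + bx + c = 0" can be decided by a purely algebraic, quantifier-free condition on the coefficients, namely b^2 - 4c ≥ 0; the theorem guarantees that, in principle, every such statement about real numbers can be reduced to coefficient conditions of this kind.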

No doubt Seidenberg’s English was better than his peers’, and he could help improve what they did if he figured out what they were doing.

The Berkeley math professors were among the top researchers in their fields, even ones like Seidenberg who had not been awarded major prizes. I remember Professor Shoshichi Kobayashi, at a dinner at his home at the invitation of Mei, telling me that the department's hiring criteria required a successful candidate to be within the top 3 or 5 in his or her research field.

There is another way to look at Seidenberg's brilliance and generosity. Born in the United States capital and educated in nearby Maryland, Seidenberg grew to love Italy, especially Milan, spending a significant amount of his time there lecturing on mathematics, as told in the UC article on his death.

Even though Berkeley's Seidenberg lived a life some 15 years shorter than SYSU's Lifu Jiang's, his wife Ebe Cagli Seidenberg, an Italian Jewish writer whom he had met at Johns Hopkins University in Baltimore, Maryland, later moved to Rome and lived to 87 – the same age as Jiang.

(“Ebe Cagli Seidenberg”, Institute of Modern Languages Research, School of Advanced Study, University of London; and, “Abraham Seidenberg”, School of Mathematics and Statistics, University of St Andrews, Scotland)

Like Zeke Wang mentioned earlier, the son of my father's former graduate adviser Prof. Wang Qi, Prof. Lifu Jiang's son Boju Jiang (姜伯駒), more senior than Zeke, had studied and then taught mathematics at Peking University since the 1950s, and Zeke later became a student of his and then a colleague. Like his contemporary, the Peking University professor Gongqing Zhang mentioned earlier, Prof. Boju Jiang spent time as a visiting scholar at UC Berkeley during my time there.

Shortly before my graduation, my Berkeley biophysics Ph.D. student friend Dar, mentioned earlier, suggested that I buy a car and bring it to Canada: a new immigrant's belongings were free of customs duties, and cars were about 30% cheaper in the U.S.

Dar suggested, further, that I consider the Milano sedan, made by the Italian automaker Fiat under the Alfa Romeo brand – a car his professor had liked so much during a recent European sabbatical, in Italy if I remember right, that he had brought two back.

Milano, that is Milan in Italian, no?

Dar knew more than I did about many things, not just quantum physics. In a December 2009 blog post I recalled a Berkeley girl student I had had some infatuation with, and what Dar might have found out:

“And speaking of the Justine Bateman-lookalike student I had come across often at Berkeley in the 1980s, I returned to Berkeley in the summer of 1990 for a research stay when I was already teaching at UBC in Vancouver, and one evening a Berkeley old-time friend “Dar” who was also from Guangzhou, and I went to a Telegraph Avenue pub for a drink, and “Shawna” was sitting right there with a boy friend.

“Dar” has since finished his post-doctoral work at the Salk Institute in San Diego and at Stanford, and now works in Houston, Texas.”

(““Nairobi to Shenzhen”, and on to Guangzhou (Part 2)”, December 15, 2009, Feng Gao’s Blog – Reflections on Events of Interest)

There was no “Alfa Romeo and Juliet” in the plans of some, was there? Now in 2015 there is finally a “Juliet and Alfa Romeo”, but made in Slovenia and not in – Ivo Babuska's – Czech Republic.

(“FNE at Slovenian Film Festival in Portoroz: Juliet and Alfa Romeo”, by Damijan Vinter, September 14, 2015, Film New Europe)

In any case, the Milano model was recent and well equipped, and Dar said it did not sell well in North America, so a buyer might get a good deal. Indeed, visiting a few dealerships, including one in San Jose with Dar, I found that the existing 1987 stock was selling slowly – priced at around $18,000 but equipped like a luxury car costing $10,000 more – though Alfa Romeo's quality and maintenance costs were not reassuring.

In the end, the car I bought for Canada was slightly used, costing around $15,000. The most appealing feature for me was the anti-lock braking system, ABS, available only on high-end cars at that point and something that would be very useful in snowy climates, such as Canada's.

But I did not expect that the ABS would become one of the faultiest parts of my car, as I recalled in my January 2013 blog post:

“… my previous car in Vancouver and Honolulu, an Alfa Romeo Milano sedan, had an elusive, borderline malfunctioning anti-lock braking system.”

(January 29, 2013, Feng Gao’s Posts – Rites of Spring)

I arrived in Vancouver on August 24, 1988, unaware that it happened to be an anniversary of what was then the most powerful domestic terror bombing in the United States, which had occurred in 1970 and targeted none other than the Army Mathematics Research Center at the University of Wisconsin-Madison – the place my undergraduate adviser had highly recommended for my graduate study.

I wrote about it in my March 2011 blog post: the Army math research center, headed by the mathematician J. Barkley Rosser, was the target, but the physics department bore the losses; one of the killers, Leo Frederick Burt, then escaped to Canada and was never caught:

“When I applied for graduate study in the United States Professor Li seriously recommended the U. S. Army Mathematics Research Center at the University of Wisconsin, Madison …

Little did I know that the Army Math Research Center had been a target of deadly violence, by anti-Vietnam War students in the 1970 “Sterling Hall bombing”: led by the mathematician J. Barkley Rosser the Center survived the most powerful domestic bombing prior to the 1995 Oklahoma City bombing but much of the Physic department’s laboratories were destroyed and a talented postdoc researcher, Robert Fassnacht, was killed; one of the perpetrators, university rowing athlete Leo Frederick Burt, escaped to Canada and remains one of America’s Most Wanted to this day.

I arrived in Vancouver on August 24, 1988 – unaware that coincidentally it was an anniversary date of the 1970 Sterling Hall Bombing in Madison.”

(March 29, 2011, Feng Gao’s Blog – Reflections on Events of Interest)

Actually, I didn't just arrive in Vancouver; I drove across the U.S.-Canada border. The date of my arrival wasn't exactly my choice: the tentative date range was agreed upon with UBC computer science acting head Uri Ascher – he happened to have previously worked at the Wisconsin-Madison army math research center – in consideration of the Fall 1988 semester schedule and the time needed for my housing search, and Uri arranged in advance for my temporary stay at the Faculty Club. My car was then shipped from Berkeley to Seattle, and I booked a San Francisco-Seattle flight for a day when the car would be available for the 140-mile drive to Vancouver.

What I didn't know was not only that it was an anniversary of the past U.S. Army math research center bombing, but also that, on the eve of my crossing into Canada, Hans Lewy had died:

“He died on August 23, 1988 in Berkeley. He and Helen had recently returned from a trip to Europe where he delivered his last paper, [73], in honor of Ennio De Giorgi.”

(D. Kinderlehrer, in David Kinderlehrer ed., Volume 1, 2002, Birkhäuser)

Jesus Christ, a second Berkeley math professor's death with an Italian factor in the summer of 1988 – this one a leading mathematical prize winner, and connected to the Italian mathematician Ennio De Giorgi.

Recall from Part 2 that De Giorgi was the Italian mathematician in Pisa who had proved a theorem before John Nash did in the 1950s; still, Nash's credit for it contributed to his receiving the Abel Prize in May 2015 – and, unfortunately, to the deaths of him and his wife in a taxi accident while returning from the Oslo ceremony!

In Part 2 I wondered about the metaphor of a "vampire", as opposed to a "guardian angel", over the circumstances of the deaths: the Nash couple had a limo service prescheduled for 5 hours later, but Lisa Macbride, daughter of Nash's Abel Prize co-recipient Louis Nirenberg, suggested they take a taxi, and the taxi driver turned out to be named Girgis.

As for Hans Lewy, he may have been suffering from cancer, according to his New York Times obituary at the time:

“Hans Lewy, a professor emeritus of mathematics at the University of California at Berkeley, died of leukemia Aug. 23 in Berkeley, where he lived. He was 83 years old.”

(“Dr. Hans Lewy, 83, Mathematics Professor”, September 2, 1988, The New York Times)

At least it hadn’t been an auto accident. But Lewy’s cancer was not the direct cause of death – his Italian trip was, as later told by his friends such as his former Ph.D. student David Kinderlehrer, quoted earlier, and the mathematical biographer Constance Reid here:

“Lewy became emeritus in 1972, but he did not stop doing mathematics. Even in the summer of his death he gave a talk at a meeting in Cortona, Italy, on new work that involved attacking the Carathéodory conjecture from a different, variational angle. He had hoped to finish off the problem, but the complete solution was not to be granted to him. While in Europe, following a strenuous schedule in order to see as many as possible of his European friends, he caught a cold that developed into pleurisy. On his return to Berkeley he was hospitalized, fatally ill. He died on August 23, 1988, two months before his eighty-fourth birthday.”

(Constance Reid, in David Kinderlehrer ed., 2002, Birkhäuser)

It was only a cold, which then turned into pleurisy – two deaths in 4 months due to visits to Italy; one might have to wonder about the state of public health there.

Hans Lewy had had a lifelong love of Italy since 1929:

“In 1929, again on Courant’s recommendation, Lewy obtained a year’s fellowship from the Rockefeller Foundation. He spent the first semester in Rome – the beginning of a lifelong love affair with Italy, to which he was to return on countless occasions – and the second semester in Paris. …”

(Constance Reid, in David Kinderlehrer ed., 2002, Birkhäuser)

For over two decades, Hans did collaborative research at the University of Pisa, where De Giorgi was based:

“In 1964-1965 Lewy accepted an invitation from the Scuola Normale Superiore and the University of Pisa. During that time, Pisa was achieving stature as a vigorous world center with Andreotti, De Giorgi, and Stampacchia among its leaders. Here, Lewy helped create the emerging area of variational inequalities with Stampacchia. Their work was fundamental to the growth of the subject, introducing new types of free boundary problems. He published in this area through the 1980s. In 1969-1970, he returned to Rome on the invitation of the Accademia dei Lincei. He was elected a Foreign Member in 1972. He also retired in 1972, continuing his research with undiminished vitality.”

(D. Kinderlehrer, in David Kinderlehrer ed., 2002, Birkhäuser)

Hans Lewy's relationship with China also went back a long way. In 1947, when Hans married his wife Helen Crosby, they spent 3 months of their around-the-world honeymoon in China, including time teaching mathematics there:

“ In 1947 he and Helen Crosby, an artist, writer, and translator, were married. Their honeymoon included a trip around the world, beginning with a return to Europe. Also included was a two month stay in Chengtu, Szechuan (Chengdu, Sichuan) where Lewy gave a course on water waves and a third month visiting other institutions in China. Still under Nationalist rule, China was rarely visited by westerners in those years.”

(D. Kinderlehrer, in David Kinderlehrer ed., 2002, Birkhäuser)

Hans became fond of the Chinese, calling them “the Italians of the Orient”:

“Lewy became very fond of the Chinese people, whom he liked to describe as “the Italians of the Orient”.”

(Constance Reid, in David Kinderlehrer ed., 2002, Birkhäuser)

Perhaps the "guardian angel" was no longer present for the talented Hans Lewy – who spoke multiple languages fluently and "actively studied Chinese up to the time of his death" – when he died only weeks short of his 84th birthday on October 20:

“Lewy was the first son and second child of Max and Greta (Rösel) Lewy. His father was a merchant who dealt in accessories for women’s millinery, and his mother before her marriage had been a teacher of German in an expatriate enclave in Hungary. Her interest and ability in languages were inherited by her son. In addition to a thorough grounding in Greek and Latin, which he had received in his Gymnasium days, he was so fluent in French and Italian that he was frequently mistaken for a native on the streets of Paris and Rome. He could converse in Russian, once delivering a lecture in that language, and he actively studied Chinese up to the time of his death. …”

(Constance Reid, in David Kinderlehrer ed., 2002, Birkhäuser)

In his youth, Hans could have chosen to become a professional musician, as his wife Helen reminisced:

“The combination of mathematics and music, while not uncommon, especially in the Europe of  Hans Lewy’s time, was, in his case, serious enough to force him to make, in his teens, a difficult career choice. He played the violin, the viola and the piano with mastery, and, at times in his life also the clarinet and the cello. He composed many string quartets—which unfortunately were lost,—and at least one string trio …

His parents would not permit the unusual skills of their son to be exploited, until, under pressure from his teacher, they agreed when he was 16 to allow him to perform in public, for one time only: the concert took place in Bautzen in July, 1920. …

Actually the teenager was allowed another performance before the public; it took place also in Bautzen, in October of the same year. This time he was soloist for Mozart’s D-sharp violin concerto. The reviews were, to say the least, enthusiastic. Here are excerpts…:

Bautzner Tageblatt: “… Mozart’s splendid work was interpreted with classic poise and superior precision, and the youthful violinist received rich applause that came from the heart.”

The Bautzner Nachrichten review hailed him as a “Virtuoso” and “Wunderkind with true Mozartian charm” and predicted for him a great future as a musician.

His father favored mathematics; his mother was not sure; and he, of course, wanted both.

In the end mathematics won, and he soon left home for his studies in Göttingen. There he played with the city orchestra …”

(“The Music in Hans Lewy’s Life”, by Helen Lewy, in David Kinderlehrer ed., 2002, Birkhäuser)

In August 1988 when I drove into Canada, the other Hans I had known, i.e., Kezheng's Farmers Insurance agent, was no longer my auto insurance agent. By 1984 I no longer shared Kezheng's old Ford but had jointly bought a used Datsun with a Chinese visiting scholar studying law, also with the family name Lu (陸), who later became an international trade lawyer in Beijing; establishing a good driving record then allowed me to switch to a State Farm Insurance policy with a cheaper rate.

The Ford had an automatic transmission, but the Datsun and the Alfa Romeo were stick-shift cars. I had learned to drive a stick shift in 1983-1984, taught by a musician friend, a Chinese orchestra conductor and composer who had come to UC Berkeley as a visiting music scholar, earned a San Francisco State University master's degree, become a successful music teacher, worked as the music director for a North Berkeley Christian church, and excelled as an organizer of Bay Area community music events, including a Chinese music concert at San Francisco's Davies Symphony Hall which she conducted. She also co-signed my auto loan for the Milano.

When this talented female musician died of cancer in 1991, she was only 48.

(Continuing to Part 5)


Filed under Academia, Education, History, Politics, Science, Society

Some Chinese Cultural Revolution politics and life in the eyes of a youth

The following, originally posted on October 7, 2011, on FengGao.ORG – Portal to Various Blog Topics and Articles, is an English synopsis of one part of a series of blog posts in Chinese. The title here is from an excerpt on the Facebook community page, History, Culture and Politics.

==========

Part 4, “青少年时代的部分文化熏陶 (some cultural influences experienced as a youth)”, recalls Feng Gao's youth during the ten years of the Cultural Revolution, 1966-1976.

As mentioned in Part 3, in 1966 before Feng was to begin elementary school, Red Guards from Mother's middle school conducted a "home raid" at Mother's dorm apartment where the family lived. Not long after, Feng went with maternal Grandma to visit Shantou city in her home region, and by the time they returned the family had moved onto the university campus where Father was a junior faculty member.

Feng then attended July 1 Elementary School – the name of Sun Yat-sen University's affiliated elementary school during the Cultural Revolution era. Feng's 1966 class consisted of two sections of around 50 students each, with more children of faculty members in his section and more children of administrators and workers in the other. Later, students of the other section would move on to No. 52 Middle School while those in Feng's section would attend No. 6 Middle School – the latter formerly the affiliated school of the Whampoa Military Academy, as discussed in Feng's English blog post, "Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 3) – when violence and motive are subtle and pervasive". The first two years of elementary school were spent during a period of Red Guard violence, as discussed in the preceding part of that English blog post.

Feng experienced some group conflicts among his classmate friends, which he recalls as correlating with different family backgrounds – some were from families where the parents were administrators or workers, while others had overseas family connections.

Feng studied quite independently, did well, and had time for other reading as well; he also enjoyed playing with other kids in various sports, athletic games and outdoor activities, but he lacked physical strength and physical bravery.

Feng often caught tonsillitis at that age. The parents of a classmate girl, "Dan Zheng", were good friends with Feng's Mother because, like Mother, they were of Shantou origin; Dan's father was a male nurse promoted to doctor at the start of the Cultural Revolution, and he was very kind and helpful to Feng's maternal grandparents as well as to Feng. At the urging of Dan's older brother "Bin", Feng skipped class once with them, found it interesting, and did it once or twice more on his own, claiming to be sick. Then Dr. Zheng moved to work in Hong Kong, and the entire Zheng family had emigrated to Hong Kong by the time Feng was in middle school.

Parts 1 & 2 of this article were first posted on Grandma's lunar-calendar birthday, Part 3 was first posted on Father's Western-calendar birthday, and now for this Part 4 Feng chooses his mother's lunar-calendar birthday, in appreciation of how much she has done for her family over the years.

Feng remembers the various agricultural farm work he and his classmates participated in during elementary school years.

He remembers the first big field-work trip outside the university campus as being in Grade 2, to Zengcheng County, where they also got to taste the famous "Zengcheng lychee", once appreciated by the likes of the Song-dynasty literatus Su Dongpo and the Qing-dynasty Empress Dowager Cixi; according to Chinese media reports, the price of the very best has made the Guinness World Records.

He remembers some of the farm-work trips as being near New Phoenix Village, which bordered the university campus on its northwest side. Classmate "Peifu Feng"'s family lived in the village, and the village center had a Chinese herbal medicine store where prescriptions from university hospital doctors needed to be filled. Unfortunately, according to a research survey, the peasant laborers living there today still may not be able to afford medicine.

He also remembers farm work on the university campus, where some of the lawns and grass fields were turned into farm fields, and at the elementary school's own farm – areas around and nearby turned into sweet-potato and rice fields. Once, at a lunch provided by the university farm during a work break, a classmate found a cockroach in his bowl of rice, and that ruined Feng's appetite. Feng is shocked to learn from news reports that in recent years children at his alma mater still often find worms in their school lunches, and have had a serious bout of food poisoning during which many were hospitalized for observation.

There was work in the school's mechanical workshop, too, but that was mostly for kids with handy skills, Feng not among them.

Real factory work did not come until around the last year of elementary school, Grade 5, when the class went to the Guangdong Provincial Tractor Factory across the street from the university's south-facing southwest gate near the school. Feng wasn't good at the manual work on the assembly line producing ball bearings for diesel engines, so he was often assigned, with a few classmates including the boy "Youzhi Tang", to push a trolley moving parts around. One of the assembly-line workers was a boastful martial-arts expert, and Youzhi's older brother happened to be on Guangzhou City's all-middle-school martial-arts team.

Feng remembers seeing scars remaining on the factory buildings from a major Red Guard battle in August 1968, when Sun Yat-sen University's "August 31" Red Guards came over with a cannon to take part. As recalled in an English blog post ("Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 2) – when violence is politically organized"), the "Red Flag" and the affiliated "August 31" Red Guards had been dominant on the Sun Yat-sen University campus in 1968, culminating in a deadly, triumphant "June 3 Incident"; prior to that, sniper fire had often come onto the campus from the factory.

Grade 3 and Grade 4 were when many of Feng’s classmates began to mature.

The boy Peifu Feng from New Phoenix Village was the first to master middle- and long-distance swimming, having learned it in fishponds on the university campus, and by the third year of junior middle school quite a few of Feng's classmates took part in the large-scale, organized "Pearl River Swim". Feng didn't learn to swim until junior middle-school age, and never swam in the section of the Pearl River just outside the campus north gate; but the Guangzhou section of the Pearl River may have been the first major river Mao Zedong swam in (in 1956) as the national leader.

Maturing boys liked to talk about the girls. Feng remembers Peifu Feng as very explicit and Youzhi Tang as quite romantic, and the two as the ones who liked to talk about it the most.

A sign of Feng's maturing was his picking up the ability to read classical Chinese novels during Grade 3 and Grade 4.

The notion of the "Four Classics" of traditional Chinese novels had begun in the late Ming dynasty, first proposed by the literatus Feng Menglong; at the time it consisted of the novels "Three Kingdoms", "Journey to the West", "Water Margin", and "Golden Lotus" ("The Plum in the Golden Vase"), each with unique representative content and together providing broad coverage. "The Plum in the Golden Vase" had its origin in "Water Margin", and was an erotic novel whose main male character was a powerful businessman whose initial fortune came from his Chinese herbal medicine store.

As mentioned in Part 3, from the early Qing dynasty to the mid-1980s "The Plum in the Golden Vase" was officially banned, with the exception that in 1957 Chairman Mao ordered limited copies for officials at or above the level of provincial vice governor and national vice minister. From a certain point in the Qing dynasty onward, its place on the Four Classics list was taken by "Dream of the Red Chamber".

The first several Chinese classics Feng read were brought home by Father from the university library. Obviously, of the Four Classics Feng didn't get to read "The Plum in the Golden Vase"; in fact he hasn't read it to this day. But Feng didn't read "Dream of the Red Chamber" either until around the third year of junior middle school; instead, Father checked out the novel "Romance of Sui and Tang Dynasties", allowing it to be among Feng's first several classics.

From then until he finished middle school, Feng read many more Chinese classics. Nonetheless, Feng is fascinated by Father's choice of "Romance of Sui and Tang Dynasties" as one of Feng's early classics readings, and connects it to Father's earlier academic background in Chinese Literature – Father was by then a Philosophy faculty member.

Father's academic career began with the study of the famous poet Li Bai of the era of the Xuanzong Emperor of the Tang dynasty. Father was influenced by his professor, Zhan Antai, whose 1953 Marxist analysis-oriented article, "Spirits of affinity to the people and realism in The Book of Poetry" (The Book of Poetry is the oldest-known published collection of Chinese poems, officially compiled before or during Confucius's era), was viewed as a milestone work. Father's 1956 article, "On the artistic achievements of Li Bai's poems", with its emphasis on placing the art of poetry in the context of national and social politics, gave early signs of why he could later be transferred to teach Marxist philosophy.

To highlight some of the national and social politics, Feng quotes selectively from Father's article, paraphrased in the following passage:

Li Bai lived most of his life in the Xuanzong Emperor's era (713-755 A.D.), the so-called "Prime Tang", when China was the most advanced and most powerful country in the world; the era was the peak of the Tang dynasty's development but also the turning point toward its decline, as behind the economic prosperity there were complex class and ethnic conflicts. In his later years Emperor Xuanzong indulged in the pleasures of life and relied on officials with imperial marriage connections, or ones skilled in dirty tricks, to run the government, and these officials attacked and excluded talented and honest intellectuals such as Li Bai. Xuanzong especially desired to conquer Tibet, resorting to stealth military attacks that broke prior agreements, but lost several times, sacrificing the lives of tens of thousands of soldiers. Premier Yang Guozhong, who had come to power through imperial marriage cronyism, especially wanted to conquer Yunnan but also lost several times, incurring the casualties of two hundred thousand soldiers; the next year General An Lushan, of northern minority origin and trusted by Emperor Xuanzong, waged a rebellion; Xuanzong had no ability to resist, had to flee to Sichuan (located between Yunnan and Tibet) for refuge, and let his son, the Suzong Emperor, assume power, and the son could not fully solve the problems either.

Thus began the decline of what was at the time the world's most advanced and most powerful country, the "Prime Tang".

Feng points out that of particular interest in that era was also Concubine-Empress Yang Yuhuan, the cousin of Yang Guozhong and the latter's imperial connection to power; she is known as one of the "Four Beauties" of ancient Chinese history, and especially loved the famous lychee fruit from the southern region of Guangdong.

Feng notes that Father had such an early academic interest in this kind of historical "romance", and that one of Father's graduate students of the 1980s, Liao Xiaoyi (Sheri Liao), later became a distinguished environmentalist appreciated by then U.S. President Bill Clinton and First Lady Hillary Clinton, and won the 2000 Sophie Prize.

Part 3 has mentioned that Father was unhappy about his transfer from Chinese Literature to Philosophy, but Feng thinks Father was already lucky that he wasn't branded a "rightist" as his professor Zhan Antai was. The poet Li Bai has commonly been viewed as a literary romanticist, in contrast to his contemporary, the poet Du Fu, as a literary realist, but Father's analysis touched upon many facets of Li Bai's achievements and particularly focused on Li Bai's literary realism. This 1956 article of Father's was later included in a national collection, published in 1964, of representative articles in Li Bai studies dating back to the May 4th era of 1919 (the start of a movement to make the Chinese language and literature more populist and accessible to the common people).

After the early period of the Cultural Revolution, with its Red Guard violence, was over, Father did become involved in politics. He was assigned to the university's Cultural Revolution (CR) writing group, and became one of the persons penning official political articles during the first few years of the 1970s.

The most well-known CR writing groups of that period included one run jointly by Beijing University and Tsinghua University, one at the Ministry of Culture in the State Council, one at the Central Party School of the Communist Party, and two at the Party's Beijing City Committee and Shanghai City Committee, respectively.

Today many people have the impression that the writers in these CR writing groups were all "leftist", but that was not the case. Feng points out that Professor Feng Youlan, founder of Sun Yat-sen University's Philosophy Department and by then a Beijing University professor, was a member of the Beijing-Tsinghua group, which was under Chairman Mao's direct guidance; also, "Uncle Yuan Weishi" – Father's colleague in the Philosophy Department, Feng's classmate buddy Ling's father, and a well-known contemporary advocate of civil liberty and individual freedom – was a member of the group with Father, as was "Uncle Huang Tianji", a Chinese drama expert who received a National Distinguished Teacher Award in 2006.

It was officially assigned duty: Feng recalls that the leader of the Sun Yat-sen University CR writing group was "Elder Uncle Zhang Hai", a former military officer in charge of political indoctrination as a university official, and that his deputy was "Auntie Luo Wanhua", a human-resources administrator and the mother of Feng's classmate friend Jun Gao.

Jun was one of the classmates tasked with implementing the teacher's rules and requirements, and was politically much more correct than Feng. When Feng's family first moved onto the university campus, their assigned dwelling was the downstairs of a two-story house, vacated by the family of Professor Xu Xiangong, a U.S.-educated senior chemist and provincial leader of the democratic organization Jiusan Society (September Third Society), who continued to live upstairs; but it was only about two years before Feng's family moved out and Jun's family moved in.

Ling, on the other hand, was exceedingly shrewd. Once, around Grade 4, after a session studying some instructions by Vice Chairman Lin Biao (Chairman Mao's deputy), Ling asked Feng on their way home, barely out of the school courtyard, whether Feng had noticed differences between Vice Chairman Lin's instructions and Chairman Mao's. Feng normally was very careful about making this kind of comment for fear of political trouble, but because Ling was so insistent and friendly, and it was a private occasion, Feng replied that there appeared to be some particular differences. The next day Ling reported to their head teacher, "Luo Dezhen", saying Feng had badmouthed Vice Chairman Lin. Teacher Luo summoned Feng and he had a lot of explaining to do; fortunately Feng was a good student, but Ms. Luo still spoke to Mother and urged the family to educate him more strictly.

Also in Grade 4, two students who barely spoke Cantonese joined the class, the boy "Xiangqian Qi" and a girl, and Xiangqian was assigned a seat directly in front of Ling, who was seated directly in front of Feng. Feng interprets this as meaning that the new classmates' parents, of northern Chinese origin, had arrived after military intervention ended the Red Guard violence and the university administration came to be headed by military personnel.

In his 20 years of school life, from entering elementary school to finishing his Ph.D., Grade 5 was when Feng was held in the highest regard. That year a new teacher, "Ruan Jiabi", arrived at the school and promoted Feng into the leadership of the school's Little Red Guards, in charge of propaganda, and in that role he also co-led the performance-art troupe even though he could not sing or dance – Xiangqian was a talent in this regard and was put in charge of the troupe's performing. Before and after Grade 5, Feng's normal duty was within his class section, in charge of study or propaganda.

During the first 3 to 4 years of the 1970s, Father wrote articles as part of the university's CR writing group, sometimes in residence away from the family for weeks, including occasionally at the location of the Communist Party Guangdong Provincial Committee's writing group. The office of Father's group was the university's Sun Yat-sen Museum, which prior to the Cultural Revolution had been headed by Father's former Chinese Literature professor, (Ms.) Xian Yuqing.

During those several years, Father was in Beijing at least 5 or 6 times, writing at the PLA Daily, the People's Daily or the Guangming Daily.

For one of the writing stints in Beijing, Father was on loan to a CR writing group at the State Council, which had initially requested to transfer him (and probably Huang Tianji) there. Feng cannot be sure whether it was exactly the Ministry of Culture's; but as also mentioned in Part 2, at the time Feng's family lived downstairs from "Auntie Zeng", whose husband, prior to his suicide early in the Cultural Revolution, had led the cultural affairs department at the Guangzhou Military District, and Jin Jingmai, a writer from there and author of the novel "The Song of Ouyang Hai" – Feng's first introduction to a grownup book, at less than 7 years old – had led the Ministry of Culture during the early Cultural Revolution but was then put in jail due to a political conflict with Chairman Mao's wife Jiang Qing.

Later, in the 1980s, Father was on the editorial committee for China's 8-volume "History of Marxist Philosophy", and his co-chief-editor for Volume 3, a Beijing University faculty member, had been on the Beijing-Tsinghua CR writing group guided by Mao. One of the three co-editors-in-chief for the entire eight volumes was Ms. Lin Li, daughter of Lin Boqu, an important political veteran of three successive Chinese political eras – the United League, the Nationalist Party and the Communist Party; back in the 1930s, Lin Li was at school in the Soviet Union along with Mao's wife He Zizhen and others, and one time, as they sat in the club lounge listening to a radio newscast of a Soviet TASS interview with Chairman Mao in Yan'an, China, they all – including Ms. He herself – were surprised to learn from the newscast that Chairman Mao by then had a different wife.

In any case, during the Cultural Revolution even persons like Father who took part in writing officially guided political articles rarely signed their own names, because the contents weren't their own. Feng finds only one article with Father's name on it during the ten years of the Cultural Revolution from summer 1966 to summer 1976, co-authored with two colleagues in 1973 for the political campaign to denounce Lin Biao and Confucius together. Feng is surprised to see that the article did not mention Confucius at all.

Most of the articles appearing in that same journal issue also denounced or criticized Confucius, probably more than they did Lin Biao. The father of classmate girl "Danwei Yang", Professor Yang Rongguo, chairman of the Philosophy Department where Father was a member, was not only the leading author in that journal issue but the leader of the national campaign to denounce Confucius, and for that he was praised by Mao personally. One of Father's two coauthors on the article used a pen name, which Feng recalls was that of Ling's father, Yuan Weishi, who also had a separate article under his real name denouncing Confucius.

Feng notes that, according to now-disclosed historical records, among the Communist Party leadership in 1973 a faction led by Premier Zhou Enlai supported only the denunciation of Lin Biao, not the denunciation of Confucius.

After the Cultural Revolution ended, Yang Rongguo soon died of cancer, in 1978 at the age of 71. The former leader of Father's writing group, Zhang Hai, also died of cancer in the late 1970s, not yet 70 as far as Feng remembers.

Father's former colleague and roommate mentioned in Parts 2 & 3, "Uncle Xiong Maosheng", had been an activist in studying Chairman Mao's teachings, and early in the Cultural Revolution published in a journal issue in which his article on studying the works of Chairman Mao appeared just ahead of an article praising the novel "The Song of Ouyang Hai". Uncle Xiong died of cancer during the Cultural Revolution.

Part 3 discussed three Chinese calligraphers from maternal Grandpa's Shantou region, whose calligraphy has been included in a historical collection along with Grandpa's and to whom Father had cultural links in one way or another: they all died of cancer – including Father's professor, Zhan Antai.

Feng's English post, "Team Canada female athletes disqualified from Commonwealth silver medal, jailed Chinese democracy activist awarded with Nobel peace prize, and others in between (Part 2) – when violence is politically organized", mentioned that in 1972, after Richard Nixon's historic visit to China and just before Nixon's visit to the Soviet Union, Premier Zhou Enlai was diagnosed with cancer, which would ultimately kill him in 1976. But Feng notes that Zhou was in Beijing, where he died before Mao did and lived a life nearly 5 years shorter than Mao's.

Father had had hepatitis since the late 1950s, and as a result, while working in the CR writing group he was diagnosed with cirrhosis approaching a late stage. Some time later, he was also diagnosed with both congenital and rheumatic heart disease. Afterwards, Father went through Chinese herbal medicine treatments, and the supposedly irreversible cirrhosis would eventually disappear, almost magically.

During the early Cultural Revolution, quite a few teachers, as intellectuals, suffered denunciation and maltreatment. For instance, Mother suffered a "home raid" by her middle school's Red Guards. Ling's father also suffered denunciation by his students. Father fared better because he not only was just a junior faculty member at a university but also taught Marxism-Leninism.

During the Cultural Revolution, Feng asked Father, "Have you posted big-character posters denouncing others you knew?" Father replied that he didn't know much about other people's affairs, so he had posted only one, criticizing Vice Chairman Liu Rong of the Philosophy Department and accusing Liu Rong of being patriarchal and unwilling to accept different opinions.

Feng felt Father was bookishly foolish. "Elder Uncle Liu Rong" had been a hotshot even before the Cultural Revolution, both for his research specializing in Mao Zedong Thought – Father's specialty being the philosophical thought of Marx and Engels – and for being the leader of the Communist Party organ at the Department as well as the vice chairman supervising teaching and research in the Marxist fields. Father had published an article with him in 1964, and in Feng's opinion, had it not been for Liu Rong, Father, relying only on his roommate Xiong Maosheng's introduction, probably wouldn't have been able to join the Party just before the Cultural Revolution.

Father had a temper, sometimes tending to argue with others to an unpleasant end. At elementary school age, Feng would sometimes say he had heard some local things differently from what Father told him, asserting, "Yuan Ling told me," and Father would respond, "Don't believe everything Yuan Ling says. If Yuan Ling asked you to die, would you?" By middle school age, Father liked to discuss some of his writing ideas with Feng, but if Feng disagreed with some of them and insisted, Father would give him a slap, and say afterwards, "If you are so good then go outside to debate, go to die. Do you think you are Einstein?"

By the late 1970s and early 1980s Feng was a Mathematics and Computer Science undergraduate at Sun Yat-sen University, and Professor Liu Rong was the university vice president overseeing political indoctrination; all students at the university listened to Vice President Liu's political education speeches on occasion.

At the time Father was a member of the University Academic Committee, within which the Social Sciences and Humanities fields were under the leadership of Vice President Liu. Many years later Feng learned that during a 1980s Committee review of proposed academic-rank promotions for faculty members, some candidates supported by Father were vetoed, and Father wrote a formal letter to the university administration accusing Vice President Liu Rong of suppressing talent.

Professor Liu Rong was originally from Yuhuan County in Zhejiang Province, a county sharing its name with Tang-dynasty Emperor Xuanzong's Concubine-Empress Yang Yuhuan – a controversial imperial figure of the era of the poet Li Bai, whose literature Father had originally studied before the university transferred him to philosophy. During the 1980s Prof. Liu supervised China's first Ph.D. in Mao Zedong Thought, and he received a National Outstanding Teacher Award in 1989.

When Teacher Liu Rong died of lung cancer at the age of around 81, the day happened to be February 20, Father's birthday. Regardless, he lived a life 8 or 9 years longer than Father's – Father later died of heart disease at over 72.


Filed under Academia, Chinese Cultural Revolution, Culture, History, Politics

A review of postings about scientific integrity and intellectual honesty, with observations regarding elite centrism – Part 3: peeking into the academic hierarchies

(Continued from Part 2)

A notable case of an elite academic's lack of academic integrity and intellectual honesty has been that of Leslie Berlowitz, former president and CEO of the American Academy of Arts and Sciences, whose 17-year-long reign ended in 2013 upon revelations of untruthful resumes that falsely claimed a Ph.D. degree. I reviewed the case in a September 5, 2014 post titled, "The end of Leslie Berlowitz's reign at American Academy of Arts and Sciences – about academic integrity, management style, and?", on the Facebook community page, History, Culture and Politics:

“On July 31, 2013, Leslie Berlowitz, president and chief executive of the prestigious American Academy of Arts and Sciences, resigned following reports she had embellished her resume.

In June, The [Boston] Globe had reported that in at least two applications for federal grants over the past decade, Berlowitz had stated she received a doctorate in English from New York University in 1969. …

The nonexistent doctorate was also in a draft of an obituary the Academy prepared for use in the event of her death. The obituary praised her as “a scholar of American literature” who “received undergraduate and doctoral degrees from New York University”.

NYU spokesman James Devitt said the university had no record of Berlowitz receiving a doctorate or completing her dissertation. A resume on file at NYU from when Berlowitz worked there indicated she was still working on her doctorate in the late 1980s or early 1990s.”

(“The end of Leslie Berlowitz’s reign at American Academy of Arts and Sciences – about academic integrity, management style, and?”, September 5, 2014, Facebook page History, Culture and Politics)

The non-existent Ph.D. was the official reason for Berlowitz’s resignation, as a leading academic explained that “academic integrity is what we hold most dearly”:

“Academics typically have little tolerance for people exaggerating their educational credentials. At other academic institutions, people who fabricate degrees have often faced severe consequences. Marilee Jones, a popular admissions dean at the Massachusetts Institute of Technology, left in disgrace in 2007 after she admitted falsifying her degrees, and Doug Lynch, a vice dean at the University of Pennsylvania, resigned in 2012 after revelations that he had falsely claimed to have a doctorate from Columbia University.

“In most situations at a university, lying about a professional degree would be grounds for instantaneous dismissal”, said Ronald G. Ehrenberg, director of the Cornell Higher Education Research Institute. “In academia, academic integrity is what we hold most dearly.””

(September 5, 2014, Facebook page History, Culture and Politics)

Without a doctoral degree, had Berlowitz cheated to get the top job at one of the world's most prestigious honorary societies and then gone unexposed for a long 17 years?

It hadn’t been such a flagrant foul. Prior to the Academy, Berlowitz had been a vice president in charge of fundraising at her alma mater New York University, one of the world’s best private universities, although her untruthful Academy resume also made it appear she had been in charge of academics:

“The NYU record indicates a fast career launch and smooth rise for Berlowitz within NYU, on an administrative track: in 1970 as a graduate student she became an assistant to the Dean, and a year later was on the faculty and 2 years later Assistant Dean for Administration. From 1981 on, she was a university-level executive as Assistant Vice President, Associate Vice President, and Deputy Vice President for Academic Affairs, and in the 1990s prior to moving to the Academy she was Vice President for Institutional Advancement.

Others noticed that her Academy resume had identified herself as former NYU vice president for academic advancement – her most senior NYU position – when it was actually vice president for institutional advancement – management of fund-raising rather than academic programs.”

(September 5, 2014, Facebook page History, Culture and Politics)

So Berlowitz was an elite academic administrator; but her lack of a doctoral degree suggests that she was not an elite scholar.

Reviewing the press coverage, I pointed out that the case involved more serious issues about Berlowitz’s management style:

“Berlowitz also came under fire for harshly treating staffers, micromanaging the Academy’s affairs, barring scholars from viewing the Academy’s historic archives, and receiving an outsized pay package—m