How the Net Was Won

Most people today don’t know that the heart of the internet was once on North Campus.
– Craig Labovitz, Merit engineer
  Chapter 1: Proposal Accepted

    Douglas Van Houweling had collapsed into a chair, overjoyed — but daunted by the task ahead.

    Van Houweling had received unofficial word a few weeks earlier that the National Science Foundation had accepted his group’s proposal to upgrade the agency’s overloaded computing backbone – NSFNET – connecting the nation’s handful of supercomputing sites and nascent regional networks. But many details still needed to be negotiated with the NSF before any public announcement. Now, with those arrangements finally completed, that announcement, with some fanfare, would come the following day — on November 24, 1987.

    The core of the team that Van Houweling and colleague Eric Aupperle had knit together — and that for six long weeks had labored 20 hours a day, seven days a week, obsessing over every detail of its response to the NSF’s request for proposals — had gathered in Aupperle’s Ann Arbor home and stayed late into the night. It would have only these next few hours to exchange congratulations, celebrate, and start thinking about what would come next — before the real work began.

    The Aupperle living room surged all evening with anticipation and speculation. As the night wound down, someone sitting on the floor beside the sofa said, “I think this is going to change the world.”

    And yet they had no idea.

    • Douglas Van Houweling was hired in 1984 as U-M's first vice provost for information technology.
      Image: University of Michigan
  Chapter 2: Michigan at the Forefront

    Van Houweling was hired in late 1984 as the University of Michigan’s first vice provost for information technology. Michigan Engineering Dean James Duderstadt and Associate Dean Daniel Atkins had fought to create the position, and to bring in Van Houweling, believing it critical to the University’s efforts to solidify and extend an already substantial standing in computing rooted in the 1950s.

    The transformative power of computing would begin to gain broad recognition by the 1960s, but the University of Michigan was at the forefront of the movement a full decade earlier. In 1953 its Michigan Digital Automatic Computer — designed and built to help solve complex military problems — was only the sixth university-based high-speed electronic digital computer in the country and the first in the Midwest. And in 1956, the legendary Arthur Burks, co-creator of the Electronic Numerical Integrator and Computer (ENIAC) — widely considered the world’s first general-purpose electronic computer — had established one of the nation’s first computer science programs at Michigan.

    Michigan also became involved in a U.S. Department of Defense project known as CONCOMP, which focused on the CONversational use of COMPuters (and hence the name). By the mid-1960s, U-M had established the Michigan Terminal System, one of the world’s first time-sharing computer systems and a pioneer in early forms of email, file-sharing, and conferencing. In 1966, U-M, Michigan State University, and Wayne State University also created the Michigan Educational Research Information Triad, known as Merit, to connect those universities’ mainframe computers.

    Though the National Science Foundation and the State of Michigan funded Merit, it was (and still is) hosted by U-M, and all its employees are University employees. Michigan Engineering professor Bertram Herzog was named Merit’s first director in 1966, and Eric Aupperle — Merit’s president and principal investigator for the NSFNET bid proposal — had been Herzog’s first hire as senior engineer.

    And it was Merit, with Van Houweling as its chairman, that would be critical in securing this latest NSF grant to rescue the sputtering NSFNET.

  Chapter 3: Slow Spread of Networks

    As computers grew in importance among academics, computer scientists, and private and government researchers, efforts intensified to link them so that data could be shared across far-flung locations.

    The Department of Defense Advanced Research Projects Agency led this work, and by the mid-1960s it had determined that the best method for long-distance sharing among machines was a new technique called packet switching, rather than the established circuit-switching process used by the telephone system. This technique led to ARPANET, a major packet switching computer network created in 1969 to connect researchers. DARPA would test these communications links for several years among a few private contractors and select U.S. universities, including U-M and Merit.

    Merit also was among the first to support ARPANET’s protocols for how computers would talk to each other. Called Transmission Control Protocol/Internet Protocol, TCP/IP would play a more significant role as networked computing evolved.

    The ARPANET slowly proved an extremely useful networking tool for the relatively small and limited communities of scientists, engineers, and scholars. By 1981, the NSF supported another network — the Computer Science Network — to connect U.S. academic computer science and research institutions unable to connect to the ARPANET due to funding or other limitations. But as the networks grew, concerns were building among scientists and academics that the United States was falling behind the rest of the world — and in particular Japan — in supercomputing.

    To address the perceived supercomputing gap, the NSF purchased access to several research laboratory and university-based supercomputing centers, and launched a competition to establish more U.S. supercomputer centers. Michigan was among those that had prepared a bid, which it had already submitted by the time Van Houweling arrived on campus. But Van Houweling would quickly learn that Michigan’s proposal, though among the top-rated technically, would fail — if for no other reason than that it contemplated using a supercomputer built in Japan. NSF awarded the first supercomputing sites to Cornell, Illinois, Princeton, the University of California-San Diego, and Carnegie Mellon-University of Pittsburgh.

    With a burgeoning group of supercomputing centers now in place — and a growing number of NSF-supported regional and local academic networks now operating across the country — the NSF needed to develop a better, faster network to connect them. Its NSFNET, operational in 1986, was at first modestly effective. But an immediate surge in traffic quickly swamped its existing infrastructure and frustrated its users.

    By 1987 the NSF was soliciting bids for an NSFNET upgrade. The Merit team and Van Houweling — who had been discussing this precise kind of network with the NSF for several years — were ready to pounce.

  Chapter 4: A Stronger Backbone

    The NSF had encouraged those bidding to upgrade its computing backbone to involve the private sector, but nobody needed to tell Van Houweling. As Merit’s chairman, Van Houweling had already been cajoling his well-established contacts at IBM, who in turn convinced an upstart telecommunications company called MCI to join the fold. IBM committed to providing hardware, software, and network management, while MCI would offer transmission circuits to the NSFNET backbone at reduced rates.

    With these commitments in place, Michigan Gov. James Blanchard agreed to contribute $1 million per year over five years from state funds.

    And the bid was won.

    Now the team would need to build an extensive and upgraded infrastructure, using newer and more sophisticated networking hardware of a type never used before — and it would have to do it fast.

    Forwarding data among networks requires a router, and the first-generation NSFNET used one nicknamed Fuzzball that ran at 56 kilobits per second. But the next generation was supposed to run over T1 circuits at 1.5 megabits per second — nearly 30 times faster.

    “A whole different category,” says Van Houweling, “and nothing like that existed. Today, you can buy a router for your house for about $50 to $100. But there were no routers to speak of then. You could buy one — for about a half million. But IBM committed to build it and write the software — for free!”

    MCI’s Richard Liebhaber later recalled, during a 2007 NSFNET 20th anniversary celebration, how quickly things were moving — and how much more there was to learn. “All this baloney about, ‘We knew what we were doing,’” said Liebhaber. “When we committed to this, we didn’t have anything. We had ideas, but that was about it.”

    But somehow, it all worked.

    • As an associate dean at the College of Engineering, Daniel Atkins was among those pushing to extend U-M's computing legacy and impact.
      Image: Lee Katterman, Michigan Alumnus July/August 1985
  Chapter 5: Heart of the Internet

    Merit committed to making the new backbone operational by August 1988, and it accomplished that feat by July of that year — just eight months after the award. The newer, faster NSFNET connected 13 regional networks and supercomputer centers, representing more than 170 constituent campus networks. This upgraded network of networks saw traffic surge 10 percent in its first month — a monthly growth rate that would hold firm year after year.

    “At first we thought it was just pent-up demand, and it would level off,” says Van Houweling. “But no!”

    Merit had exceeded its early expectations — though Aupperle modestly attributed that to “the incredible interest in networks by the broader academic communities” rather than to the new network’s speed and reliability. But the results were indisputable. Merit’s staff, operating nonstop, nearly tripled in size, overflowing into a series of trailers behind the North Campus computing center until U-M built a new operations facility.

    Craig Labovitz was a newly hired Merit engineer who had abandoned his Ph.D. studies in artificial intelligence at Michigan because he was so fascinated by his NSFNET work assignment. “Most people today don’t know that the heart of the internet was once on North Campus,” Labovitz says. “It was where the operations and on-call center was, and where all the planning and the engineering took place.”

    The NSFNET soon proved to be the fastest and most reliable network of its day. The new NSFNET technology quickly replaced the Fuzzball. The ARPANET was phased out in 1990, followed a year later by the Computer Science Network, leaving the NSFNET connecting virtually the entire computer science community. Almost all traffic from abroad was traversing the NSFNET as well, and its most fundamental achievement — construction of an ever-evolving high-speed network service — would essentially cover the world.

    “Throughout this whole period, it was all about the need to support university research that drove this project,” says Van Houweling. “Researchers needed to have access to these supercomputing facilities, and the way to do it was to provide them with this network. Nobody had the notion that we were building the communications infrastructure of the future.”

    But that’s the way it turned out.

  Chapter 6: The Protocol Wars

    To the extent that anyone is said to have “invented” the internet, credit generally goes to American engineers Vinton Cerf and Robert Kahn. Working with colleagues on the DARPA-sponsored ARPANET effort in the early 1970s, Cerf and Kahn developed and later implemented the TCP/IP protocols for the ARPANET. The TCP/IP protocols were also referred to as “open” protocols — and later, simply, as the internet protocols. (Cerf also may have been the first to refer to a connected computer network as an “internet” — though the “internet” would not fully come to the attention of the public for another two decades.)

    The significance of the NSFNET’s success was that it scaled readily and well and did so using the open protocols during a time of stress and transition. The open protocols had proved popular among computer scientists accustomed to using the ARPANET and the CSNET, but they still were relatively new and untested. There remained deep skepticism — and perhaps no small amount of self-interest — among commercial providers that the open protocols could effectively scale. Every interested corporate enterprise was pressing for its own proprietary protocols.

    The NSFNET’s immediate challenge, therefore, was to avoid a flameout. Getting overrun would have given this open model “a black eye,” Van Houweling says — enabling the telecommunications and computing companies to “rush in and say, ‘See, this doesn’t work. We need to go back to the old system where each of us manages our own network.’”

    But as Aupperle noted, “those networks weren’t talking to each other.” Proprietary protocols installed in the products of Digital Equipment Corporation, IBM, and other computer manufacturers at that time were hierarchical, closed systems. Their architecture was analogous to the telephone network’s, with very little intelligence at the devices and all decisions residing at the center. In contrast, the internet protocols have an open, distributed nature. The power is with the end user — not the provider — with the intelligence at the edges, within each machine.

    “AT&T’s model was top-down management and control. They wouldn’t have done what the NSFNET did,” says Van Houweling. Unlike their proprietary counterparts, the open protocols were owned by no one, which meant no one was charging fees or royalties — and anyone could use them.

    Winning the battle over proprietary standards would not be easy or quick. Before the open protocols could prove their feasibility, a major competing effort in Europe to build and standardize a different set of network protocols — the Open Systems Interconnection standards — was continuing to garner support from the telecommunications industry and the U.S. government. This debate wouldn’t fully end for at least another decade — not until the end of the 1990s.

    Until the upgraded NSFNET started to gain traction, “everything had been proprietary. Everything had been in stovepipes,” says Van Houweling. “There had never been a network that had the ability to not only scale but to also connect pretty much everything.”

    “It was the first time in the history of computing that all computers spoke the same language,” recalled IBM’s Allan Weis at the NSFNET 20th anniversary.

    Proprietary protocols “had a control point,” he added. “They were controlled by somebody, owned by somebody. TCP/IP was beautiful in that you could have thousands of autonomous networks that no one owned, no one controlled, just interconnecting and exchanging traffic.”

    And it was working.

    • Eric Aupperle (left), the first president of Merit, and network engineer Hans-Werner Braun.
  Chapter 7: Bumpy Road to Commerce

    But continued growth would bring change, and change would bring controversy.

    “When the NSFNET was turned on, there was an explosion of traffic, and it never turned off,” says Van Houweling. Merit had a wealth of experience, and along with MCI and IBM, it had for more than two years exceeded all expectations. But Merit was a nonprofit organization created as a state-based enterprise. To stay ahead of the traffic, the NSFNET would have to upgrade again — from the T1 circuits it ran on, at 1.5 megabits per second, to T3, at 45 megabits per second. No one had ever built a T3 network before.

    “To do this, you had to have an organization that was technically very strong, and was run with the vigor of industry,” reasoned Weis. This upgrade would require more funding, which was not likely to come from the NSF.

    In September 1990, the NSFNET team announced the creation of a new, independent nonprofit corporation: Advanced Network & Services, Inc., with Van Houweling as its chairman. With $3 million in commercial investments, ANS subcontracted the network operation from Merit, and the new T3 backbone service, representing a 30-fold increase in bandwidth, was online by late 1991.

    At this point, the NSFNET still was serving only the scientific community. With the T3 network in place, commercial entities also began seeking access. ANS charged commercial users more, with the surplus used for infrastructure and other network improvements.

    But several controversies soon arose. Regional networks wanted commercial entities as customers for the same reasons ANS did. But they felt constrained by the NSF policies prohibiting purely commercial traffic from being conveyed over the NSFNET backbone, designed to support research and education.

    At the same time, the research and education community raised concerns that commercialization would affect the price and quality of its own connections. And on another front, businesses in the fledgling market of providing Internet service complained that the NSF was unfairly competing with them through its ongoing financial support of the NSFNET.

    Inquiries into these matters — including congressional hearings and an internal report by the inspector general of the NSF — ultimately resulted in federal legislation in 1992 that somewhat expanded the extent to which commercial traffic was allowed on the NSFNET.

    But the NSF always understood that commerce would need to support the network if it was going to last — and the agency never intended to run the NSFNET indefinitely. Thus a process soon commenced whereby regional networks became, or were purchased by, commercial providers. In 1994 the core of ANS was sold to America Online (AOL), and in 1995 the NSF decommissioned the NSFNET backbone.

    And the NSFNET was history.

    “When we finally turned it over [to the commercial providers], the Internet hiccupped for about a year,” according to Weis, because the corporate entities weren’t as knowledgeable or prepared as they needed to be. IBM, which had several years’ head start on the competition in building capable Internet routers, didn’t pursue that business because others at IBM still thought proprietary networks would ultimately win the protocol wars. Cisco stepped into the breach, and following this initially rocky period, it developed effective Internet router solutions — and has dominated the field ever since.

    “Whenever there are periods of transition … by definition they involve change and disruption,” says Labovitz, the engineer who was with Merit in its early days. “So initially, it was definitely bumpy. Lots of prominent people were predicting the collapse of the Internet.

    “In hindsight, we ended up in a very successful place.”

    • Craig Labovitz was a young engineer who worked on the NSFNET upgrade.
  Chapter 8: But What If It Had Failed?

    Van Houweling is fond of saying the internet could only have been invented at a university because scholars make up “the only community that understands that great things can happen when no one’s in charge.”

    “The communications companies that resisted did so on the basis that there was no control,” Van Houweling says. “From its own historical perspective, this looked like pure chaos — and unmanageable.”

    Labovitz agrees. “It was an era of great collaboration because it was a non-commercial effort. You were pulling universities together, so there were greater levels of trust than there might have been among commercial parties.”

    So what would the internet look like today if it had gone AT&T’s way? One possible scenario is that the various commercial providers might have created networks in silos, with tiered payments depending on the type of content, the content creator, and the intended consumer — and without unlimited information sharing.

    “It’s hard to predict precisely what would have happened,” says Atkins, now professor emeritus of electrical engineering and computer science at Michigan Engineering and professor emeritus of information at the School of Information. But instead of being “open and democratizing,” it might have been “a balkanized world,” with “a much more closed environment, segmented among telecommunications companies.”

    “Now, you just have to register, and you can get an IP address, and you can put up a server and be off and going,” he says. “If AT&T were running it, it would have to set it up for you. It would have control over what you could send, what rates you could charge.”

    “In the early [1980s pre-NSFNET] CompuServe/AOL days, you could only get the information they provided in their walled gardens,” says Van Houweling. “The amount of information you could access depended on the agreements CompuServe or AOL had with their various information providers.”

    How might we have progressed from the old “walled gardens” to what we have today — where, no matter which computer or smartphone you own or are using, you can access the world?

    “I frankly don’t know if we would have gotten there,” Van Houweling says. “It might have been the end of the internet.”

     

    Sources include Eric Aupperle; Douglas Van Houweling; Daniel Atkins; Craig Labovitz; Merit — Who, What, and Why Part One: The Middle Years, 1983-1993 by Eric M. Aupperle, President, Merit Network, Inc.; NSFNET: A Partnership for High-Speed Networking Final Report 1987-1995; Retiring the NSFNET Backbone Service: Chronicling the End of an Era, Susan R. Harris, Ph.D., and Elise Gerich; A Century of Connectivity at the University of Michigan, Edited by Nancy Bartlett, Nancy Deromendi, Alice Goff, Christa Lemelin, Brian Williams, Bentley Historical Library; assorted Merit internal “Link Letters”; NSFNET: The Partnership That Saved the World, Celebrating 20 Years of Internet Innovation and Progress (video recordings from Nov. 29-30, 2007), Bentley Historical Library.

     

    An earlier version of this story was published by the College of Engineering.
