Software-Defined Networking and The New Internet

Tonight, I am sitting on a panel sponsored by NSF and Discover Magazine about “The New Internet”.  The panel has four panelists who will be discussing their thoughts on the future of the Internet.  Some of the questions we have been asked to answer involve predictions about what will happen in the future.  Predictions are a tall order; as Yogi Berra said: “It is hard to make predictions, especially about the future.”

Predictions aside, I think one of the most exciting things about this panel is that we are having this discussion at all.  Not even ten years ago, Internet researchers were bemoaning the “ossification” of the Internet.  As the Internet continues to mature and expand, the opportunities and challenges seem limitless.  More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years. The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad.  This changing landscape presents tremendous opportunities for innovation.  The challenge, then, is developing a platform on which this innovation can occur.  Along these lines, a multicampus collaboration is pursuing a future Internet architecture designed to make it easier for researchers and practitioners to introduce new, disruptive technologies on the Internet.  The “framework for innovation” proposed in this work rests on a newly emerging technology called software-defined networking.

Software-defined networking. Network devices effectively have two aspects: the control plane (in some sense, the “brain” of the network, or the protocols that make all of the decisions about where traffic should go), and the data plane (the set of functions that actually forward packets).  Part of the idea behind software-defined networking is to run the network’s control plane in software, on commodity servers that are separate from the network devices themselves.  This notion has roots in a system called the Routing Control Platform, which we worked on about five years ago and which now operates in production at AT&T.  More recently, it has gained more widespread adoption in the form of the OpenFlow switch specification.  Software-defined networking is now coming of age in the NOX platform, an open-source OpenFlow controller that allows designers to write network control software in high-level languages like Python. A second aspect of software-defined networking is to make the data plane itself more programmable, for example by building it on programmable hardware or commodity servers.  People are designing more programmable data planes with FPGAs (see our SIGCOMM paper on SwitchBlade), with GPUs (see the PacketShader work), and with clusters of servers (see the RouteBricks project).
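To make the control-plane side of this concrete, here is a minimal, self-contained sketch of the kind of logic a software controller runs: a learning switch whose forwarding decisions are made entirely in an ordinary Python program. This is purely illustrative (it does not use the actual NOX or OpenFlow APIs); the point is that the network’s “brain” is just software that installs forwarding state into switches.

```
# A minimal, self-contained sketch of control-plane logic running as an
# ordinary program: a learning switch that decides where each packet goes.
# Illustrative only; not the actual NOX/OpenFlow API.

class LearningSwitchController:
    """Decides forwarding behavior for one switch; the switch's data plane
    only executes the flow-table entries this controller installs."""

    def __init__(self):
        self.mac_to_port = {}   # learned mapping: MAC address -> switch port
        self.flow_table = []    # (match, action) entries pushed to the switch

    def packet_in(self, src_mac, dst_mac, in_port):
        """Called when the switch has no matching flow entry for a packet."""
        # Learn which port the source lives on.
        self.mac_to_port[src_mac] = in_port

        if dst_mac in self.mac_to_port:
            out_port = self.mac_to_port[dst_mac]
            # Install a flow entry so future packets are forwarded by the
            # switch without consulting the controller again.
            self.flow_table.append(({"dst": dst_mac}, {"output": out_port}))
            return {"output": out_port}
        # Destination unknown: flood the packet out every port.
        return {"flood": True}


if __name__ == "__main__":
    ctl = LearningSwitchController()
    print(ctl.packet_in("aa:aa", "bb:bb", in_port=1))   # unknown dst -> flood
    print(ctl.packet_in("bb:bb", "aa:aa", in_port=2))   # learned -> output port 1
```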

This paradigm is reshaping how we do computer networking research.  Five years ago, vendors of proprietary networking devices essentially “held the keys” to innovation, because networking devices—and their functions—were closed and proprietary.  Now a software program can control the behavior not only of individual networking devices but also of entire networks.  Essentially, we are now at the point where we can control very large networks of devices with a single piece of software.

Thoughts on the New Internet. The questions asked of the panelists are understandably a bit broad. I’ve decided to take a crack at these answers in the context of software-defined networking.

1. What do you see happening in computer networking and security in the next five to ten years? We are already beginning to see several developments that will continue to take shape over the next ten years. One trend is the movement of content and services to the “cloud”. We are increasingly using services that are not on our desktops but actually run in large datacenters alongside many other services.  This shift creates many opportunities: we can rely on service providers to maintain software and services that once required dedicated system and network administration.  But there are also many associated challenges.  First, it is difficult to help network operators optimize both the cost and performance of these services; we are working on technologies and algorithms that give operators better control over how users reach services running in the cloud, so that they can manage the cost of running these services while still providing adequate performance to users. A second challenge relates to security: as more services move to the cloud, we must develop techniques to ensure that services running in the cloud cannot be compromised and that the data stored there is safe.
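Returning to the first of those challenges, the cost/performance tradeoff: as a rough illustration, the sketch below picks, for each client region, the cheapest data center that still meets a latency target. The data centers, prices, and latencies are invented for the example; real systems rely on measured data and far more sophisticated optimization.

```
# A toy sketch of trading off cost against performance when directing users
# to cloud services: for each client region, choose the cheapest data center
# whose measured latency still meets a target.  All numbers are made up.

DATACENTERS = {
    "us-east": {"cost_per_gb": 0.09},
    "us-west": {"cost_per_gb": 0.11},
    "europe":  {"cost_per_gb": 0.12},
}

# measured_latency_ms[region][datacenter] -> round-trip time in milliseconds
measured_latency_ms = {
    "new-york": {"us-east": 20, "us-west": 80, "europe": 95},
    "london":   {"us-east": 85, "us-west": 150, "europe": 25},
}

def choose_datacenter(region, latency_target_ms=100):
    """Cheapest data center that satisfies the latency target for a region."""
    candidates = [
        (DATACENTERS[dc]["cost_per_gb"], dc)
        for dc, rtt in measured_latency_ms[region].items()
        if rtt <= latency_target_ms
    ]
    if not candidates:
        # No data center meets the target; fall back to the lowest latency.
        return min(measured_latency_ms[region], key=measured_latency_ms[region].get)
    return min(candidates)[1]

if __name__ == "__main__":
    for region in measured_latency_ms:
        print(region, "->", choose_datacenter(region))
```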

Another important trend in network security is the growing importance of controlling where data goes and tracking where it has been; as networks proliferate, it becomes increasingly easy to move data from place to place—sometimes to places where it should not go.  There have been several high-profile cases of “data leaks”, including a former Goldman Sachs employee who was caught copying sensitive data to his hedge fund.  Issues of data-leak prevention and compliance (being able to verify that data did not leak to a certain portion of the network) are becoming much more important as more sensitive data moves to the Internet, and to the cloud.

Software-defined networking is allowing us to develop new technologies to address both of these problems. In our work on Transit Portal, we have used software routers to give cloud service providers much more fine-grained control over traffic to cloud services. We have also developed new technology based on software-defined networking to help stop data leaks at the network layer.
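To give a flavor of what network-layer data-leak prevention could look like, here is a toy sketch of a controller policy that tracks which hosts hold sensitive data and refuses to set up flows that would carry it to unapproved external destinations. The host addresses and policy are hypothetical, the sketch says nothing about how data is actually labeled, and it is not a description of our deployed system.

```
# A rough sketch of network-layer data-leak prevention: the controller tracks
# which hosts hold sensitive ("tainted") data and refuses to install
# forwarding state for flows that would carry it to unapproved destinations.
# Hosts, destinations, and policy are hypothetical.

SENSITIVE_HOSTS = {"10.0.0.5"}             # hosts known to store sensitive data
ALLOWED_EXTERNAL = {"backup.example.com"}  # destinations cleared to receive it

def decide_flow(src_host, dst_host, dst_is_external):
    """Return 'allow' or 'drop' for a new flow request."""
    if src_host in SENSITIVE_HOSTS and dst_is_external:
        if dst_host not in ALLOWED_EXTERNAL:
            return "drop"     # would leak sensitive data off the network
    return "allow"

def on_data_transfer(src_host, dst_host):
    """If a sensitive host sends data internally, the recipient is tainted too."""
    if src_host in SENSITIVE_HOSTS:
        SENSITIVE_HOSTS.add(dst_host)

if __name__ == "__main__":
    print(decide_flow("10.0.0.5", "evil.example.net", dst_is_external=True))   # drop
    print(decide_flow("10.0.0.5", "backup.example.com", dst_is_external=True)) # allow
    on_data_transfer("10.0.0.5", "10.0.0.9")   # taint propagates internally
    print(decide_flow("10.0.0.9", "evil.example.net", dst_is_external=True))   # drop
```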

2. What is the biggest threat to everyday users in terms of computer security? Two of the biggest threats to everyday users are the migration of data and services to the cloud and the proliferation of well-provisioned edge networks (e.g., the buildout of broadband connections to home networks).  The movement of data to the cloud offers many conveniences, but it also presents potentially serious privacy risks.  As services ranging from email to spreadsheets to social networking move to the cloud, we must develop ways to gain more assurance over who is allowed to have access to our data.  The second challenge is the proliferation of well-provisioned edge networks: the threat of botnets that mount attacks ranging from spam to phishing to denial-of-service will become even more acute as home networks—which are, today, essentially unmanaged—proliferate. Attackers look for well-connected hosts, and as connectivity to homes improves and the network “edge” expands, mechanisms to secure the edge of the network will also become more important.

3. What can we do via the Internet in the future that we can’t do now? The possibilities are limitless.  You could probably imagine that anything you are doing in the real world now might take place online in the future.  We are even seeing the proliferation of entirely separate virtual worlds, and the blending of the virtual world with the physical world, in areas such as augmented reality.  Pervasive, ubiquitous computing and the emergence of cloud-based data services make it easier to design, build, and deploy services that aggregate large quantities of data.  As everything we do moves online, everything we do will also be stored somewhere.  This trend poses privacy challenges, but if we can surmount them, there may also be significant benefits, provided we develop ways to efficiently aggregate, sort, search, analyze, and present the growing volumes of data.

A recent article in The Economist suggested that the next billion people who come onto the Internet will do so via mobile phone; this changing mode of access will very likely give rise to completely new ways of communicating and interacting online.  For example, rural farmers are now getting information about farming techniques online; services such as Twitter are affecting political dynamics, and may even be used to help defeat censorship.

Future capabilities are especially difficult to predict, and I think networking researchers have not had the best track record in predicting them.  Many of the exciting new Internet applications have actually come from industry, both from large companies and from startups.  Networking research has been most successful at developing platforms on which these new applications can run, and ongoing research suggests that we will continue to see big successes in that area.  I think software-defined networking will make it easier to evolve these platforms as new applications and needs emerge.

4. What are the big challenges facing the future of the Internet? One of the biggest challenges facing the future of the Internet is that we don’t really yet have a good understanding of how to make it usable, manageable, and secure.  We need to understand these aspects of the Internet, if for no other reason than that we are becoming increasingly dependent on it.  As Mark Weiser said, “The most profound technologies are those that disappear.”  Our cars have complex networks inside of them that we don’t need to understand in order to drive them.  We don’t need to understand Maxwell’s equations to plug in a toaster.  Yet, to configure a home network, we still need to understand arcana such as “SSID”, “MAC address”, and “traceroute”.  We must figure out how to make these technologies disappear, at least from the perspective of the everyday user.  Part of this involves providing more visibility to network users about the performance of their networks, in ways that they can understand.  We are working with SamKnows and the FCC on techniques to improve users’ visibility into the performance of their access networks, for example.  Software-defined networking probably has a role to play here, as well: imagine, for example, “outsourcing” some of the management of your home network to a third-party service that could help you troubleshoot and secure your network.  We have begun to explore how software-defined networking could make this possible (our recent HomeNets paper presents one possible approach).

Finally, I don’t know if it’s a challenge per se, but another significant question we face is what will happen to online discourse and communication as more countries come online; dozens of countries around the world implement some form of surveillance or censorship, and the technologies that we develop will continue to shape this debate.

5. What is it going to take to achieve these new frontiers? The foremost requirement is an underlying substrate that allows us to easily and rapidly innovate and frees us from the constraints of deployed infrastructure.  One of the lessons from the Internet thus far is that we are extraordinarily bad at predicting what will come next.  Therefore, the most important thing we can do is to design the infrastructure so that it is evolvable.

I recently read a debate in Communications of the ACM concerning whether innovation on the Internet should happen in an incremental, evolutionary way or whether new designs must come about in a “clean slate” fashion.  But I don’t think these philosophies are necessarily contradictory: we should approach problems with a “clean slate” mentality, not constraining how we think about solutions simply based on what technology is deployed today. On the other hand, we must also figure out how to deploy whatever solutions we devise in the context of real, existing, deployed infrastructure.  I think software-defined networking may effectively resolve this debate for good: clean-slate, disruptive innovation can occur in the context of existing infrastructure, as long as that infrastructure is designed to enable evolution.  Software-defined networking makes this evolution possible.


Networking Meets Cloud Computing (Or, “How I Learned to Stop Worrying and Love GENI”)

If you build it, will they come? In Field of Dreams, Ray Kinsella is confronted in his cornfield by a whisper that says, “If you build it, he will come,” which Ray believes refers to building a baseball field in the middle of a cornfield that will play host to Shoeless Joe and members of the 1919 Black Sox.  Only Ray can see the players initially, leading others to tell him that he should simply rip out the baseball field and replant his corn crop.  Eventually, other people see the players, too, and decide that keeping the baseball field might not be such a bad idea after all.

I can’t help but wonder if this scenario might be an apt analogy for the Global Environment for Network Innovations (GENI) effort, sponsored by the National Science Foundation.  The GENI project seeks to build a worldwide network testbed to allow Internet researchers to design and test new network architectures and protocols.  The project has many moving parts, and I won’t survey all of them here.  A salient feature of GENI, though, is that it funds infrastructure prototyping and development, but does not directly fund research on that infrastructure.  One of the most interesting challenges for me has been—and still is—how to couple projects that build infrastructure with projects that directly use that infrastructure to develop interesting new technologies and perform cutting-edge research.

Can prototyping spawn new research? This is, in its essence, the bet that I think GENI is placing: if we build a new experimental environment for networking innovation, the hope is that researchers will come use it.  Can this work? I think the answer is probably “yes”, but it is too soon to know in this context.  Instead, I would like to talk about how our GENI projects have spawned new research—and new educational material—here at Georgia Tech.

The Prototype: Connectivity for Virtual Networks. One of the GENI-funded projects is called the “BGP Multiplexer”, or simply the “BGP Mux”.  If that sounds obscure, then perhaps you can already begin to understand the challenges we face. Simply put, the BGP Mux is like a proxy for Internet connectivity for virtual networks (BGP is the protocol that connects Internet Service Providers to one another).  The basic idea is that a developer or network researcher might build a virtual network (e.g., on the GENI testbed) and want to connect that network to the rest of the Internet, so that his or her experiment could attract real users.  You can read more about it on the GENI project Web page.

Some people are probably familiar with the concept of virtualization, or creating “virtual” resources (memory, servers, hardware, etc.) based on some shared physical substrate.  Virtual machines are now commonplace; virtual networks, however, are less so.  We started building a Virtual Network Infrastructure (VINI) in 2006.  The main motivation for VINI was to allow experimenters to build virtual networks on a shared physical testbed.  One of the big challenges was connecting these virtual networks to the rest of the Internet.  This is the problem that the BGP Mux solves.
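To illustrate the idea in a highly simplified form, the sketch below models a mux that shares one upstream BGP session among several virtual networks: each virtual network announces its own prefixes through the mux, and inbound traffic is demultiplexed back to whichever virtual network owns the destination address. The names and prefixes here are hypothetical, and the real BGP Mux of course speaks actual BGP to upstream routers.

```
# A rough sketch of the BGP Mux idea: one shared upstream BGP session is
# multiplexed among several virtual networks, each of which behaves as if it
# had its own BGP connectivity to the Internet.  Names and prefixes are
# hypothetical; this is not the real system's implementation.

import ipaddress

class BGPMux:
    def __init__(self, upstream):
        self.upstream = upstream          # the single shared upstream session
        self.vnet_prefixes = {}           # virtual network -> list of networks

    def announce(self, vnet, prefix):
        """A virtual network announces a prefix; the Mux re-announces it upstream."""
        net = ipaddress.ip_network(prefix)
        self.vnet_prefixes.setdefault(vnet, []).append(net)
        print("announcing", prefix, "to", self.upstream, "on behalf of", vnet)

    def route_inbound(self, dst_ip):
        """Demultiplex inbound traffic to whichever virtual network owns the address."""
        addr = ipaddress.ip_address(dst_ip)
        for vnet, nets in self.vnet_prefixes.items():
            if any(addr in net for net in nets):
                return vnet
        return None

if __name__ == "__main__":
    mux = BGPMux(upstream="upstream-ISP")
    mux.announce("experimentA", "203.0.113.0/24")
    mux.announce("experimentB", "198.51.100.0/24")
    print(mux.route_inbound("203.0.113.7"))   # -> experimentA
```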

Providing Internet connectivity to virtual networks is perhaps an interesting problem within the context of building a research testbed, but, in my view, it lacked broader research impact.  Effectively, we had built a “hammer” that was useful for constructing a testbed, but I wanted to find a “nail”: a real problem that could be published and could also be used in the classroom.  This was not easy.

The Research: Networking for Cloud Computing.  To broaden the applicability of what we had built, we essentially had to find a “nail” that needed a fast, flexible way to set up and tear down Internet connections.  Cloud computing applications seemed like a natural fit: services on Amazon’s EC2, for example, might want to control inbound and outbound traffic with their customers, for cost or performance reasons.  Today, this is difficult.  When you rent servers in EC2, you have no control over how traffic comes over the Internet to reach those servers—if you want paths with less delay or otherwise better performance, you are out of luck.  Using the hammer that we had built with the BGP Mux, however, this became much easier: instead of solving a problem in terms of “virtual networks for researchers” (something only a small community might care about), we were solving the same problem in terms of users of EC2.  Essentially, the BGP Mux offers EC2 “tenants” the ability to control their own network routing.  This capability is now deployed in five locations, and we are planning to expand its footprint.  A paper on this technology will appear at the USENIX Annual Technical Conference in June. We welcome any other networks that would like to help us out with this deployment (i.e., if you can offer us upstream connectivity at another location, we would like to talk to you!).
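The sketch below illustrates, again in simplified form, what it means for a tenant to control its own routing: by choosing which Transit Portal locations announce its prefix, the tenant controls where inbound traffic enters its network. The location names and the announce/withdraw calls are placeholders for illustration, not the deployed system’s API.

```
# An illustrative sketch of "tenants controlling their own routing": a cloud
# tenant shifts inbound traffic by choosing which Transit Portal sites
# announce its prefix.  Site names and the announce/withdraw actions are
# hypothetical placeholders.

class TenantRoutingPolicy:
    def __init__(self, prefix, locations):
        self.prefix = prefix
        self.locations = set(locations)   # Transit Portal sites available
        self.active = set()               # sites currently announcing the prefix

    def prefer(self, sites):
        """Announce the prefix only from the preferred sites, so inbound
        traffic enters the tenant's network there."""
        sites = set(sites) & self.locations
        for site in sites - self.active:
            print("announce", self.prefix, "at", site)
        for site in self.active - sites:
            print("withdraw", self.prefix, "at", site)
        self.active = sites

if __name__ == "__main__":
    policy = TenantRoutingPolicy("203.0.113.0/24",
                                 ["atlanta", "madison", "princeton"])
    policy.prefer(["atlanta"])                 # all inbound traffic enters via Atlanta
    policy.prefer(["atlanta", "princeton"])    # spread ingress across two sites
```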

Education: Transit Portal in the Classroom. I’ve been teaching a course called “Next-Generation Networking”, on Future Internet Architectures, that I plan to discuss at more length on this blog at some point.  Typical networking courses are not as “hands on” as I would prefer: I, for one, graduated from college without ever even seeing a router in person, let alone configuring one.  I wanted networking students to have more “street cred”—they should be able to say, for example, that they’ve configured routers on a real, running network that’s connected to the Internet and routing real traffic.  This sounds like lunacy.  Who would think that students could play “network operator for a day”?  It just sounds too dangerous to have students play around on live networks with real equipment.  But with virtual networking and the BGP Mux, it’s possible.  I recently assigned a project in this course that had students build virtual networks, connect them to the Internet, and control inbound and outbound traffic using real routing protocols.  Seeing students configure networks and “speak BGP with the rest of the Internet” was one of my proudest days in the classroom.  You can see the assignment and videos of these demos if you’d like to learn more.
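To give a sense of the kind of control involved, here is a toy sketch of the outbound side: a simplified BGP-style decision in which assigning a higher local preference to routes learned from one upstream steers outbound traffic through it. This is a simplification of the real BGP decision process and is not taken from the assignment itself.

```
# A toy sketch of controlling outbound traffic in a BGP-like decision process:
# routes learned from different upstreams carry a "local preference", and the
# route with the highest local preference (ties broken by shortest AS path)
# wins.  Raising local-pref on one upstream steers outbound traffic through it.
# This is a simplification of the real BGP decision process.

def best_route(routes):
    """routes: list of dicts with 'upstream', 'local_pref', and 'as_path'."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

if __name__ == "__main__":
    routes_to_prefix = [
        {"upstream": "providerA", "local_pref": 100, "as_path": [65001, 65010]},
        {"upstream": "providerB", "local_pref": 200, "as_path": [65002, 65020, 65010]},
    ]
    # Provider B wins despite its longer AS path, because of its higher local-pref.
    print(best_route(routes_to_prefix)["upstream"])
```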

Prototyping and research.  Will the researchers come? Our own GENI prototyping efforts have been an exercise in “working backwards” from solution to networking research problem.  I have found that exercise rewarding, if somewhat counter to my usual way of thinking about research (i.e., seek out the important problems first, then find the right hammer).  I think the larger community will now face this challenge on a much broader scale: once we have GENI, what will we do with it?  Some areas that seem promising include deployment of secure network protocols and services (our current protocols are known to be insecure), better support for mobility (the current Internet does not support mobility very well), new network configuration paradigms (networks of all kinds, from the transit backbone to the home, are much too hard to configure), and new ways of pricing and provisioning networks (today’s markets for Internet connectivity are far too rigid).  We have just finished work on a large NSF proposal on Future Internet Architectures that I think will be able to make use of the infrastructure that we and others are building; in the coming months, I think we’ll have much more to say (and much more to see) on this topic.

A New Window for Networking

It’s an exciting time to be working in communications networks.  Opportunities abound for innovation and impact, in areas ranging from applications, to network operations and management, to network security, and even to the infrastructure and protocols themselves.

When I was interviewing for jobs as networking faculty about five years ago, one of the most common questions I heard was, “How do you hope to effect any impact as a researcher when the major router vendors and standards bodies effectively hold the cards to innovation?”  I have always had a taste for solving practical problems with an eye towards fundamentals.  My dissertation work, for example, was on deriving correctness properties for Internet routing and on developing a tool, the router configuration checker (rcc), to help network operators check that their routing configurations actually satisfied those properties.  The theoretical aspects of the work were fun, but the real impact was that people could actually use the tool; I still get regular requests for rcc today, from operators as well as from networking companies that want to perform route prediction.
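As a toy example of checking a configuration against a correctness property, the sketch below verifies one simple iBGP invariant: in the absence of route reflection, every router should have an iBGP session with every other router (a “full mesh”). rcc itself parses real vendor configurations and checks a much richer set of properties; this is only meant to convey the flavor.

```
# A toy configuration check in the spirit of verifying a routing correctness
# property: assuming no route reflection, every router should have an iBGP
# session configured with every other router.  The configurations are fake.

def check_ibgp_full_mesh(ibgp_sessions):
    """ibgp_sessions maps each router to the set of iBGP peers it is
    configured with.  Returns the list of missing (router, peer) sessions."""
    routers = set(ibgp_sessions)
    missing = []
    for router in routers:
        expected_peers = routers - {router}
        for peer in expected_peers - ibgp_sessions[router]:
            missing.append((router, peer))
    return missing

if __name__ == "__main__":
    config = {
        "r1": {"r2", "r3"},
        "r2": {"r1", "r3"},
        "r3": {"r1"},          # misconfigured: missing a session to r2
    }
    for router, peer in check_ibgp_full_mesh(config):
        print("missing iBGP session:", router, "->", peer)
```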

This question about impact cut right to the core of what I think was a crisis of confidence for the field.  Much of the research seemed to be focused on performance tuning and protocol tweaks.  Big architectural ideas were confined to paper design, because there was simply no way to evaluate them.  Short of interacting directly with operators and developing tools that they could use, it seemed to me that truly bringing about innovation was rather difficult.

Much has happened in five years, however. There are now seemingly countless opportunities in networking, with more interesting problems than there is time to work on them, and it is becoming feasible to effect fundamental change to the network’s architecture and protocols.  I think several trends are responsible for this wealth of new opportunities:

  • Network security has come to the forefront.  The rise of spam, botnets, phishing, and cybercrime over the past few years cannot be ignored.  By some estimates, as much as 95% of all email is spam.  In a Global Survey by Deloitte, nearly half of the companies surveyed reported an internal security breach, a third of which resulted from viruses or malware.
  • Enterprise, campus, and data-center networks are facing a wealth of new problems, ranging from access control to rate limiting and prioritization to performance troubleshooting.  I interact regularly with the Georgia Tech campus network operators, as a source of inspiration for problems to study.  One of my main takeaways from that interaction is that today’s network configuration is complex, baroque, and low-level—far too much so for the high-level tasks that they wish to perform.  This makes these networks difficult to evolve and debug.
  • Network infrastructure is becoming increasingly flexible, agile, and programmable.  It used to be the case that network devices were closed, and difficult to modify aside from the configuration parameters they exposed.  Recent developments, however, are changing the game.  The OpenFlow project at Stanford University makes it much more tenable to write software programs that control an entire network at a higher level of abstraction, giving operators more direct control over network behavior and, potentially, easier ways to control and debug their networks.
  • Networking is increasingly coming to blows with policy.  The collision of networking and policy is certainly not new, but it is increasingly coming to the forefront, with front-page items such as network neutrality and Internet censorship.  As the two areas continue on this crash course, it is certainly worth thinking about the respective roles that policy and technology play with respect to each of these problems.
  • Networking increasingly entails direct interaction with people of varied technical backgrounds.  It used to be that a “home network” consisted of a computer and a modem.  Now, home networks comprise a wide range of devices, including media servers, game consoles, music streaming appliances, and so forth.  The increasing complexity of these networks makes each and every one of us a network operator, whether we like it or not.  The need to make networks simpler, more secure, and easier to manage has never been more acute.

The networking field continues to face new problems, which also opens the field to “hammers” from a variety of different areas, ranging from economics to machine learning to human-computer interaction.  One of my colleagues often says that networking is a domain that draws on many disciplines.  One of the fun things about the field is that it allows one to learn a little about a lot of other disciplines as well.  I have had a lot of fun—and learned a lot—working at many of these boundaries: machine learning, economics, architecture, security, and signal processing, to name a few.

The theme of my blog will be problems and topics that relate to network management, operations, security, and architecture.  I plan to write about my own (and my students’) research, current events as they relate to networking, and interesting problem areas and solutions that draw on multiple disciplines.  I will start in the next few posts by touching on each of the bullets above.