Software-Defined Networking and The New Internet
September 28, 2010
Tonight, I am sitting on a panel sponsored by NSF and Discover Magazine about “The New Internet”. The panel has four panelists who will be discussing their thoughts on the future of the Internet. Some of the questions we have been asked to answer involve predictions about what will happen in the future. Predictions are a tall order; as Yogi Berra said: “It is hard to make predictions, especially about the future.”
Predictions aside, I think one of the most exciting things about this panel is that we are having this discussion at all. Not even ten years ago, Internet researchers were bemoaning the “ossification” of the Internet. As the Internet continues to mature and expand, the opportunities and challenges seem limitless. More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years. The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad. This changing landscape presents tremendous opportunities for innovation. The challenge, then, is developing a platform on which this innovation can occur. Along these lines, a multicampus collaboration is pursuing a future Internet architecture designed to make it easier for researchers and practitioners to introduce new, disruptive technologies on the Internet. The “framework for innovation” proposed in that work rests on a newly emerging technology called software-defined networking.
Software-defined networking. Network devices effectively have two aspects: the control plane (in some sense, the “brain” of the network, or the protocols that make all of the decisions about where traffic should go), and the data plane (the set of functions that actually forward packets). Part of the idea behind software-defined networking is to run the network’s control plane in software, on commodity servers that are separate from the network devices themselves. This notion has roots in a system called the Routing Control Platform, which we worked on about five years ago and which now operates in production at AT&T. More recently, it has gained more widespread adoption in the form of the OpenFlow switch specification. Software-defined networking is now coming of age in the NOX platform, an open-source OpenFlow controller that allows designers to write network control software in high-level languages like Python. A second aspect of software-defined networking is to make the data plane itself more programmable. People are designing more programmable data planes with FPGAs (see our SIGCOMM paper on SwitchBlade), with GPUs (see the PacketShader work), and with clusters of servers (see the RouteBricks project).
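To make the split between the control plane and the data plane concrete, here is a minimal, self-contained Python sketch. The Switch and Controller classes and their methods are hypothetical illustrations of the idea, not the NOX or OpenFlow API.

```python
# Illustrative sketch only: a toy "controller" and "switch" showing the
# control-plane / data-plane split. Class and method names are hypothetical.

class Switch:
    """Data plane: forwards packets according to a flow table it does not compute."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port   # rule pushed down by the controller

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return f"{self.name}: no rule for {packet['dst']}, send to controller"
        return f"{self.name}: forward {packet['dst']} out port {port}"


class Controller:
    """Control plane: ordinary software running separately from the switches."""
    def __init__(self, switches):
        self.switches = switches

    def compute_routes(self, policy):
        # A real controller would run a routing algorithm over the network
        # topology; here we simply push a static policy to every switch.
        for sw in self.switches:
            for dst, port in policy.items():
                sw.install_rule(dst, port)


if __name__ == "__main__":
    s1, s2 = Switch("s1"), Switch("s2")
    Controller([s1, s2]).compute_routes({"10.0.0.1": 1, "10.0.0.2": 2})
    print(s1.forward({"dst": "10.0.0.1"}))   # forwarded by the data plane
    print(s2.forward({"dst": "10.0.0.9"}))   # unknown flow, punted to the controller
```

The point of the toy example is only that the forwarding decision logic lives in one ordinary program, which can then drive many switches at once.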
This paradigm is reshaping how we do computer networking research. Five years ago, vendors of proprietary networking devices essentially “held the keys” to innovation, because networking devices—and their functions—were closed and proprietary. Now a software program can control the behavior not only of individual networking devices but also of entire networks. Essentially, we are now at the point where we can control very large networks of devices with a single piece of software.
Thoughts on the New Internet. The questions asked of the panelists are understandably a bit broad. I’ve decided to take a crack at these answers in the context of software-defined networking.
1. What do you see happening in computer networking and security in the next five to ten years? We are already beginning to see several developments that will continue to take shape over the next ten years. One trend is the movement of content and services to the “cloud”. We are increasingly using services that do not run on our desktops but instead run in large datacenters alongside many other services. This shift creates many opportunities: we can rely on service providers to maintain software and services that once required dedicated system and network administration. But there are also many associated challenges. First, helping network operators optimize both the cost and performance of these services is difficult; we are working on technologies and algorithms that give operators better control over how users reach services running in the cloud, so that they can manage the cost of running these services while still providing adequate performance to users. A second challenge relates to security: as an increasing number of services move to the cloud, we must develop techniques to ensure that services running in the cloud cannot be compromised and that the data stored in the cloud is safe.
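As a toy illustration of the cost/performance tradeoff an operator faces when steering users toward cloud replicas, the sketch below scores each datacenter by a weighted combination of latency and cost and picks the best one. The datacenters, numbers, and weighting scheme are all invented for illustration; this is not the system described above.

```python
# Hypothetical sketch: choose, for one client region, the datacenter that
# minimizes a weighted combination of (normalized) latency and cost.

def choose_datacenter(latency_ms, cost_per_gb, alpha=0.5):
    """Score each datacenter by alpha * latency + (1 - alpha) * cost, normalized."""
    max_lat = max(latency_ms.values())
    max_cost = max(cost_per_gb.values())

    def score(dc):
        return alpha * latency_ms[dc] / max_lat + (1 - alpha) * cost_per_gb[dc] / max_cost

    return min(latency_ms, key=score)


if __name__ == "__main__":
    latency = {"us-east": 20, "us-west": 80, "europe": 120}    # ms from one client region (made up)
    cost = {"us-east": 0.12, "us-west": 0.09, "europe": 0.07}  # $/GB egress (made up)
    print(choose_datacenter(latency, cost, alpha=0.7))  # weighting latency heavily
    print(choose_datacenter(latency, cost, alpha=0.2))  # weighting cost heavily
```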
Another important trend in network security is the growing importance of controlling where data goes and tracking where it has been; as networks proliferate, it becomes increasingly easy to move data from place to place—sometimes to places where it should not go. There have been several high-profile cases of “data leaks”, including a former Goldman Sachs employee who was caught copying sensitive data to take to a hedge fund. Issues of data-leak prevention and compliance (which involves being able to verify that data did not leak to a certain portion of the network) are becoming much more important as more sensitive data moves to the Internet, and to the cloud. Software-defined networking is allowing us to develop new technologies to address both of these problems. In our work on Transit Portal, we have used software routers to give cloud service providers much more fine-grained control over traffic to cloud services. We have also developed new technology based on software-defined networking to help stop data leaks at the network layer.
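To give a flavor of what a network-layer data-leak policy might look like, here is a small, purely illustrative Python sketch: flows from hosts labeled as holding sensitive data are permitted only toward an allowed internal prefix. The hosts, labels, and policy format are hypothetical and do not reflect the actual system mentioned above.

```python
# Illustration only: a leak policy checked before a flow is allowed to leave
# the network. In an SDN setting, a controller could apply a check like this
# before installing a forwarding rule for a new flow.
import ipaddress

SENSITIVE_HOSTS = {"10.1.0.5", "10.1.0.6"}      # hosts holding sensitive data (hypothetical)
ALLOWED_DESTINATIONS = {"10.2.0.0/16"}          # internal prefix they may reach (hypothetical)


def permits(src, dst):
    """Return True if a flow from src to dst complies with the leak policy."""
    if src not in SENSITIVE_HOSTS:
        return True                              # unlabeled traffic is unrestricted
    dst_addr = ipaddress.ip_address(dst)
    return any(dst_addr in ipaddress.ip_network(p) for p in ALLOWED_DESTINATIONS)


if __name__ == "__main__":
    print(permits("10.1.0.5", "10.2.3.4"))       # True: sensitive host, allowed prefix
    print(permits("10.1.0.5", "198.51.100.7"))   # False: would leave the network, drop flow
    print(permits("10.1.0.9", "198.51.100.7"))   # True: host not labeled sensitive
```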
2. What is the biggest threat to everyday users in terms of computer security? Two of the biggest threats to everyday users are the migration of data and services to the cloud and the proliferation of well-provisioned edge networks (e.g., the buildout of broadband connections to home networks). The movement of data to the cloud offers many conveniences, but it also presents potentially serious privacy risks. As services ranging from email to spreadsheets to social networking move to the cloud, we must develop ways to gain more assurance over who is allowed to access our data. The second challenge is the proliferation of well-provisioned edge networks. The threat of botnets that mount attacks ranging from spam to phishing to denial-of-service will become even more acute as home networks—which are, today, essentially unmanaged—proliferate. Attackers look for well-connected hosts, and as connectivity to homes improves and the network “edge” expands, mechanisms to secure the edge of the network will become more important.
3. What can we do via the Internet in the future that we can’t do now? The possibilities are limitless. You could probably imagine that anything you are doing in the real world now might take place online in the future. We are even seeing the proliferation of entirely separate virtual worlds, and the blending of the virtual world with the physical world, in areas such as augmented reality. Pervasive, ubiquitous computing and the emergence of cloud-based data services make it easier to design, build, and deploy services that aggregate large quantities of data. As everything we do moves online, everything we do will also be stored somewhere. This trend poses privacy challenges, but if we can surmount those challenges, there may also be significant benefits, provided we develop ways to efficiently aggregate, sort, search, analyze, and present the growing volumes of data.
The Economist had a recent article that suggested that the next billion people who come onto the Internet will do so via mobile phone; this changing mode of operation will very likely give rise to completely new ways of communicating and interacting online. For example, rural farmers are now getting information about farming techniques online; services such as Twitter are affecting political dynamics, and may even be used to help defeat censorship.
Future capabilities are especially difficult to predict, and I think networking researchers have not had the best track record in predicting them. Many of the exciting new Internet applications have actually come from industry, both from large companies and from startups. Networking research has been most successful at developing platforms on which these new applications can run, and ongoing research suggests that we will continue to see big successes in that area. I think software-defined networking will make it easier to evolve these platforms as new applications develop and new needs emerge.
4. What are the big challenges facing the future of the Internet? One of the biggest challenges facing the future of the Internet is that we don’t yet have a good understanding of how to make it usable, manageable, and secure. We need to understand these aspects of the Internet, if for no other reason than that we are becoming increasingly dependent on it. As Mark Weiser said, “The most profound technologies are those that disappear.” Our cars have complex networks inside of them that we don’t need to understand in order to drive them. We don’t need to understand Maxwell’s equations to plug in a toaster. Yet, to configure a home network, we still need to understand arcana such as “SSID”, “MAC address”, and “traceroute”. We must figure out how to make these technologies disappear, at least from the perspective of the everyday user. Part of this involves giving network users more visibility into the performance of their networks, in ways that they can understand. For example, we are working with SamKnows and the FCC on developing techniques to improve users’ visibility into the performance of their access networks. Software-defined networking probably has a role to play here as well: imagine, for example, “outsourcing” some of the management of your home network to a third-party service that could help you troubleshoot and secure your network. We have begun to explore how software-defined networking could make this possible (our recent HomeNets paper presents one possible approach). Finally, I don’t know if it’s a challenge per se, but another significant question we face is what will happen to online discourse and communication as more countries come online; dozens of countries around the world implement some form of surveillance or censorship, and the technologies that we develop will continue to shape this debate.
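To sketch what “outsourcing” home-network management might look like, here is a deliberately simplified Python example in which a home gateway reports events to a remote management service that returns policies for the gateway to enforce. The event format, the policy format, and the remote_manager logic are all hypothetical; this is not the HomeNets design.

```python
# Simplified sketch of outsourced home-network management. In practice the
# report would be an authenticated call to a remote controller, and the
# returned policy would be translated into flow rules on the home gateway.

def remote_manager(event):
    """Third-party service: turn a reported event into a policy decision (hypothetical)."""
    if event["type"] == "new_device":
        # Quarantine unknown devices until the user approves them.
        return {"device": event["mac"], "action": "quarantine"}
    if event["type"] == "throughput_low":
        return {"action": "run_diagnostics", "target": event["link"]}
    return {"action": "none"}


class HomeGateway:
    def __init__(self, manager):
        self.manager = manager
        self.policies = []           # policies received from the remote service

    def report(self, event):
        policy = self.manager(event)
        self.policies.append(policy)
        return policy


if __name__ == "__main__":
    gw = HomeGateway(remote_manager)
    print(gw.report({"type": "new_device", "mac": "aa:bb:cc:dd:ee:ff"}))
    print(gw.report({"type": "throughput_low", "link": "wan"}))
```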
5. What is it going to take to achieve these new frontiers? The foremost requirement is an underlying substrate that allows us to easily and rapidly innovate and frees us from the constraints of deployed infrastructure. One of the lessons from the Internet thus far is that we are extraordinarily bad at predicting what will come next. Therefore, the most important thing we can do is to design the infrastructure so that it is evolvable.
I recently read a debate in Communications of the ACM concerning whether innovation on the Internet should happen in an incremental, evolutionary way or whether new designs must come about in a “clean slate” fashion. I don’t think these philosophies are necessarily contradictory: we should approach problems with a “clean slate” mentality and not constrain the way we think about solutions simply based on what technology is deployed today. On the other hand, we must also figure out how to deploy whatever solutions we devise in the context of real, existing, deployed infrastructure. I think software-defined networking may effectively resolve this debate for good: clean-slate, disruptive innovation can occur in the context of existing infrastructure, as long as that infrastructure is designed to enable evolution. Software-defined networking makes this evolution possible.