A New Window for Networking

It’s an exciting time to be working in communications networks. Opportunities abound for innovation and impact, in areas ranging from applications to network operations and management to network security, and even to the infrastructure and protocols themselves.

When I was interviewing for jobs as networking faculty about five years ago, one of the most common questions I heard was, “How do you hope to effect any impact as a researcher when the major router vendors and standards bodies effectively hold the cards to innovation?” I have always had a taste for solving practical problems with an eye towards fundamentals. My dissertation work, for example, was on deriving correctness properties for Internet routing and developing a tool, the router configuration checker (rcc), to help network operators check that their routing configurations actually satisfied those properties. The theoretical aspects of the work were fun, but the real impact was that people could actually use the tool; I still get regular requests for rcc today, both from operators and from networking companies that want to perform route prediction.
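To give a flavor of the kind of correctness check a tool like rcc performs, here is a minimal, illustrative sketch (not rcc’s actual code, and a simplified model of real router configurations): it verifies one well-known condition, that the iBGP sessions inside an AS form a full mesh, so routes learned at one border router can reach every other router.

```python
# Illustrative sketch of one rcc-style correctness check (not rcc itself):
# verify that the iBGP sessions configured within an AS form a full mesh,
# so that every router learns routes announced by every other router.

def ibgp_full_mesh_violations(sessions):
    """sessions: dict mapping router name -> set of configured iBGP peers.
    Returns a sorted list of (router_a, router_b) pairs whose session
    is missing or only configured on one side."""
    routers = set(sessions)
    missing = []
    for a in sorted(routers):
        for b in sorted(routers):
            # Check each unordered pair once; both sides must list the peer.
            if a < b and (b not in sessions.get(a, set())
                          or a not in sessions.get(b, set())):
                missing.append((a, b))
    return missing

# Hypothetical configuration: r1 and r3 never configured a session.
config = {
    "r1": {"r2"},
    "r2": {"r1", "r3"},
    "r3": {"r2"},
}
print(ibgp_full_mesh_violations(config))  # → [('r1', 'r3')]
```

The real tool parses vendor configuration files and checks many such properties, but the pattern is the same: derive a high-level invariant, then mechanically test the low-level configuration against it.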

This question about impact cut right to the core of what I think was a crisis of confidence for the field.  Much of the research seemed to be focused on performance tuning and protocol tweaks.  Big architectural ideas were confined to paper design, because there was simply no way to evaluate them.  Short of interacting directly with operators and developing tools that they could use, it seemed to me that truly bringing about innovation was rather difficult.

Much has happened in five years, however. Opportunities in networking now seem countless; there are more exciting problems than there is time to work on them. Innovation is happening across many areas, and it is becoming feasible to effect fundamental change to the network’s architecture and protocols. I think several trends are responsible for this wealth of new opportunities:

  • Network security has come to the forefront.  The rise of spam, botnets, phishing, and cybercrime over the past few years cannot be ignored.  By some estimates, as much as 95% of all email is spam.  In a Global Survey by Deloitte, nearly half of the companies surveyed reported an internal security breach, a third of which resulted from viruses or malware.
  • Enterprise, campus, and data-center networks are facing a wealth of new problems, ranging from access control to rate limiting and prioritization to performance troubleshooting.  I interact regularly with the Georgia Tech campus network operators, as a source of inspiration for problems to study.  One of my main takeaways from that interaction is that today’s network configuration is complex, baroque, and low-level—far too much so for the high-level tasks that they wish to perform.  This makes these networks difficult to evolve and debug.
  • Network infrastructure is becoming increasingly flexible, agile, and programmable.  It used to be the case that network devices were closed and difficult to modify aside from the configuration parameters they exposed.  Recent developments, however, are changing the game.  The OpenFlow project at Stanford University makes it much more feasible to write software programs that control the entire network at a higher level of abstraction, and it provides more direct control over network behavior, potentially giving operators easier ways to control and debug their networks.
  • Networking is increasingly coming to blows with policy.  The collision of networking and policy is certainly not new, but it is coming to the forefront, with front-page items such as network neutrality and Internet censorship.  As the two areas continue on this crash course, it is worth thinking about the respective roles that policy and technology play in each of these problems.
  • Networking increasingly entails direct interaction with people of varied technical backgrounds.  It used to be that a “home network” consisted of a computer and a modem.  Now, home networks comprise a wide range of devices, including media servers, game consoles, music streaming appliances, and so forth.  The increasing complexity of these networks makes each and every one of us a network operator, whether we like it or not.  The need to make networks simpler, more secure, and easier to manage has never been more acute.
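The programmable-infrastructure trend above boils down to a simple abstraction: a controller installs priority-ordered match-action rules in switches, and each packet takes the action of the first rule it matches. Here is a minimal sketch of that model (illustrative only; not the actual OpenFlow API, and the field names and actions are hypothetical):

```python
# Minimal sketch of the match-action abstraction behind OpenFlow-style
# network programming: a switch holds a priority-ordered flow table
# installed by a controller, and each packet takes the action of the
# highest-priority rule whose match fields it satisfies.
# (Illustrative only; not the real OpenFlow API.)

def matches(rule_match, packet):
    """A rule matches if every field it specifies equals the packet's."""
    return all(packet.get(field) == value
               for field, value in rule_match.items())

def forward(flow_table, packet, default_action="send_to_controller"):
    """Return the action of the highest-priority matching rule."""
    for priority, match, action in sorted(flow_table,
                                          key=lambda r: r[0],
                                          reverse=True):
        if matches(match, packet):
            return action
    return default_action  # unmatched packets go to the controller

# Hypothetical table: block telnet, rate-limit one host, forward the rest.
table = [
    (300, {"tcp_dport": 23}, "drop"),
    (200, {"ip_src": "10.0.0.5"}, "rate_limit:1Mbps"),
    (100, {}, "forward:port2"),
]
print(forward(table, {"ip_src": "10.0.0.5", "tcp_dport": 80}))
# → rate_limit:1Mbps
```

The appeal for operators is that tasks like access control and rate limiting become short programs against one abstraction, rather than device-by-device edits to low-level vendor configuration.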

The networking field continues to face new problems, which also opens it to “hammers” from a variety of areas, ranging from economics to machine learning to human-computer interaction.  As one of my colleagues often says, networking is a domain that draws on many disciplines.  One of the fun things about the field is that it lets you learn a little about a lot of other disciplines as well.  I have had a lot of fun—and learned a lot—working at many of these boundaries: machine learning, economics, architecture, security, and signal processing, to name a few.

The theme of my blog will be problems and topics that relate to network management, operations, security, and architecture.  I plan to write about my own (and my students’) research, current events as they relate to networking, and interesting problem areas and solutions that draw on multiple disciplines.  I will start in the next few posts by touching on each of the bullets above.


About Nick Feamster
Nick Feamster is Neubauer Professor of Computer Science and the Director of Research at the Data Science Institute at the University of Chicago. Previously, he was a full professor in the Computer Science Department at Princeton University, where he directed the Center for Information Technology Policy (CITP); prior to Princeton, he was a full professor in the School of Computer Science at Georgia Tech. His research focuses on many aspects of computer networking and networked systems, with a focus on network operations, network security, and censorship-resistant communication systems. He received his Ph.D. in Computer Science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. He was an early-stage employee at Looksmart (acquired by AltaVista), where he wrote the company's first web crawler; and at Damballa, where he helped design the company's first botnet-detection algorithm. Nick is an ACM Fellow. He received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His other honors include the Technology Review 35 "Top Young Innovators Under 35" award, the ACM SIGCOMM Rising Star Award, a Sloan Research Fellowship, the NSF CAREER award, the IBM Faculty Fellowship, the IRTF Applied Networking Research Prize, and award papers at ACM SIGCOMM (network-level behavior of spammers), the SIGCOMM Internet Measurement Conference (measuring Web performance bottlenecks), USENIX Security (circumventing web censorship using Infranet, web cookie analysis), and USENIX Networked Systems Design and Implementation (fault detection in router configuration, software-defined networking). His seminal work on the Routing Control Platform won the USENIX Test of Time Award for its influence on Software Defined Networking.
Nick is an avid distance runner, having completed more than 20 marathons, including Boston, New York, and Chicago, as well as the Comrades Marathon, an iconic ultra-marathon in South Africa. He lives in Chicago, Illinois.

2 Responses to A New Window for Networking

  1. beki70 says:

    Clearly I agree with this… I think the last one gets even more interesting when you think about the rest of the world… (of course I’m biased 🙂

    w.r.t. networks and policy, I find Janet Abbate’s history of the Internet, and her more recent history of its privatization by the NSF, very compelling. You can see the seeds of today’s situation in the historical context…

  2. feamster says:

    Thanks for the pointer! Yes, actually, I think the last one poses some very interesting questions: How can our consideration of the way humans “expect” networks to behave better inform protocol design? And, vice versa: how do protocols create behavioral “artifacts”? (I think you guys are on top of the latter! The first question is also interesting.)
