The Visible Effects and Hidden Sources of Internet Latency

Most Internet Service Providers advertise their performance in terms of downstream throughput.  The “speed” that one pays for reflects, effectively, the number of bits per second that can be delivered on the access link into the home network.  Although this metric makes sense for many applications, it is only one characteristic of network performance that ultimately affects a user’s experience.  In many cases, latency can be at least as important as downstream throughput.

For example, consider the figure below, which shows Web page load times as downstream throughput increases.  Page load times decrease as throughput increases, but beyond roughly 16 Mbits/s, additional downstream throughput has essentially no effect on Web page load time.


Page load times decrease with downstream throughput, but only up to 8–16 Mbits/s.

The culprit is latency: For short, small transfers (as is the case with many Web objects), the time to initiate a TCP connection and open the initial congestion window is dominated by the round-trip time between the client and the Web server.  In other words, the capacity of the access link no longer matters because TCP cannot increase its sending rate to “fill the pipe” before the transfer completes.
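To make this concrete, here is a back-of-the-envelope calculation (a rough sketch; the object size, initial congestion window, and RTT below are illustrative assumptions, not values taken from the figure):

```latex
% Rounds of slow start needed to deliver an object of size S, starting
% from an initial window of IW segments of MSS bytes each (doubling per RTT):
\[
  k \;\approx\; \left\lceil \log_2\!\left(\frac{S}{IW \cdot MSS} + 1\right) \right\rceil
\]
% Example: S = 100 KB, IW = 10 segments, MSS = 1460 bytes gives k = 3 rounds,
% plus one RTT for the TCP handshake.  At a 100 ms RTT the transfer takes
% roughly 400 ms whether the access link runs at 16 Mbits/s or 100 Mbits/s,
% because each round sends far less data than the link could carry.
```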

The role of latency in Web performance is no secret to anyone who has spent time studying it, and many content providers including Google, Facebook, and others have spent considerable effort to reduce latency (Google has a project called “Make the Web Faster” that encompasses many of these efforts).  Latency plays a role in the time it takes to complete a DNS lookup, the time to initiate a connection to the server, and the time to increase TCP’s congestion window (indeed, students of networking will remember that TCP throughput is inversely proportional to the round-trip time between the client and the server).  Thus, as throughput continues to increase, network latency plays an increasingly dominant role in the performance of applications such as the Web.  Of course, latency also determines the user experience for many latency-sensitive applications, including streaming voice, audio, video, and gaming.
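The inverse dependence on round-trip time is usually summarized with the well-known approximation (due to Mathis et al.) for the steady-state throughput of a TCP connection experiencing a packet loss rate p:

```latex
\[
  \text{throughput} \;\lesssim\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}},
  \qquad C \approx 1.22
\]
% Halving the RTT roughly doubles the achievable throughput, independent of
% the raw capacity of the access link.
```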

The question, then, becomes how to reduce latency to the destinations that users commonly access.  Content providers such as Google and others have taken several approaches: (1) placing Web caches closer to users; (2) adjusting TCP’s congestion control mechanism to start sending at a faster rate for the first few round trips.  These steps, however, are only part of the story, because the network performance between the Web cache and the user may still suffer, for a variety of reasons:

  • First, factors such as bufferbloat and DSL interleaving can introduce significant latency effects in the last mile.  Our study from SIGCOMM 2011 showed how both access link configuration and a user’s choice of equipment (e.g., DSL modem) can significantly affect the latency that a user sees.

  • Second, a poor wireless network in the home can introduce significant latency effects; sometimes we see that 20% of the latency for real user connections from homes is within the home itself.

  • Finally, if the Web cache is not close to users in the first place, the paths between the users and their destinations can still be subject to significant latency.  This problem is particularly evident in developing countries, where poor peering and interconnection can result in long paths to content, and where the vast majority of users access the network through mobile and cellular networks.

In the Last Mile

In our SIGCOMM 2011 paper “Broadband Internet Performance: A View from the Gateway” (led by Srikanth Sundaresan and Walter de Donato), we pointed out several aspects of home networks that can contribute significantly to latency.  We define a metric called last-mile latency, which is the latency to the first hop inside the ISP’s network. This metric captures the latency of the access link.
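As a rough illustration of the metric, the sketch below estimates last-mile latency by sending TTL-limited probes that expire at the second hop, on the assumption that the home gateway is hop 1 and the ISP’s first router is hop 2.  It uses Scapy, requires root privileges, and the destination address is simply a far-away target for the probes to head toward; a production measurement (as in BISmark) would be more careful about identifying the first hop inside the ISP.

```python
# Minimal sketch: estimate last-mile latency as the RTT to hop 2
# (assumes hop 1 is the home gateway and hop 2 is the ISP's first router).
import time
from scapy.all import IP, ICMP, sr1   # requires root for raw sockets

def last_mile_rtt_ms(dst="8.8.8.8", probes=5):
    rtts = []
    for _ in range(probes):
        t0 = time.time()
        # A TTL of 2 makes the probe expire at the second hop, which
        # replies with an ICMP time-exceeded message.
        reply = sr1(IP(dst=dst, ttl=2) / ICMP(), timeout=2, verbose=0)
        if reply is not None:
            rtts.append((time.time() - t0) * 1000.0)
    return min(rtts) if rtts else None   # min filters out queueing noise

if __name__ == "__main__":
    print("estimated last-mile RTT (ms):", last_mile_rtt_ms())
```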

We found in this study that last-mile latencies are often quite high, varying from about 10 ms to nearly 40 ms (ranging from 40–80% of the end-to-end path latency). Variance is also high. One might expect that variance would be lower for DSL, since it is not a shared medium like cable. Surprisingly, we found that the opposite was true: Most users of cable ISPs have last-mile latencies of 0–10 ms. On the other hand, a significant proportion of DSL users have baseline last-mile latencies of more than 20 ms, with some users seeing last-mile latencies as high as 50 to 60 ms. Based on discussions with network operators, we believe DSL companies may be enabling an interleaved local loop for these users.  ISPs enable interleaving for three main reasons: (1) the user is far from the DSLAM; (2) the user has a poor-quality link to the DSLAM; or (3) the user subscribes to “triple play” services. An interleaved last-mile data path increases robustness to line noise at the cost of higher latency, typically two to four times the baseline. Thus, cable providers in general have lower last-mile latency and jitter, while latencies for DSL users may vary significantly based on physical factors such as distance to the DSLAM or line quality.


Most users see last-mile latencies of less than 10 ms, but a significant number of users see last-mile latencies greater than 10 ms.

Customer-provided equipment also plays a role.  Our study confirmed that excessive buffering is a widespread problem afflicting most ISPs (and the equipment they provide). We profiled different modems to study how the problem affects each of them, and we also saw the possible effects of ISP policies, such as active queue and buffer management, on latency and loss.  For example, when measuring latency under load (the latency that a user experiences when the access link is saturated by an upload or a download), we saw more than an order of magnitude of difference between modems. The 2Wire modem we tested had the lowest worst-case last-mile latency, 800 ms. Motorola’s was about 1.6 seconds, and the Westell modem we tested had a worst-case latency of more than 10 seconds.
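The sketch below shows the general shape of a “latency under load” test: measure baseline ping times, then saturate the upstream link with a bulk transfer and measure again.  The upload sink host and port are placeholders, and the ping parsing assumes Linux-style iputils output; this is only a sketch of the methodology, not the tool used in the study.

```python
import socket
import subprocess
import threading
import time

UPLOAD_HOST, UPLOAD_PORT = "upload.example.net", 5001   # hypothetical sink server
PING_TARGET = "192.0.2.1"                               # e.g., the first hop

def saturate_uplink(duration_s=30):
    """Push junk data upstream to fill the modem's transmit buffer."""
    payload = b"\x00" * 65536
    deadline = time.time() + duration_s
    with socket.create_connection((UPLOAD_HOST, UPLOAD_PORT)) as s:
        while time.time() < deadline:
            s.sendall(payload)

def ping_once(target):
    """Return one RTT in ms (Linux ping output), or None on timeout."""
    out = subprocess.run(["ping", "-c", "1", "-W", "2", target],
                         capture_output=True, text=True)
    for token in out.stdout.split():
        if token.startswith("time="):
            return float(token[len("time="):])
    return None

baseline = [ping_once(PING_TARGET) for _ in range(10)]
threading.Thread(target=saturate_uplink, daemon=True).start()
time.sleep(2)                      # give the upstream buffer time to fill
under_load = [ping_once(PING_TARGET) for _ in range(10)]
print("baseline RTTs (ms):  ", baseline)
print("under-load RTTs (ms):", under_load)
```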


Empirical measurements of modem buffering. Different modems have different buffer sizes, leading to wide disparities in observed latencies when the upstream link is busy.

Last-mile latency can also be high for particular technologies such as mobile.  In a recent study of fixed and mobile broadband performance in South Africa, we found that, although the mobile providers consistently offer higher throughput, the latency of mobile connections is often 2–3x higher than that of fixed-line connectivity in the country.

In the Home Wireless Network

Our recent study of home network performance (led by Srikanth Sundaresan) found that a home wireless network can also be a significant source of latency.  We have recently instrumented home networks with a passive monitoring tool that determines whether the access link or the home wireless network (or both) are potential sources of performance problems.  One of the features that we explored in that work was the TCP round-trip time between wireless clients in the home network and the wireless access point in the home.  In many cases, due to wireless contention or other sources of wireless bottlenecks, the TCP round-trip latency in home wireless networks was a significant portion of the overall round-trip latency.

We analyzed the performance of the home network relative to the wide-area network performance for real user traffic in about 65 homes over the course of one month. We used these traces to compare the round-trip times between the devices and the access point to the round-trip times from the access point to the wide-area destination for each flow. We define the median latency ratio for a device as the median ratio of the LAN TCP round-trip time to the WAN TCP round-trip time across all flows for that device. The figure below shows the distribution of the median latency ratio across all devices. The result shows that for 30% of devices in those homes, the home wireless network contributes more than 20% of the overall end-to-end latency for at least half of the flows.  This technical report provides more details concerning the significant role that home wireless networks can play in end-user performance; a future post will explore this topic at length.
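A minimal sketch of the median latency ratio computation, assuming per-flow records of the form (device, LAN RTT, WAN RTT); the numbers in the example are made up:

```python
import statistics
from collections import defaultdict

def median_latency_ratios(flows):
    """flows: iterable of (device_id, lan_rtt_ms, wan_rtt_ms) per TCP flow."""
    per_device = defaultdict(list)
    for device_id, lan_rtt, wan_rtt in flows:
        if wan_rtt > 0:
            per_device[device_id].append(lan_rtt / wan_rtt)
    # One number per device: the median ratio across all of its flows.
    return {dev: statistics.median(r) for dev, r in per_device.items()}

flows = [
    ("laptop", 4.0, 40.0),    # LAN contributes ~10% of the WAN RTT
    ("laptop", 30.0, 60.0),   # heavy wireless contention: 50%
    ("phone", 2.0, 80.0),
]
print(median_latency_ratios(flows))   # {'laptop': 0.3, 'phone': 0.025}
```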


The distribution, across all devices, of each device’s median ratio of LAN TCP round-trip time to WAN TCP round-trip time over all of its flows.

Our findings of latency in home networks suggest that the RTT introduced by the wireless network may often be a significant fraction of the end-to-end RTT. This finding is particularly meaningful in light of the many recent efforts by service providers to reduce latency to end-to-end services with myriad optimizations and careful placement of content. We recommend that, in addition to the attention that is already being paid to optimizing wide-area performance and host TCP connection settings, operators should also spend effort to improve home wireless network performance.

In Developing Regions

Placing content in a Web cache has little effect if the users accessing the content still have high latency to those destinations.  A study of latency from fixed-line access networks in South Africa using BISmark data, led by Marshini Chetty, Srikanth Sundaresan, Sachit Muckaden, and Enrico Calandro in cooperation with Research ICT Africa, showed that peering and interconnectivity within the country still have a long way to go: in particular, the plot below shows the average latency from 16 users of fixed-line access networks in South Africa to various Internet destinations.  The bars are sorted in order of increasing distance from Johannesburg, South Africa.  Notably, geographic distance from South Africa does not correlate with latency—the latency to Nairobi, Kenya is almost twice as much as the latency to London.  In our study, we found that users in South Africa experienced average round-trip latencies exceeding 200 ms to five of the ten most popular websites in South Africa: Facebook (246 ms), Yahoo (265 ms), LinkedIn (305 ms), Wikipedia (265 ms), and Amazon (236 ms). Many of these sites only have data centers in Europe and North America.


The average latencies to Measurement Lab servers around the world from South Africa. The numbers below each location reflect the distance from Johannesburg in kilometers, and the bars are sorted in order of increasing distance from Johannesburg.  Notably, latency does not increase monotonically with distance.

People familiar with Internet connectivity may not find this result surprising: indeed, many ISPs in South Africa connect to one another via the London Internet Exchange (LINX) or the Amsterdam Internet Exchange (AMS-IX) because it is cheaper to backhaul connectivity to exchange points in Europe than it is to connect directly at an exchange point on the African continent.  The reasons for this behavior appear to be both regulatory and economic, but more work is needed, both in deploying caches and in improving Internet interconnectivity, to reduce the latency that users in developing regions see to popular Internet content.


The Resilience of Internet Connectivity in East and South Africa: A Case Study

On March 27, 2013 at 6:20 a.m. UTC, an outage on the SeaMeWe-4 cable affected connectivity across the world.  SeaMeWe-4 is currently the largest submarine cable connecting Europe and Asia.  The Renesys blog recently covered the effect of this outage on various parts of Asia and Africa (Pakistan, Saudi Arabia, the UAE, etc.).  In this post, we explore how the fiber cut affected connectivity from other parts of the world, as visible from the BISmark home router deployment.  The credit for the data analysis in this blog post goes to Srikanth Sundaresan, one of Georgia Tech’s star Ph.D. students whose work on BISmark has garnered a number of awards.

Background: BISmark

The BISmark project has been deploying customized home gateways in home broadband access networks around the world for more than two years; we currently have more than 130 active home routers measuring the performance of access links in nearly 30 countries.  The high-level goal of the project is to gather information from inside home networks to help users and ISPs better debug their home networks.  Two years ago, we published the first paper using BISmark data in SIGCOMM.  The paper explores the performance of broadband access networks around the United States and has many interesting findings:

  • We showed how a technique called “interleaving” on DSL networks can introduce tens of milliseconds of additional latency on home access links.
  • We explored how a user’s choice of equipment can introduce “bufferbloat” effects on home access links.
  • We showed how technologies such as PowerBoost can also introduce sudden, dramatic increases in latency when interacting with buffering on the access link.

The image below shows the current deployment of BISmark.  We have more than 80 routers in North America, nearly 20 in Southeast Asia, about 15 in the European Union, about 15 in South Africa, and about 10 in East Asia.  You can explore the data from the deployment yourself on the Network Dashboard; all of the active measurements are available for download in raw XML format as they are collected.

The BISmark deployment as of May 28, 2013.

Each BISmark router sits in a home broadband access network.  The routers are NetGear WNDR 3700 and 3800s; we ship routers to anyone who is interested in participating.  As an incentive for participating, you gain access to your own data on the network dashboard.  We are also actively seeking researchers and developers; please contact us below if you are interested, and feel free to check out the project GitHub page.

Every BISmark router measures latency to the Google anycast DNS service and to 10 globally distributed Measurement Lab servers every 10 minutes.  Those servers are located in Atlanta, Los Angeles, Amsterdam, Johannesburg, Nairobi, Tokyo, Sydney, New Delhi, and Rio de Janeiro.

Effects of the SMW4 Fiber Cut: A Case Study

We first explore the effects of the fiber cut on reachability from the active BISmark routers to each of the Measurement Lab destinations.  At the time of the outage (6:20a UTC), the Measurement Lab server in Nairobi became completely unreachable for more than four hours.  The Nairobi Measurement Lab server is hosted in AS 36914 (KENet, the Kenyan Education Network).

Connectivity was restored at 10:34a UTC.  Interestingly, between 9a and 10a UTC, reachability from many of our other BISmark routers to all of the Measurement Lab destinations was affected.  We have not yet explored which of the BISmark routers experienced these reachability problems, but, as we explore further below, this connectivity blip coincides with some connectivity being restored to Kenya via Safaricom, the backup ISP for the Measurement Lab server hosted in KENet.  It is possible that other convergence events were also occurring at that time.
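For readers curious how this kind of reachability timeline can be derived from periodic probe logs, here is a small sketch using pandas; the log format (timestamp, router_id, destination, rtt_ms with an empty value for unanswered probes) is hypothetical, not the actual BISmark schema:

```python
import pandas as pd

# Hypothetical probe log: one row per probe; rtt_ms is empty (NaN) on timeout.
probes = pd.read_csv("bismark_pings.csv", parse_dates=["timestamp"])

# Bin probes into 10-minute windows and compute, per destination, the
# fraction of routers that got at least one reply in each window.
probes["window"] = probes["timestamp"].dt.floor("10min")
reachability = (
    probes.assign(replied=probes["rtt_ms"].notna())
          .groupby(["window", "destination", "router_id"])["replied"]
          .any()                                   # did this router reach this destination?
          .groupby(level=["window", "destination"])
          .mean()                                  # fraction of routers with connectivity
          .rename("fraction_reachable")
          .reset_index()
)
print(reachability.head())
```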

Reachability from BISmark routers to each of the Measurement Lab servers on March 27, 2013.

Analysis of the BGP routing table information from RouteViews shows that connectivity to AS 36914 was restored at 10:34a UTC. The following figure shows the latencies from all nodes to Nairobi before and after the outage. As soon as connectivity returns, the first set of latencies seems to be roughly the same as before, but latencies almost immediately increase from all vantage points, except for a router situated in South Africa in AS 36937 (Neotel).  This result suggests that Neotel may have better connectivity to destinations within Africa than some other ISPs, and that access ISPs who use Neotel for “transit” may see better performance and reliability to destinations within the continent. Because only the SEACOM cable was affected by the cut, not the West African Cable System (WACS) or EASSy cable, Neotel’s access to other fiber paths may have allowed its users to sustain better performance after the fiber cut.

Latencies from BISmark routers in various regions to Nairobi, Kenya (AS 36914, KENet).

This incident—and Neotel’s relative resilience—suggests the importance of exploring the effects of undersea cable connectivity in various countries in Africa and how such connectivity affects resilience.  (In a future post, we will explore the effects of peering and ISP interconnectivity on the performance that users in this part of the world see.)   

Internet Routing to KENet during the Outage

6:20a: The Fiber Cut. The reachability and performance effects caused by the SMW4 fiber cut raise the question of what was happening to routes to Kenya (and, in particular, KENet) at the time of the outage.  We explore this in further detail below.  The first graph below shows reachability to KENet (AS 36914, the large red dot) at 6:20:50 UTC, around which time the fiber cut occurred.  The second plot shows the routes at 6:23:51 UTC; by 6:27:06 UTC, AS 36914 had become completely unreachable.

Internet reachability to KENet (AS 36914) at 6:20:50a, 6:23:51a, and 6:27:06a UTC.

9:05a: Connectivity is (partially) restored through a backup path. About two-and-a-half hours later, at 9:05:49 UTC, AS 36914 starts to come back online, and connectivity is restored within about one minute, although all Internet paths to this destination go through AS 33771 (SafariCom), which is most likely KENet’s backup (i.e., commercial, and hence more expensive) provider.  This is an interesting example of BGP routing and backup connectivity in action: Many ISPs such as KENet have primary and backup Internet providers, and paths only go through the backup provider (in this case, SafariCom) when the primary path fails.  

Connectivity to KENet (AS 36914) is restored via the commercial backup provider, SafariCom (AS 33771). It is interesting to note that although connectivity was restored at 9:06a through this backup path, the server hosted in this network was still unreachable until paths switched back to the primary provider (UbuntuNet) at 10:34a.

Note that although connectivity to KENet was restored through SafariCom at around 9:06a UTC, none of the BISmark routers could reach the Measurement Lab server hosted in KENet through this backup path!  This pathology suggests that the failover did not work as planned, for some reason.  Although this disconnection could result from poor Internet “peering” between SafariCom and the locations of our BISmark routers around the world, it is unlikely that bad peering would affect reachability from all of our vantage points.  Still, it is not clear why the connectivity through SafariCom was not sufficient to restore connectivity for at least some of the BISmark nodes.  The connectivity issue we observed could be something mundane (e.g., SafariCom simply blocks ICMP “ping” packets), or it could be something much more profound.

It is also interesting to note that it took more than two hours for Internet routing to restore any connectivity at all!  Usually, we think of Internet routing as being dynamic, automatically reconverging when failures occur to find a new working path (assuming one exists).  While BGP has never been known for being zippy, two-and-a-half hours seems excessive.  It is perhaps more likely that some additional connectivity arrangements were being made behind the scenes; it might even be the case that KENet purchased additional backup connectivity (or made special arrangements) during those several hours when it was offline.

10:35a: Connectivity returns through the primary path.  At around 10:34a UTC, routes to KENet begin reverting to the primary path, as can be seen in the left figure below.  By 10:35a UTC, everything is “back to normal” as far as BGP routing is concerned, although, as we saw above, latencies remain high from most vantage points for an additional eight hours.  It is unclear what caused latencies to remain high after connectivity was restored, but this offers another important lesson: BGP connectivity does not equate to good performance through those BGP paths.  This underscores the importance of using both BGP routing tables and a globally distributed performance measurement platform like BISmark to understand performance and connectivity issues around the times of outages.

By 10:35a UTC, connectivity is restored through UbuntuNet (AS 36944), KENet’s primary provider. Once BGP convergence begins, it takes only a little more than a minute for paths to revert to the primary path.

Takeaway Lessons

It’s worthwhile to reflect on some of the lessons from this incident; it teaches us about how Internet routing works (and doesn’t work), about the importance of backup paths, and about the importance of performing joint analysis of both routing information and active performance measurements from a variety of globally distributed locations.  I’ve summarized a few of these below:

  • Peering and interconnectivity in Africa haven’t yet come of age.  It is clear from this incident that certain locations in Africa (although not all) are not particularly resilient to fiber cuts.  The SMW4 fiber cut took KENet completely offline for several hours, and even after connectivity was “restored” several hours later, many locations still could not reach the destination through the backup path.  Certain ISPs in Africa that are better connected (e.g., Neotel, and the Measurement Lab node hosted in TENET in Johannesburg) weathered the fiber cut much better than others, most likely because they have backup connectivity through WACS or EASSy.  In a future post, we will explore performance issues in various parts of Africa that likely result from poor peering.
  • Connectivity does not imply good performance.  Even after connectivity was completely “restored” (at least according to BGP), latencies to Nairobi from most regions remained high for almost another eight hours.  This disparity underscores the importance of not relying solely on BGP routing information to understand the quality of connectivity to and from various Internet destinations.  Global deployments like BISmark are critical for getting a more complete picture of performance.
  • “Dynamic routing” isn’t always dynamic.  The ability of dynamic routing protocols to find a working backup path depends on the existence of those paths in the first place.  The underlying physical connectivity must be there, and the business arrangements (peering) between ISPs must be in place to allow those paths to function when failures do occur.  Something occurred on March 27, 2013 that exposed a glaring hole in the Internet’s ability to respond dynamically to a failure.  It would be very interesting to learn more about what happened between 6:20a UTC and 9:05a UTC to understand exactly what resulted in connectivity being restored (via SafariCom), and why it took so long.  Perhaps we need more sophisticated “what if” tools that help ISPs better evaluate their readiness for these types of events.

In future posts, we will continue to explore how BISmark can help expose pathologies that result from disconnections, outages, and other disruptions.  Our ability to perform this type of analysis depends on the continued support of ISPs, users, and the broader community.  We encourage you to contact us using the form below if you are interested in hosting a BISmark router in your access network.  (You can also post public comments at the bottom of the page, below the contact form.)

Making Sense of Data Caps and Tiered Pricing in Broadband and Mobile Networks

Last week, I had the pleasure of sitting on a panel at the Broadband Breakfast Club in downtown Washington, DC.  The panel was organized by BroadbandBreakfast.com, a policy and news organization that focuses on policy issues related to broadband service in the United States; the group meets about once a month.  I was asked last fall to sit on a panel on measuring broadband performance, due to our ongoing work on BISmark, but I was unable to make it then, so I found myself instead on a panel on data caps in wired and wireless networks.

I participated on the panel with the following other panelists: Serena Viswanathan, an attorney from the Federal Trade Commission; Patrick Lucey from the New America Foundation; and Roger Entner, the founder of Recon Analytics.  The panel discussed a variety of topics surrounding data caps in broadband networks, but the high level question that the panel circled around was: Do data caps (and tiered pricing) yield positive outcomes for the consumer?

We had an interesting discussion.  Roger Entner espoused the opinion that data caps really only affect the worst offenders, and that applications on mobile devices now make it much easier for users to manage their data caps.  Therefore, data caps shouldn’t be regarded as oppressive, but rather are simply a way for Internet service providers and mobile carriers to recoup costs from the most aggressive users.  Patrick Lucey, who recently wrote an article on data caps for the New America Foundation, offered a counterpoint that echoed that article, suggesting that data caps are essentially a profit generator for ISPs, and that consumers are effectively captured because they have no real choice of providers.

I spent some time explaining the tool that Marshini Chetty and my students have built on top of BISmark called uCap (longer paper here).  Briefly, uCap is a tool that allows home network users to determine the devices in their home that consume the most data.  It also allows users to see what domains they are visiting that consume the most bandwidth.  It does not, however, tell the user which applications or people are using the most bandwidth (more on that below).  Below is a screenshot of uCap that shows device usage over time. My students have also built a similar tool for mobile devices called MySpeedTest, which tells users which applications are consuming the most data on their phones.  A screenshot of the MySpeedTest panel that shows how different applications consume usage is shown below.
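To give a flavor of the kind of accounting such a tool performs, here is a toy sketch that attributes bytes from passively observed flows to devices (by MAC address) and to remote domains; the flow records and numbers are invented for illustration, and real tools like uCap do considerably more (device naming, time series, caps):

```python
from collections import Counter

def usage_by_device_and_domain(flow_records):
    """flow_records: iterable of (device_mac, remote_domain, bytes) tuples."""
    per_device, per_domain = Counter(), Counter()
    for mac, domain, nbytes in flow_records:
        per_device[mac] += nbytes
        per_domain[domain] += nbytes
    return per_device, per_domain

flows = [
    ("a4:5e:60:aa:bb:cc", "netflix.com", 2_500_000_000),
    ("a4:5e:60:aa:bb:cc", "facebook.com", 40_000_000),
    ("10:2b:41:dd:ee:ff", "youtube.com", 900_000_000),
]
devices, domains = usage_by_device_and_domain(flows)
print(devices.most_common())   # which devices consume the most data
print(domains.most_common())   # which domains consume the most bandwidth
```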

uCap screenshot showing device usage over time.
MySpeedTest screenshot showing mobile application usage.

I used these two example applications to argue that usage caps per se are not necessarily a bad thing if the user has ways to manage them.  In fact, we have repeatedly seen evidence that tiered pricing (or usage caps) can both improve ISP profit and make consumers better off, provided the consumer understands how different applications consume their usage cap and has ways to manage the usage of those applications.  Indeed, our past research has shown how tiered pricing can improve market efficiency, because the price of connectivity more closely reflects the cost to the provider of carrying specific data.  Further, we’ve seen examples where consumers have actually been worse off when regulators have stepped in to prevent tiered pricing, such as the events in summer 2011 when KPN customers all experienced a price increase for connectivity because KPN was prevented from introducing two tiers of service.

The problem isn’t so much that tiered pricing is bad—it is that users don’t understand it, and they currently don’t have good tools to help them understand it.  In the panel, I informally polled the room—ostensibly filled with broadband experts—about whether they could tell me off the top of their heads how much data a 2-hour high-definition Netflix movie would consume against their usage cap.  Only two or three hands went up in a room of 50 people.  I also confessed that before installing uCap and watching my usage in conjunction with specific applications, I had no idea how much data different applications consumed, or whether I was a so-called “heavy user” (it turns out I am not).  My own experience—and Marshini Chetty’s ongoing work—has shown that people are really bad at estimating how much of their data cap applications consume.  One interesting observation in Marshini’s work is that people conflate the time that they spend on a site with the amount of data it must consume  (“I spend most of my time on Facebook, therefore, it must consume most of my data cap.”).

If we are going to move towards pervasive data caps or tiered pricing models, then users need better tools to understand how applications consume data caps and to manage how different applications consume those caps.  I see two possibilities for better applications going forward:

  • Better visibility.  We need applications like uCap and MySpeedTest to help users understand how different applications consume their data cap.  Helping users get a better handle on how different applications consume data is the first step towards making tiered pricing something that users can cope with.  In addition to the applications that show usage directly, we might also consider other forms of visibility, such as information that helps users estimate a total cost of ownership for running a mobile application (e.g., the free application might actually cost the user more in the long run, if downloading the advertisements to support the free application eats into the user’s data cap).  We also need better ways of fingerprinting devices; applications like uCap still force users to identify devices (note the obscure MAC addresses in the dashboard above for devices on my network that I didn’t bother to manually identify).  Solving these problems requires both deep domain knowledge about networking and intuition and expertise in human factors and interface design.
  • Better control.  This area deserves much more attention.  uCap offers some nice first steps towards giving users control because it helps users control how much data a particular device can send.  But, shouldn’t we be solving this problem in other ways, as well?  For example, we might imagine exposing an SDK to application developers that helps them write applications that are more cognizant of data constraints—for example, by deferring updates when a user is near his or her cap, or deferring downloads until “off peak” times or when a user is on a WiFi network.  There are interesting potential developments in both applications and operating systems that could make tiered pricing and demand-side management more palatable, much like appliances in our homes are now being engineered to adapt to variable electricity pricing.
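As a concrete (and entirely hypothetical) illustration of the kind of policy such an SDK might expose, the sketch below defers a large transfer when the device is on a metered connection and the transfer would push usage past a soft threshold of the monthly cap; the threshold and data structures are assumptions, not part of any existing SDK:

```python
from dataclasses import dataclass

@dataclass
class DataBudget:
    cap_bytes: int        # monthly cap
    used_bytes: int       # usage so far this billing cycle
    on_wifi: bool         # treat Wi-Fi as unmetered

def should_defer(transfer_bytes: int, budget: DataBudget,
                 soft_threshold: float = 0.9) -> bool:
    """Defer if a metered transfer would push usage past 90% of the cap."""
    if budget.on_wifi:
        return False                      # unmetered: send now
    projected = budget.used_bytes + transfer_bytes
    return projected > soft_threshold * budget.cap_bytes

# Example: a 500 MB app update, with 4.7 GB of a 5 GB cap already used.
budget = DataBudget(cap_bytes=5_000_000_000, used_bytes=4_700_000_000, on_wifi=False)
print(should_defer(500_000_000, budget))  # True: wait for Wi-Fi or off-peak
```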

Finally, Patrick made a point that even if users could understand and control usage caps, they often don’t have any reasonable alternatives if they decide they don’t like their current ISP’s policies.  So, while some of the technological developments we discussed may make a user’s life easier, these improvements are, in some sense, a red herring if a user cannot have some amount of choice over their Internet service provider.  This issue of consumer choice (or lack thereof) does appear to be the elephant in the room for many of the policy discussions surrounding data caps, tiered pricing, and network neutrality.  Yet, until the issues of choice are solved, improving both visibility and control in the technologies that we develop can allow both users and ISPs to be better off in a realm where tiered pricing and data caps exist—a realm which, I would argue, is not only inevitable but also potentially beneficial for both ISPs and consumers.

Internet Relativism and the Hunt for Elusive “Ground Truth”

Networking and security research often rely on a notion of ground truth to evaluate the effectiveness of a solution.  “Ground truth” refers to a true underlying phenomenon that we would like to characterize, detect, or measure.  We often evaluate the effectiveness of a classifier, detector, or measurement technique by how well it reflects ground truth.

For example, an Internet link might have a certain upstream or downstream throughput; the effectiveness of a tool that measures throughput could thus be quantified in terms of how close its estimates of upstream and downstream throughput are to the true throughput of the underlying link.  Since there is a physical link with actual upstream or downstream throughput characteristics—and the properties of that link are either explicitly known or can be independently measured—measuring error with respect to ground truth makes sense.  In the case of analyzing routing configuration to predict routing behavior (or detect errors), static configuration analysis can characterize where traffic in the network will flow and whether the configuration will give rise to erroneous behavior; either the predictions correctly characterize the behavior of the real network, or they don’t.  A spam filter might classify an email sender as a legitimate sender or a spammer; again, either the sender is a spammer or it is a legitimate mail server.  In this case, comparing against ground truth is more difficult, since if we had a perfect characterization of spammers and legitimate senders, we would already have the perfect spam filter.  The solution in these kinds of cases is to compare against an independent label (e.g., a blacklist) and somehow argue that the proposed detection mechanism is better than the existing approach to labeling or classification (e.g., faster, earlier, more lightweight, etc.).

Problem: Lack of ground truth.  For some Internet measurement problems, the underlying phenomenon simply cannot be known—even via an independent labeling mechanism—either because the perpetrator of an action won’t reveal his or her true intention, or sometimes because there actually is no “one true answer”. Sometimes we want to characterize scenarios or phenomena where the ground truth proves elusive.  

Consider the following two problems:

  • Network neutrality.  The network neutrality debate centers around the question of whether Internet service providers should carry all traffic according to the same class of service, regardless of properties such as what type of traffic it is (e.g., voice, video) or who is sending or receiving that traffic.
  • Filter bubbles.  Eli Pariser introduced the notion of a filter bubble in his book The Filter Bubble.  A filter bubble is the phenomenon whereby each Internet user sees different Internet content based on factors ranging from our demographic to our past search history to our stated preferences.  Briefly, each of us sees a different version of the Internet, based on a wide range of factors.

These two detection problems do not have a notion of ground truth that can be easily measured.  In the latter case, there is effectively no ground truth at all.

In the case of network neutrality, detection boils down to determining whether an ISP is providing preferential treatment to a certain class of applications or customers.  While ground truth certainly exists (i.e., either the ISP is discriminating against a certain class of traffic or it isn’t), discovering ground truth is incredibly challenging: ISPs may not reveal their policies concerning preferential treatment of different traffic flows, for example.

Similarly, in the case of filter bubbles, we want to determine whether a content provider or intermediary (e.g., search engine, news aggregator, social network feed) is manipulating content for particular groups of users (e.g., showing only certain news articles to Americans).  Again, there is a notion of ground truth—either the content is being manipulated or it isn’t—but the interesting aspect here is not so much whether content is being manipulated (we all know that it is), but rather what the extent of that manipulation is.  Characterizing the extent of manipulation is difficult, however, because personalization is so pervasive on the Internet: everyone effectively sees content that is tailored to their circumstances, and there is no notion of a baseline that reflects what a set of search results or a page of recommended products might look like before the contents were tailored for a particular user.  In many cases, personalization has been so ingrained in data mining and search that even the algorithm designers are unable to characterize what “ground truth” content (i.e., without manipulation) might look like.

Relativism: measuring how different perspectives give rise to inconsistencies.  In cases where ground truth is difficult to measure or impossible to know, we can still ask questions about consistency.  For example, in the case of network neutrality, we can ask whether different groups of users experience comparable performance.  In the case of filter bubbles, we can ask whether different groups of users see similar content.  When inconsistencies arise, we can then attempt to attribute a cause to these inconsistencies by controlling for all factors except for the factor we believe might be the underlying cause for the inconsistency.  One might call this Internet relativism, in a way: We concede that either there is no absolute truth, or that the absolute truth is so difficult to obtain that we might as well not try to know it.  Instead, we can explore how differences in perspective  or “input signals” (e.g., demographic, geography) give rise to different outcomes and try to determine which input differences triggered the inconsistency.  We have applied this technique to the design of two real-world systems that address these two respective problem areas.  In both of these problems, we would love to know the underlying intention of the ISP or information intermediary (i.e., “Is the performance problem I’m seeing a result of preferential treatment?”, “(How) is Google, Netflix, or Amazon manipulating my results based on my demographic?”).

  • NANO: Network Access Neutrality Observatory.  We developed NANO several years ago to characterize ISP discrimination against different classes of traffic flows.  In contrast to existing work in this area (e.g., Glasnost), which requires a hypothesis about the type of discrimination that is taking place, NANO operates without any a priori hypothesis about discrimination rules and simply looks for systematic deviation from “normal” behavior for a certain class of traffic (e.g., all traffic from a certain ISP, for a certain application, etc.).  The tricky aspect involved in this type of detection is that there is no notion of normal.  For example, another ISP might also be performing a similar type of discrimination, so there is no firm ground truth against which to compare.  Ideally, what we’d like to ask is: “What performance would this user see using ISP X vs. the performance they would see if they were not using ISP X?”  Unfortunately, there is no reasonable way to test the performance that a user would experience as a result of not using an ISP.  (This is in contrast to randomized treatment in clinical trials, where it makes sense to have a group of users who, say, are subject to a particular treatment or not.)  To address this problem, the best we could do to establish a baseline was to average the performance seen by all users from other ISPs and compare those statistics against the performance seen by a group of users for the ISP under test; a simplified sketch of this kind of stratified comparison appears after this list.
  • Bobble: Exposing inconsistent search results.  We recently developed Bobble to characterize the inconsistencies that exist in Web search results that users see, as a result of both personalization and geography.  Ideally, we would like to measure the extent of manipulation against some kind of baseline.  Unfortunately, however, the notion of a baseline is almost meaningless, since no Internet user is subject to such a baseline—even a user who has no search history may still see personalized results based on geography, time of day, device type, and other features, for example.  In this scenario, we established a baseline by comparing the search results of a signed-in user against a user with no search history, making our best attempt to hold all other factors constant.  We also performed the same experiment with users who were not signed in and had no search history, varying only geography.  Unlike NANO, in the case of Bobble, there is not even a notion of an “average” user; the best we can hope for are meaningful characterizations of inconsistencies.
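Here is the simplified sketch referred to above: stratify users by the confounding features you can observe (only browser and access technology here; NANO controls for more), and within each stratum compare the ISP under test against the average over users of other ISPs. The records and numbers are made up for illustration:

```python
import statistics
from collections import defaultdict

def baseline_vs_test(records, isp_under_test):
    """records: iterable of (isp, browser, access_tech, throughput_mbps)."""
    strata = defaultdict(lambda: {"test": [], "baseline": []})
    for isp, browser, access_tech, tput in records:
        bucket = "test" if isp == isp_under_test else "baseline"
        strata[(browser, access_tech)][bucket].append(tput)
    report = {}
    for key, groups in strata.items():
        if groups["test"] and groups["baseline"]:
            report[key] = (statistics.mean(groups["test"]),      # ISP under test
                           statistics.mean(groups["baseline"]))  # everyone else
    return report

records = [
    ("ISP-X", "chrome", "cable", 18.0),
    ("ISP-Y", "chrome", "cable", 24.5),
    ("ISP-Z", "chrome", "cable", 23.8),
    ("ISP-X", "firefox", "dsl", 5.1),
    ("ISP-Y", "firefox", "dsl", 5.3),
]
print(baseline_vs_test(records, "ISP-X"))
```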

Takeaways and general principles.  These two problems both involve an attempt to characterize an underlying phenomenon without any hope of observing “ground truth”.  In these cases, it seems that our best hope is to approximate a baseline and compare against that (as we did in NANO); failing that, we can at least characterize inconsistencies.  In any case, when looking for these inconsistencies, it is important to (1) enumerate all factors that could possibly introduce inconsistencies; and (2) hold those factors fixed, to the extent possible.  For example, in NANO, one can only compare a user against average performance for a group of users that have identical (or at least similar) characteristics for anything that could affect the outcome.  If, for example, browser type (or other features) might affect performance, then the performance of a user for an ISP “under test” must be compared against users with the same browser (or other features), with the ISP being the only differing feature that could possibly affect performance.  Similarly, in the case of Bobble, we must hold other factors like browser type and device type fixed when attempting to isolate the effects of geography or search history.  Enumerating all of these features that could introduce  inconsistencies is extremely challenging, and I am not aware of any good way to determine whether a list of such features is exhaustive.

I believe networking and security researchers will continue to encounter phenomena that they would like to measure, but where the nature of the underlying phenomenon cannot be known with certainty.  I am curious as to whether others have encountered problems that call for Internet relativism, and whether it may be time to develop sound experimental methods to characterize Internet relativism, rather than simply blindly clamoring for “ground truth” when none may even exist.

Software-Defined Networking and The New Internet

Tonight, I am sitting on a panel sponsored by NSF and Discover Magazine about “The New Internet”.  The panel has four panelists who will be discussing their thoughts on the future of the Internet.  Some of the questions we have been asked to answer involve predictions about what will happen in the future.  Predictions are a tall order; as Yogi Berra said: “It is hard to make predictions, especially about the future.”

Predictions aside, I think one of the most exciting things about this panel is that we are having this discussion at all.  Not even ten years ago, Internet researchers were bemoaning the “ossification” of the Internet.  As the Internet continues to mature and expand, the opportunities and challenges seem limitless.  More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years. The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad.  This changing landscape presents tremendous opportunities for innovation.   The challenge, then, is developing a platform on which this innovation can occur.  Along these lines, a multicampus collaboration is pursuing a future Internet architecture that proposes to architect the network to make it easier for researchers and practitioners to introduce new, disruptive technologies on the Internet.  The “framework for innovation” that is proposed in the work rests on a newly emerging technology called software-defined networking.

Software-defined networking. Network devices effectively have two aspects: the control plane (in some sense, the “brain” for the network, or the protocols that make all of the decisions about where traffic should go), and the data plane (the set of functions that actually forward packets).  Part of the idea behind software-defined networking is to run the network’s control plane in software, on commodity servers that are separate from the network devices themselves.  This notion has roots in a system called the Routing Control Platform, which we worked on about five years ago and which now operates in production at AT&T.  More recently, it has gained more widespread adoption in the form of the OpenFlow switch specification.  Software-defined networking is now coming of age in the NOX platform, an open-source OpenFlow controller that allows designers to write network control software in high-level languages like Python. A second aspect of software-defined networking is to make the data plane itself more programmable, for example, by engineering the network data plane to run on programmable hardware.  People are trying to design data planes that are more programmable with FPGAs (see our SIGCOMM paper on SwitchBlade), with GPUs (see the PacketShader work), and also with clusters of servers (see the RouteBricks project).
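To make the control-plane/data-plane split concrete, here is a purely conceptual sketch in plain Python (it is not tied to NOX, OpenFlow, or any real controller API): a “dumb” data plane matches packets against installed flow entries and punts misses to a controller, which learns where hosts live and pushes decisions back down as flow-table entries.

```python
class DataPlane:
    """Fast path: match packets against installed flow entries and forward."""
    def __init__(self):
        self.flow_table = {}                     # dst_mac -> output port

    def handle_packet(self, pkt, controller):
        port = self.flow_table.get(pkt["dst_mac"])
        if port is None:
            controller.packet_in(self, pkt)      # miss: punt to control plane
        else:
            print(f"forward {pkt['dst_mac']} out port {port}")

class LearningSwitchController:
    """Control plane: learns MAC-to-port mappings, installs flow entries."""
    def __init__(self):
        self.mac_to_port = {}

    def packet_in(self, switch, pkt):
        self.mac_to_port[pkt["src_mac"]] = pkt["in_port"]
        out_port = self.mac_to_port.get(pkt["dst_mac"])
        if out_port is None:
            print("flood (unknown destination)")
        else:
            # Push the decision down so future packets stay on the fast path.
            switch.flow_table[pkt["dst_mac"]] = out_port
            print(f"install flow: {pkt['dst_mac']} -> port {out_port}")

switch, ctrl = DataPlane(), LearningSwitchController()
switch.handle_packet({"src_mac": "aa", "dst_mac": "bb", "in_port": 1}, ctrl)  # flood
switch.handle_packet({"src_mac": "bb", "dst_mac": "aa", "in_port": 2}, ctrl)  # install
switch.handle_packet({"src_mac": "bb", "dst_mac": "aa", "in_port": 2}, ctrl)  # fast path
```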

This paradigm is reshaping how we do computer networking research.  Five years ago, vendors of proprietary networking devices essentially “held the keys” to innovation, because networking devices—and their functions—were closed and proprietary.  Now a software program can control the behavior not only of  individual networking devices but also of entire networks.  Essentially, we are now at the point where we can control very large networks of devices with a single piece of software.

Thoughts on the New Internet. The questions asked of the panelists are understandably a bit broad. I’ve decided to take a crack at these answers in the context of software-defined networking.

1. What do you see happening in computer networking and security in the next five to ten years? We are already beginning to see several developments that will continue to take shape over the next ten years. One trend is the movement of content and services to the “cloud”. We are increasingly using services that are not on our desktops but actually run in large datacenters alongside many other services.  This shift creates many opportunities: we can rely on service providers to maintain software and services that once required dedicated system and network administration.  But, there are also many associated challenges.  First, determining how to help network operators optimize both the cost and performance of these services is difficult; we are working on technologies and algorithms to help network operators better control how users reach services running in the cloud to help them better manage the cost of running these services while still providing adequate performance to the users of these services. A second challenge relates to security: as an increasing number of services move to the cloud, we must develop techniques to make certain that services running in the cloud cannot be compromised and that the data that is stored in the cloud is safe.

Another important trend in network security is the growing importance of controlling where data goes and tracking where it has been; as networks proliferate, it becomes increasingly easy to move data from place to place—sometimes to places where it should not go.  There have been several high-profile cases of “data leaks”, including a former Goldman Sachs employee who was caught copying sensitive data to his hedge fund.  Issues of data-leak prevention and compliance (which involves being able to verify that data did not leak to a certain portion of the network) are becoming much more important as more sensitive data moves to the Internet, and to the cloud.

Software-defined networking is allowing us to develop new technologies to solve both of these problems. In our work on Transit Portal, we have used software routers to give cloud service providers much more fine-grained control over traffic to cloud services. We have also developed new technology based on software-defined networking to help stop data leaks at the network layer.

2. What is the biggest threat to everyday users in terms of computer security? Two of the biggest threats to everyday users in terms of computer security are the migration of data and services to the cloud and the proliferation of well-provisioned edge networks (e.g., the buildout of broadband connections to home networks).  The movement of data to the cloud offers many conveniences, but it also presents potentially serious privacy risks.  As services ranging from email to spreadsheets to social networking move to the cloud, we must develop ways to gain more assurance over who is allowed to have access to our data.  Another important challenge we will face with regards to computer security is the proliferation of well-provisioned edge networks. The threat of botnets that mount attacks ranging from spam to phishing to denial-of-service will become even more acute as home networks—which are, today, essentially unmanaged—proliferate. Attackers look for well-connected hosts, and as connectivity to homes improves and as the network “edge” expands, mechanisms to secure the edge of the network will also become more important.

3. What can we do via the Internet in the future that we can’t do now? The possibilities are limitless.  You could probably imagine that anything you are doing in the real world now might take place online in the future.  We are even seeing the proliferation of entirely separate virtual worlds, and the blending of the virtual world with the physical world, in areas such as augmented reality.  Pervasive, ubiquitous computing and the emergence of cloud-based data services make it easier to design, build, and deploy services that aggregate large quantities of data.  As everything we do moves online, everything we do will also be stored somewhere.  This trend poses privacy challenges, but, if we can surmount those challenges, there may also be significant benefits, if we can develop ways to efficiently aggregate, sort, search, analyze and present the growing volumes of data.

The Economist had a recent article that suggested that the next billion people who come onto the Internet will do so via mobile phone; this changing mode of operation will very likely give rise to completely new ways of communicating and interacting online.  For example, rural farmers are now getting information about farming techniques online; services such as Twitter are affecting political dynamics, and may even be used to help defeat censorship.

Future capabilities are especially difficult to predict, and I think networking researchers have not had the best track record in predicting future capabilities.  Many of the exciting new Internet applications have actually come from industry, both through large companies and from startups.  Networking research has been most successful at developing platforms on which these new applications can run, and ongoing research suggests that we will continue to see big successes in that area.  I think software-defined networking will make it easier to evolve these platforms as new applications develop and we see the need for new applications.

4. What are the big challenges facing the future of the Internet? One of the biggest challenges facing the future of the Internet is that we don’t really yet have a good understanding of how to make it usable, manageable, and secure.  We need to understand these aspects of the Internet, if for no other reason than we are becoming increasingly dependent on it.  As Mark Weiser said, “The most profound technologies are those that disappear.”  Our cars have complex networks inside of them that we don’t need to understand in order to drive them.  We don’t need to understand Maxwell’s equations to plug in a toaster.  Yet, to configure a home network, we still need to understand arcana such as “SSID”, “MAC Address”, and “traceroute”.  We must figure out how to make these technologies disappear, at least from the perspective of the everyday user.  Part of this involves providing more visibility to network users about the performance of their networks, in ways that they can understand.  We are working with SamKnows and the FCC on developing techniques to improve user visibility into the performance of their access networks, for example.  Software-defined networking probably has a role to play here, as well: imagine, for example, “outsourcing” some of the management of your home network to a third party service who could help you troubleshoot and secure your network.  We have begun to explore how software-defined networking could make this possible (our recent HomeNets paper presents one possible approach).  Finally, I don’t know if it’s a challenge per se, but another significant question we face is what will happen to online discourse and communication as more countries come online; tens of countries around the world implement some form of surveillance or censorship, and the technologies that we develop will continue to shape this debate.

5. What is it going to take to achieve these new frontiers? The foremost requirement is an underlying substrate that allows us to easily and rapidly innovate and frees us from the constraints of deployed infrastructure.  One of the lessons from the Internet thus far is that we are extraordinarily bad at predicting what will come next.  Therefore, the most important thing we can do is to design the infrastructure so that it is evolvable.

I recently read a debate in Communications of the ACM concerning whether innovation on the Internet should happen in an incremental, evolutionary way or whether new designs must come about in a “clean slate” fashion.  But, I don’t think these philosophies are necessarily contradictory at all: we should be approaching problems with a “clean slate” mentality; we should not constrain the way we think about solutions simply based on what technology is deployed today. On the other hand, we must also figure out how to deploy whatever solutions we devise in the context of real, existing, deployed infrastructure.  I think software-defined networking may effectively resolve this debate for good: clean-slate, disruptive innovation can occur in the context of existing infrastructure, as long as the infrastructure is designed to enable evolution.  Software-defined networking makes this evolution possible.

Tell Me a Story

Commencement time brings commencement speeches; one of my favorites is a speech by Robert Krulwich at Caltech in 2008, where he discusses the importance of storytelling in science.  His speech makes a case for talking about science to audiences that may not be well-versed experts in the topic being presented.  This speech should be required listening for any graduate student or researcher in science.

Krulwich begins the speech by putting the students in a hypothetical scenario where a non-technical friend or family member asks “What are you working on?”  What would you think: Is it worth the effort to try to explain your work to the general public?  Do you care to be understood by average folks?  His advice: When someone asks this question, even if it is hard to explain, give it a try.  Talking about science to non-scientists is a non-trivial undertaking.  And it is an important undertaking, because the scientific version of things competes with other, perhaps equally (or more) compelling, stories.

As researchers, we are competing for human attention, and humans love to hear stories.  Storytelling is perhaps one of the most important—and one of the most under-taught—aspects of our discipline.  The narrative of a research writeup or talk can often determine whether the work is well-received—or even received, for that matter.  Some cynics may dismiss storytelling as “marketing”, “hype”, or “packaging”, but the fact of the matter is that packaging is important.  Certainly, research papers (or talks) cannot have merely style without substance, but people are busy, and many people (reviewers, journalists, and even other people within your field) will not stick around for the punchline if the story is not compelling.  Of course, this advice applies well beyond the research community, but I will focus here on storytelling in research, and some things I have learned thus far in my experiences.

When I began working on network-level spam filtering, I was initially pretty surprised at how much attention the work was receiving.  In particular, I viewed our first paper on the topic as somewhat light: it offered no sound theory and no especially strong results, for example.  But the work was quickly picked up by the media, on multiple occasions.  I found myself talking to a lot of reporters, and, as I repeatedly explained the work to them, I got better at telling its story.  I was using analogies and metaphors to describe our techniques, and I got much better at setting the stage for the work.  I also realized what gave the work such broad appeal: everyone understands email spam, and the conceptual differences with our approach were very easy to explain.  Here is the story, in a nutshell:

“Approximately 95% of all email traffic is spam.  Conventional mail filters look at the contents of the message—words in the mail, for example—to distinguish spam from legitimate content.  Unfortunately, as spammers get more clever, they can evade these filtering techniques by changing the content of their messages.  In contrast, our approach looks at behavioral characteristics: rather than looking at the message itself or who sent it, we look at how it was sent.  To understand this, think about telemarketer phone calls: you know that when someone calls first thing in the morning or right during dinner, the call is most likely a telemarketer, simply because your friends or family are too considerate to call you at those times.  You know the call is unwanted and can dismiss it before you even answer the phone.  We take the same approach with email messages: we identify behavioral characteristics that allow a mail server to reject a message based on the initial contact attempt, before it even accepts or examines the message.  Our method filters spam with 99.9% accuracy, and network operators can deploy our techniques easily, without modifications to existing protocols or infrastructure.”

It turns out that this message is relatively easy for the average person to understand: readers can relate to the story because they can see what it has to do with their lives, and the approach is explained clearly, in terms of things they already understand.  Even after this initial work was published, it took me years to refine the story so that it could be expressed crisply.  Introductions to papers and talks should always be treated with similar care.  One can think of the introduction to a paper as a synopsis of the entire story, with the paper itself being the “unabridged” version (i.e., it may include many details that only the most interested reader will pore over).
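
Before moving on from the spam example: for readers who want to see the behavioral idea in code, here is a minimal, purely illustrative sketch.  The features, weights, and threshold below are invented for this post (they are not the actual features or model from our work), but they convey the flavor of scoring a connection attempt before the message body is ever accepted.

```python
# Toy sketch of behavioral (network-level) spam scoring.  The features and
# weights here are made up for illustration; they are not the features or
# model from the actual system.

from dataclasses import dataclass


@dataclass
class ConnectionAttempt:
    sender_ip: str
    hour_of_day: int              # local hour at the receiving mail server
    sender_as_spam_ratio: float   # recent fraction of spam from the sender's AS
    prior_contact: bool           # has this sender delivered legitimate mail before?


def spam_score(conn: ConnectionAttempt) -> float:
    """Score a connection attempt before accepting the message body.

    Higher scores mean "more likely spam".  A real system would learn these
    weights from labeled data; the numbers here are invented for the sketch.
    """
    score = 0.0
    if not conn.prior_contact:
        score += 0.4                  # strangers are more suspect
    if conn.hour_of_day < 6:          # the "telemarketer at dinner" analogy
        score += 0.2
    score += 0.4 * conn.sender_as_spam_ratio
    return score


def should_reject(conn: ConnectionAttempt, threshold: float = 0.7) -> bool:
    """Reject at SMTP connection time if the behavioral score is high enough."""
    return spam_score(conn) >= threshold


if __name__ == "__main__":
    attempt = ConnectionAttempt("203.0.113.7", hour_of_day=3,
                                sender_as_spam_ratio=0.9, prior_contact=False)
    print(spam_score(attempt), should_reject(attempt))
```

The point of the sketch is only that the decision can be made from how the message is being sent, without ever looking at its contents.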

How does one tell a story that readers or listeners actually want to hear?  Unfortunately, there is no single silver bullet, and storytelling is certainly an art.  However, certain key ingredients tend to work well; in general, good stories (and good research stories in particular) share many common elements.  Based on those common elements, here is some advice:

  • Have a beginning, middle, and an end. At the beginning, a research paper or talk should set the context for the work.  A reader or listener immediately wants to know why they should devote their time or attention to what you have to say.  Why is the problem being solved important and interesting?  Why is the problem challenging?  Why is the solution useful or beautiful?  Who can use the results, and how can they use them?  For example, in the above story on spam filtering, there is a beginning (“users get spam; it’s annoying, and current approaches don’t work perfectly”), a middle (“here’s a new and interesting approach”), and an end (“it works; people can use it easily”).
  • Use analogies and metaphors. People have a much easier time understanding a new concept if you can relate it to something they already understand.  For example, the above story uses telemarketing as an analogy for email spam; nearly everyone has experienced a rude awakening or disruption from a telemarketer, which makes the analogy easy to grasp.  The analogy may not be perfect; even so, I find it helps to use one anyway and explain the subtle differences later.
  • Use concrete examples. People like to see concrete examples because they are exciting and much easier to relate to.  It’s even better if the example can be surprising, or otherwise engaging.  For example, the above story gives a statistic about spam that is concrete, and some may even find surprising.  In a talk, I often augment this concrete example with a news clipping, a graph, or an interactive question (e.g., one can have people guess what fraction of email traffic is spam).
  • Write in the active voice. Consider “It was observed.” (passive) vs. “We saw.” (active).  The first is boring, indirect, and unclear: the reader (or listener) cannot even figure out who observed.  I find this writing style immensely frustrating for this reason.  My frustration generally comes to a boil when someone describes a system using primarily verbs in the passive voice (“The message was sent.”).  Passive voice makes it nearly impossible for the reader to figure out what is happening because the subject of the verb is unspecified.  Often, when I press students to turn their verbs into active voice, we find out that even they were unclear on what the subject of the verb should be (e.g., what part of the system takes a certain action).
  • Be as concise as possible, but not too concise. We’ve all complained about movies that “drag on too long” or a speech that “does not get to the point”.  Humans can be quite impatient, and, in the context of research papers, people want to know the punchline quickly, as well.  Research papers are not mystery novels; they should be interesting, but they should also convey findings clearly and efficiently.  Most of my time editing writing involves removing words and otherwise shortening paragraphs to streamline the story as much as possible.

A final point is to consider the audience.  Someone you meet in an elevator or hallway might be much less interested in the details of your work than someone listening to a conference talk or thesis defense.  For this reason, it’s important to have multiple versions of your story ready.  I call this a “multi-resolution elevator pitch”, because it’s a pitch where I can start with a high-level story and dive into details as necessary.  Having a multi-resolution elevator pitch ready also makes it much easier to convey your point to very busy people who may not have the time to stick around for more than 30 seconds.  If, however, you can hook them in the first 30 seconds, you may find that they stick around to hear the longer version of your story.

Show Me the Data

One of my friends recently pointed me to this post about network data. The author states that one of the things he will miss the most about working at Google is the access to the tremendous amount of data that the company collects.

Although I have not worked at Google and can only imagine the treasure trove their employees must have, I have also worked with lots of sensitive data, during a summer at AT&T Research Labs.  At AT&T, we had—and researchers still presumably have—access to a fount of data, ranging from router configurations to routing table dumps to traffic statistics of all kinds.  I found having direct access to this kind of data tremendously valuable: it allowed me to “get my hands dirty” and play with data as I explored interesting questions that might be hiding in the data itself.  During that summer, I developed a taste for working on real, operational problems.

Unfortunately, when one retreats to the ivory tower, one cannot bring the data along for the ride.  Sitting back at my desk at MIT, I realized there were a lot of problems with network configuration management and wanted to build tools to help network operators run their networks better.  One of these tools was the “router configuration checker” (rcc), which has been downloaded and used by hundreds of ISPs to check their routing configurations for various kinds of errors.  The road to developing this tool was tricky: it required knowing a lot about how network operators configure their networks and, more importantly, direct access to network configurations on which to debug the tool.  I found myself in a catch-22: I wanted to develop a tool that was useful for operators, but I needed operators to give me data to develop the tool in the first place.
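
To give a flavor of the kind of check rcc performs, here is a toy sketch.  The configuration format and the rule below are drastically simplified and hypothetical; they are not rcc’s actual parser or checks, but they illustrate the general idea of flagging likely misconfigurations directly from router configuration files.

```python
# Toy flavor of a routing-configuration check.  The "configuration" and the
# rule below are simplified and hypothetical; rcc's real parser and checks
# are far more involved.

import re

CONFIG = """
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map IMPORT-65002 in
 neighbor 198.51.100.2 remote-as 65003
"""


def find_unfiltered_ebgp_sessions(config: str, local_as: int):
    """Flag eBGP neighbors that have no inbound route-map applied."""
    neighbors = {}    # neighbor IP -> remote AS number
    filtered = set()  # neighbor IPs that have an inbound route-map
    for line in config.splitlines():
        m = re.match(r"\s*neighbor (\S+) remote-as (\d+)", line)
        if m:
            neighbors[m.group(1)] = int(m.group(2))
        m = re.match(r"\s*neighbor (\S+) route-map \S+ in\b", line)
        if m:
            filtered.add(m.group(1))
    return [ip for ip, remote_as in neighbors.items()
            if remote_as != local_as and ip not in filtered]


if __name__ == "__main__":
    for ip in find_unfiltered_ebgp_sessions(CONFIG, local_as=65001):
        print(f"warning: eBGP neighbor {ip} has no inbound route-map")
```

Real operator configurations are, of course, far messier, which is exactly why having real data to debug against mattered so much.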

My most useful mentor at this juncture was Randy Bush, a research-friendly operator who told me something along the following lines: Everyone wants data, but nobody knows what they’re going to do with it once they get it.  Help the operators solve a useful problem, and they will give you data.

This advice could not have been more sage.

I went to meetings of the North American Network Operators Group (NANOG) and talked about the basic checks I had managed to bootstrap into some scripts using data I had from MIT and a couple of other smaller networks (basically, enough to test that the tool worked on Cisco and Juniper configurations).  At NANOG, I met a lot of operators who seemed interested in the tool and were willing to help—often they would not provide me with their configurations, but they would run the tool for me and tell me the output (and whether or not the output made sense).  Guy Tal was another person to whom I owe a lot of gratitude for his patience in this regard.  Sometimes, I got lucky and even got hold of some configurations to stare at.

Before I knew it, I had a tool that could run on large Internet Service Provider (ISP) configurations and give operators meaningful information about their networks, and hundreds of ISPs were using the tool.  And, I think that when I gave my job talk, people from other areas may not have understood the details of “BGP”, or “route oscillations”, or “route hijacks”, but they certainly understood that ISPs were actually using the tool.

We applied the same approach when we started working on spam filtering.  We wrote an initial paper that studied the network-level behavior of spammers with some data we were able to collect at a local “spam trap” on the MIT campus (more on that project in a later post).  The visibility of that work (and its unique approach, which spawned a lot of follow-on work) allowed us to connect with people in industry who were working on spam filtering, had real problems that needed solving, and had data (and, equally importantly, expertise) to help us think about the problems and solutions more clearly.

In these projects (as well as other more recent ones), I see a pattern in how one can get access to “real data”, even in academia.  Roughly, here is some advice:

  • Have a clear, practical problem or question in mind. Do not simply ask for data.  Everyone asks for data.  A much more select set is actually capable of doing something useful with it.  Demonstrate that you have given some thought to questions you want to answer, and think about whether anyone else might be interested in those questions.  Importantly, think about whether the person you are asking for data might be interested in what you have to offer.
  • Be prepared to work with imperfect data. You may not get exactly the data you would like.  For example, the router configurations or traffic traces might be partially anonymized.  You may only get metadata about email messages, as opposed to full payloads.  (And so on.)  Your initial reaction might be to think that all is lost without the “perfect dataset”.  This is rarely the case!  Think about how you can either adjust your model, or adapt your approach (or even the question itself) with imperfect data.
  • Be prepared to operate blindly. In many cases, operators (or other researchers) cannot give you the raw data that they have access to; often, data may be sensitive, or protected by non-disclosure agreements.  However, these people can sometimes run analyses on the data for you, if you are nice to them and if you package the analysis code so that they can easily run your scripts (see the sketch after this list).
  • Bring something to the table. This goes back to Randy Bush’s point. If you make yourself useful to operators (or others with data), they will want to work with you—if you are asking an interesting question or providing something useful, they might be just as interested in the answers as you are.
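
One concrete way to make “operating blindly” painless for the data owner is to ship a self-contained script that runs entirely on their side and emits only aggregates.  The sketch below is hypothetical: the input format and column names are invented, but the shape (local input, summary-only output) is the point.

```python
# Hypothetical shape of an analysis script an operator can run on data you
# never see: it reads a local log, computes only aggregate statistics, and
# prints a small summary that is safe to share back.  The column names
# ("protocol", "bytes") are invented for this example.

import csv
import sys
from collections import Counter


def summarize(path: str) -> None:
    """Read a local CSV log (one row per flow) and print only aggregates."""
    protocols = Counter()
    total_bytes = 0
    rows = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            protocols[row["protocol"]] += 1
            total_bytes += int(row["bytes"])
            rows += 1
    print(f"flows: {rows}")
    print(f"total bytes: {total_bytes}")
    for proto, count in protocols.most_common(5):
        print(f"{proto}: {count} flows")


if __name__ == "__main__":
    summarize(sys.argv[1])
```

The easier the script is to run, and the more obviously harmless its output, the more likely a busy operator is to run it for you.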

There is much more to say about networking research and data.  Sometimes it is simply not possible to get the data one needs to solve interesting research problems (e.g., pricing data is very difficult to obtain).  Still, I think that, as networking researchers, we should first look for interesting problems and then look for data that can help us solve those problems; too often, we operate in reverse, like the drunk who looks for his keys under the lamppost because that is where the light is.  I’ll say more about this in a later post.

Networking Meets Cloud Computing (Or, “How I Learned to Stop Worrying and Love GENI”)

If you build it, will they come? In Field of Dreams, Ray Kinsella is confronted in his cornfield by a whisper that says, “If you build it, he will come,” which Ray believes refers to building a baseball field in the middle of a cornfield that will play host to Shoeless Joe and members of the 1919 Black Sox.  Only Ray can see the players initially, leading others to tell him that he should simply rip out the baseball field and replant his corn crop.  Eventually, other people see the players, too, and decide that keeping the baseball field might not be such a bad idea after all.

I can’t help but wonder whether this scenario is an apt analogy for the Global Environment for Network Innovations (GENI) effort, sponsored by the National Science Foundation.  The GENI project seeks to build a worldwide network testbed that allows Internet researchers to design and test new network architectures and protocols.  The project has many moving parts, and I won’t survey all of them here.  A salient feature of GENI, though, is that it funds infrastructure prototyping and development, but does not directly fund research on that infrastructure.  One of the most interesting challenges for me has been—and still is—how to couple projects that build infrastructure with projects that directly use that infrastructure to develop interesting new technologies and perform cutting-edge research.

Can prototyping spawn new research? This is, in essence, the bet that I think GENI is placing: if we build a new experimental environment for networking innovation, researchers will come and use it.  Can this work? I think the answer is probably “yes”, but it is too soon to know for sure in this context.  Instead, I would like to talk about how our GENI projects have spawned new research—and new educational material—here at Georgia Tech.

The Prototype: Connectivity for Virtual Networks. One of the GENI-funded projects is called the “BGP Multiplexer” or, simply, the “BGP Mux”.  If that sounds obscure, then perhaps you can already begin to understand the challenges we face.  Simply put, the BGP Mux is like a proxy that provides Internet connectivity to virtual networks (BGP is the protocol that connects Internet Service Providers to one another).  The basic idea is that a developer or network researcher might build a virtual network (e.g., on the GENI testbed) and want to connect that network to the rest of the Internet, so that his or her experiment could attract real users.  You can read more about it on the GENI project Web page.

Some people are probably familiar with the concept of virtualization, or creating “virtual” resources (memory, servers, hardware, etc.) based on some shared physical substrate.  Virtual machines are now commonplace; virtual networks, however, are less so.  We started building a Virtual Network Infrastructure (VINI) in 2006.  The main motivation for VINI was to allow experimenters to build virtual networks on a shared physical testbed.  One of the big challenges was connecting these virtual networks to the rest of the Internet.  This is the problem that the BGP Mux solves.

Providing Internet connectivity to virtual networks is perhaps an interesting problem within the context of building a research testbed, but, in my view, it lacked broader research impact on its own.  Effectively, we were building a “hammer” that was useful for constructing a testbed, but I wanted to find a “nail”: a real problem that could be solved, published, and also brought into the classroom.  This was not easy.

The Research: Networking for Cloud Computing.  To broaden the applicability of what we had built, we essentially had to find a “nail”: an application that needs a fast, flexible way to set up and tear down Internet connections.  Cloud computing applications seemed like a natural fit: services on Amazon’s EC2, for example, might want to control inbound and outbound traffic to and from their customers, for cost or performance reasons.  Today, this is difficult.  When you rent servers in EC2, you have no control over how traffic crosses the Internet to reach those servers—if you want paths with less delay or otherwise better performance, you are out of luck.  Using the hammer we had built, the BGP Mux, this was much easier: instead of solving a problem in terms of “virtual networks for researchers” (something only a small community might care about), we were solving the same problem in terms of users of EC2.  Essentially, the BGP Mux offers EC2 “tenants” the ability to control their own network routing.  This capability is now deployed in five locations, and we are planning to expand its footprint.  A paper on this technology will appear at the USENIX Annual Technical Conference in June.  We welcome any other networks that would like to help us with this deployment (i.e., if you can offer us upstream connectivity at another location, we would like to talk to you!).
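
To make “tenants controlling their own routing” concrete, here is a purely illustrative sketch.  The class and method names are invented for this post; they are not the actual BGP Mux or Transit Portal interface, and a real deployment would speak BGP to real upstream routers rather than print messages.

```python
# Purely illustrative sketch of a cloud tenant steering its own routing
# through a BGP proxy.  The names here are invented for this post; they are
# not the actual BGP Mux / Transit Portal interface.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Announcement:
    prefix: str                  # the tenant's address block
    upstreams: List[str] = field(default_factory=list)
    as_path_prepends: int = 0    # prepends make a route less attractive inbound


class TenantRoutingSession:
    """A tenant's view of its BGP sessions, proxied through the mux."""

    def __init__(self, tenant_asn: int):
        self.tenant_asn = tenant_asn
        self.announcements: List[Announcement] = []

    def announce(self, ann: Announcement) -> None:
        # In a real deployment this would translate into BGP UPDATE messages
        # sent via the mux to the chosen upstream providers.
        self.announcements.append(ann)
        print(f"AS{self.tenant_asn}: announce {ann.prefix} via {ann.upstreams} "
              f"with {ann.as_path_prepends} prepend(s)")


if __name__ == "__main__":
    session = TenantRoutingSession(tenant_asn=64512)
    # Prefer upstream A for inbound traffic by prepending on upstream B.
    session.announce(Announcement("203.0.113.0/24", upstreams=["upstream-A"]))
    session.announce(Announcement("203.0.113.0/24", upstreams=["upstream-B"],
                                  as_path_prepends=3))
```

The point is the interface, not the implementation: a tenant expresses routing intent at the level of prefixes and upstream providers, and the mux handles the actual BGP sessions on the tenant’s behalf.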

Education: Transit Portal in the Classroom. I’ve been teaching a course called “Next-Generation Networking”, on Future Internet Architectures, which I plan to discuss at greater length on this blog at some point.  Typical networking courses are not as “hands on” as I would prefer: I, for one, graduated from college without ever even seeing a router in person, let alone configuring one.  I wanted networking students to have more “street cred”—they should be able to say, for example, that they’ve configured routers on a real, running network that’s connected to the Internet and routing real traffic.  This sounds like lunacy.  Who would think that students could play “network operator for a day”?  It just sounds too dangerous to have students play around on live networks with real equipment.  But with virtual networking and the BGP Mux, it’s possible.  I recently assigned a project in this course that had students build virtual networks, connect them to the Internet, and control inbound and outbound traffic using real routing protocols.  Seeing students configure networks and “speak BGP with the rest of the Internet” was one of my proudest days in the classroom.  You can see the assignment and videos of these demos if you’d like to learn more.

Prototyping and research.  Will the researchers come? Our own GENI prototyping efforts have been an exercise in “working backwards” from solution to networking research problem.  I have found that exercise rewarding, if somewhat counter to my usual way of thinking about research (i.e., seek out the important problems first, then find the right hammer).  I think the larger community will now face this challenge on a much broader scale: Once we have GENI, what will we do with it?  Some areas that seem promising include deployment of secure network protocols and services (our current protocols are known to be insecure), better support for mobility (the current Internet does not support mobility very well), new network configuration paradigms (networks of all kinds, from the transit backbone to the home, are much too hard to configure), and new ways of pricing and provisioning networks (today’s markets for Internet connectivity are far too rigid).  We have just finished work on a large NSF proposal on Future Internet Architectures that I think will be able to make use of the infrastructure that we and others are building; in the coming months, I think we’ll have much more to say (and much more to see) on this topic.

A New Window for Networking

It’s an exciting time to be working in communications networks.  Opportunities abound for innovation and impact, in areas ranging from applications, to network operations and management, to network security, and even to the infrastructure and protocols themselves.

When I was interviewing for jobs as networking faculty about five years ago, one of the most common questions I heard was, “How do you hope to have any impact as a researcher when the major router vendors and standards bodies effectively hold the cards to innovation?”  I have always had a taste for solving practical problems with an eye towards fundamentals.  My dissertation work, for example, was on deriving correctness properties for Internet routing and developing a tool, the router configuration checker (rcc), to help network operators check that their routing configurations actually satisfied those properties.  The theoretical aspects of the work were fun, but the real impact was that people could actually use the tool; I still get regular requests for rcc today, from both operators and various networking companies that want to perform route prediction.

This question about impact cut right to the core of what I think was a crisis of confidence for the field.  Much of the research seemed to be focused on performance tuning and protocol tweaks.  Big architectural ideas were confined to paper design, because there was simply no way to evaluate them.  Short of interacting directly with operators and developing tools that they could use, it seemed to me that truly bringing about innovation was rather difficult.

Much has happened in five years, however.  Networking now offers seemingly countless opportunities; there are more interesting problems than there is time to work on them.  Innovation is happening in many areas, and it is becoming feasible to effect fundamental change to the network’s architecture and protocols.  I think several trends are responsible for this wealth of new opportunities:

  • Network security has come to the forefront.  The rise of spam, botnets, phishing, and cybercrime over the past few years cannot be ignored.  By some estimates, as much as 95% of all email is spam.  In a Global Survey by Deloitte, nearly half of the companies surveyed reported an internal security breach, and a third of those breaches resulted from viruses or malware.
  • Enterprise, campus, and data-center networks are facing a wealth of new problems, ranging from access control to rate limiting and prioritization to performance troubleshooting.  I interact regularly with the Georgia Tech campus network operators as a source of inspiration for problems to study.  One of my main takeaways from that interaction is that today’s network configuration is complex, baroque, and low-level—far too much so for the high-level tasks that operators wish to perform.  This makes these networks difficult to evolve and debug.
  • Network infrastructure is becoming increasingly flexible, agile, and programmable.  It used to be the case that network devices were closed and difficult to modify aside from the configuration parameters they exposed.  Recent developments, however, are changing the game.  The OpenFlow project at Stanford University makes it far more feasible to write software programs that control the entire network at a higher level of abstraction, giving operators more direct control over network behavior and, potentially, easier ways to manage and debug their networks (see the sketch after this list).
  • Networking is increasingly coming to blows with policy.  The collision of networking and policy is certainly not new, but it is moving to the forefront, with front-page items such as network neutrality and Internet censorship.  As the two areas continue on this crash course, it is certainly worth thinking about the respective roles that policy and technology play in each of these problems.
  • Networking increasingly entails direct interaction with people of varied technical backgrounds.  It used to be that a “home network” consisted of a computer and a modem.  Now, home networks comprise a wide range of devices, including media servers, game consoles, music streaming appliances, and so forth.  The increasing complexity of these networks makes each and every one of us a network operator, whether we like it or not.  The need to make networks simpler, more secure, and easier to manage has never been more acute.
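
To give a feel for the level of abstraction in the OpenFlow bullet above, here is a conceptual match-action sketch in plain Python.  It uses no real controller framework; the classes are invented purely to illustrate the idea of programming network behavior as prioritized match-action rules rather than box-by-box configuration.

```python
# Conceptual sketch of the match-action abstraction behind OpenFlow-style
# network control.  No real controller framework is used; these classes are
# invented to illustrate programming the network at a higher level.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int


@dataclass
class Rule:
    match: Callable[[Packet], bool]
    action: str           # e.g., "forward:2" or "drop"
    priority: int = 0


class FlowTable:
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def install(self, rule: Rule) -> None:
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)   # highest priority first

    def lookup(self, pkt: Packet) -> Optional[str]:
        for rule in self.rules:
            if rule.match(pkt):
                return rule.action
        return None   # in OpenFlow, a table miss would go to the controller


if __name__ == "__main__":
    table = FlowTable()
    # A campus-style policy: block one host, otherwise forward web traffic.
    table.install(Rule(lambda p: p.src_ip == "10.0.0.99", "drop", priority=10))
    table.install(Rule(lambda p: p.dst_port == 80, "forward:2", priority=1))
    print(table.lookup(Packet("10.0.0.5", "198.51.100.1", 80)))   # forward:2
    print(table.lookup(Packet("10.0.0.99", "198.51.100.1", 80)))  # drop
```

The appeal for operators is that policies like these can be written, tested, and changed in software, rather than encoded in low-level, per-device configuration.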

The networking field continues to face new problems, which also opens it to “hammers” from a variety of areas, ranging from economics to machine learning to human-computer interaction.  One of my colleagues often says that networking is a domain that draws on many disciplines.  One of the fun things about the field is that it allows one to learn a little about a lot of other disciplines as well.  I have had a lot of fun—and learned a lot—working at many of these boundaries: machine learning, economics, architecture, security, and signal processing, to name a few.

The theme of my blog will be problems and topics that relate to network management, operations, security, and architecture.  I plan to write about my own (and my students’) research, current events as they relate to networking, and interesting problem areas and solutions that draw on multiple disciplines.  I will start in the next few posts by touching on each of the bullets above.