The Visible Effects and Hidden Sources of Internet Latency

Most Internet Service Providers advertise their performance in terms of downstream throughput.  The “speed” that one pays for reflects, effectively, the number of bits per second that can be delivered on the access link into your home network.  Although this metric makes sense for many applications, it is only one characteristic of network performance that ultimately affects a user’s experience.  In many cases, latency can be at least as important as downstream throughput.

For example, consider the figure below, which shows Web page load times as downstream throughput increases—the time to load many Web pages decreases as throughput increases, but downstream throughput that is faster than about 16 Mbps stops having any effect on Web page load time.


Page load times decrease with downstream throughput, but only up to 8–16 Mbits/s.

The culprit is latency: For short, small transfers (as is the case with many Web objects), the time to initiate a TCP connection and open the initial congestion window is dominated by the round-trip time between the client and the Web server.  In other words, the size of the access link no longer matters because TCP cannot increase its sending rate to “fill the pipe” before the connection has completed.

The role of latency in Web performance is no secret to anyone who has spent time studying it, and many content providers including Google, Facebook, and others have spent considerable effort to reduce latency (Google has a project called “Make the Web Faster” that encompasses many of these efforts).  Latency plays a role in the time it takes to complete a DNS lookup, the time to initiate a connection to the server, and the time to increase TCP’s congestion window (indeed, students of networking will remember that TCP throughput is inversely proportional to the round-trip time between the client and the server).  Thus, as throughput continues to increase, network latency plays an increasingly predominant role in the performance of applications such as the Web.  Of course, latency also determines user experience for many latency-sensitive applications as well, including streaming voice, audio, video, and gaming.
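A back-of-the-envelope model makes the diminishing returns concrete. The sketch below is a simplification, not a faithful page-load simulator: it assumes a single object fetched over one new connection, an initial congestion window of 10 segments, and no server processing time, charging one round trip for the DNS lookup, one for the TCP handshake, and one per slow-start round:

```python
import math

MSS = 1460  # bytes per TCP segment

def fetch_time(obj_bytes, rtt_s, rate_bps, init_cwnd=10):
    """Rough model: DNS lookup (1 RTT) + TCP handshake (1 RTT) +
    slow-start rounds, plus serialization time on the access link."""
    segments = math.ceil(obj_bytes / MSS)
    # cwnd doubles each round: after r rounds, init_cwnd * (2^r - 1)
    # segments have been delivered, so solve for r.
    rounds = math.ceil(math.log2(segments / init_cwnd + 1))
    return (2 + rounds) * rtt_s + obj_bytes * 8 / rate_bps

# A 100 KB Web object at a 50 ms round-trip time:
for mbps in (4, 16, 64):
    t = fetch_time(100_000, 0.05, mbps * 1e6)
    print(f"{mbps:>2} Mbit/s -> {t * 1000:.0f} ms")
```

In this model the round-trip terms quickly dominate: going from 16 to 64 Mbit/s shaves off only the small serialization term, which is exactly the flattening visible in the figure above.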

The question, then, becomes how to reduce latency to the destinations that users commonly access.  Content providers such as Google have taken several approaches, including (1) placing Web caches closer to users and (2) adjusting TCP’s congestion control mechanism to start sending at a faster rate for the first few round trips.  These steps, however, are only part of the story, because the network performance between the Web cache and the user may still suffer, for a variety of reasons:

  • First, factors such as bufferbloat and DSL interleaving can introduce significant latency in the last mile.  Our study from SIGCOMM 2011 showed how both access link configuration and a user’s choice of equipment (e.g., DSL modem) can significantly affect the latency that a user sees.

  • Second, a poor wireless network in the home can introduce significant latency effects; sometimes we see that 20% of the latency for real user connections from homes is within the home itself.

  • Finally, if the Web cache is not close to users in the first place (e.g., in the case of developing countries), the paths between the users and their destinations can still be subject to significant latency.  These factors can be particularly evident in developing countries, where poor peering and interconnection can result in long paths to content, and where the vast majority of users access the network through mobile and cellular networks.

In the Last Mile

In our SIGCOMM 2011 paper “Broadband Internet Performance: A View from the Gateway” (led by Srikanth Sundaresan and Walter de Donato), we pointed out several aspects of home networks that can contribute significantly to latency.  We define a metric called last-mile latency, which is the latency to the first hop inside the ISP’s network. This metric captures the latency of the access link.
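Given RTT samples, for example from periodic pings to the first hop inside the ISP and to a remote server, the metric is easy to compute. A minimal sketch with made-up sample values:

```python
import statistics

def last_mile_share(first_hop_rtts_ms, end_to_end_rtts_ms):
    """Last-mile latency: the RTT to the first hop inside the ISP's network.
    Returns (median last-mile RTT, its share of the end-to-end median)."""
    lm = statistics.median(first_hop_rtts_ms)
    e2e = statistics.median(end_to_end_rtts_ms)
    return lm, lm / e2e

# Hypothetical samples (milliseconds), e.g. from ping/traceroute:
lm, share = last_mile_share([24, 26, 25, 60, 27], [48, 50, 52, 90, 49])
print(f"last-mile: {lm} ms ({share:.0%} of end-to-end)")
```

Using the median rather than the mean keeps the occasional cross-traffic spike (the 60 ms and 90 ms samples above) from distorting the baseline.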

We found in this study that last-mile latencies are often quite high, varying from about 10 ms to nearly 40 ms (ranging from 40–80% of the end-to-end path latency). Variance is also high. One might expect that variance would be lower for DSL, since it is not a shared medium like cable. Surprisingly, we found that the opposite was true: Most users of cable ISPs have last-mile latencies of 0–10 ms, whereas a significant proportion of DSL users have baseline last-mile latencies of more than 20 ms, with some users seeing last-mile latencies as high as 50 to 60 ms. Based on discussions with network operators, we believe DSL companies may be enabling an interleaved local loop for these users.  ISPs enable interleaving for three main reasons: (1) the user is far from the DSLAM; (2) the user has a poor-quality link to the DSLAM; or (3) the user subscribes to “triple play” services. An interleaved last-mile data path increases robustness to line noise at the cost of higher latency; the cost varies between two and four times the baseline latency. Thus, cable providers in general have lower last-mile latency and jitter, while latencies for DSL users may vary significantly based on physical factors such as distance to the DSLAM or line quality.


Most users see latencies less than 10 ms, but a significant number of users see last-mile latencies greater than 10 ms.

Customer-provided equipment also plays a role.  Our study confirmed that excessive buffering is a widespread problem afflicting most ISPs (and the equipment they provide). We profiled different modems to study how the problem affects each of them. We also saw the possible effect of ISP policies, such as active queue and buffer management, on latency and loss.  For example, when measuring latency under load (the latency that a user experiences when the access link is saturated due to an upload or a download), we saw more than an order of magnitude of difference between modems. The 2Wire modem we tested had the lowest worst-case last-mile latency, 800 ms. Motorola’s was about 1.6 seconds, and the Westell modem we tested had a worst-case latency of more than 10 seconds.


Empirical measurements of modem buffering. Different modems have different buffer sizes, leading to wide disparities in observed latencies when the upstream link is busy.
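These worst-case numbers follow from simple queueing arithmetic: when the uplink is saturated, each new packet waits behind everything already sitting in the modem's buffer. A minimal sketch (the buffer size and uplink rate below are illustrative round numbers, not measured values from our study):

```python
def queueing_delay_s(buffer_bytes, upstream_bps):
    """When the uplink is saturated, every new packet waits behind a full
    buffer, so the added delay is roughly buffer size / link rate."""
    return buffer_bytes * 8 / upstream_bps

# A 256 KB modem buffer draining over a 512 kbit/s uplink:
print(f"{queueing_delay_s(256 * 1024, 512_000):.1f} s of added latency")
```

A multi-second delay from a modest buffer on a slow uplink is exactly the bufferbloat effect: the buffer is sized for throughput, not latency.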

Last-mile latency can also be high for particular technologies such as mobile.  In a recent study of fixed and mobile broadband performance in South Africa, we found that, although the mobile providers consistently offer higher throughput, the latency of mobile connections is often 2–3x higher than that of fixed-line connectivity in the country.

In the Home Wireless Network

Our recent study of home network performance (led by Srikanth Sundaresan) found that a home wireless network can also be a significant source of latency.  We have recently instrumented home networks with a passive monitoring tool that determines whether the access link or the home wireless network (or both) are potential sources of performance problems.  One of the features that we explored in that work was the TCP round-trip time between wireless clients in the home network and the wireless access point in the home.  In many cases, due to wireless contention or other sources of wireless bottlenecks, the TCP round-trip latency in home wireless networks was a significant portion of the overall round-trip latency.

We analyzed the performance of the home network relative to the wide-area network performance for distributions of real user traffic in about 65 homes over the course of one month. We use these traces to compare the round-trip times between the devices and the access point to the round-trip times from the access point to the wide-area destination for each flow. We define the median latency ratio for a device as the median ratio of the LAN TCP round-trip time to the WAN TCP round-trip time across all flows for that device. The figure below shows the distribution of the median latency ratio across all devices. The result shows that, for 30% of devices in those homes, at least half of the flows have end-to-end latencies where the home wireless network contributes more than 20% of the overall end-to-end latency.  This technical report provides more details concerning the significant role that home wireless networks can play in end-user performance; a future post will explore this topic at length.


The distribution of the median ratio of the LAN TCP round-trip time to the WAN TCP round-trip time across all flows for that device, across all devices.
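The median latency ratio is straightforward to compute from per-flow RTT logs. A minimal sketch in Python, with hypothetical flow records (device name, LAN RTT, WAN RTT in milliseconds):

```python
import statistics
from collections import defaultdict

def median_latency_ratio(flows):
    """flows: iterable of (device, lan_rtt_ms, wan_rtt_ms) tuples.
    Per device: the median over its flows of LAN RTT / WAN RTT."""
    by_device = defaultdict(list)
    for device, lan_ms, wan_ms in flows:
        by_device[device].append(lan_ms / wan_ms)
    return {dev: statistics.median(ratios) for dev, ratios in by_device.items()}

# Made-up flow records for illustration:
flows = [("laptop", 5, 40), ("laptop", 30, 60), ("laptop", 12, 50),
         ("phone", 2, 80), ("phone", 4, 100)]
print(median_latency_ratio(flows))
```

A device whose median ratio exceeds 0.2 is one where the home wireless hop contributes more than 20% of the end-to-end latency for at least half of its flows, which is the threshold used in the finding above.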

Our findings of latency in home networks suggest that the RTT introduced by the wireless network may often be a significant fraction of the end-to-end RTT. This finding is particularly meaningful in light of the many recent efforts by service providers to reduce end-to-end latency with myriad optimizations and careful placement of content. We recommend that, in addition to the attention that is already being paid to optimizing wide-area performance and host TCP connection settings, operators should also spend effort to improve home wireless network performance.

In Developing Regions

Placing content in a Web cache has little effect if the users accessing the content still have high latency to those destinations.  A study of latency from fixed-line access networks in South Africa using BISmark data, led by Marshini Chetty, Srikanth Sundaresan, Sachit Muckaden, and Enrico Calandro in cooperation with Research ICT Africa, showed that peering and interconnectivity within the country still has a long way to go: in particular, the plot below shows the average latency from 16 users of fixed-line access networks in South Africa to various Internet destinations.  The bars are sorted in order of increasing distance from Johannesburg, South Africa.  Notably, geographic distance from South Africa does not correlate with latency—the latency to Nairobi, Kenya is almost twice as much as the latency to London.  In our study, we found that users in South Africa experienced average round-trip latencies exceeding 200 ms to five of the ten most popular websites in South Africa: Facebook (246 ms), Yahoo (265 ms), LinkedIn (305 ms), Wikipedia (265 ms), and Amazon (236 ms). Many of these sites only have data centers in Europe and North America.


The average latencies to Measurement Lab servers around the world from South Africa. The numbers below each location reflect the distance from Johannesburg in kilometers, and the bars are sorted in order of increasing distance from Johannesburg.  Notably, latency does not increase monotonically with distance.
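To see why these numbers point to routing rather than distance, compare them against the physical lower bound on RTT: light in fiber propagates at roughly two-thirds the speed of light, about 200 km per millisecond. A rough sketch (the great-circle distances below are approximate):

```python
def min_rtt_ms(distance_km):
    """Lower bound on RTT: light in fiber covers roughly 200 km per
    millisecond, and the signal must travel both ways."""
    return 2 * distance_km / 200.0

# Approximate great-circle distances from Johannesburg:
for city, km in [("London", 9_000), ("Nairobi", 2_900)]:
    print(f"{city}: at least {min_rtt_ms(km):.0f} ms")
```

Nairobi's physical floor is roughly a third of London's, yet its measured latency is almost twice as high, so the extra delay must come from circuitous paths and poor interconnection, not geography.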

People familiar with Internet connectivity may not find this result surprising: indeed, many ISPs in South Africa connect to one another via the London Internet Exchange (LINX) or the Amsterdam Internet Exchange (AMS-IX) because it is cheaper to backhaul connectivity to exchange points in Europe than it is to connect directly at an exchange point on the African continent.  The reasons for this behavior appear to be both regulatory and economic, but more work is needed, both in deploying caches and in improving Internet interconnectivity, to reduce the latency that users in developing regions see to popular Internet content.


The Resilience of Internet Connectivity in East and South Africa: A Case Study

On March 27, 2013 at 6:20 a.m. UTC, the SeaMeWe-4 cable outage affected connectivity across the world.  SeaMeWe-4 is currently the largest submarine cable connecting Europe and Asia.  The Renesys blog recently covered the effect of this outage on various parts of Asia and Africa (Pakistan, Saudi Arabia, the UAE, etc.).  In this post, we explore how the fiber cut affected connectivity from other parts of the world, as visible from the BISmark home router deployment.  The credit for the data analysis in this blog post goes to Srikanth Sundaresan, one of Georgia Tech’s star Ph.D. students whose work on BISmark has garnered a number of awards.

Background: BISmark

The BISmark project has been deploying customized home gateways in home broadband access networks around the world for more than two years; we currently have more than 130 active home routers measuring the performance of access links in nearly 30 countries.  The high-level goal of the project is to gather information from inside home networks to help users and ISPs better debug their home networks.  Two years ago, we published the first paper using BISmark data in SIGCOMM.  The paper explores the performance of broadband access networks around the United States and has many interesting findings:

  • We showed how a technique called “interleaving” on DSL networks can introduce tens of milliseconds of additional latency on home networks.
  • We explored how a user’s choice of equipment can introduce “bufferbloat” effects on home access links.
  • We showed how technologies such as PowerBoost can also introduce sudden, dramatic increases in latency when interacting with buffering on the access link.

The image below shows the current deployment of BISmark.  We have more than 80 routers in North America, nearly 20 in Southeast Asia, about 15 in the European Union, about 15 in South Africa, and about 10 in East Asia.  You can explore the data from the deployment yourself on the Network Dashboard; all of the active measurements are available for download in raw XML format as they are collected.

The BISmark deployment as of May 28, 2013.


Each BISmark router sits in a home broadband access network.  The routers are NetGear WNDR 3700 and 3800s; we ship routers to anyone who is interested in participating.  As an incentive for participating, you gain access to your own data on the network dashboard.  We are also actively seeking researchers and developers; please contact us below if you are interested, and feel free to check out the project GitHub page.

Every BISmark router measures latency to the Google anycast DNS service and to 10 globally distributed Measurement Lab servers every 10 minutes.  Those servers are located in Atlanta, Los Angeles, Amsterdam, Johannesburg, Nairobi, Tokyo, Sydney, New Delhi, and Rio de Janeiro.

Effects of the SMW4 Fiber Cut: A Case Study

We first explore the effects of the fiber cut on reachability from the active BISmark routers to each of the Measurement Lab destinations.  At the time of the outage (6:20a UTC), the Measurement Lab server in Nairobi became completely unreachable for more than four hours.  The Nairobi Measurement Lab server is hosted in AS 36914 (KENet, the Kenyan Education Network).

Connectivity was restored at 10:34a UTC.  Interestingly, between 9a and 10a UTC, reachability from many of our other BISmark routers to all of the Measurement Lab destinations was affected.  We have not yet explored which of the BISmark routers experienced these reachability problems, but, as we explore further below, this connectivity blip coincides with some connectivity being restored to Kenya via Safaricom, the backup ISP for the Measurement Lab server hosted in KENet.  It is possible that other convergence events were also occurring at that time.

Reachability from BISmark routers to each of the Measurement Lab servers on March 27, 2013.


Analysis of the BGP routing table information from RouteViews shows that connectivity to AS 36914 was restored at 10:34a UTC. The following figure shows the latencies from all nodes to Nairobi before and after the outage. As soon as connectivity returns, the first set of latencies seem to be roughly the same as before, but latencies almost immediately increase to all destinations, except for a router situated in South Africa in AS 36937 (Neotel).  This result suggests that Neotel may have better connectivity to destinations within Africa than some other ISPs, and that access ISPs who use Neotel for “transit” may see better performance and reliability to destinations within the continent. Because only the SEACOM cable was affected by the cut, not the West African Cable System (WACS) or EASSy cable, Neotel’s access to other fiber paths may have allowed its users to sustain better performance after the fiber cut.

Latencies from BISmark routers in various regions to Nairobi, Kenya (AS 36914, KENet).


This incident—and Neotel’s relative resilience—suggests the importance of exploring the effects of undersea cable connectivity in various countries in Africa and how such connectivity affects resilience.  (In a future post, we will explore the effects of peering and ISP interconnectivity on the performance that users in this part of the world see.)   

Internet Routing to KENet during the Outage

6:20a: The Fiber Cut. The reachability and performance effects caused by the SMW4 fiber cut raise the question of what was happening to routes to Kenya (and, in particular, KENet) at the time of the outage.  We explore this in further detail below.  The first graph below shows reachability to KENet (AS 36914, the large red dot) at 6:20:50 UTC, around which time the fiber cut occurred.  The second plot shows the routes at 6:23:51 UTC; by 6:27:06 UTC, AS 36914 had become completely unreachable.

Internet reachability to KENet (AS 36914) at 6:20:50a, 6:23:51a, and 6:27:06a UTC.


9:05a: Connectivity is (partially) restored through a backup path. About two-and-a-half hours later, at 9:05:49 UTC, AS 36914 starts to come back online, and connectivity is restored within about one minute, although all Internet paths to this destination go through AS 33771 (Safaricom), which is most likely KENet’s backup (i.e., commercial, and hence more expensive) provider.  This is an interesting example of BGP routing and backup connectivity in action: Many ISPs such as KENet have primary and backup Internet providers, and paths only go through the backup provider (in this case, Safaricom) when the primary path fails.

Connectivity to KENet (AS 36914) is restored via the commercial backup provider, Safaricom (AS 33771).  It is interesting to note that although connectivity was restored at 9:06a through this backup path, the server hosted in this network was still unreachable until paths switched back to the primary provider (UbuntuNet) at 10:34a.


Note that although connectivity to KENet was restored through Safaricom at around 9:06a UTC, none of the BISmark routers could reach the Measurement Lab server hosted in KENet through this backup path!  This pathology suggests that the failover didn’t really work as planned, for some reason.  Although this disconnection could result from poor Internet “peering” between Safaricom and the locations of our BISmark routers around the world, it is unlikely that bad peering would affect reachability to all of our destinations.  Still, it is not clear why the connectivity through Safaricom was not sufficient to restore reachability to at least some of the BISmark nodes.  The connectivity issue we observed could be something mundane (e.g., Safaricom simply blocks ICMP “ping” packets), or it could be something much more profound.

It is also interesting to note that Internet routing took more than two hours to restore connectivity!  Usually, we think of Internet routing as dynamic, automatically reconverging when failures occur to find a new working path (assuming one exists).  While BGP has never been known for being zippy, two-and-a-half hours seems excessive.  It is perhaps more likely that some additional connectivity arrangements were being made behind the scenes; it might even be the case that KENet purchased additional backup connectivity (or made special arrangements) during those several hours when it was offline.

10:35a: Connectivity returns through the primary path.  At around 10:34a UTC, routes to KENet begin reverting to the primary path, as can be seen in the figure below.  By 10:35a UTC, everything is “back to normal” as far as BGP routing is concerned, although, as we saw above, latencies remained high to most destinations for an additional eight hours.  It is unclear what caused latencies to remain high after connectivity was restored, but this offers another important lesson: BGP connectivity does not equate to good performance through those BGP paths.  This underscores the importance of using both BGP routing tables and a globally distributed performance measurement platform like BISmark to understand performance and connectivity issues around the times of outages.

By 10:35a UTC, connectivity is restored through UbuntuNet (AS 36944), KENet's primary provider.  Once BGP convergence begins, it takes only a little more than a minute for paths to revert to the primary path.


Takeaway Lessons

It’s worthwhile to reflect on some of the lessons from this incident; it teaches us about how Internet routing works (and doesn’t work), about the importance of backup paths, and about the importance of performing joint analysis of both routing information and active performance measurements from a variety of globally distributed locations.  I’ve summarized a few of these below:

  • Peering and interconnectivity in Africa haven’t yet come of age.  It is clear from this incident that certain locations in Africa (although not all) are not particularly resilient to fiber cuts.  The SMW4 fiber cut took KENet completely offline for several hours, and even after connectivity was “restored” several hours later, many locations still could not reach the destination through the backup path.  Certain ISPs in Africa that are better connected (e.g., Neotel, and the Measurement Lab node hosted in TENET in Johannesburg) weathered the fiber cut much better than others, most likely because they have backup connectivity through WACS or EASSy.  In a future post, we will explore performance issues in various parts of Africa that likely result from poor peering.
  • Connectivity does not imply good performance.  Even after connectivity was completely “restored” (at least according to BGP), latencies to Nairobi from most regions remained high for almost another eight hours.  This disparity underscores the importance of not relying solely on BGP routing information to understand the quality of connectivity to and from various Internet destinations.  Global deployments like BISmark are critical for getting a more complete picture of performance.
  • “Dynamic routing” isn’t always dynamic.  The ability of dynamic routing protocols to find a working backup path depends on the existence of those paths in the first place.  The underlying physical connectivity must be there, and the business arrangements (peering) between ISPs must exist to allow those paths to exist (and function) when failures do occur.  Something occurred on March 27, 2013 that exposed a glaring hole in the Internet’s ability to respond dynamically to a failure.  It would be very interesting to learn what happened between 6:20a UTC and 9:05a UTC: exactly what resulted in connectivity being restored (via Safaricom), and why it took so long.  Perhaps we need more sophisticated “what if” tools that help ISPs better evaluate their readiness for these types of events.

In future posts, we will continue to explore how BISmark can help expose pathologies that result from disconnections, outages, and other pathologies.  Our ability to perform this type of analysis depends on the continued support of ISPs, users, and the broader community.  We encourage you to contact us using the form below if you are interested in hosting a BISmark router in your access network.  (You can also post public comments at the bottom of the page, below the contact form.)

It’s 10 p.m. Do You Know Where Your Passwords Are?

Have you ever wondered how many sites have your credit card number?  Or, have you ever wondered how many sites have a certain version of your password?  Do you think you might have reused the password you have used on your banking Web site on another site?  What if you decided that you wanted to “clean up” your personal information on some of the sites where you’ve leaked this information.  Would you even know where to start?

If you answered “yes” to any of the above questions, then Appu is the tool for you.  Appu is a Chrome extension developed by my Ph.D. student Yogesh Mundada that keeps track of what we call your privacy footprint on the Web.  Every time you enter personally identifiable information (address, credit card information, password, etc.) into a Web site, Appu performs a cryptographic hash of that information, associates the hash with that site, and stores it, to keep track of where you have entered various information.  If you ever re-enter the same password on a different site, Appu will warn you that you have reused a password and show you where you have reused it.  As a user, you will immediately see a warning like the one below:


You might be wondering: “Why should I trust your Chrome extension with the passwords that I enter on various sites?”  The good news is that you do not have to trust Appu with your passwords and personal information to use this tool, because Appu never sends your information anywhere in cleartext.  Before sending a report to us, Appu performs what is called a cryptographic hash on all of your information.  It also only stores a cryptographic hash of each password locally; no passwords are ever stored in cleartext, anywhere.  If you ever enter the same password elsewhere, the result of performing a cryptographic hash on your password would produce the same unreadable output—therefore, Appu never knows what your password is, only that you’ve reused it.  Appu stores your other personal information in cleartext locally on your machine so that you can see which sites have which values of various personal information, but it never sends that information in the clear to us.  Appu always asks the user before sending any information to us, and the tool also gives the user the option to delete anything from the reports that Appu sends to us.  If you still want to assure yourself that Appu is not doing anything suspect, you can read the source code.
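The reuse check itself needs nothing more than comparing hashes. The sketch below illustrates the idea, not Appu's actual code; the class, the choice of SHA-256, and the site names are all hypothetical, and Appu's implementation details may differ:

```python
import hashlib
from collections import defaultdict

def fingerprint(password):
    """Store only a one-way hash, never the cleartext password."""
    return hashlib.sha256(password.encode()).hexdigest()

class ReuseTracker:
    """Illustrative sketch: maps each password hash to the sites using it."""

    def __init__(self):
        self.sites_by_hash = defaultdict(set)

    def record(self, site, password):
        """Record a login; returns the set of *other* sites already
        associated with this password's hash (i.e., detected reuse)."""
        h = fingerprint(password)
        reused_on = set(self.sites_by_hash[h])
        self.sites_by_hash[h].add(site)
        return reused_on

tracker = ReuseTracker()
tracker.record("bank.example", "hunter2")
print(tracker.record("shop.example", "hunter2"))  # reuse detected
```

Because two identical passwords always hash to the same digest, the tracker can flag reuse while holding nothing that directly reveals the password itself.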

Appu can help users keep track of the following information:

  • Password reuse.  Have I reused the same password across multiple sites?  If so, on which sites have I used the same password?
  • Privacy footprint.  Which sites have a copy of my full name and address (or other information)?  What specific information have I provided to those sites?
  • Password strength.  Have I used a weak password for my online bank account (or another Web site)?
  • Password stagnancy. When was the last time I changed my password on a particular site?

In addition to the pop-up information that users see, as above, Appu also provides reports to allow users to keep track of answers to these questions.  The figures below show two examples of this.  The figure on the left shows the privacy footprint page, where a user can see which sites have stored personal information (e.g., name, email address); the figure on the right shows more detailed information, such as the last time a user changed his or her password.  That report also tells a user how often they’ve visited a site—therefore, Appu can help you figure out that even though you’ve only visited a site once, that site is storing sensitive information, such as your credit card number (hopefully spurring you to go clean up your personal information on that site).


Our hope is that Appu will help users better manage their online privacy footprints, thereby better managing the risks that they potentially expose themselves to through password reuse.

We initially released Appu through a private alpha release, to about ten close friends.  Even in this small sample size, we can observe interesting aggregate behavior.  Users are far more cavalier about their personal information than we expected.  For example, we have observed the following behaviors:

  • Although users are less forthcoming with their credit card information, they are surprisingly forthcoming about what one might otherwise think is private information, such as religious views.
  • People often share passwords across “high-value” (e.g., Amazon) and “low-value” (e.g., TripIt) sites.
  • Many users have revealed personal information (e.g., address, credit card information) to sites they rarely visit, or have visited only once.
  • Several users had weak passwords on their banking sites that could be cracked in less than one day.

Are you one of these users who needs to clean up their online privacy footprint?  Download and install the Appu Chrome extension to find out! As Appu gains a larger user base, we will follow up with more discoveries about users’ behavior regarding their online privacy footprints.  We are actively developing a Firefox version of Appu; please join the appu-users mailing list if you want updates about version releases and news about support for other browsers.

(And yes, in case you are wondering, this project is being thoroughly reviewed by Georgia Tech’s Institutional Review Board.)

Making Sense of Data Caps and Tiered Pricing in Broadband and Mobile Networks

Last week, I had the pleasure of sitting on a panel at the Broadband Breakfast Club in downtown Washington, DC.  The panel was organized by a policy and news organization that focuses on issues related to broadband service in the United States; the group meets about once a month.  I was asked last fall to sit on a panel on measuring broadband performance, due to our ongoing work on BISmark, but I was unable to make it, so I found myself instead on a panel on data caps in wired and wireless networks.

I participated on the panel with the following other panelists: Serena Viswanathan, an attorney from the Federal Trade Commission; Patrick Lucey from the New America Foundation; and Roger Entner, the founder of Recon Analytics.  The panel discussed a variety of topics surrounding data caps in broadband networks, but the high level question that the panel circled around was: Do data caps (and tiered pricing) yield positive outcomes for the consumer?

We had an interesting discussion.  Roger Entner espoused the opinion that data caps really only affect the worst offenders, and that applications on mobile devices now make it much easier for users to manage their data caps.  Therefore, data caps shouldn’t be regarded as oppressive, but rather are simply a way for Internet service providers and mobile carriers to recoup costs for the most aggressive users.  Patrick Lucey, who recently wrote an article for the New America Foundation on data caps, stated a counterpoint that echoed his recent article, suggesting that data caps were essentially a profit generator for ISPs, and consumers are effectively captured because they have no real choice for providers.

I spent some time explaining the tool that Marshini Chetty and my students have built on top of BISmark called uCap (longer paper here).  Briefly, uCap is a tool that allows home network users to determine the devices in their home that consume the most data.  It also allows users to see what domains they are visiting that consume the most bandwidth.  It does not, however, tell the user which applications or people are using the most bandwidth (more on that below).  Below is a screenshot of uCap that shows device usage over time. My students have also built a similar tool for mobile devices called MySpeedTest, which tells users which applications are consuming the most data on their phones.  A screenshot of the MySpeedTest panel that shows how different applications consume usage is shown below.

uCap screenshot showing device usage over time
MySpeedTest screenshot showing per-application data usage

I used these two example applications to argue that usage caps per se are not necessarily a bad thing if the user has ways to manage them.  In fact, we have repeatedly seen evidence that tiered pricing (or usage caps) can improve ISP profit and leave consumers better off, provided the consumer understands how different applications consume their usage cap and has ways to manage that usage.  Indeed, our past research has shown how tiered pricing can improve market efficiency, because the price of connectivity more closely reflects the cost to the provider of carrying specific data.  Further, we have seen examples where consumers were actually made worse off when regulators stepped in to prevent tiered pricing, such as in summer 2011, when KPN customers all experienced a price increase for connectivity because KPN was prevented from introducing two tiers of service.

The problem isn’t so much that tiered pricing is bad—it is that users don’t understand it, and they currently don’t have good tools to help them understand it.  In the panel, I informally polled the room—ostensibly filled with broadband experts—about whether they could tell me off the top of their heads how much data a 2-hour high-definition Netflix movie would consume against their usage cap.  Only two or three hands went up in a room of 50 people.  I also confessed that before installing uCap and watching my usage in conjunction with specific applications, I had no idea how much data different applications consumed, or whether I was a so-called “heavy user” (it turns out I am not).  My own experience—and Marshini Chetty’s ongoing work—has shown that people are really bad at estimating how much of their data cap applications consume.  One interesting observation in Marshini’s work is that people conflate the time they spend on a site with the amount of data it must consume (“I spend most of my time on Facebook; therefore, it must consume most of my data cap.”).
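As a back-of-the-envelope illustration of the arithmetic the poll called for, here is a small sketch.  The ~5 Mbit/s figure is an assumed average bitrate for an HD stream, not an official Netflix number; real streams vary with codec, resolution, and adaptive-bitrate behavior.

```python
def stream_data_gb(duration_hours, bitrate_mbps=5.0):
    """Estimate data consumed by a video stream at an assumed average bitrate."""
    seconds = duration_hours * 3600
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1000  # Mbit -> MB -> GB (decimal units)

# A 2-hour HD movie at an assumed ~5 Mbit/s:
print(round(stream_data_gb(2), 1))  # 4.5 (GB)
```

Against a 250 GB monthly cap, that single movie is under 2% of the budget; it is exactly this kind of mental arithmetic that most users (including a room of broadband experts) cannot do off the top of their heads.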

If we are going to move towards pervasive data caps or tiered pricing models, then users need better tools to understand how applications consume data caps and to manage how different applications consume those caps.  I see two possibilities for better applications going forward:

  • Better visibility.  We need applications like uCap and MySpeedTest to help users understand how different applications consume their data cap.  Helping users get a better handle on how different applications consume data is the first step towards making tiered pricing something that users can cope with.  In addition to the applications that show usage directly, we might also consider other forms of visibility, such as information that helps users estimate a total cost of ownership for running a mobile application (e.g., the free application might actually cost the user more in the long run, if downloading the advertisements to support the free application eats into the user’s data cap).  We also need better ways of fingerprinting devices; applications like uCap still force users to identify devices (note the obscure MAC addresses in the dashboard above for devices on my network that I didn’t bother to manually identify).  Solving these problems requires both deep domain knowledge about networking and intuition and expertise in human factors and interface design.
  • Better control.  This area deserves much more attention.  uCap offers some nice first steps towards giving users control because it helps users control how much data a particular device can send.  But, shouldn’t we be solving this problem in other ways, as well?  For example, we might imagine exposing an SDK to application developers that helps them write applications that are more cognizant of data constraints—for example, by deferring updates when a user is near his or her cap, or deferring downloads until “off peak” times or when a user is on a WiFi network.  There are interesting potential developments in both applications and operating systems that could make tiered pricing and demand-side management more palatable, much like appliances in our homes are now being engineered to adapt to variable electricity pricing.
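To make the SDK idea in the second bullet concrete, here is a minimal, purely hypothetical sketch of a cap-aware download policy.  The `DataBudget` class, its methods, and the 90% threshold are illustrative inventions, not any real API:

```python
# Hypothetical sketch of a cap-aware download policy; names are illustrative.
class DataBudget:
    def __init__(self, cap_gb, used_gb, threshold=0.9):
        self.cap_gb = cap_gb
        self.used_gb = used_gb
        self.threshold = threshold  # defer non-urgent traffic past 90% of cap

    def should_defer(self, download_gb, on_wifi=False):
        """Defer a download if it would push usage past the threshold,
        unless the device is on an unmetered (WiFi) connection."""
        if on_wifi:
            return False
        return (self.used_gb + download_gb) > self.threshold * self.cap_gb

budget = DataBudget(cap_gb=250, used_gb=230)
print(budget.should_defer(5))                # True: 235 GB exceeds 225 GB threshold
print(budget.should_defer(5, on_wifi=True))  # False: connection is unmetered
```

An application built against such a policy could defer software updates or prefetching near the end of a billing cycle, exactly the kind of demand-side management described above.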

Finally, Patrick made a point that even if users could understand and control usage caps, they often don’t have any reasonable alternatives if they decide they don’t like their current ISP’s policies.  So, while some of the technological developments we discussed may make a user’s life easier, these improvements are, in some sense, a red herring if a user cannot have some amount of choice over their Internet service provider.  This issue of consumer choice (or lack thereof) does appear to be the elephant in the room for many of the policy discussions surrounding data caps, tiered pricing, and network neutrality.  Yet, until the issues of choice are solved, improving both visibility and control in the technologies that we develop can allow both users and ISPs to be better off in a realm where tiered pricing and data caps exist—a realm which, I would argue, is not only inevitable but also potentially beneficial for both ISPs and consumers.

Internet Relativism and the Hunt for Elusive “Ground Truth”

Networking and security research often rely on a notion of ground truth to evaluate the effectiveness of a solution.  “Ground truth” refers to a true underlying phenomenon that we would like to characterize, detect, or measure.  We often evaluate the effectiveness of a classifier, detector, or measurement technique by how well it reflects ground truth.

For example, an Internet link might have a certain upstream or downstream throughput; the effectiveness of a tool that measures throughput could thus be quantified in terms of how close its estimates of upstream and downstream throughput are to the true throughput of the underlying link.  Since there is a physical link with actual upstream or downstream throughput characteristics—and the properties of that link are either explicitly known or can be independently measured—measuring error with respect to ground truth makes sense.  In the case of analyzing routing configuration to predict routing behavior (or detect errors), static configuration analysis can characterize where traffic in the network will flow and whether the configuration will give rise to erroneous behavior; either the predictions correctly characterize the behavior of the real network, or they don’t.  A spam filter might classify an email sender as a legitimate sender or a spammer; again, either the sender is a spammer or it is a legitimate mail server.  In this case, comparing against ground truth is more difficult, since if we had a perfect characterization of spammers and legitimate senders, we would already have the perfect spam filter.  The solution in these kinds of cases is to compare against an independent label (e.g., a blacklist) and somehow argue that the proposed detection mechanism is better than the existing approach to labeling or classification (e.g., faster, earlier, more lightweight).
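As a minimal illustration of evaluating a measurement tool against known ground truth, one might compute each estimate's relative error against the provisioned link speed (all numbers here are made up):

```python
def relative_error(measured, true_value):
    """Relative error of a measurement against known ground truth."""
    return abs(measured - true_value) / true_value

# A link provisioned at 20 Mbit/s downstream, and three hypothetical
# estimates from a throughput-measurement tool:
ground_truth = 20.0
for estimate in [19.2, 20.5, 17.8]:
    print(f"{estimate} Mbit/s -> {relative_error(estimate, ground_truth):.1%} error")
```

This only works because the denominator, the link's true throughput, exists and is independently knowable, which is precisely what the problems below lack.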

Problem: Lack of ground truth.  For some Internet measurement problems, the underlying phenomenon simply cannot be known—even via an independent labeling mechanism—either because the perpetrator of an action won’t reveal his or her true intention, or because there actually is no “one true answer”.  In other words, sometimes we want to characterize scenarios or phenomena for which the ground truth proves elusive.

Consider the following two problems:

  • Network neutrality.  The network neutrality debate centers around the question of whether Internet service providers should carry all traffic according to the same class of service, regardless of various properties such as what type of traffic it is (e.g., voice, video) or who is sending or receiving that traffic.
  • Filter bubbles.  Eli Pariser introduced the notion of a filter bubble in his book The Filter Bubble.  A filter bubble is the phenomenon whereby each Internet user sees different Internet content based on factors ranging from our demographic to our past search history to our stated preferences.  Briefly, each of us sees a different version of the Internet, based on a wide range of factors.

These two detection problems do not have a notion of ground truth that can be easily measured.  In the latter case, there is effectively no ground truth at all.

In the case of network neutrality, detection boils down to determining whether an ISP is providing preferential treatment to a certain class of applications or customers.  While ground truth certainly exists (i.e., either the ISP is discriminating against a certain class of traffic or it isn’t), discovering ground truth is incredibly challenging: ISPs may not reveal their policies concerning preferential treatment of different traffic flows, for example.

Similarly, in the case of filter bubbles, we want to determine whether a content provider or intermediary (e.g., search engine, news aggregator, social network feed) is manipulating content for particular groups of users (e.g., showing only certain news articles to Americans).  Again, there is a notion of ground truth—either the content is being manipulated or it isn’t—but the interesting aspect here is not so much whether content is being manipulated (we all know that it is), but rather what the extent of that manipulation is.  Characterizing the extent of manipulation is difficult, however, because personalization is so pervasive on the Internet: everyone effectively sees content that is tailored to their circumstances, and there is no notion of a baseline that reflects what a set of search results or a page of recommended products might look like before the contents were tailored for a particular user.  In many cases, personalization has been so ingrained in data mining and search that even the algorithm designers are unable to characterize what “ground truth” content (i.e., without manipulation) might look like.

Relativism: measuring how different perspectives give rise to inconsistencies.  In cases where ground truth is difficult to measure or impossible to know, we can still ask questions about consistency.  For example, in the case of network neutrality, we can ask whether different groups of users experience comparable performance.  In the case of filter bubbles, we can ask whether different groups of users see similar content.  When inconsistencies arise, we can then attempt to attribute a cause to them by controlling for all factors except the one we believe might be the underlying cause.  One might call this Internet relativism: We concede that either there is no absolute truth, or that the absolute truth is so difficult to obtain that we might as well not try to know it.  Instead, we can explore how differences in perspective or “input signals” (e.g., demographic, geography) give rise to different outcomes and try to determine which input differences triggered the inconsistency.  We have applied this technique to the design of two real-world systems that address these two respective problem areas.  In both problems, we would love to know the underlying intention of the ISP or information intermediary (i.e., “Is the performance problem I’m seeing a result of preferential treatment?”, “(How) is Google, Netflix, or Amazon manipulating my results based on my demographic?”).

  • NANO: Network Access Neutrality Observatory.  We developed NANO several years ago to characterize ISP discrimination against different classes of traffic flows.  In contrast to existing work in this area (e.g., Glasnost), which requires a hypothesis about the type of discrimination that is taking place, NANO operates without any a priori hypothesis about discrimination rules and simply looks for systematic deviation from “normal” behavior for a certain class of traffic (e.g., all traffic from a certain ISP, for a certain application, etc.).  The tricky aspect of this type of detection is that there is no notion of normal: another ISP might be performing a similar type of discrimination, so there is no firm ground truth against which to compare.  Ideally, what we’d like to ask is “What performance would this user see using ISP X, versus the performance they would see if they were not using ISP X?”  Unfortunately, there is no reasonable way to test the performance that a user would experience as a result of not using an ISP.  (This is in contrast to randomized treatment in clinical trials, where it makes sense to have a group of users who, say, are subject to a particular treatment or not.)  To address this problem, the best we could do to establish a baseline was to average the performance seen by all users of other ISPs and compare those statistics against the performance seen by a group of users of the ISP under test.
  • Bobble: Exposing inconsistent search results.  We recently developed Bobble to characterize the inconsistencies that exist in Web search results that users see, as a result of both personalization and geography.  Ideally, we would like to measure the extent of manipulation against some kind of baseline.  Unfortunately, however, the notion of a baseline is almost meaningless, since no Internet user is subject to such a baseline—even a user who has no search history may still see personalized results based on geography, time of day, device type, and other features, for example.  In this scenario, we established a baseline by comparing the search results of a signed-in user against a user with no search history, making our best attempt to hold all other factors constant.  We also performed the same experiment with users who were not signed in and had no search history, varying only geography.  Unlike NANO, in the case of Bobble, there is not even a notion of an “average” user; the best we can hope for are meaningful characterizations of inconsistencies.

Takeaways and general principles.  These two problems both involve an attempt to characterize an underlying phenomenon without any hope of observing “ground truth”.  In these cases, it seems that our best hope is to approximate a baseline and compare against that (as we did in NANO); failing that, we can at least characterize inconsistencies.  In any case, when looking for these inconsistencies, it is important to (1) enumerate all factors that could possibly introduce inconsistencies; and (2) hold those factors fixed, to the extent possible.  For example, in NANO, one can only compare a user against average performance for a group of users that have identical (or at least similar) characteristics for anything that could affect the outcome.  If, for example, browser type (or any other feature) might affect performance, then the performance of a user for an ISP “under test” must be compared against users with the same browser, with the ISP being the only differing feature that could possibly affect performance.  Similarly, in the case of Bobble, we must hold factors like browser type and device type fixed when attempting to isolate the effects of geography or search history.  Enumerating all of the features that could introduce inconsistencies is extremely challenging, and I am not aware of any good way to determine whether a list of such features is exhaustive.
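The stratified-comparison idea above can be sketched in a few lines of code: compare a user only against peers who match on every confounding feature but differ in ISP.  The feature names and numbers are illustrative, not NANO's actual implementation:

```python
from statistics import mean

def baseline_for(user, population, features=("browser", "location")):
    """Average throughput of users who match `user` on all listed
    features but use a different ISP; None if no matched peers exist."""
    peers = [p["throughput"] for p in population
             if p["isp"] != user["isp"]
             and all(p[f] == user[f] for f in features)]
    return mean(peers) if peers else None

population = [
    {"isp": "A", "browser": "firefox", "location": "atl", "throughput": 9.0},
    {"isp": "B", "browser": "firefox", "location": "atl", "throughput": 18.5},
    {"isp": "C", "browser": "firefox", "location": "atl", "throughput": 19.0},
    {"isp": "C", "browser": "chrome",  "location": "nyc", "throughput": 22.0},
]
user = population[0]  # an ISP-A user
print(baseline_for(user, population))  # 18.75: only the two matched peers count
```

The hard part, as noted above, is not the comparison itself but knowing that the `features` tuple is complete; any unlisted confounder silently biases the baseline.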

I believe networking and security researchers will continue to encounter phenomena that they would like to measure, but where the nature of the underlying phenomenon cannot be known with certainty.  I am curious whether others have encountered problems that call for Internet relativism, and whether it may be time to develop sound experimental methods for characterizing it, rather than simply clamoring blindly for “ground truth” when none may even exist.

Could We Ever See An Internet “Kill Switch”?

This past week I was interviewed on the evening news about Egypt’s decision to cut itself off from the Internet, and whether we might ever see something similar in the United States, in light of recent proposed legislation on a “kill switch” for the Internet.  I decided to elaborate on my thoughts from this interview.

Internet censorship is more pervasive than most people realize: it takes place in nearly 60 countries.  Last week, however, we witnessed an unprecedented event, whereby Egypt shut down Internet connectivity entirely.  In Egypt, all Internet traffic passes through only five Internet service providers (ISPs), so the government could shut off Internet traffic by controlling just a few providers.  An Internet “kill switch” is really a metaphor: these five Egyptian ISPs went offline one by one over the course of a few hours, with the large national incumbent, Telecom Egypt, leading the way and three other ISPs following within about 15 minutes.  Four days later, a fifth ISP, which hosts many of the Egyptian financial institutions, was shut down.  Each of these ISPs has what is known as a “border router”, a device that forwards all traffic between its users inside Egypt and the rest of the world.  In the same way that network operators in this country can install filters to stop traffic from certain unwanted locations (e.g., from spammers), the Egyptian operators likely installed a filter to drop all Internet traffic between Egypt and the rest of the world.
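Conceptually, the effect of such a border filter can be modeled in a few lines.  This is an abstract sketch of the drop-all behavior, not real router configuration:

```python
# Illustrative model of a drop-all border filter at an ISP's border router.
def border_filter(packet, cut_off=True):
    """Drop any packet crossing the border when the filter is active;
    purely domestic traffic is unaffected."""
    crosses_border = packet["src_inside"] != packet["dst_inside"]
    if cut_off and crosses_border:
        return "drop"
    return "forward"

print(border_filter({"src_inside": True, "dst_inside": False}))  # drop
print(border_filter({"src_inside": True, "dst_inside": True}))   # forward
```

The point of the model is how little mechanism is required: with only five such chokepoints, flipping one predicate per ISP disconnects an entire country.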

This action raises many questions.  In light of recently proposed “kill switch” legislation (see page 76), many people wonder whether something similar could happen here in the United States. This outcome is unlikely.  First, the United States has more ISPs and connection points to the rest of the world.  Cutting us off would require control over many more Internet service providers and points of entry. Even if the government could order all ISPs to cut ties with the rest of the world, we would not feel the same impact: Because many of the sites and services that citizens want to access (such as Facebook and Twitter) actually host their services within our borders, cutting off the United States from the rest of the world might have a larger impact on the rest of the world than it would on us.

Another question is whether Internet access is as much a human right as water, electricity, and food, and, if we think it is, how we might guarantee that citizens have free and open access to information in the face of repressive or authoritarian governments.  There has been much work on tools for circumventing censorship, but governments eventually block them.  A complementary strategy might be to somehow entangle the traffic of ordinary users with traffic associated with critical infrastructure and activities.  The OECD released an estimate that Egypt lost as much as $90 million as a result of the Internet shutdown; notably, Egypt shut down the ISP hosting the Egyptian stock exchange for only one day instead of five.  The more everyone’s Internet traffic is intertwined with the traffic that is critical to a country’s revenue and operations, the more difficult it is for a government to cut off Internet access to its citizens without crippling its own operations.

Finally, while having no Internet access whatsoever seems dire, it is worth asking whether there might be worse scenarios.  While a complete shutdown of the Internet is certainly inconvenient and costly, a more competent government or organization might use the Internet to persuade and control its citizens, perhaps by spreading propaganda or misinformation through services like Twitter and Facebook.  We did see small-scale instances of this, where Egypt forced a large cellular provider, Vodafone, to spread propaganda through text messages.  A far more disturbing scenario may occur when a government harnesses the Internet to spread misinformation or influence public opinion.  Unlike a complete shutdown, such manipulation is more subtle: since it doesn’t disrupt information exchange, the average user may not even notice it.  In fact, this practice may already be occurring, perhaps even here at home.

Software-Defined Networking and The New Internet

Tonight, I am sitting on a panel sponsored by NSF and Discover Magazine about “The New Internet”.  The panel has four panelists who will be discussing their thoughts on the future of the Internet.  Some of the questions we have been asked to answer involve predictions about what will happen in the future.  Predictions are a tall order; as Yogi Berra said: “It is hard to make predictions, especially about the future.”

Predictions aside, I think one of the most exciting things about this panel is that we are having this discussion at all.  Not even ten years ago, Internet researchers were bemoaning the “ossification” of the Internet.  As the Internet continues to mature and expand, the opportunities and challenges seem limitless.  More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years.  The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad.  This changing landscape presents tremendous opportunities for innovation.  The challenge, then, is developing a platform on which this innovation can occur.  Along these lines, a multicampus collaboration is pursuing a future Internet architecture that aims to make it easier for researchers and practitioners to introduce new, disruptive technologies on the Internet.  The proposed “framework for innovation” rests on a newly emerging technology called software-defined networking.

Software-defined networking. Network devices effectively have two aspects: the control plane (in some sense, the “brain” of the network, or the protocols that make all of the decisions about where traffic should go), and the data plane (the set of functions that actually forward packets).  Part of the idea behind software-defined networking is to run the network’s control plane in software, on commodity servers that are separate from the network devices themselves.  This notion has roots in a system called the Routing Control Platform, which we worked on about five years ago and now operates in production at AT&T.  More recently, it has gained more widespread adoption in the form of the OpenFlow switch specification.  Software-defined networking is now coming of age in the NOX platform, an open-source OpenFlow controller that allows designers to write network control software in high-level languages like Python.  A second aspect of software-defined networking is to make the data plane itself more programmable, for example, by engineering the network data plane to run on programmable hardware.  People are trying to design data planes that are more programmable with FPGAs (see our SIGCOMM paper on SwitchBlade), with GPUs (see the PacketShader work), and also with clusters of servers (see the RouteBricks project).
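To illustrate the control/data-plane split, here is a toy learning-switch controller written in the spirit of OpenFlow controllers like NOX, but not using any real controller API; the class and method names are illustrative inventions:

```python
# Illustrative sketch of the control/data-plane split: a software
# controller decides where flows go; switches only forward packets.
class Controller:
    def __init__(self):
        self.mac_to_port = {}  # learned host locations, as in a learning switch

    def packet_in(self, switch, in_port, src, dst):
        """Invoked when a switch has no matching rule for a packet."""
        self.mac_to_port[src] = in_port      # learn the sender's port
        out_port = self.mac_to_port.get(dst)
        if out_port is None:
            return ("flood", None)           # destination unknown: flood
        # Otherwise, push a forwarding rule down to the data plane.
        return ("install_rule", (dst, out_port))

ctrl = Controller()
print(ctrl.packet_in("s1", in_port=1, src="aa", dst="bb"))  # ('flood', None)
print(ctrl.packet_in("s1", in_port=2, src="bb", dst="aa"))  # ('install_rule', ('aa', 1))
```

The key design point is that all of the decision logic lives in one ordinary program; the switches merely execute the rules it installs, which is what makes whole-network control from a single piece of software possible.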

This paradigm is reshaping how we do computer networking research.  Five years ago, vendors of proprietary networking devices essentially “held the keys” to innovation, because networking devices—and their functions—were closed and proprietary.  Now a software program can control the behavior not only of individual networking devices but also of entire networks.  Essentially, we are now at the point where we can control very large networks of devices with a single piece of software.

Thoughts on the New Internet. The questions asked of the panelists are understandably a bit broad. I’ve decided to take a crack at these answers in the context of software-defined networking.

1. What do you see happening in computer networking and security in the next five to ten years? We are already beginning to see several developments that will continue to take shape over the next ten years. One trend is the movement of content and services to the “cloud”. We are increasingly using services that are not on our desktops but actually run in large datacenters alongside many other services.  This shift creates many opportunities: we can rely on service providers to maintain software and services that once required dedicated system and network administration.  But there are also many associated challenges.  First, helping network operators optimize both the cost and performance of these services is difficult; we are developing technologies and algorithms that let operators control how users reach services running in the cloud, so that they can manage the cost of running those services while still providing adequate performance to users.  A second challenge relates to security: as an increasing number of services move to the cloud, we must develop techniques to ensure that services running in the cloud cannot be compromised and that the data stored in the cloud is safe.

Another important trend in network security is the growing importance of controlling where data goes and tracking where it has been; as networks proliferate, it becomes increasingly easy to move data from place to place—sometimes to places where it should not go.  There have been several high-profile cases of “data leaks”, including a former Goldman Sachs employee who was caught copying sensitive data to his hedge fund.  Issues of data-leak prevention and compliance (which involves being able to verify that data did not leak to a certain portion of the network) are becoming much more important as more sensitive data moves to the Internet, and to the cloud.

Software-defined networking is allowing us to develop new technologies to solve both of these problems.  In our work on Transit Portal, we have used software routers to give cloud service providers much more fine-grained control over traffic to cloud services.  We have also developed new technology based on software-defined networking to help stop data leaks at the network layer.

2. What is the biggest threat to everyday users in terms of computer security? Two of the biggest threats to everyday users in terms of computer security are the migration of data and services to the cloud and the proliferation of well-provisioned edge networks (e.g., the buildout of broadband connections to home networks).  The movement of data to the cloud offers many conveniences, but it also presents potentially serious privacy risks.  As services ranging from email to spreadsheets to social networking move to the cloud, we must develop ways to gain more assurance over who is allowed to access our data.  The second threat, the proliferation of well-provisioned edge networks, deserves equal attention.  The threat of botnets that mount attacks ranging from spam to phishing to denial-of-service will become even more acute as home networks, which are today essentially unmanaged, proliferate.  Attackers look for well-connected hosts, and as connectivity to homes improves and the network “edge” expands, mechanisms to secure the edge of the network will also become more important.

3. What can we do via the Internet in the future that we can’t do now? The possibilities are limitless.  You could probably imagine that anything you are doing in the real world now might take place online in the future.  We are even seeing the proliferation of entirely separate virtual worlds, and the blending of the virtual world with the physical world, in areas such as augmented reality.  Pervasive, ubiquitous computing and the emergence of cloud-based data services make it easier to design, build, and deploy services that aggregate large quantities of data.  As everything we do moves online, everything we do will also be stored somewhere.  This trend poses privacy challenges, but if we can surmount them, there may also be significant benefits, provided we develop ways to efficiently aggregate, sort, search, analyze, and present the growing volumes of data.

The Economist had a recent article that suggested that the next billion people who come onto the Internet will do so via mobile phone; this changing mode of operation will very likely give rise to completely new ways of communicating and interacting online.  For example, rural farmers are now getting information about farming techniques online; services such as Twitter are affecting political dynamics, and may even be used to help defeat censorship.

Future capabilities are especially difficult to predict, and I think networking researchers have not had the best track record in predicting them.  Many of the exciting new Internet applications have actually come from industry, both through large companies and from startups.  Networking research has been most successful at developing platforms on which these new applications can run, and ongoing research suggests that we will continue to see big successes in that area.  I think software-defined networking will make it easier to evolve these platforms as new applications and new needs emerge.

4. What are the big challenges facing the future of the Internet? One of the biggest challenges facing the future of the Internet is that we don’t really yet have a good understanding of how to make it usable, manageable, and secure.  We need to understand these aspects of the Internet, if for no other reason than that we are becoming increasingly dependent on it.  As Mark Weiser said, “The most profound technologies are those that disappear.”  Our cars have complex networks inside of them that we don’t need to understand in order to drive them.  We don’t need to understand Maxwell’s equations to plug in a toaster.  Yet, to configure a home network, we still need to understand arcana such as “SSID”, “MAC Address”, and “traceroute”.  We must figure out how to make these technologies disappear, at least from the perspective of the everyday user.  Part of this involves providing more visibility to network users about the performance of their networks, in ways that they can understand.  We are working with SamKnows and the FCC on developing techniques to improve user visibility into the performance of their access networks, for example.  Software-defined networking probably has a role to play here, as well: imagine, for example, “outsourcing” some of the management of your home network to a third-party service that could help you troubleshoot and secure your network.  We have begun to explore how software-defined networking could make this possible (our recent HomeNets paper presents one possible approach).  Finally, I don’t know if it’s a challenge per se, but another significant question we face is what will happen to online discourse and communication as more countries come online; dozens of countries around the world implement some form of surveillance or censorship, and the technologies that we develop will continue to shape this debate.

5. What is it going to take to achieve these new frontiers? The foremost requirement is an underlying substrate that allows us to easily and rapidly innovate and frees us from the constraints of deployed infrastructure.  One of the lessons from the Internet thus far is that we are extraordinarily bad at predicting what will come next.  Therefore, the most important thing we can do is to design the infrastructure so that it is evolvable.

I recently read a debate in Communications of the ACM concerning whether innovation on the Internet should happen in an incremental, evolutionary way or whether new designs must come about in a “clean slate” fashion.  But, I don’t think these philosophies are necessarily contradictory at all: we should be approaching problems with a “clean slate” mentality; we should not constrain the way we think about solutions simply based on what technology is deployed today. On the other hand, we must also figure out how to deploy whatever solutions we devise in the context of real, existing, deployed infrastructure.  I think software-defined networking may effectively resolve this debate for good: clean-slate, disruptive innovation can occur in the context of existing infrastructure, as long as the infrastructure is designed to enable evolution.  Software-defined networking makes this evolution possible.

Internet Censorship: Then and Now

I began working on Internet censorship nearly ten years ago, when Professors Hari Balakrishnan and David Karger talked about users who were behind the “Great Firewall of China” and their need for more ready access to information.  In this post, I’ll talk about the state of censorship and circumvention back then, how the landscape has changed, the lessons I have learned along the way, and my initial thoughts on future research in this area.

Censorship Then

Ten years ago, Internet use was exploding in the United States, so it was initially somewhat hard for me to comprehend that censorship and surveillance were taking place in other parts of the world, let alone what a pervasive problem censorship would become.  Intuition would suggest that the spread of Internet access provides citizens with more access to information, not less.  In practice, however, the opposite can be true: the Internet gives a government a finite, fixed set of points from which it can monitor or restrict access.  The Berkman Center has a web site where they report on the complexity of the internal networks within a variety of countries.  Essentially, they compare the complexity of ISP interconnections within a number of countries: the richer these interconnections, the more difficult it is for a country to restrict, monitor, or block content.  Most remarkable are the ISP structures of countries like China, where most ISPs connect through a single backbone network (presumably where the blocking takes place); compared to Nigeria, for example, the Chinese network is much more like a hub and spoke, with all regional ISPs connecting through the ChinaNet Backbone (which is the parent of nearly two-thirds of the country’s IP address space).

Nearly ten years ago, the Berkman Center published a nice report about the state of Internet censorship in China, exposing the extent of censorship in China and the determination of the government to develop more refined and sophisticated censorship techniques.  In response, people have developed techniques to try to circumvent censorship techniques.  Conceptually, every circumvention system works roughly as shown in this picture:

The helper has access to content outside the censorship firewall and can communicate with Alice, who is behind the firewall.  The helper’s job is to allow Alice and Bob to exchange content.  In practice, this helper might be a Web proxy (e.g., Anonymizer), a network of proxies (e.g., Tor), or, as we will see below, an intermediate drop site (e.g., Collage).

In response to ongoing censorship efforts at the time, we developed Infranet, a system to circumvent censorship firewalls.  The state-of-the-art circumvention tools at the time (e.g., Anonymizer) were essentially glorified Web proxies: a user in a censored regime would connect to a cooperating proxy outside of the firewall, which would, in turn, fetch content for the user and return it over an encrypted channel.  However, censors could discover and block such proxies, and simply connecting to such a proxy could raise suspicion.  In other words, existing proxy-based systems lacked two important properties:

  • Robustness. The mechanism that citizens use to circumvent censorship should be robust to the censor’s attempt to disrupt, mangle, or block the communication entirely.  Most existing systems (even widely used anonymization tools like Tor) are not inherently robust, because censors can block entry and exit nodes.
  • Deniability. Users of an anti-censorship system could be subject to extreme sanctions or repercussions.  For example, last year, a Chinese blogger was stabbed; many believe the outspoken nature of his blog to have provoked the violence.  Because of the consequences of violating censorship regulations, users in countries such as China even practice what is known as “self-censorship”: pre-emptively avoiding the exchange of content that might be incriminating or otherwise subject to censorship.  Therefore, any circumvention system must also be deniable: that is, its users must be able to deny that they were even using the system in the first place.  Achieving this goal is more difficult on the Internet than with certain other communications media (e.g., radio, television), and most existing tools for circumventing censorship or providing anonymity do not achieve deniability, either.

Infranet relies on a covert channel between the user behind the firewall and the helper outside of it.  The main idea is to allow a user to “cloak” a request for some censored Web site in other, seemingly innocuous Web traffic.  In the case of Infranet, the proxy outside of the firewall hosted a Web server itself.  A user would issue requests for content on that Web site, but the proxy would interpret that sequence of requests as a coded message that was actually requesting some other, censored content.  Despite its improvements over existing technology, Infranet did not gain widespread adoption, for (I think) two reasons:

  1. Simple schemes worked. When we talked to Voice of America about the tool, they said that most people were happy with simple proxy-based schemes; the proxies had to move continually, of course, but by the time the censors discovered and blocked a proxy’s location, it had already moved elsewhere.  Infranet was the circumvention equivalent of pounding a thumbtack with a sledgehammer.
  2. It required too much effort. Most censorship or anonymization tools require “helpers” outside of the censorship firewall that the censored users can communicate with.  For example, someone might need to set up a machine that runs a secure Web proxy.  Running Infranet required philanthropic users to run an Apache Web server, patch it with special software, and then face the prospect that their legitimate content hosted on the site might be blocked as a result of trying to help.  All of this seems like too much to ask.
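Stepping back to the mechanism itself, the request-sequence covert channel can be sketched in a few lines.  This is a deliberately simplified illustration (a fixed 16-URL codebook and hypothetical page names), not Infranet’s actual protocol, which used a richer and harder-to-detect encoding:

```python
def encode_request_sequence(message: bytes, cover_urls: list[str]) -> list[str]:
    """Encode each 4-bit nibble of the hidden message as the choice of
    which of 16 innocuous URLs to request next on the cover site."""
    assert len(cover_urls) >= 16
    requests = []
    for byte in message:
        for nibble in (byte >> 4, byte & 0x0F):
            requests.append(cover_urls[nibble])
    return requests


def decode_request_sequence(requests: list[str], cover_urls: list[str]) -> bytes:
    """The server side: recover the hidden message from the sequence of
    (apparently innocuous) requests it received."""
    index = {url: i for i, url in enumerate(cover_urls[:16])}
    nibbles = [index[r] for r in requests]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
```

To the censor, the client appears to be browsing ordinary pages on the cover site; only the proxy knows that the order of the requests spells out a hidden request.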

The lack of adoption was frustrating, and it seemed difficult to have real, measurable impact.  The research problems also seemed fuzzy, ill-defined, and unsolvable.

Censorship Now

Two important developments have occurred since that time, however, both of which give me hope that this area has both interesting research questions and the potential for real impact.

On the downside, censorship is becoming more pervasive. Many countries around the world have gone to remarkable lengths to restrict access to content on the Internet.  According to the OpenNet Initiative, twelve countries around the world have implemented some pervasive form of censorship.  Internet censorship has also played a significant role in political events, such as the Iranian elections.  A report by Freedom House last year also found that censorship is now prevalent in nearly 60 countries around the world.  Internet censorship also matters more as increasingly many people use the Internet to communicate.

On the other hand, building circumvention tools is easier. One of the major problems with Infranet was that it required philanthropic individuals to host dedicated infrastructure.  In the decade since, however, “Web 2.0” has made it much easier for the average user to publish content on the Internet.  Users no longer have to maintain their own Web servers to host photos, videos, and the like.  It occurred to me, then, that censorship circumvention technologies could also ride the Web 2.0 wave, using infrastructure in the cloud as the foundation for hiding information and building covert channels.  Sites such as Flickr that host “user-generated content” appeared to be the perfect place to create “drop sites” where users could hide and exchange censored content.

Collage. Based on these observations, we designed Collage, which allows users to hide messages in the content they post to user-generated content sites: photos on Flickr or tweets on Twitter, for example.  Its design has several advantages.  First, it does not require users to set up fixed infrastructure (e.g., Web servers).  Second, it uses erasure coding to “spread” any single message across multiple drop sites, making the system more robust to blocking than a proxy-based system.  Collage appeared at the USENIX Security Symposium last Friday (paper here) and has appeared in the press recently.  Time will tell whether this tool sees more widespread adoption.
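The erasure-coding idea can be conveyed with a minimal sketch.  The toy code below splits a message into k data blocks plus one XOR parity block, so that any k of the k+1 blocks suffice to recover it; Collage itself uses stronger rateless erasure codes over many drop sites, so this is only meant to show the intuition of “spreading”:

```python
def spread(message: bytes, k: int) -> list[bytes]:
    """Split a message into k data blocks plus one XOR parity block;
    any k of the k+1 blocks suffice to recover the message."""
    payload = len(message).to_bytes(4, "big") + message
    block_len = -(-len(payload) // k)                 # ceiling division
    payload += b"\x00" * (block_len * k - len(payload))
    blocks = [payload[i * block_len:(i + 1) * block_len] for i in range(k)]
    parity = bytearray(block_len)
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return blocks + [bytes(parity)]


def recover(available: dict[int, bytes], k: int) -> bytes:
    """Recover the message from any k blocks; keys are block indices,
    with index k denoting the parity block.  Tolerates one erasure."""
    missing = [i for i in range(k) if i not in available]
    if missing:
        rebuilt = bytearray(available[k])             # start from parity
        for i in range(k):
            if i != missing[0]:
                for j, byte in enumerate(available[i]):
                    rebuilt[j] ^= byte                # XOR out known blocks
        available = dict(available)
        available[missing[0]] = bytes(rebuilt)
    data = b"".join(available[i] for i in range(k))
    length = int.from_bytes(data[:4], "big")
    return data[4:4 + length]
```

If a censor blocks the drop site holding one block, the receiver can still reassemble the message from the surviving blocks; real rateless codes extend this to the loss of many blocks at once.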

Lessons and Looking Forward

My experience with research in Internet censorship taught me an important lesson for research in general: continually reconsider old problems. An old problem that was once uninteresting or unsolvable might become tractable because of other, seemingly unrelated developments.  In the case of Collage, the advent of Web 2.0 allowed us to significantly advance the state of the art over a system like Infranet.  It is worth repeatedly asking what bearing a particular development might have on any other problem, even if the two areas seem unrelated.  You might find the right-sized hammer for your nail in the most unlikely of places.

We still don’t understand very much about Internet censorship.  We are still trying to understand its extent.  We have even less understanding of how various circumvention technologies work in practice.  It’s even harder to try to “measure” the level of deniability or robustness that a censorship circumvention tool might provide.  Debugging is also difficult: When a certain circumvention technology fails, is the failure a bug, or a direct consequence of censorship?  Finally, getting the software into the hands of the people who need it and helping them get set up (“bootstrapping”) remains a challenging problem, particularly considering that any information that a normal user could get is also accessible to a censor.  Given this wide array of open questions—ranging from theory to practice, and from technology to policy—I believe we may be at the dawn of a new and exciting research area.

Tell Me a Story

Commencement time brings commencement speeches; one of my favorites is Robert Krulwich’s 2008 speech at Caltech, where he discusses the importance of storytelling in science.  His speech makes a case for talking about science to audiences that may not be well versed in the topic being presented.  This speech should be required listening for any graduate student or researcher in science.

Krulwich begins the speech by putting the students in a hypothetical scenario where a non-technical friend or family member asks “What are you working on?” What would you think: Is it worth the effort to try to explain your work to the general public?  Do you care to be understood by average folks? His advice: When someone asks this question, even if it is hard to explain, give it a try.  Talking about science to non-scientists is a non-trivial undertaking.  And it is an important undertaking, because the scientific version of events competes with other, perhaps equally (or more) compelling stories.

As researchers, we are competing for human attention, and humans love to hear stories.  Storytelling is perhaps one of the most important—and one of the most under-taught—aspects of our discipline.  The narrative of a research writeup or talk can often determine whether the work is well-received—or even received, for that matter.  Some cynics may dismiss storytelling as “marketing”, “hype”, or “packaging”, but the fact of the matter is that packaging is important.  Certainly, research papers (or talks) cannot have merely style without substance; but people are busy, and many people (reviewers, journalists, and even other people within your field) will not stick around for the punchline if the story is not compelling.  Of course, this advice applies well beyond the research community, but I will focus here on storytelling in research, and some things I have learned thus far in my own experience.

When I began working on network-level spam filtering, I was initially pretty surprised at how much attention the work was receiving.  In particular, I viewed our first paper on the topic as somewhat light: it offered no deep theory or strong formal results, for example.  But the work was quickly picked up by the media on multiple occasions.  I found myself talking to a lot of reporters about the work, and, as I repeatedly explained it to them, I found myself getting better at telling its story.  I was using analogies and metaphors to describe our techniques, and I got much better at setting the stage for the work.  I also realized what gave the work such broad appeal: everyone understands email spam, and the conceptual differences with our approach were very easy to explain.  Here is the story, in a nutshell:

“Approximately 95% of all email traffic is spam.  Conventional mail filters look at the contents of the message (words in the mail, for example) to distinguish spam from legitimate content.  Unfortunately, as spammers get more clever, they can evade these filtering techniques by changing the content of their messages.  In contrast, our approach looks at behavioral characteristics: rather than looking at the message itself or who sent it, look at how it was sent.  To understand this, think about telemarketer phone calls: you know when someone calls first thing in the morning or right during dinner that the call is most likely a telemarketer, simply because your friends or family are too considerate to call you at those times.  You know the call is unwanted and can dismiss it before you even answer the phone.  We take the same approach with email messages: we identify behavioral characteristics that allow a mail server to reject a message based on the initial contact attempt, before it even accepts or examines the message.  Our method filters spam with 99.9% accuracy, and network operators can deploy our techniques easily without modifications to existing protocols or infrastructure.”

It turns out that this message is relatively easy for the average human to understand: people can relate to the story because they see what it has to do with their lives, and because the approach is explained clearly, in terms of things they already understand.  Even after this initial work was published, it took me years to refine the story so that it could be expressed crisply.  Introductions to papers and talks should always be treated with similar care. One can think of the introduction to a paper as a synopsis of the entire story, with the paper itself being the “unabridged” version (i.e., it may include many details that only the most interested reader will pore over).
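For readers curious how the behavioral idea in that story might look in code, here is a hypothetical sketch of connection-time scoring.  The feature names, weights, and threshold are entirely illustrative; they are not the classifier from our paper:

```python
def accept_initial_contact(features: dict) -> bool:
    """Score a connection attempt on behavioral features and reject
    likely spam before ever accepting the message body.  All features
    and weights below are illustrative placeholders."""
    score = 0.0
    if features.get("dynamic_ip_range"):            # bots often send from
        score += 0.5                                # residential dynamic IPs
    if features.get("connection_burst"):            # many recipients hit
        score += 0.3                                # nearly simultaneously
    if features.get("neighborhood_spam_ratio", 0.0) > 0.5:
        score += 0.4                                # sender's IP neighborhood
    return score < 0.6                              # accept only low scores
```

The point of the sketch is the “telemarketer” intuition from the story: the decision uses only how the contact was made, before the message itself is ever examined.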

How does one tell a story that readers or listeners actually want to hear?  Unfortunately, there really is not a single silver bullet, and storytelling is certainly an art.  However, there are definitely certain key ingredients that I find tend to work well; in general, I find that good stories (and, in particular, good research stories) have many common elements.  Based on those common elements, here is some advice:

  • Have a beginning, middle, and an end. At the beginning, a research paper or talk should set the context for the work.  A reader or listener immediately wants to know why they should devote their time or attention to what you have to say.  Why is the problem being solved important and interesting?  Why is the problem challenging?  Why is the solution useful or beautiful?  Who can use the results, and how can they use them?  For example, in the above story on spam filtering, there is a beginning (“users get spam; it’s annoying, and current approaches don’t work perfectly”), a middle (“here’s a new and interesting approach”), and an end (“it works; people can use it easily”).
  • Use analogies and metaphors. People have a much easier time understanding a new concept if you can relate it to something they already understand.  For example, the above story uses telemarketing as an analogy for email spam; nearly everyone has experienced a rude awakening or disruption from a telemarketer, which makes the analogy easy to understand.  In some cases, it may be that the analogy is not perfect; in these cases, I find that it helps to use an analogy anyway and explain subtle differences later.
  • Use concrete examples. People like to see concrete examples because they are exciting and much easier to relate to.  It’s even better if the example can be surprising, or otherwise engaging.  For example, the above story gives a statistic about spam that is concrete, and some may even find surprising.  In a talk, I often augment this concrete example with a news clipping, a graph, or an interactive question (e.g., one can have people guess what fraction of email traffic is spam).
  • Write in the active voice. Consider “It was observed.” (passive) vs. “We saw.” (active).  The first is boring, indirect, and unclear: the reader (or listener) cannot even figure out who observed.  I find this writing style immensely frustrating for this reason.  My frustration generally comes to a boil when someone describes a system using primarily verbs in the passive voice (“The message was sent.”).  Passive voice makes it nearly impossible for the reader to figure out what is happening because the subject of the verb is unspecified.  Often, when I press students to turn their verbs into active voice, we find out that even they were unclear on what the subject of the verb should be (e.g., what part of the system takes a certain action).
  • Be as concise as possible, but not too concise. We’ve all complained about movies that “drag on too long” or a speech that “does not get to the point”.  Humans can be quite impatient, and, in the context of research papers, people want to know the punchline quickly, as well.  Research papers are not mystery novels; they should be interesting, but they should also convey findings clearly and efficiently.  Most of my time editing writing involves removing words and otherwise shortening paragraphs to streamline the story as much as possible.

A final point is to consider the audience.  Someone you meet in an elevator or hallway might be much less interested in the details of your work than someone listening to a conference talk or thesis defense.  For this reason, it’s important to have multiple versions of your story ready.  I call this a “multi-resolution elevator pitch”, because it’s a pitch where I can start with a high-level story and dive into details as necessary.  Having a multi-resolution elevator pitch ready also makes it much easier to convey your point to very busy people who may not have the time to stick around for more than 30 seconds.  If, however, you can hook them in the first 30 seconds, you may find that they stick around to hear the longer version of your story.

Show Me the Data

One of my friends recently pointed me to this post about network data. The author states that one of the things he will miss the most about working at Google is the access to the tremendous amount of data that the company collects.

Although I have not worked at Google and can only imagine the treasure trove their employees must have, I did spend time with a great deal of sensitive data at AT&T Research Labs.  At AT&T, we had—and researchers presumably still have—access to a fount of data, ranging from router configurations to routing table dumps to traffic statistics of all kinds.  I found having direct access to this kind of data tremendously valuable: it allowed me to “get my hands dirty” and explore interesting questions that might be hiding in the data itself.  During that time, I developed a taste for working on real, operational problems.

Unfortunately, when one retreats to the ivory towers, one cannot bring the data along for the ride.  Sitting back at my desk at MIT, I realized there were a lot of problems with network configuration management and wanted to build tools to help network operators run their networks better.  One of these tools was the “router configuration checker” (rcc), which has been downloaded and used by hundreds of ISPs to check their routing configurations for various kinds of errors.  The road to developing this tool was tricky: it required knowing a lot about how network operators configure their networks, and, more importantly, direct access to real network configurations on which to debug the tool.  I found myself in a catch-22: I wanted to develop a tool that was useful for operators, but I needed operators to give me data to develop the tool in the first place.
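To give a flavor of what a tool like rcc checks, here is a toy example in the same spirit.  It assumes a simplified Cisco-style syntax and flags BGP neighbors that lack an inbound route filter; rcc’s real analyses parse complete Cisco and Juniper configurations and check many more properties than this sketch suggests:

```python
import re

def neighbors_without_inbound_filter(config: str) -> list[str]:
    """Return BGP neighbors that are configured with remote-as but
    have no inbound prefix-list or route-map applied: one simple
    class of configuration error a checker can flag."""
    neighbors, filtered = set(), set()
    for line in config.splitlines():
        m = re.match(r"\s*neighbor\s+(\S+)\s+remote-as\s+\d+", line)
        if m:
            neighbors.add(m.group(1))
        m = re.match(r"\s*neighbor\s+(\S+)\s+(?:prefix-list|route-map)\s+\S+\s+in\b", line)
        if m:
            filtered.add(m.group(1))
    return sorted(neighbors - filtered)
```

Even a small check like this is only as good as the configurations it is debugged against, which is exactly why access to real operator data mattered so much.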

My most useful mentor at this juncture was Randy Bush, a research-friendly operator who told me something along the following lines: Everyone wants data, but nobody knows what they’re going to do with it once they get it.  Help the operators solve a useful problem, and they will give you data.

This advice could not have been more sage.

I went to meetings of the North American Network Operators Group (NANOG) and talked about the basic checks I had managed to bootstrap into some scripts using data I had from MIT and a couple of other smaller networks (basically, enough to test that the tool worked on Cisco and Juniper configurations).  At NANOG, I met a lot of operators who seemed interested in the tool and were willing to help—often they would not provide me with their configurations, but they would run the tool for me and tell me the output (and whether or not the output made sense).  Guy Tal is another person to whom I owe a lot of gratitude for his patience in this regard.  Sometimes I got lucky and even got hold of some configurations to stare at.

Before I knew it, I had a tool that could run on large Internet Service Provider (ISP) configurations and give operators meaningful information about their networks, and hundreds of ISPs were using the tool.  And, I think that when I gave my job talk, people from other areas may not have understood the details of “BGP”, or “route oscillations”, or “route hijacks”, but they certainly understood that ISPs were actually using the tool.

We applied the same approach when we started working on spam filtering.  We wrote an initial paper that studied the network-level behavior of spammers with some data we were able to collect at a local “spam trap” on the MIT campus (more on that project in a later post).  The visibility of that work (and its unique approach, which spawned a lot of follow-on work) allowed us to connect with people in industry who were working on spam filtering, had real problems that needed solving, and had data (and, equally importantly, expertise) to help us think about the problems and solutions more clearly.

In these projects (as well as other more recent ones), I see a pattern in how one can get access to “real data”, even in academia.  Roughly, here is some advice:

  • Have a clear, practical problem or question in mind. Do not simply ask for data.  Everyone asks for data.  A much more select set is actually capable of doing something useful with it.  Demonstrate that you have given some thought to questions you want to answer, and think about whether anyone else might be interested in those questions.  Importantly, think about whether the person you are asking for data might be interested in what you have to offer.
  • Be prepared to work with imperfect data. You may not get exactly the data you would like.  For example, the router configurations or traffic traces might be partially anonymized.  You may only get metadata about email messages, as opposed to full payloads.  (And so on.)  Your initial reaction might be to think that all is lost without the “perfect dataset”.  This is rarely the case!  Think about how you can either adjust your model, or adapt your approach (or even the question itself) with imperfect data.
  • Be prepared to operate blindly. In many cases, operators (or other researchers) cannot give you the raw data they have access to; it may be sensitive, or protected by non-disclosure agreements.  However, these people can sometimes run analysis on the data for you, if you are nice to them and if you package your analysis code so that they can easily run your scripts.
  • Bring something to the table. This goes back to Randy Bush’s point. If you make yourself useful to operators (or others with data), they will want to work with you—if you are asking an interesting question or providing something useful, they might be just as interested in the answers as you are.

There is much more to say about networking research and data.  Sometimes it is simply not possible to get the data one needs to solve interesting research problems (e.g., pricing data is very difficult to obtain).  Still, I think that as networking researchers, we should first look for interesting problems and then look for data that can help us solve them; too often, we operate in reverse, like the drunk who looks for his keys under the lamppost because the light is better there.  I’ll say more about this in a later post.