It’s 10 p.m. Do You Know Where Your Passwords Are?

Have you ever wondered how many sites have your credit card number?  Or, have you ever wondered how many sites have a certain version of your password?  Do you think you might have reused the password from your banking Web site on another site?  What if you decided that you wanted to “clean up” your personal information on some of the sites where you’ve leaked it?  Would you even know where to start?

If you answered “yes” to any of the above questions, then Appu is the tool for you.  Appu is a Chrome extension developed by my Ph.D. student Yogesh Mundada that keeps track of what we call your privacy footprint on the Web.  Every time you enter personally identifiable information (address, credit card information, password, etc.) into a Web site, Appu performs a cryptographic hash of that information, associates the hash with that site, and stores it to keep track of where you have entered that information.  If you ever re-enter the same password on a different site, Appu will warn you that you have reused that password and show you where.  As a user, you will immediately see a warning like the one below:

[Screenshot: Appu warning about password reuse]

You might be wondering: “Why should I trust your Chrome extension with the passwords that I enter on various sites?”  The good news is that you do not have to trust Appu with your passwords and personal information to use this tool, because Appu never sends your information anywhere in cleartext.  Before sending a report to us, Appu performs what is called a cryptographic hash on all of your information.  It also only stores a cryptographic hash of each password locally; no passwords are ever stored in cleartext, anywhere.  If you ever enter the same password elsewhere, the result of performing a cryptographic hash on your password would produce the same unreadable output—therefore, Appu never knows what your password is, only that you’ve reused it.  Appu stores your other personal information in cleartext locally on your machine so that you can see which sites have which values of various personal information, but it never sends that information in the clear to us.  Appu always asks the user before sending any information to us, and the tool also gives the user the option to delete anything from the reports that Appu sends to us.  If you still want to assure yourself that Appu is not doing anything suspect, you can read the source code.
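
For readers curious about the mechanics, here is a minimal sketch (in Python, and not Appu’s actual code) of how hash-based reuse detection can work: only a hash of each password is kept, yet reuse across sites is still detectable.  The salt and site names here are hypothetical.

    # Minimal sketch of hash-based password-reuse detection (not Appu's actual code).
    # Only a hash of each password is stored; the cleartext password is never kept.
    import hashlib

    stored_hashes = {}  # site -> set of password hashes entered on that site

    def record_password(site, password, salt="per-user-secret"):
        """Hash the password and report any other sites where the same hash was seen."""
        digest = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
        reused_on = [s for s, hashes in stored_hashes.items()
                     if s != site and digest in hashes]
        stored_hashes.setdefault(site, set()).add(digest)
        return reused_on

    record_password("bank.example.com", "hunter2")
    print(record_password("shopping.example.com", "hunter2"))  # ['bank.example.com']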

Appu can help users keep track of the following information:

  • Password reuse.  Have I reused the same password across multiple sites?  If so, on which sites have I used the same password?
  • Privacy footprint.  Which sites have a copy of my full name and address (or other information)?  What specific information have I provided to those sites?
  • Password strength.  Have I used a weak password for my online bank account or another Web site?
  • Password stagnancy. When was the last time I changed my password on a particular site?

In addition to the pop-up information that users see, as above, Appu also provides reports to allow users to keep track of answers to these questions.  The figures below show two examples of this.  The figure on the left shows the privacy footprint page, where a user can see which sites have stored personal information (e.g., name, email address); the figure on the right shows more detailed information, such as the last time a user changed his or her password.  That report also tells you how often you’ve visited a site; as a result, Appu can reveal that a site you’ve visited only once is storing sensitive information, such as your credit card number (hopefully spurring you to go clean up your personal information on that site).

[Screenshot: Appu privacy footprint page]
[Screenshot: Appu detailed per-site report]

Our hope is that Appu will help users better manage their online privacy footprints, and thereby reduce the risks they expose themselves to through password reuse.

We initially released Appu as a private alpha to about ten close friends.  Even with this small sample, we can already observe interesting aggregate behavior.  Users are far more cavalier about their personal information than we expected.  For example, we have observed the following behaviors:

  • Although users are less forthcoming with their credit card information, they are surprisingly forthcoming about what one might otherwise think is private information, such as religious views.
  • People often share passwords across “high-value” (e.g., Amazon) and “low-value” (e.g., TripIt) sites.
  • Many users have revealed personal information (e.g., address, credit card information) to sites they rarely visit, or have visited only once.
  • Several users had weak passwords on their banking sites that could be cracked in less than one day.

Are you one of these users who need to clean up their online privacy footprint?  Download and install the Appu Chrome extension to find out! As Appu gains a larger user base, we will follow up with more discoveries about users’ behavior regarding their online privacy footprints.  We are actively developing a Firefox version of Appu; please join the appu-users mailing list if you want updates about version releases and news about support for other browsers.

(And yes, in case you are wondering, this project is being thoroughly reviewed by Georgia Tech’s Institutional Review Board.)


Internet Relativism and the Hunt for Elusive “Ground Truth”

Networking and security research often rely on a notion of ground truth to evaluate the effectiveness of a solution.  “Ground truth” refers to a true underlying phenomenon that we would like to characterize, detect, or measure.  We often evaluate the effectiveness of a classifier, detector, or measurement technique by how well it reflects ground truth.

For example, an Internet link might have a certain upstream or downstream throughput; the effectiveness of a tool that measures throughput could thus be quantified in terms of how close its estimates of upstream and downstream throughput are to the true throughput of the underlying link.  Since there is a physical link with actual upstream or downstream throughput characteristics—and the properties of that link are either explicitly known or can be independently measured—measuring error with respect to ground truth makes sense.  In the case of analyzing routing configuration to predict routing behavior (or detect errors), static configuration analysis can characterize where traffic in the network will flow and whether the configuration will give rise to erroneous behavior; either the predictions correctly characterize the behavior of the real network, or they don’t.  A spam filter might classify an email sender as a legitimate sender or a spammer; again, either the sender is a spammer or it is a legitimate mail server.  In this case, comparing against ground truth is more difficult, since if we had a perfect characterization of spammers and legitimate senders, we would already have the perfect spam filter.  The solution in these kinds of cases is to compare against an independent label (e.g., a blacklist) and somehow argue that the proposed detection mechanism is better than the existing approach to labeling or classification (e.g., faster, earlier, more lightweight, etc.).
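
To make that last point concrete, here is a small, hypothetical sketch (in Python, with made-up addresses) of what comparing a detector against an independent label looks like: we can measure agreement with a blacklist, but disagreement remains ambiguous because the blacklist is itself only a proxy for ground truth.

    # Comparing a spam detector against an independent label (a blacklist), since
    # true ground truth is unavailable.  All addresses below are invented.
    classifier_says_spam = {"192.0.2.1", "192.0.2.7", "198.51.100.3"}
    blacklist            = {"192.0.2.1", "198.51.100.3", "203.0.113.9"}

    agreement       = classifier_says_spam & blacklist
    only_classifier = classifier_says_spam - blacklist  # earlier detections, or false positives?
    only_blacklist  = blacklist - classifier_says_spam  # missed spammers, or stale blacklist entries?

    print(len(agreement), len(only_classifier), len(only_blacklist))
    # The disagreements cannot be resolved without ground truth; the argument has to be
    # that the detector is faster, earlier, or more lightweight than the existing label.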

Problem: Lack of ground truth.  For some Internet measurement problems, the underlying phenomenon simply cannot be known—even via an independent labeling mechanism—either because the perpetrator of an action won’t reveal his or her true intention, or because there actually is no “one true answer”.  In other words, we sometimes want to characterize scenarios or phenomena where the ground truth proves elusive.

Consider the following two problems:

  • Network neutrality.  The network neutrality debate centers on the question of whether Internet service providers should carry all traffic according to the same class of service, regardless of properties such as what type of traffic it is (e.g., voice, video) or who is sending or receiving that traffic.
  • Filter bubbles.  Eli Pariser introduced the notion of a filter bubble in his book The Filter Bubble.  A filter bubble is the phenomenon whereby each Internet user sees different Internet content based on factors ranging from demographics to past search history to stated preferences.  Briefly, each of us sees a different version of the Internet.

These two detection problems do not have a notion of ground truth that can be easily measured.  In the latter case, there is effectively no ground truth at all.

In the case of network neutrality, detection boils down to determining whether an ISP is providing preferential treatment to a certain class of applications or customers.  While ground truth certainly exists (i.e., either the ISP is discriminating against a certain class of traffic or it isn’t), discovering ground truth is incredibly challenging: ISPs may not reveal their policies concerning preferential treatment of different traffic flows, for example.

Similarly, in the case of filter bubbles, we want to determine whether a content provider or intermediary (e.g., search engine, news aggregator, social network feed) is manipulating content for particular groups of users (e.g., showing only certain news articles to Americans).  Again, there is a notion of ground truth—either the content is being manipulated or it isn’t—but the interesting aspect here is not so much whether content is being manipulated (we all know that it is), but rather what the extent of that manipulation is.  Characterizing the extent of manipulation is difficult, however, because personalization is so pervasive on the Internet: everyone effectively sees content that is tailored to their circumstances, and there is no notion of a baseline that reflects what a set of search results or a page of recommended products might look like before the contents were tailored for a particular user.  In many cases, personalization has been so ingrained in data mining and search that even the algorithm designers are unable to characterize what “ground truth” content (i.e., without manipulation) might look like.

Relativism: measuring how different perspectives give rise to inconsistencies.  In cases where ground truth is difficult to measure or impossible to know, we can still ask questions about consistency.  For example, in the case of network neutrality, we can ask whether different groups of users experience comparable performance.  In the case of filter bubbles, we can ask whether different groups of users see similar content.  When inconsistencies arise, we can then attempt to attribute a cause to these inconsistencies by controlling for all factors except for the factor we believe might be the underlying cause for the inconsistency.  One might call this Internet relativism, in a way: We concede that either there is no absolute truth, or that the absolute truth is so difficult to obtain that we might as well not try to know it.  Instead, we can explore how differences in perspective  or “input signals” (e.g., demographic, geography) give rise to different outcomes and try to determine which input differences triggered the inconsistency.  We have applied this technique to the design of two real-world systems that address these two respective problem areas.  In both of these problems, we would love to know the underlying intention of the ISP or information intermediary (i.e., “Is the performance problem I’m seeing a result of preferential treatment?”, “(How) is Google, Netflix, or Amazon manipulating my results based on my demographic?”).

  • NANO: Network Access Neutrality Observatory.  We developed NANO several years ago to characterize ISP discrimination for different classes of traffic flows.  In contrast to existing work in this area (e.g., Glasnost), which requires a hypothesis about the type of discrimination that is taking place, NANO operates without any a priori hypothesis about discrimination rules and simply looks for systematic deviation from “normal” behavior for a certain class of traffic (e.g., all traffic from a certain ISP, for a certain application, etc.).  The tricky aspect of this type of detection is that there is no notion of normal.  For example, the other ISPs we might use as a baseline could be performing a similar type of discrimination, so there is no firm ground truth against which to compare.  Ideally, we would like to ask: “What performance would this user see using ISP X, versus the performance they would see if they were not using ISP X?”  Unfortunately, there is no reasonable way to test the performance that a user would experience as a result of not using an ISP.  (This is in contrast to randomized treatment in clinical trials, where it makes sense to have a group of users who, say, are subject to a particular treatment or not.)  To address this problem, the best we could do to establish a baseline was to average the performance seen by all users from other ISPs and compare those statistics against the performance seen by a group of users for the ISP under test.
  • Bobble: Exposing inconsistent search results.  We recently developed Bobble to characterize the inconsistencies in the Web search results that users see as a result of both personalization and geography.  Ideally, we would like to measure the extent of manipulation against some kind of baseline.  Unfortunately, however, the notion of a baseline is almost meaningless, since no Internet user is subject to such a baseline—even a user who has no search history may still see personalized results based on geography, time of day, device type, and other features, for example.  In this scenario, we established a baseline by comparing the search results of a signed-in user against those of a user with no search history, making our best attempt to hold all other factors constant.  We also performed the same experiment with users who were not signed in and had no search history, varying only geography.  Unlike NANO, in the case of Bobble, there is not even a notion of an “average” user; the best we can hope for are meaningful characterizations of inconsistencies, as in the sketch after this list.
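
Here is a minimal sketch (in Python, with invented result lists) of the kind of comparison Bobble performs: quantify how much two users’ results for the same query overlap, with all other factors held as constant as possible.

    # Quantifying inconsistency between two users' search results for the same query
    # (in the spirit of Bobble).  The result URLs below are invented for illustration.
    def jaccard(results_a, results_b):
        """Similarity of two result sets: 1.0 means identical, 0.0 means disjoint."""
        a, b = set(results_a), set(results_b)
        return len(a & b) / len(a | b) if (a | b) else 1.0

    signed_in_user = ["news.example/a", "shop.example/b", "blog.example/c"]
    fresh_profile  = ["news.example/a", "wiki.example/d", "blog.example/c"]

    print(jaccard(signed_in_user, fresh_profile))  # 0.5: half of the combined results differ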

Takeaways and general principles.  These two problems both involve an attempt to characterize an underlying phenomenon without any hope of observing “ground truth”.  In these cases, it seems that our best hope is to approximate a baseline and compare against that (as we did in NANO); failing that, we can at least characterize inconsistencies.  In any case, when looking for these inconsistencies, it is important to (1) enumerate all factors that could possibly introduce inconsistencies; and (2) hold those factors fixed, to the extent possible.  For example, in NANO, one can only compare a user against average performance for a group of users that have identical (or at least similar) characteristics for anything that could affect the outcome.  If, for example, browser type (or other features) might affect performance, then the performance of a user for an ISP “under test” must be compared against users with the same browser (or other features), with the ISP being the only differing feature that could possibly affect performance.  Similarly, in the case of Bobble, we must hold other factors like browser type and device type fixed when attempting to isolate the effects of geography or search history.  Enumerating all of these features that could introduce  inconsistencies is extremely challenging, and I am not aware of any good way to determine whether a list of such features is exhaustive.
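
To make the stratification idea concrete, here is a small sketch (in Python, with invented measurements) of a NANO-style comparison: group measurements by the potentially confounding features, and compare the ISP under test against other ISPs only within each group.

    # NANO-style stratified comparison: hold confounders (here, browser and region) fixed,
    # then compare the ISP under test against all other ISPs within each stratum.
    # The measurements below are invented for illustration.
    from collections import defaultdict
    from statistics import mean

    measurements = [  # (isp, browser, region, throughput_mbps)
        ("ISP-X", "chrome",  "atlanta", 4.1), ("ISP-X", "chrome",  "atlanta", 3.9),
        ("ISP-A", "chrome",  "atlanta", 7.8), ("ISP-B", "chrome",  "atlanta", 8.2),
        ("ISP-X", "firefox", "boston",  9.0), ("ISP-A", "firefox", "boston",  9.2),
    ]

    def compare(isp_under_test, data):
        strata = defaultdict(lambda: {"test": [], "others": []})
        for isp, browser, region, tput in data:
            bucket = "test" if isp == isp_under_test else "others"
            strata[(browser, region)][bucket].append(tput)
        for key, groups in strata.items():
            if groups["test"] and groups["others"]:
                print(key, round(mean(groups["test"]) - mean(groups["others"]), 2))

    compare("ISP-X", measurements)  # a large negative gap in a stratum suggests degradation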

I believe networking and security researchers will continue to encounter phenomena that they would like to measure, but where the nature of the underlying phenomenon cannot be known with certainty.  I am curious as to whether others have encountered problems that call for Internet relativism, and whether it may be time to develop sound experimental methods to characterize it, rather than blindly clamoring for “ground truth” when none may even exist.

Software-Defined Networking and The New Internet

Tonight, I am sitting on a panel sponsored by NSF and Discover Magazine about “The New Internet”.  The panel has four panelists who will be discussing their thoughts on the future of the Internet.  Some of the questions we have been asked to answer involve predictions about what will happen in the future.  Predictions are a tall order; as Yogi Berra said: “It is hard to make predictions, especially about the future.”

Predictions aside, I think one of the most exciting things about this panel is that we are having this discussion at all.  Not even ten years ago, Internet researchers were bemoaning the “ossification” of the Internet.  As the Internet continues to mature and expand, the opportunities and challenges seem limitless.  More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years.  The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad.  This changing landscape presents tremendous opportunities for innovation.  The challenge, then, is developing a platform on which this innovation can occur.  Along these lines, a multicampus collaboration is pursuing a future Internet architecture that aims to make it easier for researchers and practitioners to introduce new, disruptive technologies on the Internet.  The “framework for innovation” proposed in this work rests on a newly emerging technology called software-defined networking.

Software-defined networking. Network devices effectively have two aspects: the control plane (in some sense, the “brain” for the network, or the protocols that make all of the decisions about where traffic should go), and the data plane (the set of functions that actually forward packets).  Part of the idea behind software-defined networking is to run the network’s control plane in software, on commodity servers that are separate from the network devices themselves.  This notion has roots in a system called the Routing Control Platform, which we worked on about five years ago and which now operates in production at AT&T.  More recently, it has gained more widespread adoption in the form of the OpenFlow switch specification.  Software-defined networking is now coming of age in the NOX platform, an open-source OpenFlow controller that allows designers to write network control software in high-level languages like Python.  A second aspect of software-defined networking is to make the data plane itself more programmable, for example, by engineering the network data plane to run on programmable hardware or commodity servers.  People are trying to design data planes that are more programmable with FPGAs (see our SIGCOMM paper on SwitchBlade), with GPUs (see the PacketShader work), and also with clusters of servers (see the RouteBricks project).
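
To make the separation concrete, here is a toy, self-contained sketch (in Python): the “data plane” below does nothing but match packets against a flow table, while the “control plane” is ordinary software that decides which forwarding rules to install.  All names are invented for illustration and do not correspond to the NOX or OpenFlow APIs.

    # Toy illustration of the control/data-plane split behind a learning switch.
    # Not NOX or OpenFlow code; function and field names are invented.
    flow_table = {}   # dst MAC -> output port (the only state the "data plane" consults)
    mac_to_port = {}  # the controller's learned view of where each MAC address lives

    def control_plane(src_mac, dst_mac, in_port):
        """Handle a packet the data plane could not, and install a rule for next time."""
        mac_to_port[src_mac] = in_port
        out_port = mac_to_port.get(dst_mac)
        if out_port is None:
            return "flood"                      # destination unknown: flood the packet
        flow_table[dst_mac] = out_port          # push a forwarding rule down to the data plane
        return f"forward out port {out_port}"

    def data_plane(src_mac, dst_mac, in_port):
        """Fast path: forward from the flow table if possible, otherwise ask the controller."""
        if dst_mac in flow_table:
            return f"forward out port {flow_table[dst_mac]}"
        return control_plane(src_mac, dst_mac, in_port)

    print(data_plane("aa:aa", "bb:bb", in_port=1))  # unknown destination -> flood
    print(data_plane("bb:bb", "aa:aa", in_port=2))  # controller learns and installs a rule
    print(data_plane("bb:bb", "aa:aa", in_port=2))  # now handled by the flow table alone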

This paradigm is reshaping how we do computer networking research.  Five years ago, vendors of proprietary networking devices essentially “held the keys” to innovation, because networking devices—and their functions—were closed and proprietary.  Now a software program can control the behavior not only of  individual networking devices but also of entire networks.  Essentially, we are now at the point where we can control very large networks of devices with a single piece of software.

Thoughts on the New Internet. The questions asked of the panelists are understandably a bit broad. I’ve decided to take a crack at these answers in the context of software-defined networking.

1. What do you see happening in computer networking and security in the next five to ten years? We are already beginning to see several developments that will continue to take shape over the next ten years. One trend is the movement of content and services to the “cloud”. We are increasingly using services that are not on our desktops but actually run in large datacenters alongside many other services.  This shift creates many opportunities: we can rely on service providers to maintain software and services that once required dedicated system and network administration.  But, there are also many associated challenges.  First, determining how to help network operators optimize both the cost and performance of these services is difficult; we are working on technologies and algorithms that give operators better control over how users reach services running in the cloud, so that they can manage the cost of running these services while still providing adequate performance to users.  A second challenge relates to security: as an increasing number of services move to the cloud, we must develop techniques to make certain that services running in the cloud cannot be compromised and that the data stored in the cloud is safe.

Another important trend in network security is the need to control where data goes and to track where it has been; as networks proliferate, it becomes increasingly easy to move data from place to place—sometimes to places where it should not go.  There have been several high-profile cases of “data leaks”, including a former Goldman Sachs employee who was caught copying sensitive data to his hedge fund.  Issues of data-leak prevention and compliance (which involves being able to verify that data did not leak to a certain portion of the network) are becoming much more important as more sensitive data moves to the Internet, and to the cloud.

Software-defined networking is allowing us to develop new technologies to solve both of these problems.  In our work on Transit Portal, we have used software routers to give cloud service providers much more fine-grained control over traffic to cloud services.  We have also developed new technology based on software-defined networking to help stop data leaks at the network layer.

2. What is the biggest threat to everyday users in terms of computer security? Two of the biggest threats to everyday users are the migration of data and services to the cloud and the proliferation of well-provisioned edge networks (e.g., the buildout of broadband connections to home networks).  The movement of data to the cloud offers many conveniences, but it also presents potentially serious privacy risks.  As services ranging from email to spreadsheets to social networking move to the cloud, we must develop ways to gain more assurance about who is allowed to access our data.  The second threat is the proliferation of well-provisioned edge networks. The threat of botnets that mount attacks ranging from spam to phishing to denial-of-service will become even more acute as home networks—which are, today, essentially unmanaged—proliferate. Attackers look for well-connected hosts, and as connectivity to homes improves and as the network “edge” expands, mechanisms to secure the edge of the network will also become more important.

3. What can we do via the Internet in the future that we can’t do now? The possibilities are limitless.  You could probably imagine that anything you are doing in the real world now might take place online in the future.  We are even seeing the proliferation of entirely separate virtual worlds, and the blending of the virtual world with the physical world, in areas such as augmented reality.  Pervasive, ubiquitous computing and the emergence of cloud-based data services make it easier to design, build, and deploy services that aggregate large quantities of data.  As everything we do moves online, everything we do will also be stored somewhere.  This trend poses privacy challenges, but if we can surmount them, there may also be significant benefits, provided we can develop ways to efficiently aggregate, sort, search, analyze, and present the growing volumes of data.

The Economist had a recent article that suggested that the next billion people who come onto the Internet will do so via mobile phone; this changing mode of operation will very likely give rise to completely new ways of communicating and interacting online.  For example, rural farmers are now getting information about farming techniques online; services such as Twitter are affecting political dynamics, and may even be used to help defeat censorship.

Future capabilities are especially difficult to predict, and I think networking researchers have not had the best track record in this regard.  Many of the exciting new Internet applications have actually come from industry, both from large companies and from startups.  Networking research has been most successful at developing platforms on which these new applications can run, and ongoing research suggests that we will continue to see big successes in that area.  I think software-defined networking will make it easier to evolve these platforms as new applications develop and create the need for new capabilities.

4. What are the big challenges facing the future of the Internet? One of the biggest challenges facing the future of the Internet is that we don’t really yet have a good understanding of how to make it usable, manageable, and secure.  We need to understand these aspects of the Internet, if for no other reason than that we are becoming increasingly dependent on it.  As Mark Weiser said, “The most profound technologies are those that disappear.”  Our cars have complex networks inside of them that we don’t need to understand in order to drive them.  We don’t need to understand Maxwell’s equations to plug in a toaster.  Yet, to configure a home network, we still need to understand arcana such as “SSID”, “MAC Address”, and “traceroute”.  We must figure out how to make these technologies disappear, at least from the perspective of the everyday user.  Part of this involves providing more visibility to network users about the performance of their networks, in ways that they can understand.  We are working with SamKnows and the FCC on developing techniques to improve user visibility into the performance of their access networks, for example.  Software-defined networking probably has a role to play here, as well: imagine, for example, “outsourcing” some of the management of your home network to a third-party service that could help you troubleshoot and secure your network.  We have begun to explore how software-defined networking could make this possible (our recent HomeNets paper presents one possible approach).  Finally, I don’t know if it’s a challenge per se, but another significant question we face is what will happen to online discourse and communication as more countries come online; dozens of countries around the world implement some form of surveillance or censorship, and the technologies that we develop will continue to shape this debate.

5. What is it going to take to achieve these new frontiers? The foremost requirement is an underlying substrate that allows us to easily and rapidly innovate and frees us from the constraints of deployed infrastructure.  One of the lessons from the Internet thus far is that we are extraordinarily bad at predicting what will come next.  Therefore, the most important thing we can do is to design the infrastructure so that it is evolvable.

I recently read a debate in Communications of the ACM concerning whether innovation on the Internet should happen in an incremental, evolutionary way or whether new designs must come about in a “clean slate” fashion.  But, I don’t think these philosophies are necessarily contradictory at all: we should be approaching problems with a “clean slate” mentality; we should not constrain the way we think about solutions simply based on what technology is deployed today. On the other hand, we must also figure out how to deploy whatever solutions we devise in the context of real, existing, deployed infrastructure.  I think software-defined networking may effectively resolve this debate for good: clean-slate, disruptive innovation can occur in the context of existing infrastructure, as long as the infrastructure is designed to enable evolution.  Software-defined networking makes this evolution possible.

Tell Me a Story

Commencement time brings commencement speeches; one of my favorite commencement speeches is a speech by Robert Krulwich at Caltech in 2008, where he discusses the importance of storytelling in science.  His speech makes a case for talking about science to audiences that may not be well-versed experts in the topic being presented.   This speech should be required listening for any graduate student or researcher in science.

Krulwich begins the speech by putting the students in a hypothetical scenario where a non-technical friend or family member asks “What are you working on?” What would you think: Is it worth the effort to try to explain your work to the general public?  Do you care to be understood by average folks? His advice: When someone asks this question, even if it is hard to explain, give it a try.  Talking about science to non-scientists is a non-trivial undertaking.  And, it is an important undertaking, because the scientific version of things competes with other stories that may be equally (or more) compelling.

As researchers, we are competing for human attention, and humans love to hear stories.  Storytelling is perhaps one of the most important—and one of the most under-taught—aspects of our discipline.  The narrative of a research writeup or talk can often determine whether the work is well-received—or even received, for that matter.  Some cynics may dismiss storytelling as “marketing”, “hype”, or “packaging”, but the fact of the matter is that packaging is important.  Certainly, research papers (or talks) cannot have merely style without substance, but people are busy, and many people (reviewers, journalists, and even other people within your field) will not stick around for the punchline if the story is not compelling.  Of course, this advice applies well beyond the research community, but I will focus here on storytelling in research, and some things I have learned thus far in my experiences.

When I began working on network-level spam filtering, I was initially pretty surprised at how much attention the work was receiving.  In particular, I viewed our first paper on the topic as somewhat light in terms of results: there was no sound theory or strong results, for example.  But, the work was quickly picked up by the media, on multiple occasions.  I found myself talking to a lot of reporters about the work, and, as I repeatedly explained the work to reporters, I found myself getting better at telling the story of the work.  I was using analogies and metaphors to describe our techniques, and I got much better at setting the stage for the work.  I also realized what gave the work such broad appeal: everyone understands email spam, and the conceptual differences with our approach were very easy to explain.  Here is the story, in a nutshell:

“Approximately 95% of all email traffic is spam.  Conventional mail filters look at the contents of the message (words in the mail, for example) to distinguish spam from legitimate content.  Unfortunately, as spammers get more clever, they can evade these filtering techniques by changing the content of their messages.  In contrast, our approach looks at behavioral characteristics: rather than looking at the message itself or who sent it, we look at how it was sent.  To understand this, think about telemarketer phone calls: you know when someone calls first thing in the morning or right during dinner that the call is most likely a telemarketer, simply because your friends or family are too considerate to call you at those times.  You know the call is unwanted and can dismiss it before you even answer the phone.  We take the same approach with email messages: we identify behavioral characteristics that allow a mail server to reject a message based on the initial contact attempt, before it even accepts or examines the message.  Our method filters spam with 99.9% accuracy, and network operators can deploy our techniques easily without modifications to existing protocols or infrastructure.”

It turns out that this message is relatively easy for the average person to understand: people can relate to the story because they can see what it has to do with their lives, and the approach is explained clearly, in terms of things they already understand.  Even after this initial work was published, it took me years to refine the story, so that it could be expressed crisply.  Introductions to papers and talks should always be treated with similar care. One can think of the introduction to a paper as a synopsis of the entire story, with the paper itself being the “unabridged” version (i.e., it may include many details that only the most interested reader will pore over).

How does one tell a story that readers or listeners actually want to hear?  Unfortunately, there really is not a single silver bullet, and storytelling is certainly an art.  However, there are definitely certain key ingredients that I find tend to work well; in general, I find that good stories (and, in particular, good research stories) have many common elements.  Based on those common elements, here is some advice:

  • Have a beginning, middle, and an end. At the beginning, a research paper or talk should set the context for the work.  A reader or listener immediately wants to know why they should devote their time or attention to what you have to say.  Why is the problem being solved important and interesting?  Why is the problem challenging?  Why is the solution useful or beautiful?  Who can use the results, and how can they use them?  For example, in the above story on spam filtering, there is a beginning (“users get spam; it’s annoying, and current approaches don’t work perfectly”), a middle (“here’s a new and interesting approach”), and an end (“it works; people can use it easily”).
  • Use analogies and metaphors. People have a much easier time understanding a new concept if you can relate it to something they already understand.  For example, the above story uses telemarketing as an analogy for email spam; nearly everyone has experienced a rude awakening or disruption from a telemarketer, which makes the analogy easy to understand.  In some cases, it may be that the analogy is not perfect; in these cases, I find that it helps to use an analogy anyway and explain subtle differences later.
  • Use concrete examples. People like to see concrete examples because they are exciting and much easier to relate to.  It’s even better if the example can be surprising, or otherwise engaging.  For example, the above story gives a statistic about spam that is concrete, and some may even find surprising.  In a talk, I often augment this concrete example with a news clipping, a graph, or an interactive question (e.g., one can have people guess what fraction of email traffic is spam).
  • Write in the active voice. Consider “It was observed.” (passive) vs. “We saw.” (active).  The first is boring, indirect, and unclear: the reader (or listener) cannot even figure out who observed.  I find this writing style immensely frustrating for this reason.  My frustration generally comes to a boil when someone describes a system using primarily verbs in the passive voice (“The message was sent.”).  Passive voice makes it nearly impossible for the reader to figure out what is happening because the subject of the verb is unspecified.  Often, when I press students to turn their verbs into active voice, we find out that even they were unclear on what the subject of the verb should be (e.g., what part of the system takes a certain action).
  • Be as concise as possible, but not too concise. We’ve all complained about a movie that “drags on too long” or a speech that “does not get to the point”.  Humans can be quite impatient, and, in the context of research papers, people want to know the punchline quickly, as well.  Research papers are not mystery novels; they should be interesting, but they should also convey findings clearly and efficiently.  Most of my time editing writing involves removing words and otherwise shortening paragraphs to streamline the story as much as possible.

A final point is to consider the audience.  Someone you meet in an elevator or hallway might be much less interested in the details of your work than someone listening to a conference talk or thesis defense.  For this reason, it’s important to have multiple versions of your story ready.  I call this a “multi-resolution elevator pitch”, because it’s a pitch where I can start with a high-level story and dive into details as necessary.  Having a multi-resolution elevator pitch ready also makes it much easier to convey your point to very busy people who may not have the time to stick around for more than 30 seconds.  If, however, you can hook them in the first 30 seconds, you may find that they stick around to hear the longer version of your story.