The internet apocalypse map hides the major vulnerability that created it

During Friday’s massive distributed denial of service (DDoS) attack on DNS service provider Dyn, one might be forgiven for mistaking the maps of network outages for images of some post-apocalyptic nuclear fallout. Screenshots from sites like downdetector.com showed menacingly red, fuzzy heat maps of, well, effectively just population centers of the United States experiencing serious difficulty accessing Twitter, GitHub, Etsy, or any of Dyn’s other high-profile clients. Aside from offering little detail and turning a DDoS literally into a glowing red menace, they also obscured the reality of just how centralized a lot of internet infrastructure really is. DNS is ground zero for the uneasy tension between the internet’s presumed decentralized resilience and the reality that, as of now, translating domain names into IP addresses requires some kind of centralized, hierarchical platform, and that’s probably not going to change radically anytime soon.
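For anyone who wants to see that translation step up close, here is a minimal sketch in Python of the forward lookup that the outage maps can’t show. The hostname is purely an example; the point is that the answer ultimately flows down from the domain’s authoritative nameservers, which for Dyn’s clients were Dyn’s own servers, so knocking those over makes a name unresolvable even when the site behind it is perfectly healthy.

```python
import socket

# A minimal sketch of a forward DNS lookup: domain name in, IP addresses out.
# The hostname below is just an example.
hostname = "twitter.com"

# getaddrinfo asks the local resolver, which ultimately depends on the
# domain's authoritative nameservers (for Dyn's clients, Dyn's servers).
for family, _, _, _, sockaddr in socket.getaddrinfo(
    hostname, 443, proto=socket.IPPROTO_TCP
):
    # sockaddr[0] is the resolved IPv4 or IPv6 address for this hostname.
    print(family.name, sockaddr[0])
```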

Other maps provided by various business-to-business network infrastructure companies weren’t much more helpful. These maps seem to exist mostly to signal that the companies in question have lots of cool data and that it can be made into a flashy map, which might impress potential customers but doesn’t offer much insight for the layperson. For example, threat intelligence company Norse’s map appears to be mostly an homage to the Matthew Broderick movie WarGames: a constant barrage of DDoS attacks beaming like Space Invaders rockets across a world map. Akamai has an impressive 3D visualization that renders traffic as points beaming into the atmosphere. And website monitoring service Pingdom offers a dot map at such a far-out zoom level that it’s essentially useless for finding any pattern more meaningful than “outages happen in population centers, also there are a lot of outages.”

That being said, those population centers are only one piece of reading the patterns of outage maps. A lot of those higher-density population areas also happen to be home to a lot of internet exchanges (in industry parlance, IXes): buildings where different internet providers connect their networks to one another. When you type twitter.com into a browser, your computer’s request for content has to leave your ISP’s network and reach Twitter’s network, and Twitter’s response has to make the return trip. An internet exchange is basically where that handoff happens. TeleGeography has a really nice map of all of them.
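To make the handoff a little more concrete, here is a rough sketch, assuming a Unix-like machine with the standard traceroute utility on the PATH (Windows ships tracert instead), of watching a request hop out of your own ISP’s network. The hostname is just an example, and the reverse-DNS names of the hops vary by provider.

```python
import subprocess

# Trace the network path to an example hostname and print each hop.
# Assumes a Unix-like system with the `traceroute` binary installed;
# -m caps the trace at 20 hops.
result = subprocess.run(
    ["traceroute", "-m", "20", "twitter.com"],
    capture_output=True,
    text=True,
)

for line in result.stdout.splitlines():
    # Each line is one hop. The point where the hop names stop belonging to
    # your ISP and start belonging to another carrier is roughly where the
    # handoff happens -- often inside an internet exchange building.
    print(line)
```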

The locations of internet exchanges tend to follow population hubs because the routes of internet connectivity often follow older routes of telephone connectivity (which themselves often follow telegraph routes, railways, and highways). In turn, internet exchanges attract data centers and more network infrastructure. In dense coastal areas, some internet exchanges are also key switch points for data traveling across transoceanic submarine cables, as in the case of Manhattan’s 60 Hudson Street or Los Angeles’ One Wilshire. Traffic from devices used in Friday’s DDoS attack that sat across the Atlantic or Pacific most likely passed through buildings like these on its way into Dyn’s network.

Still, some of the overlaps between population centers and network outages are more a reflection of the number of connections in an area than the number of humans living there. The Portland, Oregon metro area has six IXes. So does Manhattan, with nine more IXes in the surrounding metro areas of New Jersey and Long Island. Dallas, Silicon Valley, and Seattle were all subsumed by the grim red cloud of No Tweets For You on yesterday’s outage maps.

What the maps don’t show are the anomalies of history that are now deeply entrenched in network geography. Ashburn, Virginia, which lies about 45 minutes outside of DC and is not exactly what most would call a major population hub (fewer than 50,000 people), has eight IXes. Suburban northern Virginia is a major chokepoint for internet traffic in part due to an accident of internet history, one that perpetuates itself because of that tendency of networks to follow other networks.

Dyn’s own highly stylized map of its DNS server locations suggests the company also followed the existing infrastructure, placing its equipment in proximity to major IX regions and infrastructure hubs.

Ironically, the company that offers some of the more legible visualizations and comprehensive analysis of where internet outages start and how they proliferate across a network is Dyn itself; since 2014, Dyn has maintained a research arm exclusively dedicated to studying internet performance data and observing global events and trends online. If you’ve seen a story pass through your timeline about internet outages in Turkey or Syria in the past year, the data behind that story very likely came from Dyn Research. That data about network traffic is one of the reasons so many companies come to Dyn: the company has, essentially, one of the most detailed maps of the internet, and it helps massive companies navigate that map efficiently, whether through DNS services or by helping them figure out where to locate their data centers.

And it’s not clear that such an attack would have been less devastating if Dyn were a smaller actor or its clients were rolling their own domain name servers. DNS is annoying and tedious and arguably the least sexy technical problem on the internet. “DNS is a 30-year-old protocol. We just manage it,” Kyle York, Dyn’s chief strategy officer, noted by phone. The company happens to manage it really, really well, which is in part what made yesterday’s DDoS so significant. “We’re the best in the world at this and it brought us to our knees,” York said. Not to downplay the significance of the attack, but Dyn “at its knees” in this context means “resuming normal operations in less than 24 hours.” How many other companies could say the same? If Twitter, GitHub, AWS, and all of Dyn’s other clients were separately, simultaneously hit with the same scale of DDoS attack on their nameservers, would they have done any better at fending it off than a single company that specializes in this stuff?
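You can check the delegation yourself. Here is a sketch of how you might look up which nameservers a domain points to, assuming the third-party dnspython package is installed (pip install dnspython); the domains are examples only, and the answers change over time as companies move providers or add backup DNS.

```python
# Query the NS records for a few example domains to see which nameservers
# they delegate to. Requires dnspython 2.x (pip install dnspython).
import dns.resolver

for domain in ["twitter.com", "github.com", "etsy.com"]:
    answer = dns.resolver.resolve(domain, "NS")
    # Each rdata entry is one authoritative nameserver for the domain.
    servers = sorted(str(rdata.target) for rdata in answer)
    print(domain, "->", ", ".join(servers))
```

When many high-profile domains delegate to nameservers run by the same provider, a single sufficiently large attack on that provider takes all of them down at once, which is exactly the pattern the outage maps were showing.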

October 21st, 2016 is a day that may or may not live in cyberwar infamy, but the DDoS was unexpectedly successful in uniting the community of mostly unnoticed and often forgotten people who maintain core internet infrastructure. It certainly united the 430-some employees at Dyn. York, sounding pretty tired (although, he apologetically informed me during the call, he was also wrangling a four-year-old on a Saturday morning), described the day as an “emotional rollercoaster” in which all hands were on deck. “Accountants were volunteering to help the customer service team, sales people were cheering on our NOC [network operations center] team,” York said. It’s not quite the thrilling war story of generations past, but it echoes the broader chorus of voices from the infrastructure, standards, and security communities, which have been warning about the possibility of attacks like this for years.

But Dyn’s not especially interested in being a cyberwar hero or a household name. As Dyn’s founder Jeremy Hitchcock tweeted last night, “We like to just run the Internet and stay out of the news.” Perhaps fortunately for Hitchcock, most of the media attention on this attack is turning more and more toward its source, an Internet of Things botnet orchestrated by unknown actors. Unfortunately for Hitchcock and for Dyn, reports about the attack suggest that IoT-based botnets aren’t going to be a one-time feature of DDoS attacks at this scale, and that might make it a lot harder for Dyn to just run the internet and stay out of the news.
