9.28.2014

Not in Kansas anymore: How the new Internet and cloud-based networking impacts network neutrality

In an article in the Norwegian newspaper Dagbladet.no on June 11th, 2014, Harald Krohg of Telenor calls for a discussion on net neutrality that starts with how the Internet actually works today, not how it worked some 20 years ago (largely an academic arena - it wasn't, but that's another story).

What might this "new" Internet actually be, and why is it any different from earlier versions? And what does this have to do with the network neutrality debate the article discusses? Let's look at some major changes in the fabric of the Internet over the last 7-8 years, and at some of the new developments that could change the piping of the Internet and how services over the Internet are delivered and consumed. And not least, how these developments impact "network neutrality".

I guess most people with some interest in Internet traffic exchange and flows have learned that the Internet is a series of interconnected networks operated by different network service providers, run autonomously and independently of each other, exchanging traffic with each other as peers (i.e. more or less equals) and covering whatever costs this incurs themselves. Or buying full routing and access to the global Internet through Internet transit.

This could be called the classic Internet, and it is more or less how the whole global Internet came about in the 80's and 90's: thousands of independent network service providers worldwide linking up with each other, exchanging traffic as peers and buying transit to networks or parties they didn't have immediate access to - or didn't want to manage through a direct peering relationship.

For a number of years the Internet developed as a many-to-many interconnected web of networks organized in structured tiers: tier 1 (large, international operators), tier 2 (typically regional, medium-sized operators) and tier 3 (small, local operators). Yet close to all parties were able to exchange traffic with each other through mutual peering or paid transit (usually to tier 1's or some tier 2's). The Internet was thousands of networks forming a joint whole, and everyone was or could be interconnected to peer and exchange traffic more or less equally.

This classic Internet started to change or unravel in the period leading up to 2007. The ATLAS Internet Study ("ATLAS Internet Observatory, 2009 Annual Report" (1)) found that whereas in 2007 "thousands of ASNs contributed 50% of content", by 2009 a mere "150 ASNs contribute 50% of all Internet traffic" (an ASN is an autonomous system number, identifying a set of networks under the routing control of a single network operator or service provider).

This meant that by 2009 a mere 150 network operators or service providers controlled or originated over 50% of global Internet traffic, and that the classic Internet as a series of interconnected peer networks was gone. These 150 or so dominant operators weren't just tier 1 Internet providers and international telecom operators any longer, but various US Internet web site giants, traffic aggregators, advertisers and distributors like Google, Comcast, Limelight and Akamai - plus, at the time, P2P traffic.

As an example, Akamai, the world's largest CDN operator with some 150,000 servers in 92 countries inside over 1,200 networks, now in 2014 claims to deliver between 15-30% of all Web traffic worldwide (what counts as Web traffic in this statement? HTTP-based for sure, but most likely HTTP-based website traffic only, and not adaptive streaming over the HTTP protocol).

In a 2011 follow-up to the ATLAS Internet Observatory report, named "Internet Traffic Evolution 2007 - 2011" (2), Craig Labovitz showed that by 2011 the top 10 network operators and Internet service providers alone accounted for close to 40% of global Internet traffic, clearly showing that large-scale traffic concentration and aggregation by various Internet giants was in full swing on the global Internet. These Internet giants and their audiences were also mostly American, with the US "growing in both absolute traffic volume and as a weighted average percentage of Internet traffic" (from 40% to 50% by average aggregate traffic volume in 2011).

So in 2009 some 150 operators of various sizes and forms originated some 50% of Internet traffic, while by 2011 it took only 10 operators to control close to 40%.

One can safely conclude that the Internet by now is dominated by a handful of large Internet giants - who, it turns out, all operate worldwide data centers of their own, and all rely on content distribution networks or infrastructures residing increasingly inside local, last-mile access networks.

This post isn't about the build-out of and reliance upon large, distributed data centers and CDN infrastructures by the parties dominating Internet traffic volumes as such, but rather about how these build-outs can be viewed as a way to pre-position content and enhance the reachability of services by the Internet giants - and as a way to make sure that users get their content, adverts and services with far greater quality of experience than plain Internet delivery from some far-away host would give. It would be a far stretch to call traffic that is managed, shaped, manipulated, re-routed, load balanced, proxied, pre-positioned or cached by terminal type, location, time of day, web site type and so on neutral in delivery or reception.

As noted above, Akamai CDN clusters sit inside some 1,200 local and global networks, with Akamai URLs showing up in many places inside Facebook pages, as well as in Microsoft and Apple services for instance. In (2), it is shown that "as of February 2010, more than 60% of Google traffic does not use transit", pointing to the number of ISP deployments (in over 100 countries) and the effectiveness of the Google Global Cache platform (claimed hit rate of 70-90% for local cache delivery) for YouTube and other Google domains. These distributed cache and CDN deployments, and their associated, integrated client-side players, can perform most of the network functionality and traffic shaping mentioned in the previous paragraph.
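Why do in-network caches reach hit rates like these? Because demand for content is heavily popularity-skewed: a small catalog of items accounts for most requests. Here is a minimal, purely illustrative sketch of that effect - a toy LRU cache standing in for an edge node, with made-up catalog sizes and a rough Zipf-like popularity curve (none of this reflects Google's or Akamai's actual systems):

```python
from collections import OrderedDict
import random

class EdgeCache:
    """Toy LRU cache standing in for a CDN edge node (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def fetch(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)     # mark as recently used
            self.hits += 1
            return "edge"                   # served locally, no transit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[key] = True
        return "origin"                     # cache miss: fetched over transit

    def hit_rate(self):
        return self.hits / self.requests

# Popularity-skewed demand (a rough Zipf-like stand-in): a small set of
# items accounts for most requests, which is what makes edge caching pay off.
random.seed(42)
cache = EdgeCache(capacity=100)
catalog = list(range(1000))
weights = [1 / (rank + 1) for rank in range(1000)]
for _ in range(20000):
    item = random.choices(catalog, weights=weights)[0]
    cache.fetch(item)
print(f"edge hit rate: {cache.hit_rate():.0%}")
```

Even with a cache holding only 10% of the catalog, the skewed demand lets the edge serve well over half of all requests locally - which is the whole economic point of placing caches inside access networks.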

Since 2011 Netflix has of course become a significant source of traffic in the markets it is present in, making up close to 34% of all broadband traffic in the US for instance. Netflix also relies on a CDN infrastructure of its own design to carry and distribute its movies and videos, the Netflix Open Connect CDN, which is offered for free (or "free") to regional network operators and ISPs to locate inside their networks. As with other CDNs, Netflix Open Connect is meant to save money (for Netflix, for intermediate networks, for access networks) - and to increase the quality of experience, i.e. QoE, for the end-user. Minimizing video buffering and video load times and achieving the best possible video rate towards customers equals increased video rentals per month and higher ARPUs.

It's interesting to parallel the development of the Internet giants, their distributed data centers worldwide and their reliance on CDNs to distribute traffic and content for maximum quality of experience and performance, with two other Internet phenomena developing in the 2007 - 2011 period, namely Internet network neutrality and cloud computing (and the cloud-based networking coming out of the latter).

Since US telco CEO Edward Whitacre uttered the famous "use my pipes free" remark in 2005 (see note a), bringing the earlier work on network neutrality by Tim Wu to the fore ("Network Neutrality, Broadband Discrimination", 2003) and in turn strongly influencing and leading up to the FCC's 2010 US net neutrality rules, the real action and infrastructure investments on the Internet, one could claim, took place elsewhere.

Or to re-phrase: while the public debate was quite concerned about perceived non-network-neutral activities and shenanigans taking place with local ISPs and telcos, the Internet giants were busy building de facto quality-of-service traffic management and content manipulation capabilities behind the scenes - capabilities very much focused on bringing the user a vastly improved quality of experience and performance compared to what the plain Internet would enable.

The network neutrality debate boiled down to, I would claim, the position that all traffic delivery was to be treated equally, neutrally and dumbly ("neutral" has a better ring to it than dumb, but the effect is the same...) for all users and originators - without traffic shaping, without differentiated quality of service towards the user and without any service discrimination. So-called last-mile Internet access providers were seen as the main obstacle, or the most likely culprits to screw up Internet network neutrality, and much of the debate was held, I would claim, on the assumption that your local telco would end up on the opposite end of network neutrality if it could have its way.

This level-playing-field approach was very much in the interest of first-mile and middle-mile operators like the aforementioned Internet giants, which usually didn't have last-mile Internet access networks of their own and relied on a range of 3rd-party local access providers to bring their content to the end-users. For them, having one dumb or standard TCP/IP pipe to the end-user would be vastly preferable to having local QoS access schemas all over the place from local besserwissers.

So, mini summary: having a local access schema across the Internet that could be treated as dumb and plain as possible, without any local or per-provider QoS traffic classification, was and is a great benefit if you are looking to distribute your content and traffic as uniformly as possible. Whatever real need you have for traffic management, shaping and positioning can be met by

  1. operating distributed DCs and CDNs as close to the main volume markets as possible, 
  2. controlling the end-user client with your own player or portal, and
  3. logging client and user behaviour and usage patterns into your own big-data stores to review and optimize service delivery.
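The three steps above can be sketched in a few lines. This is a deliberately simplified toy, with invented edge locations and a plain list standing in for a big-data store - no real provider's footprint, player or telemetry pipeline is implied:

```python
import json
import math

# Step 1: hypothetical pre-positioned edge/CDN locations (lat, lon).
EDGES = {
    "us-east": (40.7, -74.0),
    "eu-west": (51.5, -0.1),
    "ap-south": (1.35, 103.8),
}

def nearest_edge(user_lat, user_lon):
    """Steer the user to the closest edge (crude flat-earth distance)."""
    def dist(edge):
        lat, lon = EDGES[edge]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGES, key=dist)

# Step 3: stand-in for a big-data store of client telemetry.
event_log = []

def client_request(user_id, user_lat, user_lon, title, device):
    """Step 2: the provider's own player controls the request end-to-end,
    picking the edge and logging the event for later optimization."""
    edge = nearest_edge(user_lat, user_lon)
    event_log.append(json.dumps({
        "user": user_id, "title": title, "edge": edge, "device": device,
    }))
    return edge

edge = client_request("u1", 52.4, 4.9, "some-show", "tablet")  # Amsterdam-ish user
print(edge)  # -> eu-west
```

None of this requires any cooperation from the last-mile access network: the dumb pipe carries the packets, while routing decisions, quality optimization and measurement all happen in the provider's own client and infrastructure.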

Since 2007-2008 we have also witnessed the explosive growth of cloud services for anywhere, anytime, as-a-Service IT delivery. The virtualisation of servers and compute power gave way to radical economies of scale for the initial set of virtual private server providers (VPS providers), some of whom evolved into true cloud IT service providers when they added improved self-serve configuration support, pay-as-you-go billing models, ubiquitous network access and so-called location-transparent resource pooling. Soon a range of network services and functions were also offered on top of virtual servers, and not just on dedicated network hardware any longer, paving the way for virtualization of network functionality.

If one wants a look at the future of Internet infrastructure and networking, there are two key developments that should be studied and understood:
  1. the development of network virtualization via (increasingly standardized) network functions virtualization (NFV) and software-defined networking (SDN) for network control, inside clouds, backbones or data centers, and
  2. Internet bypass and private net developments.

The virtualization of network functionality with NFV - for IP routing, caching, proxying, firewalls, NAT and traffic shaping as well as application acceleration - will come to mean just as much for the continued development of IP and Internet networking as server virtualization has come to dictate and govern IT infrastructure and cloud-based IT production. Network virtualization will over time remove the need for a series of dedicated networking appliances on local customer accesses (in the corporate market) or in the data center, and will move advanced traffic management and networking functionality into the cloud, which in turn will lower networking costs for both the corporate and residential markets.

And not least, it will lower the threshold for giving companies and end-users access to new and advanced network functionality for extended traffic manipulation, shaping and acceleration - for instance per application type, IP port, terminal type, time-of-day parameters, traffic tariffs and more.
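To make concrete the kind of per-flow functionality that NFV moves from dedicated boxes into software, here is a classic token-bucket shaper in a few lines of Python - a standard textbook algorithm, shown with illustrative numbers, not any vendor's implementation:

```python
class TokenBucket:
    """Token-bucket traffic shaper: a flow may burst up to `burst_bytes`,
    then is held to `rate_bytes_per_s` on average. The kind of per-flow
    rate limiting NFV implements in software (illustrative numbers only)."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0             # timestamp of the previous packet

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True     # packet conforms: forward it
        return False        # packet exceeds the shaped rate: drop or queue

# Shape a flow to 1000 B/s with a 1500 B burst allowance.
shaper = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(shaper.allow(1500, now=0.0))   # True: initial burst allowed
print(shaper.allow(1500, now=0.1))   # False: only ~100 B refilled
print(shaper.allow(1000, now=1.1))   # True: ~1100 B available again
```

Run as a virtual function per customer, per application or per time of day, this is exactly the "extended traffic manipulation" described above - no dedicated shaping appliance required.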

In light of current network neutrality approaches and "all users and all traffic shall be treated equally and without discrimination", one can argue that the whole purpose of network virtualization, NFV (and SDN), is exactly to enable and perform broad, active and wide-ranging re-arrangement of network traffic flows, traffic manipulation of all sorts and application discrimination (positive or negative) - whether one wants it or not. Take for instance this spring's introduction of Google Andromeda, the codename for Google's network virtualization stack, where it was announced that "Customers in these zones will automatically see major performance gains in throughput over our already fast network connections", resulting from applying "distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls". With more enhancements coming... Who says no to more performance?
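The mechanism behind such per-application treatment is easy to see in the abstract: SDN controllers program match/action flow tables, and each rule can single out an application and assign it a fate. The following is a sketch of that general OpenFlow-style idea only - the port numbers and action names are invented for illustration, and this is not Andromeda's or any vendor's actual API:

```python
# An abstract match/action flow table (a sketch, not any real controller API).
# Rules are checked in order; the first match decides the packet's fate.
FLOW_TABLE = [
    {"match": {"dst_port": 443},  "action": "fast_path"},   # prioritized app
    {"match": {"dst_port": 6881}, "action": "rate_limit"},  # deprioritized app
    {"match": {},                 "action": "default"},     # catch-all rule
]

def classify(packet):
    """Return the action for a packet, represented as a dict of header fields."""
    for rule in FLOW_TABLE:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]

print(classify({"dst_port": 443, "src": "10.0.0.1"}))   # fast_path
print(classify({"dst_port": 6881}))                     # rate_limit
print(classify({"dst_port": 25}))                       # default
```

Whether a given rule counts as "performance gain" or "application discrimination" is purely a matter of perspective - the table expresses both with the same primitive.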

With websites, Internet services and apps increasingly residing inside one or several clouds, there is another side to this: as traffic for access to those same websites, apps, Internet services and of course SaaS-based applications moves out of the basic Internet and into the cloud providers' cloud fabrics, their own fiber nets and data centers, the usual "level" playing field and delivery mechanisms of the Internet, with their various degrees of assumed network neutrality, no longer apply.

Once the traffic is inside a cloud provider's network and cloud service, network access and delivery is more akin to an Internet overlay or a private, virtual network domain. Most of the larger cloud providers now provide their own direct, private (fiber) access to their cloud services from major Internet peering exchanges, data centers or telco backbones (for instance AWS Direct Connect, MS Azure ExpressRoute, IBM SoftLayer Direct Link etc.) with accompanying "better than Internet" SLAs and support for QoS tagging of traffic to and from the cloud service in question.
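QoS tagging of IP traffic is typically done by setting a DSCP code point in the IP header, which a private cloud on-ramp can map to a real service class - whereas on the public Internet the tag is usually ignored or stripped. A minimal sketch using the standard IP_TOS socket option (shown for a Linux-style stack; the EF code point value is from the DiffServ standard, but how any given network honors it is entirely up to that network):

```python
import socket

# Mark outbound traffic with a DSCP code point: here EF ("Expedited
# Forwarding", DSCP 46), commonly used for latency-sensitive traffic.
DSCP_EF = 46
tos = DSCP_EF << 2   # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Every datagram sent on this socket now carries the EF marking; whether
# each network along the path honors it is a matter of local policy.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

The point is the asymmetry: the same marked packet gets best-effort treatment over the plain Internet but a contractual service class over a dedicated cloud access product - which is precisely the "better than Internet" SLA being sold.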

The use of CDN networks already went a long way towards Internet overlay and bypass, and this trend will accelerate even more with the introduction of private, dedicated cloud access, which in turn undermines or bypasses traditional net-neutral traffic delivery and traffic management over the Internet. So far there have been no calls for network neutrality and "neutral" traffic management to/from and inside cloud networks: cloud networking policy, management and performance levels are governed by the cloud provider only.

The Internet bypass phenomenon is set to grow, and a number of companies are already positioning themselves as bypass-only providers, for instance International Internet Exchange (see also this article).

Compared to traditional Internet peering and transit, it's also interesting to observe the pricing model most cloud providers use for ingress and egress traffic to and from a cloud service: it costs close to nothing, or is free, to send data and traffic into or across the cloud provider's infrastructure, while all exit traffic from the same cloud - for instance to the Internet, to other network service providers or to other clouds - has an associated cost and metered delivery (i.e. a cost per GB of transfer volume per month). Not many ISPs would have gotten away with such a pricing policy; it runs contrary to traditional Internet peering models, and as the line between cloud providers and ISPs gets blurred, it may be seen as discrimination in business and market terms between cloud providers and ISPs. ISPs should rename themselves cloud-provider-something and charge for all exit traffic to customers and peers!
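The asymmetry is easy to put in numbers. A toy cost model of the scheme described above - the per-GB price and free tier here are purely illustrative, loosely patterned on public cloud list prices of the time, not any specific provider's tariff:

```python
def monthly_transfer_cost(ingress_gb, egress_gb,
                          egress_price_per_gb=0.09, free_egress_gb=1):
    """Toy model of asymmetric cloud transfer pricing: ingress is free,
    egress is metered per GB above a small free tier (illustrative
    numbers only, not any real provider's price list)."""
    ingress_cost = 0.0  # sending data *into* the cloud costs nothing
    billable_gb = max(0, egress_gb - free_egress_gb)
    return ingress_cost + billable_gb * egress_price_per_gb

# A service that pulls in 500 GB and serves 500 GB back out per month
# pays only for the outbound half:
print(round(monthly_transfer_cost(ingress_gb=500, egress_gb=500), 2))  # 44.91
```

Contrast this with settlement-free peering, where roughly balanced traffic in both directions is exchanged at no per-GB charge at all: under the cloud model the same balanced flow always generates revenue for the cloud provider.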

In summary, it doesn't take a crystal ball to predict that the Internet giants using CDN networks and cloud delivery will only become bigger and increase their share of Internet traffic even more over the coming years, while the share of traffic generated by or originating with ISPs and telcos will decrease. And in parallel, cloud-based service and traffic delivery will increase, using large-scale network virtualization and traffic manipulation - with traditional network neutrality policies most likely not making it into the clouds at all.

Note a): "Google, MS, Vonage... Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. ... Why should they be allowed to use my pipes? The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!".

Erik Jensen,
28.09.2014