Writings so far

11.19.2014

Going cloud, going offshore - it's all about IT automation

Two major trends impacting enterprise IT worldwide are:

  • The move towards cloud-based IT service delivery (away from dedicated or virtualized server deployments inside customers' data centers or with 3rd party DC operators), and
  • Global sourcing of IT operations and service delivery, or off-shoring, i.e. handing over IT operations and delivery to "offshore" IT suppliers.
Both of them, I would argue, are concerned with or aim to reduce IT cost through increased IT automation.

The journey from legacy, classic server environments to cloud-based IT service delivery models can be depicted as in fig 1 below.




Here, various IT systems at an enterprise customer, or let's say at an IT service provider, are in various stages: still running in classic, legacy IT environments, running on virtualized server platforms, or having made the leap to a VM IaaS/PaaS setup with a cloud service provider.

In legacy IT environments, typically one IT system runs on a dedicated hardware and database platform, and IT budgeting is usually done as upfront CAPEX with a 3-4 year write-off period.  Legacy IT environments are based on IT systems popular 10-20 years ago, i.e. Sun servers running Solaris, HP servers running HP-UX, IBM servers running AIX etc., or even pre-2012 Microsoft Windows Server on COTS Intel servers.

Some of these IT systems, or the business applications running on them, have been able to make the leap onto virtualized IT platforms, typically as part of the major server consolidation projects most enterprise IT departments have been running since the dot-com era.  The leading candidate is of course VMware, with free hypervisors like KVM and Xen, and additional hypervisors from Citrix, Parallels and others also being used.

When a critical mass of virtualized servers was reached, the creation and utilization of server instances could be treated as OPEX on a per-month or per-week basis, with the cost of each new VM being incremental.  Likewise, when a critical mass of CPU cores across a number of physical servers was reached, it became easy to provision VMs from a pool of available CPU cores, to over-commit on VM cores scheduled to go into production, or to assign cores to VMs for dedicated VM resources and instances - at a higher price than pool VMs.
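The pool arithmetic behind this is simple. Below is a minimal sketch; the host count, core count and over-commit ratio are hypothetical, illustrative numbers only.

    # Back-of-the-envelope vCPU pool sizing with over-commit (illustrative numbers only).

    def vcpu_pool(physical_hosts: int, cores_per_host: int, overcommit_ratio: float) -> int:
        """Total vCPUs that can be offered from a pool of physical hosts."""
        return int(physical_hosts * cores_per_host * overcommit_ratio)

    # Hypothetical pool: 20 hosts with 16 cores each.
    shared = vcpu_pool(physical_hosts=20, cores_per_host=16, overcommit_ratio=4.0)     # 1280 shared vCPUs
    dedicated = vcpu_pool(physical_hosts=20, cores_per_host=16, overcommit_ratio=1.0)  # 320 dedicated vCPUs
    print(f"Shared pool: {shared} vCPUs, dedicated: {dedicated} vCPUs")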

With virtualized servers, server and VM provisioning and configuration could be dramatically automated, and lead times for VMs were/are dramatically lower than for installing dedicated, physical servers - seconds and minutes versus days and weeks.  VM re-configuration could also be done on the fly, with "instant" re-sizing of VM CPU cores, memory and disk-space allocation.  A significantly higher degree of automation and IT production was achieved with virtualized servers, leading to lower IT cost overall (per VM, per IT employee, per production unit etc.).
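To make the automation point concrete, here is a minimal sketch of VM provisioning and re-sizing driven entirely by API calls. The endpoint, payload fields and token are hypothetical, invented for illustration; real platforms (OpenStack, vSphere, the public clouds) expose comparable calls.

    # Minimal sketch of automated VM provisioning against a hypothetical REST API.
    # The endpoint, payload fields and token are invented for illustration.
    import requests

    API = "https://vmm.example.internal/api/v1"   # hypothetical provisioning endpoint
    TOKEN = "REPLACE_ME"
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def create_vm(name: str, vcpus: int, ram_gb: int, disk_gb: int) -> str:
        """Create a VM in seconds instead of ordering physical hardware."""
        resp = requests.post(f"{API}/vms", headers=HEADERS, timeout=30,
                             json={"name": name, "vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb})
        resp.raise_for_status()
        return resp.json()["id"]

    def resize_vm(vm_id: str, vcpus: int, ram_gb: int) -> None:
        """'Instant' re-sizing: a single API call instead of a hardware change order."""
        requests.patch(f"{API}/vms/{vm_id}", headers=HEADERS, timeout=30,
                       json={"vcpus": vcpus, "ram_gb": ram_gb}).raise_for_status()

    vm_id = create_vm("app-server-01", vcpus=2, ram_gb=8, disk_gb=100)
    resize_vm(vm_id, vcpus=4, ram_gb=16)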

Some IT workloads and IT systems have made the transition onto private or public cloud infrastructures, leading to an even higher degree of IT automation than traditionally available from either virtualized or legacy IT environments.  Between highly virtualized, automated IT environments and cloud-based IT delivery there isn't really a clear-cut switch-over or demarcation line, but cloud-based IT delivery is generally seen as having a higher degree of auto-scaling and capacity on demand than a single-location VM environment, plus a higher degree of self-service support and IT management options than an on-prem solution.  IT departments did server virtualization for themselves and to meet corporate cost targets, while cloud IT delivery is available to a wider audience with an associated price plan, service catalog and SLA, accessible in a way not always seen with corporate IT departments.

For many end-users, business applications delivered as a SaaS solution represent the state of the art in automated IT delivery: "just" insert the data and press play.  Cloud IaaS or PaaS delivery, meanwhile, represents the state of the art in IT automation for IT departments and developers.

In many ways, outsourcing and offshoring of IT operations and service delivery can also be seen as an IT automation drive.

If we apply an onshore (on-prem) and offshore dimension to the illustration above, we get the lineup depicted in figure 2.






Corporate IT systems are, in addition to being in various states of "physical to cloud" server platforms, in different states of being managed and operated onshore (on-prem with the customer, or with a 3rd party local IT provider) or offshore, seen from the customer's point of view, with an offshore IT provider performing day-to-day operations, maintenance and incident management.

This day-to-day operations and maintenance work is performed within well-defined work packages, by personnel holding specific module certifications on various Microsoft, Oracle, SAP, HP, Red Hat, EMC/VMware etc. IT platforms and systems.  In turn this means that IT management is decoupled from specific personal skill-sets or knowledge: one set of work tasks, let's say on an Oracle DB, can be performed interchangeably by different Oracle-trained personnel, and one reaches a new level of IT automation where the personal/personnel factor is taken out of the IT operations equation.  Work tasks get increasingly specific and well specified, customers avoid customer-specific adaptations and developments as far as possible, i.e. IT work and delivery gets boxed in and turned into work modules specified down to the minute.
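One way to picture this modularization is as a standardized work-package record that any certified engineer can pick up. The sketch below is purely illustrative; the field names, certification and durations are made up, not taken from any actual provider's catalog.

    # Hypothetical representation of a standardized, personnel-independent work package.
    # Field names, certification and durations are invented to illustrate the idea.
    from dataclasses import dataclass, field

    @dataclass
    class WorkPackage:
        package_id: str
        platform: str                  # e.g. "Oracle DB 11g", "RHEL 6", "VMware vSphere"
        task: str                      # the standardized task, not a person
        required_certification: str    # any engineer holding it can execute the package
        estimated_minutes: int         # specified down to the minute
        runbook_steps: list = field(default_factory=list)

    oracle_backup_check = WorkPackage(
        package_id="ORA-OPS-017",
        platform="Oracle DB 11g",
        task="Verify nightly RMAN backup completion",
        required_certification="Oracle Certified Professional, Database Administrator",
        estimated_minutes=15,
        runbook_steps=["Check RMAN log", "Verify backup set size", "Update ticket"],
    )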

Put another way, part of the cost benefit of offshore IT delivery is down to the modularization of IT work tasks and IT operations that offshore providers have achieved compared to in-house IT.

Thus the transition B in figure 2 is part of an overall mega-trend that uses IT automation to reach lower IT production costs, and it will be interesting to see how the IT service delivery business unfolds between offshore IT providers and cloud-based IT delivery.  Or, more likely, how offshore IT providers use cloud-based delivery options (their own private cloud services, a mix of public clouds) to reach new IT automation levels and increased market share.


Erik Jensen, 19.11.2014

9.28.2014

Not in Kansas anymore: How the new Internet and cloud-based networking impacts network neutrality

In an article in the Norwegian newspaper Dagbladet.no on June 11th 2014, Harald Krohg of Telenor calls for a discussion on net neutrality that starts with how the Internet actually is today, and not how it used to be some 20 years ago (supposedly largely an academic arena - it wasn't, but that's another story).

What might this "new" Internet actually be, and why is it any different from earlier versions? And what has this to do with the network neutrality debate that the article in question discussed? Let's look at some major changes in the fabric of the Internet over the last 7-8 years or so, and at some of the new developments that could potentially change the piping of the Internet and how services over the Internet are delivered and consumed.  And not least, how these new developments impact "network neutrality".

I guess most people with some interest in Internet traffic exchange and flows have learned that the Internet is a series of interconnected networks operated by different network service providers, run autonomously and independently of each other, exchanging traffic with each other as peers (i.e. more or less equals) and covering whatever costs this might incur themselves.  Or buying full routing and access to the global Internet through Internet transit.

This could be called the classic Internet, and was more or less how the whole global Internet came about in the 80's and 90's: thousands of independent network service providers worldwide linking up with each other, exchanging traffic as peers and buying transit to networks or parties they didn't have immediate access to - or didn't want to manage with a direct peering relationship.

For a number of years the Internet developed as a many-to-many interconnected web of networks within structured tiers: tier 1 large international operators, tier 2 regional, medium-sized network operators, and tier 3 small, local network operators, organized in a structured network hierarchy.  Still, close to all parties were able to exchange traffic with each other through mutual peering or paid transit (usually bought from tier 1s or some tier 2s).  The Internet was thousands of networks forming a joint whole, and everyone was or could be interconnected to peer and exchange traffic more or less equally.

This classic Internet started to change, or unravel, in the period leading up to 2007: the "ATLAS Internet Observatory, 2009 Annual Report" (1) found that whereas in 2007 "thousands of ASNs contributed 50% of content, in 2009, 150 ASNs contribute 50% of all Internet traffic" (an ASN is an autonomous system, i.e. a routing domain for a set of networks controlled by one network operator or service provider).

This meant that by 2009 a mere 150 network operators or service providers controlled or originated over 50% of global Internet traffic, and that the classic Internet as a series of interconnected peer networks was gone.  These 150 or so dominant network operators weren't just tier 1 Internet providers and international telecoms operators any longer, but various US Internet web site giants, traffic aggregators, advertisers and distributors like Google, Comcast, Limelight and Akamai and, at the time, P2P traffic.

As an example, Akamai, the world's largest CDN operator with some 150,000 servers in 92 countries within over 1,200 networks, now in 2014 claims to deliver between 15-30% of all Web traffic worldwide (what is web traffic in this statement? HTTP-based for sure, but most likely HTTP-based website traffic only and not adaptive streaming over the HTTP protocol).

In a follow-up study to the ATLAS Internet Observatory report, "Internet Traffic Evolution 2007 - 2011" (2), Craig Labovitz showed that by 2011 the top 10 network operators and Internet service providers alone accounted for close to 40% of global Internet traffic, clearly showing that large-scale traffic concentration and aggregation by various Internet giants was in full swing on the global Internet.  These Internet giants and their audience were also mostly American, with the US "growing in both absolute traffic volume and as a weighted average percentage of Internet traffic" (growing from 40% to 50% by average aggregate traffic volume in 2011).

So in 2009 some 150 operators of various sizes and forms originated some 50% of Internet traffic, while by 2011 it took only 10 operators to control close to 40%.

So one can safely conclude that the Internet by now is dominated by a few large Internet giants - who, it turns out, all have worldwide data centers they operate themselves, and all rely on content distribution network operators or infrastructures residing increasingly inside local, last-mile access operators' networks.

This post isn't about the build-out of and reliance upon large, distributed data centers and CDN infrastructures by the parties dominating Internet traffic volumes as such, but rather about how these build-outs can be viewed as a way to pre-position and enhance the reachability of the Internet giants' services and content, and as a way to make sure that users get their content, adverts and services with far greater quality of experience than plain Internet delivery from some far-away host.  And it would be a far stretch to call traffic that is managed, shaped, manipulated, re-routed, load-balanced, proxied, pre-positioned or cached by terminal type, location, time of day, web site type etc. by these players neutral in delivery or reception.

As noted above, Akamai CDN clusters sit inside some 1,200 local and global networks, with Akamai URLs showing up in lots of places inside Facebook pages, as well as in Microsoft and Apple services for instance.  In (2) it's shown that "as of February 2010, more than 60% of Google traffic does not use transit", pointing to the number of ISP deployments (in over 100 countries) and the effectiveness of the Google Global Cache platform (claimed hit rate of 70-90% for local cache delivery) for YouTube and other Google domains.  These distributed cache and CDN deployments, and their associated, integrated client-side players, can perform most of the network functionality and traffic shaping mentioned in the previous paragraph.
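The transit-bypass effect of a high local cache hit rate is easy to illustrate. A minimal sketch, with purely hypothetical traffic volumes (the hit rates echo the 70-90% range quoted above; nothing here is an actual Google or Akamai figure):

    # Rough illustration of how a local cache hit rate reduces transit traffic.
    # Volumes are hypothetical; hit rates mirror the 70-90% range quoted above.

    def transit_volume(total_tb: float, cache_hit_rate: float) -> float:
        """TB that still has to come in over transit/peering after local cache hits."""
        return total_tb * (1.0 - cache_hit_rate)

    monthly_demand_tb = 500.0   # a hypothetical ISP's monthly demand for one content source
    for hit_rate in (0.0, 0.7, 0.9):
        print(f"hit rate {hit_rate:.0%}: {transit_volume(monthly_demand_tb, hit_rate):.0f} TB over transit")
    # hit rate 0%: 500 TB, 70%: 150 TB, 90%: 50 TB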

Since 2011 Netflix has of course become a significant source of traffic in the markets they are present in, making up close to 34% of all broadband traffic in the US for instance.  And Netflix also relies on a CDN infrastructure of their own design to carry and distribute Netflix movies and videos, the Netflix Open Connect CDN, which is offered for free (or "free") to regional network operators and ISPs to locate inside their networks.  As with other CDNs, Netflix Open Connect is meant to save money (for Netflix, for intermediate networks, for access networks) - and to increase the quality of experience, i.e. QoE, for the end-user.  Minimizing video buffering and video load times and achieving the best video rate possible towards customers equals increased video rentals per month and higher ARPUs.

It's interesting to parallel the development of the Internet giants, their distributed data centers worldwide and their reliance on CDNs to distribute traffic and content for maximum quality of experience and performance, with two other Internet phenomena developing in the 2007-2011 period, namely Internet network neutrality and cloud computing (and the cloud-based networking coming out of the latter).

Since US telco CEO Edward Whitacre uttered the famous "not for free on my pipes" in 2005 (see note a), bringing Tim Wu's earlier work on network neutrality to the fore ("Network Neutrality, Broadband Discrimination", 2003) and in turn strongly influencing the 2010 passage of the US net neutrality rules, the real action and infrastructure investment on the Internet, one could claim, took place elsewhere.

Or to re-phrase: while the public debate was quite concerned about perceived non-network-neutral activities and shenanigans taking place with local ISPs and telcos, the Internet giants were busy building de facto quality-of-service traffic management and content manipulation capabilities behind the scenes - capabilities that in turn were very much focused on bringing the user a vastly improved quality of experience and performance compared to what the plain Internet would enable them to do.

The network neutrality debate boiled down to, I would claim, the position that all traffic delivery was to be treated equally, neutrally and dumbly ("neutral" has a better ring to it than dumb, but the effect is the same...) for all users and originators, without traffic shaping, without differentiated quality of service towards the user and without any service discrimination.  So-called last-mile Internet access providers were seen as the main obstacles, or the most likely culprits, set to screw up Internet network neutrality, and much of the network neutrality debate was held, I would claim, on the assumption that your local telco would end up on the opposite end of network neutrality if it could have its way.

This level-playing-field approach was very much in the interest of first-mile and middle-mile operators like the aforementioned Internet giants, who usually didn't have last-mile Internet access networks of their own and were relying on a range of 3rd party local access providers to bring their stuff and content to the end-users.  For them, having one dumb or standard tcp/ip pipe to the end-user would be vastly preferable to having local QoS access schemes all over the place from local besserwissers.

So, a mini summary: having local access schemes across the Internet that can be treated as dumb and plain as possible, without any local or per-provider QoS traffic classification, was and is a great benefit if you are looking to distribute your content and traffic as uniformly as possible.  Any need you have for traffic management, shaping and positioning can instead be met by (a minimal client-side sketch follows the list below)

  1. having distributed DCs and CDNs as close to the main volume markets as possible, 
  2. controlling the end-user client with your own player or portal, and
  3. logging client and user behaviour and usage patterns into your own big-data stores to review and optimize service delivery.
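The sketch below illustrates points 1 and 2 in miniature: a client-side picker that measures TCP connect time to a set of candidate edge hosts and chooses the fastest, with the measurements themselves feeding point 3 as telemetry. The edge hostnames are hypothetical; real CDN clients use DNS, anycast and far richer signals.

    # Client-side edge selection sketch: measure TCP connect time to candidate
    # edges and pick the fastest. Hostnames are hypothetical placeholders.
    import socket
    import time

    CANDIDATE_EDGES = ["edge-eu.example-cdn.net", "edge-us.example-cdn.net", "edge-ap.example-cdn.net"]

    def connect_time_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return float("inf")   # unreachable edges are never chosen

    def pick_edge(edges):
        timings = {host: connect_time_ms(host) for host in edges}
        best = min(timings, key=timings.get)
        return best, timings      # the timings double as telemetry to log and optimize on

    print(pick_edge(CANDIDATE_EDGES))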

Since 2007-2008 we have also witnessed the explosive growth of cloud services for anywhere, anytime, as-a-Service IT delivery.  The virtualization of servers and compute power gave rise to radical economies of scale for the initial set of virtual private server (VPS) providers, some of whom evolved into true cloud IT service providers when they added improved self-service configuration support, pay-as-you-go billing models, ubiquitous network access and so-called location-transparent resource pooling.  Soon a range of network services and functions was also offered on top of virtual servers, and not just on dedicated network hardware any longer, paving the way for the virtualization of network functionality.

If one wants to have a look at the future of Internet infrastructure and networking, there are two key developments that should be studied and understood:
  1. The development of network virtualization via (increasingly standardized) network functions virtualization (NFV) and software-defined networking (SDN) for network control, inside clouds, backbones or data centers, and
  2. The development of Internet bypass and private networks.

The virtualization of network functionality with NFV - for IP routing, caching, proxying, firewalls, NAT and traffic shaping as well as application acceleration - will come to mean just as much for the continued development of IP and Internet networking as server virtualization has come to mean for IT infrastructure and cloud-based IT production.  Network virtualization will over time remove the need for a series of dedicated networking appliances on local customer accesses (corporate market) or in the data center, and will move advanced traffic management and networking functionality into the cloud, which in turn will lower networking costs for both the corporate and residential market.

And not least, it will lower the threshold for giving companies and end-users access to new and advanced network functionality for extended traffic manipulation, shaping and acceleration, for instance per application type, IP port, terminal type, time-of-day parameters, traffic tariffs and more.
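As one concrete, software-only example of the kind of traffic shaping that NFV moves off dedicated hardware, the sketch below applies a Linux tc token-bucket rate limit from Python. The interface name and rates are assumptions for illustration, and running it requires root on a Linux host.

    # Apply a token-bucket rate limit to an interface via Linux tc.
    # Interface name and rates are assumptions; requires root on Linux.
    import subprocess

    def shape_interface(dev: str = "eth0", rate: str = "50mbit",
                        burst: str = "32kbit", latency: str = "400ms") -> None:
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", dev, "root", "tbf",
             "rate", rate, "burst", burst, "latency", latency],
            check=True,
        )

    def clear_shaping(dev: str = "eth0") -> None:
        subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)

    shape_interface("eth0", rate="50mbit")   # cap the interface at roughly 50 Mbit/s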

In light of current network neutrality approaches and "all users and all traffic shall be treated equally and without discrimination", one can argue that the whole purpose of network virtualization, NFV (and SDN) is exactly to enable and perform broad, active and wide-ranging re-arrangement of network traffic flows, traffic manipulation of all sorts and application discrimination (positive or negative) - whether one wants it or not.  Take for instance the introduction this spring of Google Andromeda, the codename for Google's network virtualization stack, where it was announced that "Customers in these zones will automatically see major performance gains in throughput over our already fast network connections", resulting from applying "distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls".  With more enhancements coming... Who says no to more performance?

With websites, Internet services and apps increasingly residing inside one or several clouds, another development is that traffic for access to those same websites, apps, Internet services and of course SaaS-based applications moves out of the basic Internet and into the cloud providers' cloud fabrics, their own fiber networks and data centers, where the usual "level" playing field and delivery mechanisms of the Internet, with various degrees of assumed network neutrality, no longer apply.

Once the traffic is inside a cloud provider's network and cloud service, network access and delivery is more akin to an Internet overlay or a private, virtual network domain.  Most of the larger cloud providers now provide their own direct, private (fiber) access to their cloud services from major Internet peering exchanges, data centers or telco backbones (for instance AWS Direct Connect, MS Azure ExpressRoute, IBM SoftLayer Direct Link etc.), with accompanying "better than Internet" SLAs and support for QoS tagging of traffic to/from the cloud service in question.

The use of CDN networks already took Internet delivery a long way towards overlay and bypass, and this trend will accelerate even more with the introduction of private, dedicated cloud access, which in turn undermines or bypasses traditional net-neutral traffic delivery and traffic management over the Internet.  So far there have been no calls for network neutrality and "neutral" traffic management to/from and inside cloud networks; cloud networking policy, management and performance levels are governed by the cloud provider only.

The Internet bypass phenomenon is set to grow, and already a number of companies are positioning themselves as bypass-only providers, for instance International Internet Exchange (see also this article).

Compared to traditional Internet peering and transit, it's also interesting to observe the pricing model most cloud providers use for ingress and egress traffic to and from a cloud service: it costs close to nothing, or is free, to send data and traffic into or across the cloud provider's infrastructure, while all exit traffic from the same cloud, for instance to the Internet, to other network service providers or to other clouds, carries an associated cost and metered delivery (i.e. a cost per GB of transfer volume per month).  Not many ISPs would have gotten away with such a pricing policy and schema; it runs contrary to traditional Internet peering models, and as the line between cloud providers and ISPs gets blurred, it may be seen as discrimination in business and market terms between cloud providers and ISPs.  ISPs should rename themselves cloud-provider-something and charge for all exit traffic to customers and peers!
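A minimal sketch of that asymmetric pricing model, with a hypothetical per-GB egress price and hypothetical volumes (no actual provider's rates are used here):

    # Sketch of the asymmetric cloud traffic pricing described above:
    # ingress free or near-free, egress metered per GB. Price and volumes are hypothetical.

    def monthly_traffic_cost(ingress_gb: float, egress_gb: float, egress_price_per_gb: float = 0.09) -> float:
        ingress_cost = 0.0                            # typically free or close to it
        egress_cost = egress_gb * egress_price_per_gb
        return ingress_cost + egress_cost

    # A service pushing 50 TB into the cloud and serving 200 TB out to the Internet:
    print(f"USD {monthly_traffic_cost(ingress_gb=50_000, egress_gb=200_000):,.0f} per month")   # ~USD 18,000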

In summary, it doesn't take a crystal ball to predict that the Internet giants using CDN networks and cloud delivery will only become bigger and increase their share of Internet traffic even more over the coming years, while the amount of traffic generated by or originating with ISPs and telcos will decrease.  And, in parallel, that cloud-based service and traffic delivery will increase, using large-scale network virtualization and traffic manipulation, with traditional network neutrality policies most likely not making it into the clouds at all.

Note a): "Google, MS, Vonage... Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. ... Why should they be allowed to use my pipes? The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!".

Erik Jensen,
28.09.2014




2.18.2014

Rise and fall of (mobile) Backend as a Service?

The question mark is warranted, I think, but this readwrite.com article, "How Mobile Cloud Services Will Consolidate After The Death Of StackMob" (i.e. after StackMob was acquired by and folded into PayPal), highlights the gauntlet many VC-funded, narrowly (or is that sharply?) focused upstarts face.

I wrote about some of these mobile Backend as a Service start-ups in November ("Backend as a Service (BaaS) for mobile services, Internet of Things and devices"), thinking they would be a great match for mobile apps and services in general and for IoT specifically, but the Internet giants saw them as a nice addition and as a way to strengthen their own backends (Facebook acquiring Parse etc.).

An added challenge was having developers as your core customer group - a highly demanding customer group, with a noise-to-value ratio in no way linked to willingness to pay.

Last stand?  - Dedicated Kinvey.

Going to the Mobile World Congress next week, it will be interesting to see if there is movement in this area on the GSMA side of things.


Erik Jensen, 18.2.2014


2.10.2014

Large-scale NSA Internet surveillance and bulk traffic collection as an IT challenge

Saturday's article in the New York Times about the low-cost, readily available tools that Snowden used to index and collect NSA documents inside the NSA ("Snowden Used Low-Cost Tool to Best N.S.A.") also points to how large-scale surveillance and traffic monitoring can be done today using low-cost, freely available commercial or open-source software and commercial Internet or cloud infrastructure services.

In many ways the NSA faces the same IT challenges as most large corporations - probably not the ideal place to be the CTO or CIO (or in charge of internal IT security...) - but if we look at NSA surveillance and Internet traffic collection as an IT challenge, what are some of the available options?

Most of the following involves some - or a fair bit! - of speculation and what-if-maybe in the vein of "Large-scale traffic collection for Dummies", but most of it should be fairly evident if one has followed the Snowden revelations over time.  So how could one go about doing large-scale Internet traffic collection and bulk usage logging?

Firstly, if one remembers the old Roman saying "who watches the watchmen?" (or, "Who will guard the guards themselves?"), it looks like the NSA should have implemented, across all locations and stations, with contractors and internally, a change management and tracking system to flag excessive document indexing and downloading, even when it's done from different system admin accounts.  Apparently PRISM could have had some use on the NSA LAN...

Secondly, while this isn't a low-cost option: looking at a map of worldwide submarine fiber cables (for instance http://www.submarinecablemap.com/), there aren't that many key submarine cable landing areas in the US, Europe, Asia and Africa one needs to tap into, splice and copy to gain full access to nearly all Internet traffic carried by most network operators, telcos and ISPs - with traffic encryption so far seemingly more of a nuisance than a real obstacle to reading the traffic and content carried over these cables.  Fiber splicing by submarine seems like the approach here - i.e. with something like the USS Jimmy Carter.

Thirdly, one odd thing about the XKeyscore revelations last year was the low number of servers the NSA employed worldwide, even way back in 2008, to index and/or collect Internet traffic, content and meta-data: only some 700 servers in 150 or so locations, able to hold traffic collections for only a few days and meta-data for 4 weeks or so.

Already in 2008 there were of course many large-scale CDN deployments on the Internet by different commercial operators, with web cache servers deployed in DCs near the major Internet exchanges or with local ISPs worldwide, directly in the pathway of end-users' Internet access and service traversals.  Tapping into and using mirrored, cached content from 3rd party, commercial CDN services and operators, which already then held the majority of users' Internet service and content consumption, seems like a far easier and more scalable approach to traffic collection than deploying and maintaining log servers of one's own.

Incidentally, that would also give the NSA, as well as the content originators and service providers, the opportunity to deny that they are accessing operator so-and-so's servers directly, as has been done - they are working on a mirrored, off-site copy of the servers in question.

Since then, cloud computing has come to life in a significant way, and it's easy to rent IaaS compute, storage and networking resources in almost any country, making it even easier to install and manage XKeyscore-related software and indexing on VMs in the country of choice (not that I'm an expert on IaaS availability in Africa!).  But the main thing is: instead of installing and running a battery of servers worldwide for traffic capture and content indexing, it's easier to acquire cache copies of network traffic and content from CDN operators and/or rent IaaS compute resources worldwide that in most cases sit in most end-users' pathways.  Or, put differently (slide 20 in the "An NSA Big Graph experiment" presentation): "Cloud architectures can cope with graphs at Big Data scales".

This move to the cloud was also highlighted in the 2011 Defense News pre-Snowden article "Securing the cloud".  Highlights: "The National Security Agency has been working on a secure version of the cloud since late 2007. By the end of the year, NSA plans to move all its databases into a cloud architecture while retaining its old-fashioned servers for some time.  "Eventually, we'll terminate the other data base structures," said NSA Director Army Gen. Keith Alexander...  The intelligence community proposal would expand the cloud approach across the community to take advantage of the cloud storage and access methods pioneered in the private sector... "I went in [and] talked to our folks who are on the offensive side" and asked them what would make a network "most difficult" to crack, he said. "And the answer was, going virtual and the cloud technology". See also this one.

The fourth measure for collecting user traffic and meta-data is more on the speculative end of things, but since 2008 we have seen the birth of mobile app stores and mobile users downloading apps for their smartphones in the millions - almost daily.  Instead of creating highly secret zero-day malware exploits, why not create an app, or at least an app development library, that everyone wants to use and that people download and interact with daily - and get the app to collect location data, address book access and export, message store access and export etc.?  Much easier than doing obscure malware development and then trying to get the software package onto people's devices.

An extension of this is to listen in on leaky apps and the ad networks that most developers of free apps use to place ads in the app and get some sort of kick-back for app-generated ad streams.  See "NSA using 'leaky apps' like Angry Birds, Google Maps to siphon user data" for an example.

Moving on to a possible 5th measure or IT approach for large-scale user tracking, surveillance and Internet traffic collection: another area that has "exploded" since 2008 is the low-threshold availability of big data log collection systems, distributed storage and processing systems like Hadoop, and data analytics tools for laymen (i.e. business analytics).

Once a fairly obscure area of IT administration (who reads and understands router and server OS syslogs anyhow?), analysis and visualization of system logs turned out to provide valuable business information - usage patterns, performance issues, faulty applications, break-in attempts and whatnot - and system- or business-wide logging and analysis even more so.  Commercial software coming out of these IT admin areas and developments has been identified in the Snowden NSA leaks, and is now firmly established in the Big Data business.
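As a minimal illustration of the kind of log aggregation this is built on, here is a sketch that counts failed SSH logins per source IP from a syslog-style auth log. The log path and format are assumptions about a typical Linux host, nothing NSA-specific.

    # Count failed SSH logins per source IP from a syslog-style auth log.
    # The path and log format are assumptions about a typical Linux host.
    import re
    from collections import Counter

    FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def failed_logins_by_source(path: str = "/var/log/auth.log") -> Counter:
        counts = Counter()
        with open(path, errors="ignore") as log:
            for line in log:
                match = FAILED_LOGIN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    for ip, hits in failed_logins_by_source().most_common(10):
        print(f"{ip}: {hits} failed logins")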

Then there is large-scale storage and processing, or Hadoop by most accounts nowadays.  The grapevine says 9 out of 10 shops looking at Hadoop don't actually need Hadoop - but it's Big Data!, and it looks good on the CV - with NSA data collection and processing being the one shop that actually could put it to good use (with a Splunk connector or two to get data into the Hadoop cluster, although it seems getting data sources for NSA Hadoop clusters isn't the main challenge...).

NSA activities with Hadoop and related technologies are well documented, see for instance

OK, there are several more mainstream IT services and options still to go, but I will come back to those in a later posting.



Erik Jensen, 09.02.2014

1.27.2014

Internet traffic as cost driver for telcos - not

One of the longer-running debates in the Internet infrastructure area concerns the cost impact of Internet services and content delivery by 3rd party providers on local ISPs' and telcos' IP access networks and peering points.  Or, taking the opposite view, how 3rd party services and content delivery from the likes of Google/YouTube, Netflix and, for instance, the Akamai CDN have been the key driver helping local ISPs and telcos actually sell and get traction for their Internet access services and bundles.  No attractive Internet services and content over the Internet, no local uptake in telco broadband offerings and Internet access business in the residential and business market...

I won't reference the main cases and "conflicts" going on in this area, as I'm sure readers of this blog are mostly familiar with the YouTube and Netflix peering and transit discussions with local or regional ISPs, but rather look into the argument that network build-out and provisioning of network capacity for 3rd party Internet services and content delivery (that is, 3rd party to the local ISP or telco) is a main or large cost driver for the local ISP or telco.

Firstly, a very rough guide to Internet services or content delivery, using the first, middle and last mile analogy:


  1. First-mile Internet services and content delivery: All things involved with service and content design, build and preparation for delivery, and with storing and hosting the service or content in one or several data centers, or one or several clouds, as the service and content origin.  Plus Internet access capacity from the hoster onto the Internet for service and content distribution.
    • Many ISPs and telcos are in the data center or hosting business, so this is a source of income for many ISPs and telcos.
  2. Middle-mile service and content delivery: All things to do with Internet/ISP traffic exchange (free peering, paid peering, paid transit) and service/content delivery via one or several content distribution networks (CDNs) or cloud providers, plus handover to the local access network for end-users, operated by the local ISP or telco.
    • As above, many ISPs and telcos are in the Internet peering (achieving Internet traffic aggregation for an improved transit position towards peers, not necessarily direct money), transit, CDN and/or cloud business themselves, mostly through wholesale set-ups, so this is also a source of income for many ISPs and telcos.
  3. Last-mile service or content delivery: All things to do with delivery of services and content to the local end-user, over fixed or mobile broadband.  This will in many cases be done via a mix of locally deployed 3rd party CDNs or ISP caches and layer 3-5 load balancing by the 3rd party or the ISP, and, for content provisioning, adaptive streaming that takes end-user bandwidth and client processing capability into the equation (negotiated with the content origin or the CDN edge caches in question).
    • Main source of income for most ISPs and telcos, based on selling local access bundles, 3rd party service add-ons and ISP-bundled services like TV over IP in various forms, music services, sponsored tablets and smartphones tied to a broadband subscription of some sort, and voice over IP services. 
Looking at this, it's possible to make the argument that ISPs are uniquely positioned to cash in on and control the distribution of 3rd party services and content deliveries (overlooking that the Internet majors seldom host that much with local heroes), but still there is dissatisfaction with how 3rd party service providers and content services like Netflix and YouTube wreck ISP and telco backbones and economics.  Why is that?

Looking at ISP cost categories and drivers, they can be divided as follows (not necessarily in order of cost impact):

  1. FTEs and man-hours: Skilled personnel needed to design, deploy and operate quite complex network structures in the backbone and local access areas, and to protect against Internet hacking and DDoS attacks.
  2. Network equipment: Routers, switches, firewalls, control planes, backbone fiber, local access networks, POPs and (over-)capacity at major national and international peering and transit locations, plus DC and co-lo space for network equipment.
  3. IT IS and back-office IT systems for service authentication, service provisioning, billing and help-desk.  A major cost driver for most telcos and ISPs, as service complexity, the number of integration points, legacy IT IS systems and the integration of virtual network functions seem to multiply out of control weekly.
  4. Internet transit and peering capacity, off-net capacity: ISPs and telcos need to buy Internet CDN, peering and/or transit network capacity with or from 3rd parties. This is the cost element most often brought forward by ISPs and telcos in the "OTT is eating my network" debates and arguments.
Looking at these cost elements and drivers, one can argue that:

  1. FTEs and man-hours per se are under control, and going down due to the outsourcing of many network operations tasks. The key challenge seems to be the number of FTEs needed for a certain operational capability, not the salary level per FTE in itself.
  2. Network equipment: The overall trend is for this kind of equipment to follow Moore's law, giving ISPs and telcos ever more processing power and networking capacity for the money. The same downward price trend applies to DC and co-lo space.
  3. Internet peering/transit and CDN price points are in a race to the bottom, giving per-Mbps or per-GB/month pricing that would have been unthinkable just a few years ago.  Although a bit old, have a look at "Internet Transit Prices - Historical and Projected" and, for CDN pricing, "CDN Pricing Stable: Survey Data Shows Pricing Down 15% This year".


This leaves IT IS and back-office IT systems as the fall guy, and here the legacy-plus-complexity spiral has most companies trapped - not only ISPs and telcos, but banks, insurers, governments and institutions.  IT IS remains a key cost driver for most companies, whereas Internet giants like Amazon, Google, Netflix and Facebook have achieved rapid service development, provisioning and roll-out and consistent service performance (which equals commercial success) precisely because they have managed to get their IT IS systems under control - or rather, into a flexible and adaptable software development process environment at the forefront of cloud developments.  IT IS then becomes a competitive and cool tool rather than some boring, way-in-the-back back-office legacy and career-ending thing.

An additional argument is the overall cost savings ISPs and telcos have been able to achieve by using Internet and tcp/ip technologies (IP on everything, everything on IP...) rather than things like pre-Internet ISDN, ATM and OSI stacks for the integrated services networks - one network for multiple services - that we all take for granted today.  Imagine maintaining a competitive IT IS stack for OSI on ATM versus one control plane for IP-based networking.

In summary: ISPs and telcos stand to gain the most from adopting software development processes, environments and cloud technologies on par with the Internet giants, in order to simplify their IT IS stack and speed up service delivery and competitiveness.  Not being competitive in this area is a far greater threat than Internet 3rd parties consuming too much bandwidth without paying for it.


Erik Jensen, 27.1.2014

1.20.2014

The net neutrality that never was: Bringing net neutrality up to speed

A lot of comments and writing have appeared after the US Court of Appeals for the District of Columbia Circuit last week struck down the FCC's "ban on traffic blocking and discrimination by Internet service providers because the FCC had not designated ISPs as common carriers" (*).  Most of the writing falls into the camp that this more or less means that non-discrimination of Internet traffic and net neutrality for Internet traffic and parties is dead, and that it signals the end of basic Internet traffic delivery freedoms. In the US. And seemingly most other places as well.

In the US, where most areas and cities are served by only one dominant service provider or telco, the situation is in many ways more dire than in most other well-functioning markets, where there are multiple Internet access providers and access technologies to choose from, and one dominant access provider can't run amok in how basic access services are provisioned - or, let's say, play favorites with its own services or content offerings.

That said, I think the basic net neutrality approach and arguments have been flawed from day one, or at least half of what net neutrality implies has been.  To me, net neutrality could and does entail two things:

  1. Neutrality as to how network protocols, services, applications and devices are treated, i.e. the same and without prejudice or helping hands of any sort
  2. Neutrality as to how the communicating parties are treated, i.e. basically how senders and receivers, or peers, in a networking session are treated - equally and without prejudice.  For instance:
    • Broadband Internet access customers treated the same for the same service or session
    • Content providers and Internet service providers treated the same towards broadband subscribers and in IP backbone management
Number 2 is taken for granted by most parties, I believe, whereas number 1 I think was and is a failed approach, as most network protocols, Internet services and applications, as well as end-user devices, never were and aren't designed to be "neutral" or equal to one another, nor do they behave particularly "net neutrally".

A couple of examples as to why "network protocols, Internet services and applications as well as end-user devices" are closer to the "all you can eat" camp, treating the network as a passive resource, than to nicely behaved, neutral net citizens:

  1. Some layer 5 tcp/ip protocols are better designed, or designed for better performance, than others (depending on how old they are, what they are designed or forced to support, and how they utilize the underlying tcp/ip stack).  Some tcp/ip protocols are also better at utilizing sliding window mechanisms, giving them larger transmission windows and better abilities to avoid IP traffic congestion.  And there are of course performance and behaviour differences in how the basic tcp/ip stack is designed and implemented in different operating systems and network elements.
  2. Use of http for adaptive media streaming means streaming clients will quite aggressively seek the highest encoded video rate achievable for their broadband connection and media device playback capability, meaning other network protocols and service deliveries, as well as non-adaptive devices, will suffer and be pushed to the back on the broadband connection (a minimal sketch of this rate-seeking behaviour follows after this list).
  3. There are few or no IP packets traversing the Internet between a sender and a receiver, or between two peers, today that haven't been modified in some way by one or more load balancers, NATs, layer 3-5 traffic accelerators, cache servers or CDN services, or had their DNS/IP headers re-written or payloads compressed or modified depending on OS, browser, location or time of day - meaning that constant modification of and alteration to Internet traffic is already a fact of life, and was also 6-7 years ago.  Most of these are very useful and necessary developments for IP and Internet traffic management, scaling and optimization, as the basic tcp/ip protocol stack is old - http 1.1 is also getting geriatric - meaning layer 3-5 protocol optimization, caching and application-level load balancing are useful and necessary additions to basic, "neutral" IP networking.
  4. Most Internet traffic and content today is served by a couple of Internet giants.  As measured in the ATLAS Internet Observatory 2009 Annual Report, "In 2009, 150 ASNs contribute 50% of all Internet traffic".  Would anyone be surprised if some 25-50 ASNs are behind closer to 75% of Internet traffic in 2014?  These Internet giants are using their own fiber backbones and cache/CDN/cloud infrastructures for service and content delivery towards end users, not the basic Internet itself, and Internet bypass for first-mile and middle-mile service delivery has been the default, get-things-done approach in large-scale settings for years.
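A minimal sketch of the rate-seeking behaviour described in point 2: an adaptive streaming client picks the highest encoded rate its measured throughput allows and keeps re-probing upward. The bitrate ladder and safety margin are invented for illustration, not any particular player's values.

    # Adaptive streaming rate selection sketch. The bitrate ladder and safety
    # margin are illustrative, not taken from any actual player.
    BITRATE_LADDER_KBPS = [235, 560, 1050, 1750, 3000, 4300, 5800]

    def pick_bitrate(measured_throughput_kbps: float, safety_margin: float = 0.8) -> int:
        budget = measured_throughput_kbps * safety_margin
        eligible = [rate for rate in BITRATE_LADDER_KBPS if rate <= budget]
        return max(eligible) if eligible else BITRATE_LADDER_KBPS[0]

    # On a 6 Mbit/s access line the client grabs the 4300 kbps rendition and keeps
    # re-probing upward, crowding out less aggressive flows on the same connection.
    print(pick_bitrate(6000))   # 4300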


These are some of the ways basic Internet network protocols and service delivery never were, and aren't, "neutral" from the outset, and with cloud-based networking and service delivery becoming the norm towards broadband users and customers in both the business and residential markets, this "non-Internet" approach will only accelerate in the coming years.

As noted above, many of the advances in layer 3-5 traffic optimization, acceleration and load balancing greatly help the delivery of services and content towards end users.  And I would like to see the user who takes his YouTube videos or mobile app usage without any traffic assistance at all, opting for a "neutral", non-QoS delivery (it's not even best-effort) over an optimized and accelerated service delivery.

For net neutrality, I think the main focus for policy development should center on:
  • Treating everyone the same, including giving everyone the option of paying for optimized transport and Internet traffic management on Internet first-mile, middle-mile and/or last-mile sections
  • Open and transparent information as to how first-mile, middle-mile and last-mile service providers (including CDN and cloud operators) utilize Internet traffic management and optimization technologies in their networks, CDNs or clouds, and what's available for 3rd parties to utilize for their service delivery through open APIs, SDN interfaces or manually.  In short, what their internal and 3rd party QoS regimes and policies are.
  • Don't limit IP network, Internet and IP traffic management developments by enforcing a net neutral straitjacket on Internet networking and service delivery that hasn't kept up with reality for the last 10 years or so.
In short: the net was never neutral, but users on the Internet need to be treated neutrally.

Erik Jensen, 20.1.2014

1.15.2014

Why doesn't mobile Internet access translate to mobile revenue and increased ARPU?

Mobile Internet access and traffic - there's no shortage of it.  In fact, in the latest Sandvine Global Internet Phenomena Report for 2H, 2013, mobile traffic is shown to grow rapidly for every market being monitored.  For instance:
  • North America, since 1H report: Mean monthly usage has made a 13.5% jump, increasing from 390.1 MB to 443.5 MB
  • Europe: Mean monthly usage has increased 15% from 311 MB to 358.4 MB per month
  • Asia: Increasing from 700.4 MB to over 1.1 GB per month
Overall, mobile traffic (i.e. traffic from mobile devices) only makes up some 20% of Internet traffic (see KPCB 2013 Internet Trends, slide 32), but growing at 1.5x per year or more, mobile broadband traffic is on a pretty good trajectory.

Similarly, the Ericsson Mobility Report for November 2013 states that "the increase of monthly mobile data traffic in Q3 2013 exceeded total monthly mobile data traffic in Q4 2009" and that there was "80% growth in data traffic between Q3 2012 and Q3 2013".  And looking forward, "mobile data traffic is expected to grow at a CAGR of around 45 percent (2013-2019) leading to a 10x growth in mobile data traffic between 2013 and 2019".
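The 45% CAGR and the 10x figure are consistent, as a quick compounding check shows:

    # Quick check of the Ericsson projection quoted above: a 45% CAGR compounded
    # over the six years from 2013 to 2019 gives roughly a 9-10x traffic increase.
    cagr = 0.45
    years = 2019 - 2013
    growth_factor = (1 + cagr) ** years
    print(f"{growth_factor:.1f}x")   # ~9.3x, i.e. the "10x growth" in the report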

So, volume-wise, mobile traffic numbers look good if you are in the mobile broadband area or are a mobile access provider.  Do the traffic increases translate into money for mobile access operators?

While mobile traffic is increasing significantly, ARPU for mobile operators - for mixed voice and mobile broadband providers as well as voice-only providers - is expected to fall in the coming years (the GSMA and Ovum numbers are a bit old):
  • GSMA: "European mobile ARPU falls 20%", 1, 2
  • Global Mobile Outlook, OVUM (2011): Monthly ARPU remains in steady decline across all the regions, and we expect it to fall at a CAGR of 4% from 2011 to 2016.
  • The Mobile Economy 2013, AT Kearney: Despite the growth in usage of voice and SMS and increasing numbers of data subscriptions, ARPU rates have declined across every region globally. The overall global ARPU rate has fallen by 7.6% p.a. from US$19 to US$14 per month, with the highest reductions in 2010-2012 seen in Africa (-10% p.a.) and Europe (-7% p.a.)

So, revenue per user is going down for mobile operators, while average traffic volume per subscriber is going up rapidly.  Why?  There are a number of reasons, among them:

  1. Despite Moore's Law and networking units getting ever more capacity at ever lower prices, functionality, complexity, service management levels and operations transactions per networking unit are going up.  New mobile generations and always-on mobile units seldom introduce less complexity, and costs for building and maintaining mobile networks are also increasing.
  2. Race to the bottom / lack of service differentiation: Telco and mobile operations have squarely entered the mass-production or utility service era, and there is little noticeable service quality or service range differentiation once basic mobile coverage has been ensured. Operators compete on the level of mobile handset subsidies and traffic volume/cap bundles.
  3. Mobile access subscriptions and services do not translate into any significant share of the "mobile wallet and spend" - customers get their mobile service needs met by Internet and 3rd party providers.
  4. Lack of investment or market foothold in first-mile services (i.e. hosting, extended comms and messaging) and middle-mile infrastructure and services (i.e. CDN, on-net traffic aggregation) for mobile operators and telcos.

Point 4 can be illustrated by how things are looking in the mobile ad space and mobile app & service subscription space - much better than mobile voice and broadband ARPU developments! Some indicators:

  1. Business Insider, THE FUTURE OF DIGITAL, 2013: Mobile is the only media time that's growing (slide 24); mobile is now approx. 20% of e-commerce traffic (slide 37)
  2. Mobile ads now close to half of Facebook ad ARPU - slide 37 in the KPCB report referenced above
  3. Twitter: " More than 70 percent of advertising revenue came from mobile devices in the third quarter, compared with 65 percent in the second quarter." (2013 numbers, of total Twitter revenue of $168.6 million in the last period).
  4. Gartner is predicting that worldwide revenue from app stores will increase this year (2013) by 62%, bringing total industry revenue to $25 billion.
  5. Business Insider, "The Results So Far From Holiday Shopping Point To Huge Gains For Mobile Commerce This Year":  ...mobile commerce grew more than twice as quickly (as e-commerce overall), at 63%, and accounted for nearly $940 million in sales on those three days. ...one in four e-commerce dollars spent on Black Friday and Thanksgiving were on purchases made through mobile devices. ...Tablets have emerged as a principal engine for mobile commerce growth.


So, to answer the question raised in the title: it turns out there is no direct relation, so far at least, between providing mobile Internet access and capturing a share of mobile traffic and services revenue, or a share of mobile customers' spend on e-commerce, Internet services and traffic-intensive deliveries.  The parties customers choose for their mobile access aren't the same parties they choose for e-commerce and for Internet services like messaging, social media sharing, file sync, photo storage and delivery of movies and entertainment. 

Erik Jensen, 15.1.2014