
11.19.2014

Going cloud, going offshore - it's all about IT automation

Two major trends impacting enterprise IT worldwide are:

  • The move towards cloud-based IT service delivery (away from dedicated or virtualized server deployments inside customers' data centers or with 3rd-party DC operators), and
  • Global sourcing of IT operations and service delivery, or off-shoring: handing over IT operations and delivery to "offshore" IT suppliers.
Both of them, I would argue, are concerned with reducing IT cost through increased IT automation.

The journey from legacy, classic server environments to cloud-based IT service delivery models can be depicted as in figure 1 below.




Here, various IT systems with an enterprise customer, or let's say with an IT service provider, are in various stages: some still run on classic, legacy IT environments, some on virtualized server platforms, and some may have made the leap to a VM IaaS/PaaS setup with a cloud service provider.

For legacy IT environments, typically one IT system runs on a dedicated hardware and database platform, and IT budgeting is usually done as upfront CAPEX with a 3-4 year write-off period.  Legacy IT environments would be based on IT systems popular 10-20 years ago, e.g. Sun servers running Solaris, HP servers running HP-UX, IBM servers running AIX etc, or even pre-2012 Microsoft Windows Server on COTS Intel servers.

Some of these IT systems, or the business applications running on them, would be able to make the leap onto virtualized IT platforms, typically as part of the major server consolidation projects that most enterprise IT departments have been running since the dot-com era. The leading candidate is of course VMware, with free hypervisors like KVM and Xen, and additional hypervisors from Citrix, Parallels and others also being utilized.

When a critical mass of virtualized servers was reached, the creation and utilization of server instances could be viewed as OPEX on a per-month or per-week basis, the cost of each new VM being incremental.  Also, when a critical mass of CPU cores across a number of physical servers was reached, it was easy to provision VMs from a pool of available CPU cores, over-commit on VM cores scheduled to be put into production, or assign cores to VMs for dedicated VM resources and instances - at a higher price than pool-VMs.

With virtualized servers, server or VM provisioning and configuration could be dramatically automated, and lead times for VMs were/are dramatically lower than for installing dedicated, physical servers - seconds and minutes versus days and weeks.  VM re-configuration could also be done on the fly, with "instant" re-sizing of VM CPU cores, memory and disk-space allocation.  A significantly higher degree of automation and IT production was achieved with virtualized servers, leading to lower IT cost overall (per VM, per IT employee, per production unit etc).
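The pool/dedicated provisioning trade-off described above can be sketched as a toy allocator. This is a minimal illustration of the idea, not any specific hypervisor's API: class and method names, and the over-commit ratio, are made up for the example.

```python
# Toy model of VM provisioning from a shared CPU-core pool: pool VMs
# may over-commit physical cores, while dedicated VMs reserve cores 1:1
# (at a higher price). All names and ratios here are illustrative.

class CorePool:
    def __init__(self, physical_cores, overcommit_ratio=4.0):
        self.physical_cores = physical_cores
        self.overcommit_ratio = overcommit_ratio  # e.g. 4 vCPUs per core
        self.dedicated_used = 0    # cores reserved 1:1 for dedicated VMs
        self.pool_vcpus_used = 0   # over-committable pool vCPUs in use

    def _pool_capacity(self):
        # Cores not reserved for dedicated VMs can be over-committed.
        free_cores = self.physical_cores - self.dedicated_used
        return free_cores * self.overcommit_ratio

    def provision(self, vcpus, dedicated=False):
        """Place a VM if it fits; return True on success."""
        if dedicated:
            free = self.physical_cores - self.dedicated_used
            # Reserving cores must not strand already-placed pool vCPUs.
            if vcpus <= free and \
               self.pool_vcpus_used <= (free - vcpus) * self.overcommit_ratio:
                self.dedicated_used += vcpus
                return True
            return False
        if self.pool_vcpus_used + vcpus <= self._pool_capacity():
            self.pool_vcpus_used += vcpus
            return True
        return False

pool = CorePool(physical_cores=32, overcommit_ratio=4.0)
assert pool.provision(8, dedicated=True)   # 8 cores reserved exclusively
assert pool.provision(64)                  # pool VM: fits in 24 * 4 = 96 vCPUs
assert not pool.provision(40)              # 64 + 40 > 96, request rejected
```

The incremental-cost point from the text falls out of the model: once the physical servers are paid for, each additional pool VM consumes only a slice of the over-committed capacity.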

Some IT workloads and IT systems have made the transition onto private or public cloud infrastructures, leading to an even higher degree of IT automation than traditionally available from either virtualized or legacy IT environments.  Between highly virtualized, automated IT environments and cloud-based IT delivery there isn't really a clear-cut switch-over or demarcation line, but cloud-based IT delivery is generally seen as having a higher degree of auto-scaling and capacity on demand than a single-location VM environment, plus a higher degree of self-serve support and IT management options than an on-prem solution.  IT departments did server virtualization for themselves and to meet corporate cost targets, while cloud IT delivery is available to a wider audience, with an associated price plan, service catalog and SLA accessible in a way not always seen with corporate IT departments.

For many end-users, business applications delivered as a SaaS solution represent the state of the art in automated IT delivery: "just" insert the data and press play. For IT departments and developers, cloud IaaS or PaaS delivery represents the state of the art in IT automation.

In many ways, outsourcing and offshoring of IT operations and service delivery can be seen as an IT automation drive as well.

If we apply an onshore (on-prem) and offshore dimension to the illustration above, we get the lineup depicted in figure 2.






Corporate IT systems are, in addition to the various states of "physical to cloud" server platforms, in different states of being managed and operated onshore (on-prem with the customer, or with a 3rd-party local IT provider) or offshore, with an IT provider - offshore as seen from the customer's point of view - performing day-to-day operations, maintenance and incident management.

This day-to-day operations and maintenance work is performed inside, or based on, well-defined work packages, and by personnel holding specific module certifications on the various Microsoft, Oracle, SAP, HP, RedHat, EMC/VMware etc IT platforms and systems.  In turn this means that IT management is decoupled from specific personal skill-sets or knowledge: one set of work tasks, say on an Oracle DB, can be performed interchangeably by different Oracle-trained personnel, and one reaches a new level of IT automation where the personal/personnel factor is taken out of the IT operations equation.  Work tasks get increasingly specific and well specified, customers avoid customer-specific adaptations and developments as far as possible, i.e. IT work and delivery gets boxed in and turned into work modules specified down to the minute.

Put another way, part of the cost benefit of offshore IT delivery is down to the modularization of IT work tasks and IT operations that offshore providers have achieved compared to in-house IT.

Thus transition B in figure 2 is part of an overall mega-trend that uses IT automation to reach lower IT production costs, and it will be interesting to see how the IT service delivery business unfolds between offshore IT providers and cloud-based IT delivery.  Or, more likely, how offshore IT providers use cloud-based delivery options (their own private cloud services, a mix of public clouds) to reach new IT automation levels and increased market share.


Erik Jensen, 19.11.2014

9.28.2014

Not in Kansas anymore: How the new Internet and cloud-based networking impacts network neutrality

In an article in the Norwegian newspaper Dagbladet.no June 11th 2014, Harald Krohg of Telenor calls for a discussion on net neutrality that starts with how the world of the Internet actually is, and not how it used to be some 20 years ago (largely an academic arena - it wasn't, but that's another story).

What might this "new" Internet actually be, and why is it any different from earlier versions? And what does this have to do with the network neutrality debate that the article in question discussed? Let's look at some major changes in the fabric of the Internet over the last 7-8 years or so, and some of the new developments that could change the piping of the Internet and how services over the Internet are delivered and consumed.  And not least, how these new developments impact "network neutrality".

I guess most people with some interest in Internet traffic exchange and flows have learned that the Internet is a series of interconnected networks operated by different network service providers, run autonomously and independently of each other, exchanging traffic with each other as peers (i.e. more or less equals) and covering whatever costs this might incur themselves. Or buying full routing and access to the global Internet through Internet transit.

This could be called the classic Internet, and it was more or less how the whole global Internet came about in the 80's and 90's: thousands of independent network service providers worldwide linking up with each other, exchanging traffic as peers and buying transit to networks or parties they didn't have immediate access to - or didn't want to manage through a direct peering relationship.

For a number of years the Internet developed as a many-to-many interconnected web of networks within structured tiers: tier 1 large internationals, tier 2 regional, medium-sized network operators, and tier 3 small, local network operators, organized in a structured network hierarchy. But close to all parties were able to exchange traffic with each other through mutual peering or paid transit (usually to tier 1's or some tier 2's). The Internet was thousands of networks forming a joint whole, and everyone was, or could be, interconnected to peer and exchange traffic more or less equally.

This classic Internet started to change, or unravel, in the period leading up to 2007, when the ATLAS Internet study ("ATLAS Internet Observatory, 2009 Annual Report" (1)) found that whereas in 2007 "thousands of ASNs contributed 50% of content", in 2009 "150 ASNs contribute 50% of all Internet traffic" (an ASN is an autonomous system, or routing domain, for a set of network domains controlled by a network operator or service provider).

This meant that by 2009 a mere 150 network operators or service providers controlled or originated over 50% of global Internet traffic, and that the classic Internet as a series of interconnected peer networks was gone.  These 150 or so dominant network operators weren't just tier 1 Internet providers and international telecoms operators any longer, but various US Internet web-site giants, traffic aggregators, advertisers and distributors like Google, Comcast, Limelight and Akamai and, at the time, P2P traffic.

As an example, Akamai, the world's largest CDN operator with some 150,000 servers in 92 countries across over 1,200 networks, now in 2014 claims to deliver between 15-30% of all Web traffic worldwide (what is "web traffic" in this statement? HTTP-based for sure, but most likely HTTP-based website traffic only, and not adaptive streaming over the HTTP protocol).

In a 2011 follow-up study to the 2007 ATLAS Internet Observatory report, named "Internet Traffic Evolution 2007 - 2011" (2), Craig Labovitz showed that by 2011 the top 10 network operators and Internet service providers alone accounted for close to 40% of global Internet traffic, clearly showing that large-scale traffic concentration and aggregation by various Internet giants was in full swing on the global Internet. These Internet giants and their audience were also mostly American, with the US "growing in both absolute traffic volume and as a weighted average percentage of Internet traffic" (growing from 40% to 50% by average aggregate traffic volume in 2011).

So in 2009 some 150 operators of various sizes and forms originated some 50% of Internet traffic, while by 2011 it took only 10 operators to control close to 40%.

So one can safely conclude that the Internet by now is dominated by a handful of large Internet giants - who, it turns out, all operate worldwide data centers themselves, and all rely on content distribution network operators or infrastructures residing increasingly inside the networks of local, last-mile access network operators.

This post isn't about the build-out of and reliance upon large, distributed data centers and CDN infrastructures by the Internet parties dominating Internet traffic volumes, but rather about how these build-outs can be viewed as a way to pre-position content and enhance the reachability of the Internet giants' services - and as a way to make sure that users get their content, adverts and services with far greater quality of experience than plain Internet delivery from some far-away host. And it would be a far stretch to call traffic that is managed and shaped, manipulated, re-routed, load-balanced, proxied, pre-positioned or cached by terminal type, location, time of day, web-site type etc by these players neutral in delivery or reception.

As noted above, Akamai CDN clusters sit inside some 1,200 local and global networks, with Akamai URLs showing up in lots of places inside Facebook pages, as well as Microsoft and Apple services for instance.  In (2), it's shown that "as of February 2010, more than 60% of Google traffic does not use transit", pointing to the number of ISP deployments (in over 100 countries) and the effectiveness of the Google Global Cache platform (claimed hit rate of 70-99% for local cache delivery) for YouTube and other Google domains.  These distributed cache and CDN deployments, and their associated, integrated client-side players, can perform most of the network functionality and traffic shaping mentioned in the previous paragraph.

Since 2011 Netflix has of course become a significant source of traffic in the markets where it is present, making up close to 34% of all broadband traffic in the US for instance. Netflix also relies on a CDN infrastructure of its own design to carry and distribute Netflix movies and videos, the Netflix Open Connect CDN, which is offered for free (or "free") to regional network operators and ISPs to locate inside their networks.  As with other CDNs, Netflix Open Connect is meant to save money (for Netflix, for intermediate networks, for access networks) - and to increase the user experience, i.e. QoE, for the end-user.  Minimizing video buffering and video load times and achieving the best video rate possible towards customers equals increased video rentals per month and higher ARPUs.
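The "best video rate without buffering" goal comes down to a selection step the client-side player performs: pick the highest bitrate profile that the measured throughput can sustain. A minimal sketch of that logic, with a made-up bitrate ladder and safety margin (not Netflix's actual values or algorithm):

```python
# Sketch of the bitrate-selection step in an adaptive-streaming client:
# maximize video rate while staying safely under measured throughput.
# The bitrate ladder and safety factor are illustrative only.

BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 5800]  # hypothetical profiles

def pick_bitrate(measured_throughput_kbps, ladder=BITRATE_LADDER_KBPS,
                 safety=0.8):
    """Return the highest profile that fits within a safety margin of
    measured throughput; fall back to the lowest profile to keep playing."""
    budget = measured_throughput_kbps * safety
    candidates = [b for b in ladder if b <= budget]
    return max(candidates) if candidates else ladder[0]

assert pick_bitrate(4000) == 3000  # 4000 * 0.8 = 3200, so 3000 fits
assert pick_bitrate(100) == 235    # below every profile: lowest, may rebuffer
```

Real players re-run a step like this per video segment and smooth the throughput estimate, but the economic point from the text is visible even here: better measured throughput (e.g. from an in-network cache) directly translates into a higher sustained video rate.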

It's interesting to parallel the development of the Internet giants, their distributed data centers worldwide and their reliance on CDNs to distribute traffic and content for maximum quality of experience and performance, with two other Internet phenomena developing in the 2007-2011 period: Internet network neutrality and cloud computing (and the cloud-based networking coming out of the latter).

Since US telco CEO Edward Whitacre uttered the famous "not for free on my pipes" in 2005 (see note a), bringing the earlier work on network neutrality by Tim Wu to the fore ("Network Neutrality, Broadband Discrimination", 2003) and in turn strongly influencing and leading up to the 2010 passage of the US net neutrality rules, the real action and infrastructure investments on the Internet, one could claim, took place elsewhere.

Or to re-phrase: while the public debate was quite concerned with perceived non-network-neutral activities and shenanigans taking place with local ISPs and telcos, the Internet giants were busy building de facto quality-of-service traffic management and content manipulation capabilities behind the scenes - capabilities very much focused on bringing the user a vastly improved quality of experience and performance compared to what the plain Internet would enable.

The network neutrality debate boiled down to, I would claim, the position that all traffic delivery was to be treated equally, neutrally and dumbly ("neutral" has a better ring to it than dumb, but the effect was the same...) for all users and originators - without traffic shaping, without differentiated quality of service towards the user and without any service discrimination.  So-called last-mile Internet access providers were seen as the main obstacle, or the most likely culprits to screw up Internet network neutrality, and much of the network neutrality debate was held, I would claim, on the assumption that your local telco would be on the opposite end of network neutrality if it could have its own way.

This level-playing-field approach was very much in the interest of first-mile and middle-mile operators like the aforementioned Internet giants, who usually didn't have last-mile Internet access networks of their own, and were relying on a range of 3rd-party local access providers to bring their stuff and content to the end-users.  For them, having one dumb or standard TCP/IP pipe to the end-user would be vastly preferable to having local QoS access schemas all over the place from local besserwissers.

So, mini summary: having a local access schema across the Internet that could be treated as dumb and plain as possible, without any local or per-provider QoS traffic classification, was and is a great benefit if you are looking to distribute your content and traffic as uniformly as possible.  Any need - or rather, the real need you have - for traffic management, shaping and positioning can be achieved by having

  1. distributed DCs and CDNs as close to the main volume markets as possible,
  2. control of the end-user client with your own player or portal, and
  3. logging of client and user behaviour and usage patterns into your own big-data stores to review and optimize service delivery.
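Point 1 above implies a request-routing decision: steer each user to the nearest or best-performing edge node, and only fall back to a far-away origin when no edge is viable. A minimal sketch of that decision, with made-up node names, latencies and threshold:

```python
# Toy CDN request-routing step: send the user to the edge node with the
# lowest measured latency, or to the origin if no edge is close enough.
# Node names, RTT values and the threshold are illustrative only.

def pick_edge(latencies_ms, origin="origin-us", max_edge_latency_ms=100):
    """latencies_ms maps edge-node name -> measured RTT in ms.
    Return the closest viable edge, else the origin server."""
    viable = {n: rtt for n, rtt in latencies_ms.items()
              if rtt <= max_edge_latency_ms}
    if not viable:
        return origin  # plain far-away delivery, worse QoE
    return min(viable, key=viable.get)

probes = {"edge-oslo": 12, "edge-frankfurt": 35, "edge-ashburn": 110}
assert pick_edge(probes) == "edge-oslo"
assert pick_edge({"edge-ashburn": 110}) == "origin-us"
```

Production CDNs make this decision with DNS, anycast and richer signals than raw RTT, but the principle is the same: the delivery path is actively chosen per user, which is exactly the kind of traffic steering the text contrasts with "neutral" delivery.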

Since 2007-2008 we have also witnessed the explosive growth of cloud services for anywhere, anytime, as-a-Service IT delivery.  The virtualization of servers and compute power gave way to radical economies of scale for the initial set of virtual private server providers (VPS providers), some of whom evolved into true cloud IT service providers when they added improved self-serve configuration support, pay-as-you-go billing models, ubiquitous network access and so-called location-transparent resource pooling.  Soon a range of network services and functions were also offered on top of virtual servers, and not just on dedicated network hardware any longer, paving the way for the virtualization of network functionality.

If one wants to have a look at the future of Internet infrastructure and networking, there are two key developments that should be studied and understood:
  1. The development of network virtualization via (increasingly standardized) network functions virtualization (NFV) and software-defined networking (SDN) for network control, inside clouds, backbones or data centers, and
  2. Internet bypass and private-network developments

The virtualization of network functionality with NFV - for IP routing, caching, proxying, firewalls, NAT and traffic shaping, as well as application acceleration - will come to mean just as much for the continued development of IP and Internet networking as server virtualization has come to dictate and govern IT infrastructure and cloud-based IT production. Network virtualization will over time remove the need for a series of dedicated networking boxes on local customer accesses (corporate market) or in the data center, and will move advanced traffic management and networking functionality into the cloud, which in turn will lower networking costs for both the corporate and residential markets.
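The core NFV idea above - firewall, NAT, shaping as software functions rather than dedicated boxes - can be sketched as a service chain of packet-processing functions. This is a conceptual illustration only; the packet fields, rules and addresses are made up, and real NFV platforms chain full virtual appliances, not Python functions:

```python
# Sketch of an NFV-style service chain: a packet flows through an
# ordered list of virtual network functions, each implemented in
# software. Any function may drop the packet by returning None.

def firewall(pkt, blocked_ports=frozenset({23})):
    # Drop traffic to blocked ports (here: telnet, as an example rule).
    return None if pkt["dst_port"] in blocked_ports else pkt

def nat(pkt, public_ip="203.0.113.1"):
    # Rewrite the private source address to a public one.
    return dict(pkt, src_ip=public_ip)

def chain(pkt, functions):
    """Run a packet through the ordered VNF chain."""
    for fn in functions:
        pkt = fn(pkt)
        if pkt is None:
            return None  # dropped somewhere in the chain
    return pkt

vnf_chain = [firewall, nat]
out = chain({"src_ip": "10.0.0.5", "dst_port": 443}, vnf_chain)
assert out["src_ip"] == "203.0.113.1"                         # NAT applied
assert chain({"src_ip": "10.0.0.5", "dst_port": 23}, vnf_chain) is None
```

The point of the sketch is that re-ordering, adding or removing functions is just editing a list - which is precisely why virtualized networking makes wide-ranging traffic manipulation cheap and routine.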

And not least, it will lower the threshold for giving companies and end-users access to new and advanced network functionality for extended traffic manipulation, shaping and acceleration - for instance per application type, IP ports, terminal type, time-of-day parameters, traffic tariffs and more.

In light of current network neutrality approaches - "all users and all traffic shall be treated equally and without discrimination" - one can argue that the whole purpose of network virtualization, NFV (and SDN) is exactly to enable and perform broad, active and wide-ranging re-arrangement of network traffic flows, traffic manipulation of all sorts and application discrimination (positive or negative), whether one wants it or not.  For instance, with the introduction this spring of Andromeda, the codename for Google's network virtualization stack, it was also announced that "Customers in these zones will automatically see major performance gains in throughput over our already fast network connections", resulting from applying "distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls". With more enhancements coming... Who says no to more performance?

With websites, Internet services and apps increasingly residing inside one or several clouds, there is another side to this: as traffic for access to those same websites, apps, Internet services and of course SaaS-based applications moves out of the basic Internet and into the cloud providers' cloud fabrics, their own fiber nets and data centers, the usual "level" playing and delivery mechanisms of the Internet, with various degrees of assumed network neutrality, no longer apply.

Once the traffic is inside a cloud provider's network and cloud service, network access and delivery is more akin to an Internet overlay or a private, virtual network domain. Most of the larger cloud providers now provide their own direct, private (fiber) access to their cloud services from major Internet peering exchanges, data centers or telco backbones (for instance AWS Direct Connect, MS Azure ExpressRoute, IBM SoftLayer Direct Link etc), with accompanying "better than Internet" SLAs and support for QoS-tagging of traffic to/from the cloud service in question.

The use of CDN networks already went a long way towards Internet overlay and bypass, and this trend will accelerate even more with the introduction of private, dedicated cloud access, which in turn undermines or bypasses traditional net-neutral traffic delivery and traffic management over the Internet.  So far there have been no calls for network neutrality and "neutral" traffic management to/from and inside cloud networks: cloud networking policy, management and performance levels are governed by the cloud provider alone.

The Internet bypass phenomenon is set to grow, and a number of companies are already positioning themselves as bypass-only providers, for instance International Internet Exchange (see also this article).

Compared to traditional Internet peering and transit, it's also interesting to observe the pricing model most cloud providers use for ingress and egress traffic to and from a cloud service: it costs close to nothing, or is free, to send data and traffic into or across the cloud provider's infrastructure, while all exit traffic from the same cloud - for instance to the Internet, to other network service providers or to other clouds - has an associated cost and metered delivery (i.e. a cost per GB of transfer volume per month). Not many ISPs would have gotten away with such a pricing policy and schema; it runs contrary to traditional Internet peering models, and as the line between cloud providers and ISPs gets blurred, it may be seen as a discrimination of business and market terms between cloud providers and ISPs.  ISPs should rename themselves cloud-provider-something and charge for all exit traffic to customers and peers!
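The asymmetry of that pricing model is easy to put into numbers. The sketch below uses hypothetical per-GB tiers (not any particular provider's price list) to show how free ingress and metered egress plays out over a month:

```python
# Toy model of cloud data-transfer billing: ingress is free at any
# volume, egress is billed per GB in tiers. The tier sizes and prices
# below are hypothetical, for illustration only.

EGRESS_TIERS = [           # (tier size in GB, price per GB in USD)
    (1, 0.00),             # first GB free
    (10_000, 0.09),        # next ~10 TB
    (float("inf"), 0.07),  # everything beyond that
]

def monthly_cost(ingress_gb, egress_gb):
    cost = 0.0  # ingress is free regardless of volume
    remaining = egress_gb
    for tier_gb, price in EGRESS_TIERS:
        used = min(remaining, tier_gb)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

assert monthly_cost(ingress_gb=50_000, egress_gb=0) == 0.0   # push data in free
assert monthly_cost(ingress_gb=0, egress_gb=1001) == 90.0    # pay on the way out
```

The design incentive falls out directly: moving data into the cloud is frictionless, while moving it out again always has a price tag, which is what makes the contrast with settlement-free Internet peering so striking.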

In summary, it doesn't take a crystal ball to predict that the Internet giants using CDN networks and cloud delivery will only become bigger and increase their share of Internet traffic even more over the coming years, while the amount of traffic generated by or originating with ISPs and telcos will decrease.  And in parallel, cloud-based service and traffic delivery will increase using large-scale network virtualization and traffic manipulation, with traditional network neutrality policies most likely not making it into the clouds at all.

Note a): "Google, MS, Vonage... Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. ... Why should they be allowed to use my pipes? The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!".

Erik Jensen,
28.09.2014




9.17.2013

Bluelock infographic: Benefits and Advantages of Cloud Infrastructure-as-a-Service

Cloud services provider Bluelock has published an interesting infographic on "Benefits and Advantages of Cloud Infrastructure-as-a-Service", based on an 11-question survey of 325 respondents over a period of three months.

And not very surprisingly, the key benefits are in the areas of increased infrastructure reliability and performance for development, test and pre-production workloads as well as business-critical applications, showing the delivery flexibility achievable with modern IaaS-based server solutions.



Bluelock: Benefits and Advantages of Cloud Infrastructure-as-a-Service