
9.29.2013

Cloud IaaS service requirements and benchmarks

If you are in the market for a cloud infrastructure solution and have done some initial cost benchmarking, for example using the CloudVertical cost cheat sheet for some of the larger public cloud providers in the IaaS space, it might be useful to do a more functional check of the cloud providers as well, matching their offerings against your own cloud IaaS service requirements or, for instance, the draft cloud IaaS checklist provided here.

This published version is an early draft that will be expanded in the coming weeks into a full-fledged cloud IaaS checklist or service requirements specification. At the moment the checklist contains the rudimentary service elements of the service areas in the ToC below, and can be used to score two or more IaaS service providers against each other.

It can also be used together with, for example, the Cloud Assessment Tool by the Asia Cloud Computing Association (ACCA) to give an overall picture of cloud provider capabilities and a ranking for a set of business goals or defined deliverables.
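As a minimal sketch of how such scoring could work (the service areas, weights and scores below are illustrative placeholders, not real vendor data), a weighted total per provider can be computed like this:

```python
# Hypothetical weighted scoring of IaaS providers against checklist areas.
# Weights (importance, 1-5) and scores (fit, 0-5) are placeholders only.
weights = {
    "Reliability and availability": 5,
    "Performance SLAs": 4,
    "Security": 5,
    "Billing": 2,
}

providers = {
    "Provider A": {"Reliability and availability": 4, "Performance SLAs": 3,
                   "Security": 5, "Billing": 4},
    "Provider B": {"Reliability and availability": 5, "Performance SLAs": 4,
                   "Security": 3, "Billing": 2},
}

max_total = sum(w * 5 for w in weights.values())
for name, scores in providers.items():
    total = sum(weights[area] * scores[area] for area in weights)
    print(f"{name}: {total}/{max_total} ({100 * total / max_total:.0f}%)")
```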


Cloud IaaS checklist - ToC


  1. Key criteria
  2. Use cases
  3. Public, private, hybrid
  4. IT Operations support
  5. Reliability and availability
  6. Performance SLAs
  7. DC coverage
  8. VM configurations
  9. Storage
  10. Networking
  11. Security
  12. Monitoring
  13. Service control - service panel
  14. Billing
  15. Customer support
To be continued!

9.26.2013

Sorting out public cloud costs for budgeting

Sorting out all the cost elements involved in using a public cloud service from Amazon AWS, MS Azure or, for instance, Rackspace can be quite tedious, even though the main service elements are clearly labeled. There are usually many service elements and options that come into play, and cloud infra usage for a given service might vary from day to day or from one instance to another.

CloudVertical has developed a pretty comprehensive public cloud cost cheat sheet for the cloud services from

  • Amazon AWS - per AWS service region; costs can be broken down per hour, day, week, month and so on
  • Rackspace - per availability region, with the same breakdowns as above
  • MS Windows Azure - same as above
  • Google Compute Engine
  • HP Cloud

covering all the main IaaS service categories (i.e. compute/VMs, storage and networking). Service unit costs can be benchmarked against each other as well (see the screenshot below).

Most of these providers change their VM and storage prices and configurations almost daily, but CloudVertical seems to follow updates and changes pretty thoroughly.
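As a rough sketch of the kind of normalization such a cheat sheet does (the hourly rates below are made-up placeholders, not actual provider prices), per-hour rates roll up to day, week and month figures like this:

```python
# Roll hypothetical per-hour VM prices up to day/week/month for comparison.
# Rates in USD/hour are placeholders, not actual provider pricing.
hourly_rates = {
    "Provider A small VM": 0.06,
    "Provider B small VM": 0.08,
}

PERIODS = {"day": 24, "week": 24 * 7, "month": 24 * 30}

for vm, rate in hourly_rates.items():
    breakdown = ", ".join(f"{period} ${rate * hours:.2f}"
                          for period, hours in PERIODS.items())
    print(f"{vm}: {breakdown}")
```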




9.23.2013

Cloud networking

One of the more promising areas of cloud infrastructure, besides the range of compute and storage services available, is cloud-based networking. Or, more correctly, moving CPE-based network functionality onto VMs, or a virtualized environment, and into the cloud.

For years a number of companies have been utilizing what's been called net-centric service delivery for managed network services like IP proxies, web caches and Internet firewalls, often in dedicated set-ups tailored to a single customer.

Going from there, moving networking and traffic management functionality like IP proxy and reverse proxy, caching, load balancing, firewalling and application acceleration onto cloud-based VMs promises to relieve companies of dedicated installations for these specialised functions at smaller or regional business units. These smaller office locations can then be equipped with basic IP routers only, which can easily be managed remotely.

Another interesting twist on this approach is that on-prem VMs with this networking functionality, or private clouds with the same, can be paired with identical installations and configurations in Amazon AWS, MS Azure or HP SmartCloud. This means one can overlay advanced networking functionality and control on top of the basic, best-effort Internet and in between clouds, creating a virtualized networking environment that can be tailored, stretched and adapted to workloads that fluctuate by time of day, week or season.
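To make the idea of network functions running as plain software on a VM concrete, here is a toy round-robin reverse proxy in Python; the backend addresses are made up, and a real deployment would of course use purpose-built appliances like the ones listed below:

```python
# Toy reverse proxy: forwards incoming HTTP GETs to a pool of backends,
# round-robin. Illustrative only; not hardened for production use.
import http.server
import itertools
import urllib.request

# Hypothetical backend VMs, e.g. one on-prem and one in a public cloud.
BACKENDS = itertools.cycle(["http://10.0.0.11:8080", "http://10.0.0.12:8080"])

class ProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)  # pick the next backend in the pool
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), ProxyHandler).serve_forever()
```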

Some examples of networking functionality being offered cloud style, here using Amazon AWS Marketplace listings:


  • Check Point Virtual Appliance for AWS - R75.40: "Check Point Virtual Appliance for Amazon Web Services delivers a security cloud computing platform that enables customers to deploy flexible multilayer security in the cloud. It extends the latest security technology to Amazon's cloud, protects assets in the cloud from attacks, and enables secure connectivity."
  • Riverbed Stingray Traffic Manager 1000L (10 Mbps 1000 SSL TPS) with AppFirewall: "Stingray traffic management solutions provide complete control over user traffic, allowing administrators to accelerate, optimize, and secure key business applications. Now it's possible to deliver these services more quickly and ensure the best possible performance across any deployment platform."
  • NetScaler VPX Standard Edition: "Citrix NetScaler is an all-in-one web application delivery controller that makes applications run five times better, reduces web application ownership costs, optimizes the user experience, and makes sure that applications are always available by using advanced L4-7 load balancing and traffic management; proven application acceleration such as HTTP compression and caching..."
  • NGINX Plus - Ubuntu AMI: "NGINX is a high performance, flexible, scalable, secure reverse proxy, load balancer, edge cache and origin server. NGINX features include: reverse proxy for HTTP, SMTP, IMAP and POP3"
  • Wowza Media Server 3: "Wowza Media Server® 3 is the high-performance, high-value infrastructure software for unified media streaming to virtually any screen"
I'll be looking more closely into cloud-based networking in a later post, including software-defined networking (SDN) options.

9.19.2013

Netflix as a model for network operators and telecoms

(Following a conversation with Snorre Corneliussen)

Netflix has, in just the last few years, pulled off two industry transformations that are quite remarkable:
  1. Transforming their legacy, mail-order DVD rental business into an online, video-on-demand rental business
  2. Changing from an online content aggregator and distributor, or a software house if you will, into a content producer and provider of original TV programming.

Doing 1 inside a year or two, going from zero to 36 million subscribers, is remarkable. Doing 2 as well, establishing an Emmy-level producer position inside a year, even more so.

The transformation from a DVD rental business into the world's leading online, OTT TV distributor holds a number of lessons for network operators and telecoms in its own right, but the main focus of this post is how the Emmy-level producer role was achieved by, among other things, outsourcing everything related to TV-series production and technical work.

A number of studies and articles detail the whys and hows of 1 and 2, although it might be said that it's too early to call whether 1 is sustainable and profitable in the long run, and way too early to judge whether 2 will build subscription base and revenue for Netflix, or whether 2 is mostly a tactical tool against rising content rights fees from the "legacy industry" for the bulk of the Netflix content portfolio (Netflix had to pay $1.355 billion (!) in licensing costs for the first quarter of this year alone; source: Reuters).

It's also interesting how Netflix has used big data analytics to create House of Cards and the other Netflix original TV programming to reach specific audiences and user demographics, as opposed to the industry approach: make 20 shows or movies per season based on key themes perceived by marketeers, cancel 16 of them after two weeks (or expect 16 out of 20 movies to bomb at the box office) - and you might have a hit!

But let's look closer to see if the Netflix content producer role and TV-production outsourcing approach might say anything about the future role or possibilities of network operators and telecoms.  I think it does.

Put very simply, Netflix, currently mostly in an aggregation and distribution role for content provisioning towards the consumer market, has one of the largest databases, or big data mining spaces, for online, multi-terminal, multi-language, international content and TV consumption across all key user demographics and age groups. All Netflix user portals and terminals produce "tons" of data sets that can be analyzed, cross-referenced and sampled to get trends, usage patterns and viewing habits for current releases, archive and long-tail content, down to viewing habits per user and terminal for each of those 36 million subscribers. Or per ZIP zone, city, region, country etc.

A small but good illustration of Netflix data mining can be found in this NY Times article, cross-linking Netflix movie rentals with US ZIP codes. That in turn can be used for regional campaigns, promotions or bundles.
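As a trivial sketch of that kind of cross-referencing (the event records and field names are entirely made up), viewing events can be aggregated per ZIP code and title like this:

```python
# Aggregate hypothetical viewing events per ZIP code and title.
# The records and field names are invented for illustration.
from collections import Counter

events = [
    {"zip": "10001", "title": "House of Cards"},
    {"zip": "10001", "title": "House of Cards"},
    {"zip": "94105", "title": "Arrested Development"},
]

views = Counter((e["zip"], e["title"]) for e in events)
for (zip_code, title), n in views.most_common():
    print(f"{zip_code}: {title} viewed {n} times")
```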

This in turn was used as key input into the creation and positioning of House of Cards and the other Netflix TV series towards specific demographics and subscriber trends. None of these TV series were actually written, scripted or made by Netflix themselves; they were made according to high-level business goals, key market or user themes, and TV role models or archetypes if you like. All TV-series technical production was outsourced to independent production companies, with directors and actors hired for these one-off, unique production runs. But the shows were made according to Netflix marketing and positioning requirements, and their overall content portfolio plan.

If we look at today's network operators and telecoms as "legacy-style TV studios and distributors", very much doing their own TV production (or network builds and service marketing) in-house based on outdated marketing data, the Netflix model points towards a future where a network operator focuses on, and retains the ability to do, proper big data collection and analytics to understand the customer base for the market in question, and to develop business and market requirements for new services and marketing. The technical development and build of networks, as well as specific service sets, increasingly OTT-based, is outsourced to specialist companies and builders who excel at that specific function across the industry.

Large-scale outsourcing of basic and core network operations and maintenance is already widespread in the industry, and certain to accelerate in the coming years to maintain margins against industry-wide reductions in top-line revenues and ARPU. The next step is to outsource the actual network and service development work as well, and for telcos to focus on customer service and relationship expansion, market bundles using 3rd-party (OTT-based) services, and segmented campaigns. I.e. sales and marketing, leaving network and back-office builds to the specialists.


9.18.2013

Moving in-house or on-prem servers into the cloud - take 1: Cost Advantages

As detailed in an earlier blog post, the next "natural" step for in-house or on-prem server workload management seems to be "the move towards 'servers or virtual machines as a service' in the cloud, either in a private cloud delivery mode or in a public cloud delivery mode".

What are the main drivers for this and what are IT-departments and businesses looking to achieve?

One obvious driver or pull factor is the ease of moving virtualised workloads or virtual machines (VMs) from on-prem hypervisor-based servers onto the same hypervisor set-up with an IaaS cloud provider, for use in a private, public or hybrid mode.

Another driver is the cost of delivery, or TCO, of on-prem servers/VMs versus multi-hosted, cloud-based ones. Bain & Company (www.bain.com) has a great illustration of this in their 2011 article "The five faces of the cloud", based on an IDC Worldwide Enterprise Server Cloud Computing 2010-2014 forecast. Around 2010-2011 there was a shift in the pricing of cloud-based servers versus on-prem servers, with cloud-based servers for the first time achieving a cost benefit compared to on-prem ones, projected to reach a 30-40% cost advantage in 2014.
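As a back-of-the-envelope illustration of that kind of comparison (all figures below are placeholders, not Bain's or IDC's numbers), a simple three-year TCO calculation might look like this:

```python
# Back-of-the-envelope 3-year TCO: on-prem server vs cloud VM.
# All figures are illustrative placeholders, not Bain/IDC data.
YEARS = 3
HOURS_PER_YEAR = 24 * 365

# On-prem: hardware up front plus yearly power, cooling and admin share.
onprem_hw = 4000            # USD, server purchase
onprem_yearly = 1500        # USD/year, operations overhead
onprem_tco = onprem_hw + onprem_yearly * YEARS

# Cloud: pay-per-hour VM of comparable capacity, running continuously.
cloud_hourly = 0.20         # USD/hour, placeholder rate
cloud_tco = cloud_hourly * HOURS_PER_YEAR * YEARS

advantage = (onprem_tco - cloud_tco) / onprem_tco
print(f"on-prem {onprem_tco:.0f} USD, cloud {cloud_tco:.0f} USD, "
      f"cloud advantage {advantage:.0%}")
```

With these made-up numbers the cloud VM comes out roughly 38% cheaper, in the same ballpark as the projected 30-40% advantage.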

In addition to the cost advantages of cloud-based VMs, there are numerous other advantages in the areas of more flexible workload management, provisioning time, VM flexibility and auto-sizing, load balancing, recovery etc., that I'll try to cover in an upcoming blog post.



9.17.2013

Cloud delivery mode for IaaS-based workload management

The Bluelock survey highlighted some of the usage areas for IaaS-based server utilization. An additional survey conducted at the recent VMworld seems to indicate that most users or companies are doing this in a private cloud mode on their own infrastructure: some 48% use IaaS in private cloud mode versus only 10% in public cloud mode.

For development and pre-production work this certainly makes sense, but expect a move towards service-provider-based private clouds by IT departments and projects at companies, at the expense of in-house ones. And expect public cloud mode, coupled with a beta-launch approach, for beta testing and launching new services.

We'll look further into the reasons for this in upcoming posts.

VMworld 2013 survey

Bluelock infographic: Benefits and Advantages of Cloud Infrastructure-as-a-Service

Cloud services provider Bluelock has published an interesting infographic on "Benefits and Advantages of Cloud Infrastructure-as-a-Service", based on an 11-question survey among 325 respondents over a period of three months.

Not very surprisingly, the key benefits are in the areas of increased infrastructure reliability and performance for development, test and pre-production workloads as well as business-critical applications, showing the delivery flexibility achievable with modern IaaS-based server solutions.



Bluelock: Benefits and Advantages of Cloud Infrastructure-as-a-Service

9.16.2013

Where do cloud-based IT services and delivery come from?

What are the origins of cloud-based IT services and delivery? The system and business development paths might be said to come from many sources, participants and movements over the years, but the two main ingredients, I think, are server or CPU core virtualization at the system level and the ever-growing business need for greater IT and service delivery flexibility. There have also been two distinct development paths at play: one coming from in-house server consolidation and cost reduction using server virtualization, the other from the hosting services arena, where the move went from dedicated servers to virtual private servers (one OS serving many user or service instances) to properly virtualized CPU cores (one hypervisor layer serving multiple, properly walled-in virtual machines, each with its own OS).

Both development paths were seeking greater IT service delivery flexibility, one for the internal IT department and its users, the other for the hosting service provider and its service provisioning and production, and both achieved much improved TCO.

Going a bit further into the in-house development path, IT departments traditionally used servers for single tasks, i.e. file and print, database hosting, email, firewall, as work separation made functional sense and CPUs couldn't carry greater workloads. Beginning around 2002 with VMware, physical servers could be virtualized, i.e. made to carry multiple workloads, depending on time of day or some basic concurrent task switching, and the server virtualisation movement, or consolidation path, started once IT admins saw the server management and cost reduction benefits. Most companies with an in-house data center or server farm will have migrated to a virtualized, consolidated server platform by now.

In the Internet hosting arena, or the outsourced IT services arena for that matter, the hosting space evolved from shared hosting (i.e. multiple web domains on a single or load-balanced server) or dedicated servers for high-capacity workloads into the ASP (application service provider) market, hosting and offering higher-margin application delivery. ASP loads were in most cases tied to core business hours, and ASP servers were left idling outside 08.00-17.00. Providing dedicated servers was also costly, as these servers typically were hard at work only during specific hours of the day: business hours if the server covered business services, 17.00-23.00 or so if the services were geared towards the consumer market.
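A quick utilization calculation, using the rough hours above, shows why this idling was so costly and why consolidation pays off:

```python
# Utilization of dedicated servers busy only in their peak windows,
# versus one virtualized host carrying both workloads (rough figures).
business_hours = 17 - 8      # 08.00-17.00 business workload
consumer_hours = 23 - 17     # 17.00-23.00 consumer workload

print(f"business server: {business_hours / 24:.0%} utilized")
print(f"consumer server: {consumer_hours / 24:.0%} utilized")

# Consolidated on one host, the complementary windows stack up.
combined = (business_hours + consumer_hours) / 24
print(f"one virtualized host, both workloads: {combined:.0%} utilized")
```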

For both ASP and dedicated server hosting, server and CPU core virtualisation came in as a cost saviour: it allowed hosting companies to move away from costly one-server-per-application or per-customer environments and into virtualized environments where workloads could be shared or shifted between fewer servers throughout the day. Proper CPU and OS virtualisation also meant greater workload control and configuration than a VPS, where one OS install and config were tasked with serving a range of use cases and applications. Server virtualisation led to virtualized server platforms and in due time to virtualized data centers, which allowed for easier load balancing between servers, and between data centers for that matter.

With both corporate in-house servers and data centers virtualized, as well as the server platforms of hosters, and the evolved mindset for virtualized IT service delivery opportunities that comes with this, the next "natural" step seems to be the move towards "servers or virtual machines as a service" in the cloud, either in a private cloud delivery mode or a public cloud delivery mode. Or a mix of the two in a hybrid delivery mode for cloud-based IT services.

And that's the topic for the next post in this blog.

9.15.2013

Cloud Snip

Yes, it's another blog about IT cloud services, developments, users and suppliers.
The labels give the ToC away - enjoy!