12.12.2013

Cloud or not, compute or not, Internet or not.

Biggest lesson from the Snowden revelations: social engineering and employee "activities" of various sorts are still the greatest threat to company IT security. As they always have been.

All the world's best, strongest and best-maintained network and IT security systems won't matter if they are being bypassed or circumvented through social engineering, by employees or by third-party, on-site contractors. And that holds equally well for on-site as for cloud IT services.

In many ways, today's debate over whether cloud IT is secure, whether a company should make the jump, or whether it should even pull back the initial services and data that might have made the cloud leap, resembles the discussions at CxO level 15-20 years ago about whether a company should hook up to the Internet at all (beyond the IT department and some islands of deployments): it wasn't perceived as secure, there were strange things out there and hackers might get into the LAN!

Nearly 100% of companies went with the Internet, gaining everything that is now taken for granted in Internet, web and seamless communications services, after putting in place something resembling an "Internet access and usage security" policy and a big firewall. Users were allowed to browse the Internet.

20 years later, many line-of-business units or departments have not only browsed but also put to good use a number of SaaS cloud services. In many cases the new IT services buyers in the HR, marketing or finance department didn't even know that the solution they had to have, and ASAP!, was in fact a cloud service delivery.  Lately the IT department has come around to the cloud IT service delivery model as well.  And it looks likely to continue on that path, as the business benefits are seen to outweigh the drawbacks of the NSA or governments listening in (which are taken for granted anyway, just as for general Internet traffic and service usage, by most pragmatic companies and network security managers).

Two additional factors: first, if you aren't doing cloud computing or service delivery, someone else certainly will be, and they will keep getting better at it while you risk being stuck with the on-site service model and its lead times.  And, secondly, it's not as if having everything behind that big firewall doesn't carry some risks as well, be it on the human, social or system level...

This pragmatic, take-appropriate-measures approach was highlighted in a recent IDC survey of IT executives in North America and Europe, "2013 U.S. Cloud Security Survey" (Sep 2013), loosely summarized as "yes, there are security and surveillance concerns, but the economic benefits and increased business agility outweigh them".

What are the measures that can be taken by most companies to overcome cloud security concerns and issues?

A new infographic by Sage highlights the first steps that should be taken by anyone, for any IT solution really:


  1. Establish the IT and business security policy for IT in general and for the IT solution in question
  2. Train your employees on the IT policy and the IT solution in question, best practices etc.
  3. Assess business needs, i.e. what business data needs to be where, accessible by whom and how, and with what kind of service levels
  4. Choose the right supplier and service to match the needs from step 3



Erik Jensen, 12.12.2013

12.04.2013

Cloud IT services for SCRUM, companies getting agile with cloud services

A strange mix of bedfellows, maybe?

  • SCRUM IT development with cloud IT services
  • AGILE approaches and cloud, or rather, how agile companies and organisations are well suited to utilize and benefit from flexible service delivery by way of cloud IT.

Putting it very broadly - agile companies using SCRUM for a stepwise, iterative approach to IT service and business development stand to gain the most from the flexible and on-demand service delivery nature of cloud IT services, be it in the cloud infrastructure area, for platform-level services or in the areas of application and ready-to-go line-of-business service delivery.

Let's say you have agreed upon the goals and visions for your SCRUM-based work effort, the initial roadmap and product backlog have come into place with the product owner, stakeholders and team, and the initial design, sprints and infrastructure requirements have been decided upon. Then it is very easy - and, not least, very fast - to start doing the initial prototyping and sprint functionality on a basic IaaS or PaaS set-up, with limited or as-needed capacity for core team development and testing that can be spun up, expanded or torn down as needed, by the hour.
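
To make that concrete, here is a minimal sketch, using the Apache Libcloud library, of how such a per-sprint sandbox VM might be created and torn down programmatically. The provider choice, credentials and node name are placeholder assumptions, not a recommendation of any particular IaaS:

    # A minimal sketch of per-sprint VM provisioning with Apache Libcloud.
    # Provider, credentials and node name are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.EC2)            # any supported IaaS works here
    conn = Driver("ACCESS_KEY", "SECRET_KEY")    # placeholder credentials

    # Pick the smallest VM size and a stock image for the sprint sandbox.
    size = sorted(conn.list_sizes(), key=lambda s: s.ram)[0]
    image = conn.list_images()[0]

    node = conn.create_node(name="sprint-sandbox", size=size, image=image)
    print("sandbox up:", node.id)

    # ...run the sprint's prototyping and tests, then tear the sandbox down:
    conn.destroy_node(node)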

Or, for instance, one is doing a SaaS-area SCRUM development effort, involving migrating legacy data onto the new SaaS services.  Once again it's quite easy to start with some initial SaaS accounts for the main SaaS roles involved, import the core data sets and look into the core SaaS functionality for the business, data flows and data collections in question.

Or, for the "developing functionality from scratch" kind of SCRUM runs, start with some basic IaaS capacity or PaaS feature set, and then add storage, processing, networking or database capabilities as the sprints fall into place and the testing, piloting, beta and launch get underway.

That's the basic lineup: matching the agile and iterative approaches of SCRUM with the same stepwise and flexible capabilities of cloud-based IT services. For companies who have already signed on to agile and lean approaches for business and IT development, this shouldn't be a surprise, and this way of working is taken for granted - cloud services and agile approaches came about in more or less the same cultural happening, or paradigm even.   What's harder to achieve is to change or develop, let's say, traditional businesses and organisations into an agile, iterative and sprint-enabled organisation.  And that's a whole MBA course just there...


Erik Jensen, 4.12.2013

11.25.2013

Backend as a Service (BaaS) for mobile services, Internet of Things and devices

Backend as a Service (BaaS) is making an appearance as a new "as a Service" cloud or SW development approach that gives developers (mostly) a general way, or an API, into common application-enabling infrastructure.  It gives web, mobile app and Internet of Things (IoT) developers a way to link their client services and applications to backend cloud processing and storage, as well as providing generic functionality like user authentication and management, service management and logging, push notifications, integration with social networking services and more for their apps.
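
To illustrate, here is a minimal sketch of the kind of REST calls a client might make against a generic BaaS for user sign-up and push notifications. The endpoint URL, header names and JSON shapes are all hypothetical; real providers each ship their own SDKs:

    # A toy client for a generic, hypothetical BaaS REST API.
    import requests

    BASE = "https://api.example-baas.com/v1"   # hypothetical endpoint
    HEADERS = {"X-App-Key": "my-app-key"}      # hypothetical app credential

    def sign_up(username, password):
        """Create a user via the BaaS user-management service."""
        r = requests.post(BASE + "/users",
                          json={"username": username, "password": password},
                          headers=HEADERS)
        r.raise_for_status()
        return r.json()["sessionToken"]        # hypothetical response field

    def push_notify(channel, message, token):
        """Send a push notification through the BaaS push service."""
        r = requests.post(BASE + "/push",
                          json={"channel": channel, "message": message},
                          headers=dict(HEADERS, **{"X-Session-Token": token}))
        r.raise_for_status()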

With all this going on with a BaaS, it might be easy to think of it as a mix of cloud IaaS, PaaS and SaaS geared towards mobile developers and clients, giving developers a turn-key set - if there ever was anything "turn-key" - of the software modules needed to run the general backend services of mobile applications.

Kinvey has a great mobile dev and services ecosystem map that shows where BaaS is generally positioned.



Why would anyone use a BaaS kind of backend, and not develop their own to suit service specifics?  The main reasons seem to be that BaaS makes it

  1. Easier to cover multiple terminals and form factors, different mobile operating systems and, for instance, multiple authentication, logging and payment schemas.
  2. Easier to prototype, quicker to launch and both easier and quicker to scale the backend as an app or mobile service might take off or see seasonal capacity demands
  3. Possible to outsource or transfer many of the security issues and worries associated with mobile apps, payment and general hacking.
  4. Easier to also cover Internet of Things kinds of devices, as most of the mobile BaaS players have IoT enablement on their roadmaps.

This is a developing area, and mergers and some big-name VC investments are certain to come through over the next 6-9 months.

Some BaaS players you might want to check out:

  1. Firebase
  2. Parse, recently bought by Facebook
  3. Kinvey
  4. Appcelerator
  5. StackMob
  6. AppliCasa
  7. StrongLoop
  8. Kumulos
  9. BaasBox



Erik Jensen, 25.11.2013

11.20.2013

Who are the cloud services buyers?

There is already some stereotyping about who is buying, i.e. actually paying for, cloud IT services:

  • Developers and startups buy Platform as a Service types of cloud services, i.e. development environments that can be tailored to meet developers' needs or configurable platforms that startups can tailor to their needs
  • CTOs and IT departments buy cloud infrastructure, Infrastructure as a Service, type of cloud IT services, i.e. compute, storage or networking as a service
  • Everyone buys cloud app services and Software as a Service, but usually it's line-of-business units that buy a specific SaaS service for the task or project deliverable at hand (and the CTO or IT department are out of the loop)

This is already seeing some change. For instance, enterprise IT is increasingly looking at and buying PaaS kinds of cloud IT services to cover business needs that can't be met using "one size fits all" SaaS applications and basic cloud IT infrastructure setups.  And, for instance, HR or finance departments that wanted that one, great, must-have SaaS application for their line of business find out that there are integration or data exchange issues once they have 2 or more SaaS apps up and running from 2 or more cloud providers.  Can the IT department please help sort out this mess?

Bain & Company had a great break-down of cloud buyers in a 2011 report (The five faces of the cloud, by Michael Heric, Ron Kermisch and Steve Bertrand), listing 5 CIO buyer categories according to cloud "adoption speed and willingness":

  • Transformational: These early adopters already use cloud computing heavily to transform business IT and delivery to their business, with on average more than 40 percent of their IT environments relying on one or more cloud models.
  • Heterogeneous: These companies are looking to evolve IT service delivery and capabilities and typically have a diverse mix of legacy systems and newer technologies like virtualization and cloud computing. Assumed to make up more than 40% of buyers in 2013.
  • Safety-conscious: Balancing security with growth, these buyers and companies are particularly concerned with the security and reliability of their IT environments. They understand the value that cloud computing offers, but are willing to compromise to ensure that their IT and business environment is safe and secure. Private cloud and hybrid public-private cloud models have the most appeal. Along with the transformational type of companies they are the biggest cloud IT spenders by 2013.
  • Price-conscious: Having had their TCO for IT services in place for years, these bottom-line-focused companies purchase cloud technologies and services primarily for cost savings and to deliver basic business functionality.
  • Slow and Steady: This is by far the largest group of companies and IT buyers (some 44% of companies in 2011); they do not yet appear ready to adopt cloud computing in a progressive way, although they express interest in exploring offerings if a provider can slowly and steadily guide them.

The key thing in the report is the observation that "... early adopters generate ~50% of cloud spending today (i.e. in 2011), but ~90% of growth through 2013 will come from other companies".

Assuming, for many reasons, that Europe and the Nordics lag some 2-3 years behind the US in cloud adoption and uptake, a lot of the stories and hype we are seeing in the Nordics for cloud take-up and usage today come from these transformational and heterogeneous early adopters, but the real big money is on hold until the larger companies and enterprise IT start adopting cloud IT in a forceful way. Which should be approximately 2014-2016 in the Nordics.

A finishing note on the role of the CTOs and IT departments that in many cases are being bypassed by the CFO, CIO, CMO or developers buying cloud services directly themselves - what are they left with?  In many cases, having been very or too focused on their on-prem IT platforms and services, they will be forced to take on both on-prem and cloud-based IT service delivery, evolving into an IT broker of physical or virtual application services for their organisation once the CFO or CMO realizes that handling 2 or more cloud services and applications isn't that straightforward anyway when it comes to user support, login hassles, performance variations, billing and security across different cloud providers.


Erik Jensen, 20.11.2013

11.14.2013

Internet of Things ecosystems and balkanisation risks

Like Big Data, the Internet of Things (or IoT for short) has been talked about for years, and seems on the verge of making it big in the next 1-36 months or so - just as real-life management intelligence and business value from Big Data logging and analytics does.

Just as Big Data solutions and systems have to deal with tons of different or proprietary log formats and data sources within an enterprise, or from public data sources on the Internet or elsewhere, then apply application- or vendor-specific data collection and log normalisation, and do application-specific mapping to business KPIs, reports and analytics, so IoT faces a number of non-standardized or vendor-proprietary challenges on the way to becoming a true interconnected web of things, things to humans, humans to things etc.

There are numerous non-standardised issues to manage in the areas of IoT security (service access to things by other things and humans, authentication and authorisation, management and reporting of denial of service and hijacking of devices, device upgrades, logging), identification and naming schemas for things, common IoT metrics, real-time control and communication protocols, subscription models and reporting.

A recent IETF Internet-Draft, "Security Considerations in the IP-based Internet of Things" (draft-garcia-core-security-06), seemingly puts a lot of faith in IPv6 and web services in general to facilitate IoT developments ("The introduction of IPv6 and web services as fundamental building blocks for IoT applications [RFC6568] promises to bring a number of basic advantages including: (i) a homogeneous protocol ecosystem that allows simple integration with Internet hosts; (ii) simplified development of very different appliances; (iii) an unified interface for applications, removing the need for application-level proxies."), but also adds "Although the security needs are well-recognized, it is still not fully clear how existing IP-based security protocols can be applied to this new setting".

On a general level this is of course quite all right, but if one looks at the development of, for instance, and quite relevantly, mobile ecosystems, where some key players control their entire ecosystem (clients and device OS, programming APIs and SDKs, backends for authentication and billing, app stores, ad network integration etc), a homogeneous protocol ecosystem for IoT and a unified interface for IoT devices, clients and services look a long way off.   And so far, in my opinion, most IoT devices and services for home automation, in-car or transport IoT, M2M payment arrangements and more are proprietary and vendor-specific.

For instance, for home automation, it's not easy or even doable to get Belkin WeMo units to talk to or interact with Nest units or Telldus units, or to reach them through a common programming interface or backend. (Although I should backtrack slightly here - the great IFTTT scripting service is starting to emerge as a common way for end-users to program their devices, and is supported by Belkin for WeMo and by Philips for their Hue range, for instance.)
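
Absent a standard, a developer who wants cross-vendor control today typically ends up writing an adapter layer by hand. Here is a toy sketch of that pattern; all class and method names on the vendor side are hypothetical, not real SDK calls:

    # A toy adapter layer over heterogeneous, vendor-proprietary device APIs.
    from abc import ABC, abstractmethod

    class Switch(ABC):
        """The common interface one wishes the vendors had agreed on."""
        @abstractmethod
        def turn_on(self): ...
        @abstractmethod
        def turn_off(self): ...

    class WeMoAdapter(Switch):
        def __init__(self, client):
            self._client = client              # hypothetical WeMo client
        def turn_on(self):
            self._client.set_binary_state(1)   # hypothetical vendor call
        def turn_off(self):
            self._client.set_binary_state(0)

    class TelldusAdapter(Switch):
        def __init__(self, client):
            self._client = client              # hypothetical Telldus client
        def turn_on(self):
            self._client.device_on()           # hypothetical vendor call
        def turn_off(self):
            self._client.device_off()

    # Application code can then treat every vendor's switch uniformly:
    # for device in devices: device.turn_off()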

With that backgrounder: are there risks of IoT being balkanized, with IoT devices and services becoming vendor- or ecosystem-proprietary?  Or are there standardisation efforts underway to overcome this risk and prevent 2-3 vendors from dominating this field over time, as we have seen in the mobile, Internet video or social media areas, for instance?

Currently the IETF doesn't seem to have an RFC track for IoT comms and networking standards, but the IEEE standards organization is now finally gearing up (or rather, they had their first IoT report out in 2005) and is meeting for its initial IoT standardisation tracks.  It will probably take some years, and in the meantime it's not hard to predict that this developing and promising business area will see most gadget, cloud and Internet OTT players getting involved (why not Facebook for home automation and control, Microsoft Xbox with Kinect for the same and as an automation hub, Android and Google Glass for a Google approach, Apple TV or iOS devices for the same, etc).

And getting involved here means each vendor building and securing their own IoT ecosystem on both the client and backend/cloud side, extending device OSes (iOS, Android) to cover IoT functionality and attracting developers and partners into their IoT ecosystem.  I would put my own money on one or two of these, even though it means IoT balkanisation.

Looking to read up on IoT developments and work? Here are some pointers and vendor samples (in no particular order):
  1. Wikipedia on IoT
  2. McKinsey Quarterly report, The Internet of Things
  3. Dark reading, Identity management in the cloud by Ericka Chickowski
  4. IFTTT. And an article on how to get started with IFTTT from ReadWrite
  5. OpenIoT - Open Source Solution for the Internet of Things into the Cloud
  6. CastleOS for home automation
  7. You are most likely an IoT service provider - Google Maps gets real-time traffic, crowdsources Android GPS data
  8. Postscapes - tracking the Internet of Things
  9. IoT cloud specialist - Arrayent
  10. IoT development environments and tools, IoT cloud - Xively
  11. Device relationship and ID management - Forgerock IRM
BTW, what are the Balkans and Balkanisation?


Erik Jensen, 14.11.2013

11.11.2013

Cloud security and surveillance - what are the non-US alternatives?

GigaOM is quoting a new survey by PricewaterhouseCoopers (PwC) released last week, saying that some "22 percent of German companies now see the risk of using cloud services as 'very high,'... 54 percent say risk is high or very high.  ...while 15 percent want to switch to European tech providers that won't cooperate with American or British intelligence services."

I haven't found the PwC survey in question, but it mirrors findings in the "How Much will PRISM cost the U.S. Cloud Computing Industry?" report from the Information Technology & Innovation Foundation earlier this summer, which found that "10 per cent of respondents outside the US had cancelled a cloud project with an American firm because of PRISM, while 56 per cent said they're less likely to use a provider based in the US".

OK, let's say you are tasked with finding a secure cloud provider outside the usual US ones - one without US offices, subsidiaries or business units that would be covered by FISA/NSA or US National Security Letters impacting non-US operations or locations as well. You need to come up with a cloud infrastructure provider that covers processing, storage and networking at competitive prices, with more or less feature parity with the leading players, i.e. Amazon AWS.

What are the options?

One could start by looking at German and Swiss providers, which have some legal, national and cultural track record for safe-keeping and data privacy.  UK and Swedish ones would be out because of GCHQ and FRA impacts, France because of their equivalent, and the same goes for Norwegian providers, as 99% of Norwegian Internet connectivity goes through Sweden.  One place that's often overlooked is Finland, but they have some players as well.

With that in mind, here are some cloud infrastructure players that have the basics covered for IaaS, an extended IaaS feature set and self-serve IaaS at competitive prices. It's not an exhaustive list, and I haven't checked all the way whether they have US units that would be impacted by FISA or US National Security Letters. Also, remember, it's often very hard to say whether a cloud provider is Swiss or Finnish or located in a particular country at all - the DCs and servers for IaaS might be located in one country, while management and ops are done remotely, and the Internet infrastructure for the service (DNS, SSL certificates, L3-7 global load balancing, service logging etc) is run from a remote location that might have data, or data control, for a remote DC running through it.

Some German cloud infrastructure providers worth a look:

  1. Profitbricks: I thought of them from the start, but now see they have a US unit, and they would be covered by FISA or National Security Letters just as any US company.  Still gives a good indication of service and feature levels available from leading European cloud providers.
  2. Internet4YOU: Servers, storage and DCs in Germany, covers most IaaS-areas
  3. dynaCloud: OnApp based cloud provider. Also CDN-services.
  4. The unbelievable Machine Company: Name alone makes them worth a check

Some Swiss cloud providers:
  1. Exoscale: Cloud infra offering. See also "In Switzerland your data is safe" section.
  2. Safe Swiss Cloud: Focus on security and privacy
  3. Swisscom dynamic computing: Covers IaaS basic, has online configurator and more.
  4. Incloudibly
  5. Cloudcom: Cloud servers with DDOS-protection and more
Alternatives in Finland:
  1. Tieto cloud services: Also has a Swedish, FRA-impacted counterpart
  2. Nebula
  3. Hostingservice.fi: Another OnApp based contender

OK, this is by no means a comprehensive list, and a closer review might find that some of these providers do indeed have US affiliates or hosting of some sort, from their own router at a US IX to the use of some back-up facility.  But the main thing is that there are lots of alternative cloud providers in the IaaS space, and that one isn't necessarily forced to go with NSA-compatible ones to get business or developer requirements fulfilled.


Erik Jensen, 11.11.2013

11.08.2013

Mobile apps and services development tools

In an earlier post, I promised to come back with an overview of development tools for mobile apps and services, geared towards the "drag & drop" developer, or the developer who doesn't want to work directly with, let's say, the SDKs and APIs for Android and iOS.

Most of these tools now support cross-OS publishing or builds, so one can get apps done in one go for Android, Apple iOS and MS Windows Phone. Or mostly: some tweaking to adapt to the user interface conventions of each OS might be needed, but for the single-task apps that apps were all about in the beginning, and for dipping a toe in mobile development waters, they are a great help and introduction.

OK, the list!

  1. Mobincube: Template-based development, free for basic features, publishing to app stores, ad integration, and really seems to be evolving very well.  Great push on the HTML5 side as well, so should be usable towards Firefox OS too. The one I tried myself for some basic apps. Recommended!
  2. Appery.io: Supports the usual OS suspects, drag and drop development environment, DB and cloud backend integration and more.  Also has a free edition for basic features.
  3. Conduit: Positions itself as the quick and easy alternative for cross-platform app development and has many great demos and use cases on their site.  
  4. Widgetbox: Supports iOS and Android, another template and widget based approach to get apps "done in minutes".  
  5. MobileNation: A senior in the market with a good track record, drag & drop approach, free option to get started
Any of these is a good choice to get started and acquainted with mobile apps and services development.

One important thing, besides ad network integration, is to make sure you have full tracking of the number of downloads, usually from the app store, and access to usage and traffic statistics for your app or mobile service as it reaches thousands and millions of users.
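
To make it concrete, here is a minimal sketch of the event-tracking call most of these analytics services expose in some form. The endpoint, key name and event fields are hypothetical; each provider ships its own SDK:

    # A toy event-tracking client for a hypothetical analytics service.
    import time
    import requests

    ANALYTICS_URL = "https://analytics.example.com/v1/events"  # hypothetical
    API_KEY = "my-project-key"                                 # hypothetical

    def track(event_name, user_id, properties=None):
        """Log one app event, e.g. an install, screen view or purchase."""
        payload = {
            "event": event_name,
            "user_id": user_id,
            "timestamp": int(time.time()),
            "properties": properties or {},
        }
        requests.post(ANALYTICS_URL, json=payload,
                      headers={"Authorization": API_KEY}, timeout=5)

    # e.g. track("app_install", "user-123", {"store": "google-play"})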

Some candidates for app usage and traffic logging and statistics (avoided the Big Data thing there):

  1. Keen IO: Extended app, or most anything else, service logging and statistics
  2. Google Analytics: Hard to avoid this one, now enhanced with mobile app and services tracking as well
  3. Good Data: Analytics as a Service that makes it easy to come up with good-looking and useful service usage reports.
  4. Mixpanel
  5. KISSmetrics
  6. And to make your statistics look good on the big screen - Geckoboard


Erik Jensen, 08.11.2013

11.06.2013

Venice, the direct route to Calicut and business transformation

I recently went on a trip to Venice - recommended for all! - and wanted to read up a bit on the history of Venice before I went.  "City of Fortune: How Venice Won and Lost a Naval Empire" by Roger Crowley turned out to be a very good read on the golden years of Venice - let's say from the year 1000, when the newly elected Doge Orseolo II turned the sea and ports of the Adriatic into Venice's own shipping lanes and safe havens for trade, until around the year 1500, when the Ottomans all but controlled the eastern Mediterranean and the main East-West trading routes on land and sea.

And, just as importantly, the Portuguese under Vasco da Gama finally found a direct sea route to Calicut and India in 1498, meaning that the many, many middlemen and taxes along the historical Silk Route, or through the many Mediterranean ports of Alexandria, Beirut, Constantinople and others, could be bypassed, and other nations and kings could take over the lucrative trade in spices, glass and minerals that the Venetians had controlled for hundreds of years with great margins and profit.  News of the direct route to India reached Venice in 1500, and most traders and sailors understood the implications right away.  Venice was built on controlling the trades from the East through many middlemen, bribes, taxes and on being the best, or most greedy, traders over years and years - and this business model was now going away rapidly.

What's this to do with cloud IT?

Not to stretch the point too far, but one can argue that just as the Venetians were very good at managing and controlling their ships, their sourcing for trade of all kinds, and the middlemen and taxes along the way to get goods and materials into Venice and then on to the rest of Italy or northern Europe, corporate IT has become fairly good at managing and controlling their

  • servers, storage and network infrastructure
  • sourcing of licenses and IT services
  • re-sellers, channel partners and suppliers for hardware or software
  • enterprise budgets, cost centers and TCO activities
  • distribution of IT-resources, applications, services and access to their users/customers

And this is how corporate IT has functioned and worked for a number of years, to the benefit of the IT department, their suppliers and, mostly, their customers.

But customers always want more - or the ones on the road and mobile certainly do, as do developers who hate dealing with the IT department and corporate IT frameworks.  And, it turns out, so does the CFO (or he wants less...) and increasingly the CIO.  Once these people get the "no way" or the "we don't support that" once too often, they will start scouting for alternatives that meet their business needs better than corporate IT can.  And many of them have found the direct path, without too many middlemen, to cloud-based services for their processing or storage needs, for the development and test environments they seek, or for more flexible big data analytics, logging and visualization services than they get from their "always 2 releases behind" internal IT business intelligence solution.

They find the direct path to Calicut.

Now, the moral isn't that corporate IT will be left in the backwaters like Venice was, but that corporate IT needs to understand and adapt to the fact that users will always look for better, cheaper and more flexible ways to get their work done.  Corporate IT needs to develop its own cloud IT services story, get to Calicut before its users, and put up safe working conditions for those users no matter where they might be or end up.

Erik Jensen, 06.11.2013


11.04.2013

Cloud IT billing - or getting to IT cost transparency

There are a number of elements and parameters that go into cloud billing, i.e. billing all the IT service elements that go into a cloud IaaS or PaaS IT delivery, be it in private, public or hybrid fashion. Cloud SaaS delivery and billing appears to be a much simpler set-up and process, as most SaaS services are billed per licensed seat or per user.

For cloud IaaS service delivery, some of the main elements are listed in the overview below, centered around the three main IaaS service elements of processing or virtual machines, storage and networking (a toy billing computation follows the list):

  1. Processing: 
    • # CPU cores per sec, min, hour etc, or a fixed number of VM cores per month
    • Dedicated, reserved/assigned or pool CPU cores
  2. Storage: 
    • Storage volume
    • File type: Local HDD storage (persistent or non-persistent for VM), SAN or object storage
    • Storage types: Processing/VM storage, data storage and back-up, off-site back-up, disaster recovery
    • Number of IOPS (input/output operations per second, i.e. reads and writes from/to a storage domain): This one can get quite tricky given the number of IOPS parameters involved, and is hard to determine up front, before actual, real-life production levels have been reached for a storage-based service.  Many providers balance cheap storage volumes with steep IOPS pricing, so if an application or web service has any significant storage traffic or transactions, then that cheap storage isn't that cheap anymore once one factors in IOPS and storage network traffic volumes.  Be aware!  IOPS usually come in different classes, for instance x thousand per VM per month, or x thousand per disk volume per month.
  3. Networking: 
    • VM networking capacities/volumes: Traffic volume per billing period, and/or speed (Mbps/Gbps) thresholds
    • Internet access capacities or volumes, per region (i.e. Europe, North-America, SE Asia etc)
    • Firewall services, per VM, for the IaaS/DC server farm in question
    • Load balancing (between VMs, DCs, regions)
    • Cache, proxy, reverse proxy services
    • Virus control
    • Denial of service protection, basic or extended, for VMs, between VMs, for DC in question or at operator backbone perimeter
    • Distribution services, object or dynamic caching, web acceleration or CDN services
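
To give a feel for how these metered elements roll up into an invoice, here is a toy computation with invented rates and usage figures; real providers meter far more parameters than this:

    # A toy roll-up of the main IaaS billing elements listed above.
    RATES = {
        "core_hour": 0.05,       # hypothetical price per vCPU core-hour
        "storage_gb_month": 0.10,
        "iops_thousand": 0.02,   # per thousand I/O operations
        "egress_gb": 0.08,       # Internet traffic out, per GB
    }

    def monthly_bill(usage):
        """usage: dict of metered quantities for one billing period."""
        lines = {
            "processing": usage["core_hours"] * RATES["core_hour"],
            "storage": usage["storage_gb"] * RATES["storage_gb_month"],
            "iops": usage["io_ops"] / 1000 * RATES["iops_thousand"],
            "networking": usage["egress_gb"] * RATES["egress_gb"],
        }
        lines["total"] = sum(lines.values())
        return lines

    # e.g. monthly_bill({"core_hours": 1440, "storage_gb": 500,
    #                    "io_ops": 2500000, "egress_gb": 120})
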
Besides the "basic" billing units and parameters for cloud IT services, there are a number of other factors that need to be covered as well to provide meaningful, transparent IT billing for companies and customers.  Some of them are:
  1. Overview of the logging, correlation, mediation and aggregation set-up. Customers need to understand how IT service activities and usage are logged, correlated across VMs, server farms, DCs or service delivery regions (i.e. that one log entry means the same across different production units), how some log entries are transformed or mediated into different billing units, and how all the activity/usage entries that have been logged, correlated or mediated are aggregated into high-level billing units.  Cloud IT billing, or any IT billing, IT TCO or ROI exercise, needs full transparency in this area, or one is left with black-box IT billing.
  2. It must be possible to collect cloud IT billing automatically or per self-serve interface into a main customer account, or to split the cloud IT billing across several parties, be it different enterprise units or departments, projects or delegated service accounts.  It must also be possible to have different entries or receivers for the service owner, legal owner and billing recipient of a cloud IT service.
  3. All the billing data should be available in a defined format via an open API or DB access for 3rd-party billing analytics, so that customers can look into how their cloud IT utilization and costs develop over time and where service utilization can be optimized.  Customers also need the cloud IT cost side to come together with their revenue side, to establish a historical overview of margins, cash flow, ARPU and customer developments (good and bad), and to put weights as well as cost/performance goals on VM and storage utilization, the cost of on-demand campaigns/periodic offers etc.


These billing elements and parameters have been included in the cloud IaaS checklist that I wrote about in an earlier post, and go into the overall service requirements for a cloud IT service and delivery.


Erik Jensen, 4.11.2013

10.30.2013

Cloud platforms for mobile services development

Now, this headline and its subject could be the title and scope of a whole book (and there are a number of them available), but I wanted to get into the subject with some initial posts on the matter.  And do some mobile "drag & drop" app development myself to try out a range of new app dev tools for non-programmers (see links later on).

To start with, Gartner predicts, with its usual assurance, that by 2016, "40 Percent of Mobile Application Development Projects Will Leverage Cloud Mobile Back-End Services".  And it doesn't stop there: "causing development leaders to lose control of the pace and path of cloud adoption within their enterprises, predicts Gartner, Inc."

Mobile developers and apps using cloud-based platforms for their service management, processing and storage doesn't mean losing control per se, in my opinion, just as on-prem or in-house development and deployment platforms aren't any more secure or insecure than cloud-based ones. It boils down to security policy and culture, and how one actually adheres to them. But using a cloud-based service delivery platform for mobile apps and services seems like a no-brainer if the app in question is Internet-facing or supposed to be used by a public audience, and not just internally in an enterprise.

Using a cloud-based development and service delivery platform, i.e. a Platform as a Service (PaaS) kind of cloud platform with a wide range of support for ready-to-go development environments, databases and tools, a step above basic IaaS platforms, gives a range of benefits and options, including

  • A uniform development, test, piloting and launch environment and platform - one doesn't need to move code, databases, web servers and other service delivery components from a closed, limited-capacity dev environment to a more scalable test & pilot environment, and then onto a production set-up that supports the number of users and the traffic that might come in at peak every month or whenever
  • In other words, a cloud-based dev, test and production environment for mobile services gives built-in load balancing, scalability and capacity on demand that in-house platforms or IT departments typically struggle with
  • Most cloud platforms also have built-in functionality for server-side processing, off-loading clients or the apps from this, as well as caching and static content serving
  • And most cloud service platforms have built-in security provisions, like DDoS protection, firewalling as well as authentication services.
If one is using development platforms on Google App Engine or Mobile Backend Starter, MS Azure or Amazon AWS, one also typically gets (a minimal backend sketch follows this list):
  • Access to industry-norm development environments like LAMP, RUBY or Node.js
  • Authentication of users and services against the vendors shop, messaging or document store services
  • Integration of log and analytics tools for mobile apps and services, for instance Google Analytics for Mobile
  • Easier access to public app stores like Google Play
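
To give a feel for how little code a mobile-service backend needs on such a platform, here is a minimal sketch of an HTTP API endpoint. Flask is used here purely for brevity; each platform has its own preferred frameworks, config and managed services:

    # A minimal mobile-backend HTTP API, deployable to a PaaS.
    from flask import Flask, jsonify

    app = Flask(__name__)

    ITEMS = [{"id": 1, "name": "example"}]   # stand-in for a managed DB

    @app.route("/api/v1/items", methods=["GET"])
    def list_items():
        # On a real PaaS this data would come from the platform's managed
        # database service rather than an in-memory list.
        return jsonify(items=ITEMS)

    if __name__ == "__main__":
        app.run()   # the PaaS normally supplies its own WSGI entry point
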
With smartphones and tablets now becoming the clients of choice for most users, there's a race on between the dominant and wannabe cloud service providers to be seen as the most attractive platform for mobile developers, recently highlighted by the Google Mobile Backend Starter launch earlier in October.  Here's a list of some cloud-based mobile development platforms and services:

Now, about the list of mobile "drag & drop" app development tools - new post coming up shortly!

Erik Jensen, 30.10.2013

10.29.2013

Great OpenStack overview

For anyone looking for an OpenStack introduction and an overview of the Havana release, Edgar Magana has a great slideshare at
http://www.slideshare.net/emaganap/open-stack-overview-meetups-oct-2013

10.28.2013

If you are not paying for it, you're the "big data" product

Andrew Lewis made a great comment on user-driven content back in 2010 in a metafilter.com exchange, one that has since become an Internet meme of sorts about the range of free Internet services being offered by Facebook, Google, MS, Yahoo and lots of others: "If you are not paying for it, you're not the customer; you're the product being sold".

Great one-liner, but what does it mean? Well, it led to Mr Lewis, of course, starting his own online shop offering t-shirts, coffee mugs and aprons with that same slogan, going almost meta on his own meta.

But in the setting of free or "free" Internet services like Facebook, Google, Yahoo and MS online services, running on some of the largest server and application platforms ever deployed and developed, at a significant platform and man-hour cost one must assume, why are they offered for free?

One obvious answer is advertising and the development of personalised or context-driven online advertising on the Internet and mobile devices, be it in the form of banner ads, splash ads etc, or product content being adapted to the user's location or ZIP code, time of day, device type and earlier browsing history.  This gives advertisers much better ad targeting than the usual shotgun advertising of "manual" or analog media, and it's much easier to see ad hits, view times and conversion rates, also in real time, than with traditional media. So giving away IT services like storage, communication services like email and IP messaging, or content, is a way to attract users, get them registered, build a user base and attract more advertising dollars.

Another angle is collecting and aggregating user data per site, device type, ZIP code or region, and using this aggregated user and usage data for business analytics, trend watching, and benchmarking new service offerings and competitors' services.  That then goes into the continuous re-work and make-over that most large IT companies do all the time.  Analyzing customers and customer behaviour should lead to greater service offerings.  And we are over in big data and analytics territory.

And one of the most fascinating stories of using big data analytics to understand customer behaviour and wants comes from Netflix and how the House of Cards TV series got created, partly at least, if we are to believe the backgrounder here.  Netflix had been very open and explicit about its plans to exploit user data logging and its big data capabilities to influence its programming choices well before the House of Cards TV series was aired. Netflix has detailed viewer logs for every market it is in, broken down by content type, country, ZIP code, time of day, device type and more.  Knowledge of Netflix subscribers' viewing preferences pointed towards a political TV drama with a number of defined attributes, among them Kevin Spacey in the lead, that would ensure high engagement levels and viewership through the Netflix recommendation engine, which is claimed to influence 75 percent of Netflix subscribers' viewing choices.  Big data logging and recommendation engines are a match seemingly made in heaven.
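
For readers curious about the mechanics, here is a toy sketch of the user-based collaborative-filtering idea behind such recommendation engines. The ratings data is invented, and real systems (Netflix's included) are vastly more sophisticated:

    # A toy user-based collaborative-filtering recommender.
    from math import sqrt

    ratings = {                      # invented viewing scores per user
        "ann": {"drama": 5, "comedy": 1, "politics": 4},
        "bob": {"drama": 4, "politics": 5},
        "eve": {"comedy": 5, "drama": 1},
    }

    def similarity(a, b):
        """Cosine similarity over the titles both users rated."""
        common = set(ratings[a]) & set(ratings[b])
        if not common:
            return 0.0
        dot = sum(ratings[a][t] * ratings[b][t] for t in common)
        na = sqrt(sum(ratings[a][t] ** 2 for t in common))
        nb = sqrt(sum(ratings[b][t] ** 2 for t in common))
        return dot / (na * nb)

    def recommend(user):
        """Suggest unseen titles from the most similar user's history."""
        peers = sorted((u for u in ratings if u != user),
                       key=lambda u: similarity(user, u), reverse=True)
        return [t for t in ratings[peers[0]] if t not in ratings[user]]

    # recommend("ann") -> titles ann hasn't seen that her nearest peer liked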

Other reasons for giving away IT and communications services for free are simply to stay competitive and to do service bundling and/or upgrades.  One guy is selling 2 GB of storage for $5 per month - that doesn't cover costs anyway, and nobody actually uses the full 2 GB - so why not give it away and attract more users, and then later on try to move them to more premium, paid service offers, presumably with better performance and higher service levels?

And that has been the approach for introducing Internet services or offers for the last 20 years or so.  It hasn't always worked - best-effort free services worked too well in most cases - but it generated tons of user and usage data anyway, which at least kept the marketers and advertisers happy.

Erik Jensen, 28.10.2013

10.27.2013

The cloud, the NSA and some 450.000 private contractors

A recent article at forbes.com refers to an Ovum report about the increasing use of cloud IT services in financial services, due to "improvements in cloud security and a wider variety of applications, investment in cloud, by both the buy side and the sell side...". Cloud-based IT services, on the infrastructure, platform/development and as-a-service sides alike, sound like a natural step, with the on-demand and flexible nature of cloud IT service provisioning and workload management fitting the cyclic or periodic needs of the finance and banking industry very well. Also as most customer interaction with banking and financial services will move to being Internet-facing or by way of mobile terminals.

There is a range of operational security issues with cloud IT services, as well as exposure to Internet denial-of-service attacks and account break-ins, for most companies on the Internet.  But, as the article notes, there is also the aspect of the NSA listening in on or surveilling the cloud service platforms being utilized, following transactions and account movements for US-based cloud services (and most others, probably).

That is the subject for an article or an entire book in itself, but I wanted to touch upon another aspect of most US IT companies, main telcos and ISPs as well as cloud providers being part of various NSA programs (PRISM, XKEYSCORE etc.), namely the extensive use of private contractors within the NSA, like Edward Snowden himself, to perform many of the NSA's day-to-day operations for the programs in question.

According to many public articles (1, 2 and more), in information publicized by the Office of the Director of National Intelligence this year, 1.2 million Americans hold top-secret clearances, and 38% of those clearances are held by private contractors. I.e. close to 500.000 contractors have top-secret clearance like Mr. Snowden did.

The head of the NSA, Gen. Keith Alexander, has gone on record saying "reporters should be prevented from 'selling' National Security Agency documents" (3). But with the NSA not being aware of the document downloading Mr. Snowden had done before he made it public himself - pointing to somewhat lackluster system logging, incident handling and security revision routines within the NSA - and the reported widespread use of NSA surveillance tools for private endeavours inside the NSA, isn't it likely that over the last years some of these NSA contractors have used NSA tools to spy on and extract information for personal use and financial gain (for instance early access to upcoming quarterly results or upcoming acquisitions and mergers), or have sold inside or critical information about one company to a competitor? Or alerted management at the company where they were employed about upcoming bids, performance reviews, competitors or management changes?

Social engineering has always been the easiest and cheapest way to get access to confidential information, and that specific kind of engineering is bound to have happened within the NSA and among some 450.000 private contractors as well.


EJ, 27.10.2013

10.22.2013

Cloud servers: Hypervised, virtual, become elastic.

Besides clouds coming in different flavours (private, public, hybrid, as an infrastructure service, as an application delivery service), the basic cloud IT building block, the cloud virtual server or machine, also comes in many different flavours - or rather exhibits a great deal of elasticity, based on cloud servers now typically having multi-core CPUs and workload hypervisors that can span one or many CPU cores with different operating systems or OS images.

This has, of course, been a regular feature of mainframes for many years, and it was brought into mainstream server computing, accessible to many more, with minicomputers like the DEC VAX series, UNIX-based workstations and, in the Nordics, Norsk Data Sintran-based computers, for instance. Looking at the Intel-based server architectures with MS Windows Server OS that overtook these, one initially had 1 CPU (with one CPU core) associated with one operating system, where the OS could multitask or shift jobs among databases and applications running on that one physical server. Thereafter Intel-based servers became multi-core, i.e. one server CPU having 2 or 4 processing cores, and the OS could shift work or load balance more easily among the server CPU cores once the OS became fully multi-CPU capable.

Yet another "workload management" shift came when VMware introduced their first hypervisor in 1999 (2001 for servers), meaning that one workload monitor or janitor, introduced between the server CPU cores and the OS, could create virtual machines (VMs) and task-switch on the fly between different OS images and builds on the VMs of one physical server. The VMs on one server could have different numbers of CPU cores, memory sizes and hard disk volumes associated with them, as well as a mix of operating systems, images and configurations.  The VMs could also span CPU cores on many servers, leading to easier ways to do server load balancing and hot-cold or hot-hot fail-over configurations, and, not least, to reduce IT TCO and man-hours, it provided a way to migrate away from the previous set-ups where 1 application or 1 database equalled 1 physical server.

The introduction of workload and server virtualisation more or less paved the way for today's cloud VM servers and the move away from dedicated servers per application or per customer install.  Without the development and introduction of proper OS and workload hypervisors like VMware, Xen and KVM, it wouldn't be possible to provision and multi-host customer servers and applications in a cost-effective way, and share the available CPU cores and memory space among many customer workloads.

After this rather lengthy historical background to cloud virtual servers or VMs: what makes up a cloud VM today?  It's not a fixed property for sure, as multi-core, hypervised server farms with enough memory can be configured and provisioned in a lot of ways.

Currently, the main service offerings for cloud VMs among cloud providers seem to be:

  1. Fixed size, fixed price VMs: The standard fare of most cloud providers, offering fixed VM configurations with x number of VM cores, typically 2, 4, 8, 12 cores etc, a fixed size of VM RAM memory and disk space at a fixed monthly cost.
  2. Building on this, some cloud providers also support different ways of doing on the fly VM scaling, i.e. adding more CPU cores, memory or disk space for a certain time if certain traffic or capacity thresholds are being met, or being able to load balance between VMs on different servers.
  3. Smart servers, i.e. dedicated, single-customer servers with a smallish hypervisor, giving the benefits of hypervised and virtualized workload management, but on a dedicated server for increased workload throughput or high-security environments.
  4. Cloud VMs can come with different service and availability levels, for instance best-effort (shared, best-effort throughput), reserved, protected or guaranteed VM capacity, and 99,5% towards 99,9999% availability
  5. Increasingly, cloud VMs are offered in CPU core pools, where the customer signs up for a pool of CPU cores, for instance 8, 16, 32 or 48, plus a given pool of VM RAM and hard disk space, and the customer can configure a number of VMs with different CPU core counts from this pool. Cloud VMs in this setting are typically billed by utilization hours or minutes per month, and can lead to some very cost-effective server or VM hours per month if managed properly and if one knows the cyclic workload the pool VMs are expected to handle (see the toy comparison after this list).
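
To illustrate the difference between models 1 and 5, here is a toy comparison with invented prices and a made-up cyclic workload; actual provider pricing varies widely:

    # A toy comparison of fixed monthly VMs versus a metered core pool.
    FIXED_VM_MONTH = 80.0    # hypothetical price of one 4-core VM per month
    POOL_CORE_HOUR = 0.04    # hypothetical price per pooled core-hour

    def fixed_cost(vm_count):
        return vm_count * FIXED_VM_MONTH

    def pool_cost(core_hours):
        return core_hours * POOL_CORE_HOUR

    # A cyclic workload: 4 cores around the clock all month, plus 12 extra
    # cores during an 8-hour business day for 22 workdays.
    base_hours = 4 * 24 * 30
    peak_hours = 12 * 8 * 22

    print(fixed_cost(4))                       # 4 fixed VMs sized for peak
    print(pool_cost(base_hours + peak_hours))  # pool billed only when used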


With this basic overview of cloud VMs, I'll be looking at the different, or not so different, cloud VM offerings from Amazon AWS, Rackspace, MS Windows Azure, SoftLayer and others in an upcoming blog post.

EJ, 22.10.2013

10.17.2013

Private cloud - in so many ways

Following my post about the 5-3-3 of cloud computing, I've spent some more time on the various ways one can build, manage and operate a private cloud solution.

Firstly, there are a number of definitions of private cloud, for instance

  1. Wikipedia: "Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally"
  2. NIST: "The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises."
  3. Microsoft: "Private cloud is the implementation of cloud services on resources that are dedicated to your organization, whether they exist on-premises or off-premises. ... "
  4. Webopedia: "Private cloud is the phrase used to describe a cloud computing platform that is implemented within the corporate firewall, under the control of the IT department."
  5. Gartner: "Private cloud computing is a form of cloud computing that is used by only one organization, or that ensures that an organization is completely isolated from others".
Looking at this, there seems to be general acknowledgment that a private cloud solution needs to be or can be:
  • Provisioned for exclusive use by a single business organisation (that can, of course, have many business units)
  • Cloud resources or infrastructure are dedicated to the business organisation - or at least "completely isolated from others", i.e. a private cloud can run on shared infrastructure as long as there is complete resource, ID, usage, logging and management isolation between different business organisations
  • Hosted and managed internally or by 3rd party on internal or external DC or service platform
  • Doesn't need to be "inside the firewall" or on internal DC

Overall, resource control and service delivery isolation seem to be the key criteria, giving the appearance of "dedicated infrastructure and delivery", with internal vs 3rd-party management and delivery, and internal vs 3rd-party DC, taking a back seat. This in turn means, at least on paper, that reserved-capacity VMs on a public cloud can be used to create private cloud solutions, but even with SLAs for reserved VMs or instances, this option is still far off from bare-metal or single-tenant virtualized servers when it comes to creating private clouds with proper resource control and isolation.

Also, all the parts that make up the private cloud solution have to have resource control and resource utilization isolation according to the business requirements for a private cloud, including storage, VM and DC networking, firewalls, load balancers, VPNs or Internet access etc.

This leads to the following aspects of how a private cloud solution can come about, no doubt in many cases crossing over into hybrid cloud delivery territory:

  • On-demand and self-service: Yes, must have
  • Ubiquitous network access: Yes, must have
  • Location transparent resource pooling: Yes, must have
  • Rapid elasticity: Yes, must have
  • Measured service with pay per use: Yes, must have
  • SaaS-delivery: Private clouds can be used for SaaS delivery
  • PaaS-delivery: Private clouds can be used for PaaS delivery
  • IaaS-delivery: Private clouds can be used for IaaS delivery
  • Dedicated resources: Can use dedicated IT resources, or shared resource with resource control and service delivery isolation
  • Dedicated hardware: For the organization, but private cloud doesn't necessarily require dedicated hardware
  • Shared hardware/servers/infra: Can be used if resource control and isolation
  • On-prem DC (company internal): Can be used
  • 3rd party DC: Can be used
  • Cloud-based: Can use public cloud provider or solution as long as resource control and isolation meets business requirements
  • Internet access: No general, public Internet access to the private cloud solution, but Internet access can be used for secure access and log-in to the private cloud solution
  • VPN access: Yes, gives greater resource utilisation control
  • Private link access: Yes, same as for VPN access


10.15.2013

The 5-3-2 definition of cloud computing. Or is it 5-3-3?

One of the benefits of cloud computing, or cloud IT services, is that it got a fairly good definition quite early on, as opposed to a lot of other IT trends, developments and phenomena (Big Data, UGC, augmented reality, anyone?).

The main definitions for cloud IT are based on the following 3 main principles or frameworks:


  1. The "5 Essential Characteristics of Cloud Computing" by the National Institute of Standards and Technology (NIST) in the "Definition of Cloud Computing" publication, namely:
    • On-demand and self-service
    • Ubiquitous network access
    • Location transparent resource pooling
    • Rapid elasticity, and 
    • Measured service with pay per use.
  2. The three service stacks or the three service delivery methods for cloud IT, namely: 
    • Software as a Service (SaaS): Applications delivered as-a-service to end-users in the fashion of the 5 main characteristics listed above
    • Platform as a Service (PaaS): System, development and service platform delivered as-a-service, again based on key principles listed in 1, and
    • Infrastructure as a Service (IaaS): Basic or fundamental IT services like processing, storage and networking delivered and utilized as-a-service, without the need for local HW installs, management or involvement by the IT department.
  3. The deployment or usage model for cloud IT, namely
    • Private cloud: Access to and use of a cloud IT service for private use only, i.e. for company-internal or private home use. Consumed from a public cloud provider or based on internal or 3rd-party DCs that are transparent towards the user, and not Internet-facing or exposed for general, public access
    • Public cloud: General, Internet-facing and exposed cloud-based IT service, accessible to anyone. A public IaaS or PaaS can be used to create a private cloud solution, for instance in the SaaS area.
    • Hybrid cloud: For most companies it's hard to come by an IT solution that is strictly 100% private, internal only, or 100% public with no personal login or access.  This in turn led to the development of hybrid cloud IT services, where IT services hosted locally or by a 3rd party are combined with public cloud services, and one can gain access to private cloud or on-prem IT services through a public cloud gateway.
      And this leads to the "old" 5-3-2 cloud definition morphing into the 5-3-3 definition of cloud computing.
This 5-3-2, or now 5-3-3, definition was nicely formulated by Yung Chou of the Microsoft US Developer and Platform Evangelism Team, and illustrated by Chou in the figure below.




Some of the listed principles and definitions merit a closer look and discussion, besides the development of the hybrid cloud delivery model.

In many cases, one-company private cloud services evolved from IT departments that had developed and were running highly efficient server virtualization solutions on prem or in 3rd-party DCs, and were adding self-serve access, compute billing for internal business units, on-demand scaling etc to their service delivery.  As noted in an earlier post ("Where does cloud-based IT services and delivery come from?"), it was then easy to move to a 3rd-party cloud service, with most server hypervisors supporting transparent VM migration, load balancing or fail-over between on-prem VMs and VMs living with a cloud provider.

But in many cases we also have IT departments boasting that they have already done the cloud exercise when they have moved their server platform to a virtualization platform, gaining increased manageability, quicker server deployment and service delivery, as well as lower TCO/OPEX towards their users.  Looking at the NIST definition, many such IT shops are still missing self-serve support for business users, lack true cost-based IT accounting and pay-per-use billing, as well as location-transparent resource pooling - many company IT platforms are single-location DCs, and there are built-in location or access restrictions.

Also, the true nature of private cloud services seems to be up for debate. While a public cloud solution is accessible and open to "anyone", based on shared, self-service, pay-as-you-go infrastructure, is a private cloud service dedicated to an organization inside its own private data center, or can it be on-prem or hosted off-premises by a 3rd party DC or hoster? The answer is probably that all three approaches can be used to create a private cloud solution. Also, as noted above, a public cloud IaaS or PaaS service can in turn be used to provision a private PaaS or SaaS solution, for instance when using reserved VM instances.



10.11.2013

Asia Cloud Computing Association Cloud Assessment Tool - benchmark table

Based on the ACCA CAT, I've put the performance categories and associated service levels into an easy-to-use table that gives a "one-page" overview of the CAT, which in turn can be used for CAT benchmarking and presentations.

[Table: one-page overview of the ACCA CAT performance categories and service levels]

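For anyone who wants to build a similar grid themselves, here is a minimal Python/pandas sketch; the cell values are placeholders, not actual CAT data.

```python
# A minimal sketch, assuming pandas is available; the cell values are
# placeholders, not actual CAT requirements.
import pandas as pd

categories = ["Security", "Life Cycle", "Performance", "Access",
              "Data Center", "Certification", "Support", "Interoperability"]
levels = ["Level 1", "Level 2", "Level 3", "Level 4"]

# Start with placeholder cells; in the real table each cell holds the
# CAT requirement for that category at that service level.
grid = pd.DataFrame("TBD", index=categories, columns=levels)
grid.loc["Data Center", "Level 3"] = "e.g. highly redundant DC architecture"

print(grid.to_string())
```
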
10.09.2013

Asia Cloud Computing Association Cloud Assessment Tool

Earlier this year, the Asia Cloud Computing Association (ACCA) released a Cloud Assessment Tool (CAT) that can be used to benchmark and compare different cloud providers, geared mostly towards the operational performance side of cloud IT service delivery.

An online version is available at the www.asiacloud.org site.

Benchmarking and comparing any IT service delivery or performance is tricky, be it for corporate IT or cloud based service delivery, but the ACCA CAT provides a valuable tool and framework to help companies evaluate not only cloud service providers and offerings, but also data center providers, hosters and online service providers in general.

Short overview of the CAT:

The CAT is organised into eight performance categories spread over four service levels, with the performance categories being:

  • Security: Privacy, information security, regulatory
  • Life Cycle: Long-term support impacting customer business processes
  • Performance: Runtime behavior of deployed application software
  • Access: Connectivity between the end user and cloud service provider
  • Data Center: Data Center physical infrastructure
  • Certification: Degree of quality assurance to the customer
  • Support: Deployment and maintenance of applications
  • Interoperability: Cloud hypervisor interfaces to applications

The four service levels are based on the availability or uptime classification system used by the Uptime Institute, which defines four data center models referred to as Tiers I-IV. Tier I defines a data center with quite basic reliability, whereas Tier IV defines a data center with a highly redundant architecture. Level 4 is not necessarily better than Levels 3, 2 or 1; it's more a matter of suitability to the IT task or service delivery use case at hand, and whether a particular level is needed by a business unit, an application or Internet service or not.

The service levels:
  • Level 1 Typical enterprise cloud solution
  • Level 2 Stringent application
  • Level 3 Telecommunications grade
  • Level 4 Beyond telecommunications grade

Using the online CAT, it's then quite easy to grade cloud service providers for a specific project or service delivery, and to take some of the guesswork and uncertainty out of choosing a cloud service provider. The CAT can, and should, of course be used together with other assessment tools when choosing a cloud service provider, for instance in the areas of cost and pricing, APIs, helpdesk and support, references and functionality.
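
As a toy illustration of such a grading exercise, here is a small Python sketch comparing two providers' achieved CAT levels against the levels a given project requires; all grades below are invented, the real ones come from completing the online CAT.

```python
# All grades are invented for illustration; real grades come from
# completing the online CAT for each provider.
categories = ["Security", "Life Cycle", "Performance", "Access",
              "Data Center", "Certification", "Support", "Interoperability"]

required   = dict(zip(categories, [3, 2, 2, 3, 3, 2, 2, 2]))  # project needs
provider_a = dict(zip(categories, [3, 2, 3, 4, 3, 2, 3, 2]))
provider_b = dict(zip(categories, [2, 3, 2, 3, 4, 3, 2, 3]))

for name, grades in [("Provider A", provider_a), ("Provider B", provider_b)]:
    gaps = [c for c in categories if grades[c] < required[c]]
    if gaps:
        print("%s: gaps in %s" % (name, ", ".join(gaps)))
    else:
        print("%s: meets all required service levels" % name)
```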








Leverhawk article: The Real Story Behind Cloud and Financial Transparency

There's a good article over at Leverhawk by Scott Bils, "The Real Story Behind Cloud and Financial Transparency", on corporate IT cost modelling and baselining versus cloud IT cost transparency.

It makes the point that corporate IT needs to expose IT costs down to the main and optional IT service elements for business IT, as we now have with public cloud services, and also that corporate IT needs to switch to the periodic, OPEX-based cost model that cloud providers support, rather than yearly CAPEX towards the business units they serve.

But in addition, the author makes the point that greater corporate IT cost transparency and a move to an OPEX cost model miss the bigger point, namely that "the more significant impact that public cloud services have is that for the first time they expose corporate IT to the forces of market pricing."

This is of course a valid point, as it's now quite easy for internal business units to compare internal IT costs with comparable, and increasingly better, public cloud services, both in the areas of IT infrastructure (IaaS) and application delivery (SaaS). For instance (a rough comparison sketch follows the list):


  • Monthly cost of corp IT storage vs public cloud storage (GB/month with different SLAs)
  • Monthly cost of server hours/month versus cloud VMs
  • Monthly cost of apps and app suites like MS Office, SAP, Oracle and MS Sharepoint vs cloud-based equivalents
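
As a rough sketch of the first comparison, here is a back-of-the-envelope Python calculation; all figures are invented placeholders, not real quotes or list prices.

```python
# All figures are invented placeholders, not real quotes or list prices.
corp_usd_gb_month  = 0.35   # internal, fully loaded storage cost per GB/month
cloud_usd_gb_month = 0.09   # public cloud list price for a comparable SLA
volume_gb = 10000           # business unit data volume

corp_monthly  = corp_usd_gb_month * volume_gb
cloud_monthly = cloud_usd_gb_month * volume_gb
print("Internal: $%.0f/month, cloud: $%.0f/month, delta: $%.0f/month" %
      (corp_monthly, cloud_monthly, corp_monthly - cloud_monthly))
```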

Initially it might seem that greater cost transparency and a move to an OPEX-based cost model for corporate IT, as well as baselining and benchmarking against public cloud pricing, are both a good thing and a key driver for corporate IT cost efficiency and relevance, but there's also another angle here.

Public cloud pricing for application services in the SaaS domain also exposes and threatens the software licensing + yearly support model that most software companies have relied on for the last 10-20 years. Besides cutting out many of the middlemen that currently live off the CD licensing model for software, doing on-site installs, support and integrations, a cloud-based delivery model also exposes the software vendors to the same pricing transparency and benchmarking opportunities that corporate IT now has to live with.





10.08.2013

Towards a cloud IT utility marketplace

As noted in an earlier post, cloud IT infrastructure from different providers is rapidly being commoditized and made comparable through public pricing and T&Cs: VMs are rapidly approaching like-for-like pricing, performance and specifications, and more or less the same holds for basic file storage and IP networking.

As an aside, the term commodity is often thrown around in IT when a given IT service or piece of hardware has been on the market for some time and is available from many providers, but as outlined by Wikipedia, "The more specific meaning of the term commodity is applied to goods only. It is used to describe a class of goods for which there is demand, but which is supplied without qualitative differentiation across a market".

So even if cloud IT services and IaaS aren't a "class of goods" in their own right, we are certainly seeing IT service delivery demand that can be "supplied without qualitative differentiation across a market", albeit with some, over time not critical, differences in service and operational levels (i.e. SLAs and OLAs).

For cloud IaaS compute or processing, a growing trend seems to be the development of IaaS marketplaces for CPU core or VM resources. This has come about in many ways, but some of the drivers, to me, seem to be the following (a toy aggregation sketch follows the list):

  • Public IaaS pricing and T&Cs coupled with partner or re-seller programs enabled the birth of cloud aggregators, which could aggregate and offer cloud IaaS services across many IaaS providers and hosters, using different providers for different use cases, regions or application sets.  Customers were still aware of the underlying IaaS provider for their apps and had to go with that provider's service provisioning workflow and set-up.
  • Another set of companies - IT monitoring, TCO/pricing, security and compliance specialists like TÜV Rheinland - developed their IT service catalogs to include cost baselines for basic IT components like CPU compute, storage and networking.  These IT product catalogs with baseline pricing benchmarks can also be applied to cloud-based IT delivery.
  • Mature companies and cloud users adopted a multi-cloud service delivery strategy to avoid single-vendor lock-in.
  • IT vendors and cloud specialists developed proper cloud aggregation or marketplace service delivery platforms that make the hoster or cloud provider, as well as the VM production platform and site, transparent towards the cloud buyer.
  • Players from the broker, stock and derivatives market side realized cloud IT could be viewed as separate, atomized, billable utility units and are teaming up with cloud platform providers in the aggregation or marketplace area.
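
To make the marketplace idea concrete, here is a toy Python sketch of a marketplace ranking like-for-like VM offers from several providers; the offers, providers and prices are invented placeholders.

```python
# Offers, providers and prices are invented placeholders.
offers = [
    {"provider": "Hoster A", "region": "eu-west", "vcpu": 2, "ram_gb": 4, "usd_hour": 0.12},
    {"provider": "Hoster B", "region": "eu-west", "vcpu": 2, "ram_gb": 4, "usd_hour": 0.10},
    {"provider": "Hoster C", "region": "apac",    "vcpu": 2, "ram_gb": 4, "usd_hour": 0.11},
]

# The buyer asks for a standard 2 vCPU / 4 GB unit in eu-west; the
# marketplace ranks matching offers by price, keeping the underlying
# provider transparent to the buyer.
matches = sorted((o for o in offers
                  if o["region"] == "eu-west" and o["vcpu"] == 2 and o["ram_gb"] == 4),
                 key=lambda o: o["usd_hour"])
print("Best offer: %(provider)s at $%(usd_hour).2f/hour" % matches[0])
```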


A special note can be made for Amazon and their dominating AWS cloud offering.  No doubt many of the cloud marketplace initiatives and offerings are established as a way to compete with Amazon AWS on pricing and cloud IT feature set, as currently nobody seems able to match the "default cloud provider" position of Amazon AWS (besides maybe Google).

Some early cloud IaaS "open marketplace" contenders are:
  • ComputeNext: The ComputeNext platform makes it possible to "compare cloud services and find the best cloud provider to service a given geography, while factoring in price, uptime, and other performance factors such as provisioning consistency, speed, and machine reliability", working with a range of local hosters and cloud providers.
  • Deutsche Börse Cloud Exchange (DBCE)/Zimory: DBCE is using the IaaS cloud management software of Zimory for their vendor-neutral marketplace for compute and storage capacity in 2014, targeting corporate and medium-to-large enterprise companies, as well as organizations from the public sector, aiming to make it as easy to trade IaaS capacity as it is to trade energy or stocks.
  • CME Group/6fusion Marketplace: CME Group (Chicago Mercantile Exchange Group), one of the world's largest derivatives exchanges, has partnered with 6fusion, a company that specializes in the economic measurement and standardisation of IT infrastructure, to develop a spot and over-the-counter marketplace for trading computing resources and financial contracts.  This is a good example of a trading company using its electronic trading platform together with a cloud aggregation or marketplace platform.

No doubt this is a developing market, and there are a range of players looking at early positioning and options, but for the cloud IT market and its development it's an encouraging sign that commercial market exchanges and börses are looking to engage their trading platforms in this market area, leading to increased standardization and hopefully increased supplier choice for cloud buyers.

10.07.2013

Twitter IPO and Twitter message streams for market analytics, part 2: Twitter streams for TV analytics

An example of using Twitter message feeds for market and consumer analytics will be highlighted later today when, according to a WSJ article, Nielsen releases their first ranking of the TV programs with the greatest reach on Twitter, providing details on the size of Twitter audiences for TV shows and the number of tweets about them.

This shows some of the potential of using Twitter message streams for market analytics, where the Twitter Amplify program for content providers also looks to bring in additional Twitter and TV-shows "integrations" over time.

As pointed out in the article, there are barriers to overcome before general, large-scale use of Twitter for TV analytics: the audience is still too small and segmented, there is skew between "ordinary" Nielsen TV ratings and Twitter mentions, integration with other market data analytics tools is lacking, etc. But in due time Twitter TV mentions and trends will form part of TV programming analytics and scheduling, as well as market research.

Short note: A Guide to ‘Going Google’ for CIO’s and Enterprise Architects

The Cloud Best Practices Network has a nice paper or guide for ‘Going Google’ for CIO’s and Enterprise Architects that provides an overview of the Google Cloud suite of products, ranging from Google Apps through Compute IaaS, Big Data services, storage and more.

10.06.2013

Twitter IPO and Twitter message streams for market analytics: The real potential of Twitter for advertisers

Twitter, which filed for a public IPO with the US Securities and Exchange Commission back in June - a filing that didn't become public until September - hopes to raise $1 billion with the IPO. The SEC S-1 documents made public state that Twitter had $253.6 million in revenue for the first half of 2013, with the net loss up 41 percent to $69.3 million, and some 215 million monthly active users.

The current and future revenue is said to come from three main ad-based sources:


  1. Promoted tweets that appear in users' message feeds
  2. Promoted accounts (i.e. brands, companies, events etc.) that appear on Twitter landing pages
  3. Promoted trends, where advertisers can buy their way into trending lists, themes and developments
The Twitter "ad media universe" is somewhat limited and the advertising tools available for advertisers might seem limited as well.  But with the Twitter IPO being a confidential or "secret" IPO available for sub $1 billion companies (in revenue), all the available or future ad channels for Twitter hasn't been highlighted. To me there's one obvious one that is clearly missing (though I haven't read the full S-1 documents), and that is the big data mining and analytics opportunities with Twitter tweets and message flows for consumer tracking, audience sentiment tracking and overall market trends.

The Twitter streaming APIs give developers - or Twitter log collectors, applications or apps - "low latency access to Twitter's global stream of Tweet data", either collecting all Twitter messages in a continuous stream, single-user message streams or site streams.

With more than 500 million tweets a day passing through Twitter, this gives market analysts, advertisers, companies and of course Twitter itself a unique view into:
  • Trending themes and developments, i.e. new phenomena of all sorts, Internet memes, things going viral, movie or TV-show releases, new consumer brands, pop stars, new albums, books etc
  • Long-term development and standing of brands, products, product models, consumer sentiments
  • Developing news and events
  • National and regional break-downs of trends, developments and long-term standing 
  • Cross-linked with mobile or PC access, client type, time of day, frequency of tweets or mentions etc

Utilizing Twitter message streams for near real-time market analysis and consumer insight should be a no-brainer for advertisers, just as Netflix used their own data analytics to create House of Cards and other TV shows - how long before advertisers catch on?
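
As a small illustration of what such collection could look like, here is a minimal Python sketch counting brand or show mentions from the streaming API, assuming the tweepy library's StreamListener interface; the credentials and track term are placeholders.

```python
# Assumes the tweepy library and its StreamListener interface (tweepy
# 2.x/3.x style); credentials and the track term are placeholders.
import tweepy

class MentionCounter(tweepy.StreamListener):
    def __init__(self):
        super(MentionCounter, self).__init__()
        self.count = 0

    def on_status(self, status):
        # Each matching tweet arrives here in near real time.
        self.count += 1
        if self.count % 100 == 0:
            print("%d mentions so far" % self.count)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = tweepy.Stream(auth, MentionCounter())
stream.filter(track=["#SomeTVShow"])  # placeholder brand or show hashtag
```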

9.29.2013

Cloud IaaS service requirements and benchmarks

If you are in the market for a cloud infrastructure solution and have done some initial cost benchmarking, using for example the CloudVertical cost cheat sheet for some of the larger public cloud providers in the IaaS space, it might be useful to do a more functional check of the cloud providers, matching their offerings against your own cloud IaaS service requirements or, for instance, the draft cloud IaaS checklist provided here.

This published version is an early draft that will be expanded in the coming weeks into a more full-fledged cloud IaaS checklist or service requirements specification.  At the moment the checklist contains the rudimentary service elements of the service areas in the ToC below, and can be used to score two or more IaaS service providers against each other.

It can also be used together with, for example, the Cloud Assessment Tool from the Asia Cloud Computing Association (ACCA) to give an overall picture of cloud provider capabilities and ranking for a set of business goals or defined deliverables.


Cloud IaaS check list - ToC


  1. Key criteria
  2. Use cases
  3. Public, private, hybrid
  4. IT Operations support
  5. Reliability and availability
  6. Performance SLAs
  7. DC coverage
  8. VM configurations
  9. Storage
  10. Networking
  11. Security
  12. Monitoring
  13. Service control - service panel
  14. Billing
  15. Customer support
To be continued!
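
In the meantime, here is a rough Python sketch of how a completed checklist can be turned into a weighted side-by-side score for two providers; the weights and the 0-5 scores are invented for illustration.

```python
# Weights and 0-5 scores are invented for illustration; fill in real
# values from the completed checklist for each provider.
weights = {"Reliability and availability": 5, "Performance SLAs": 4,
           "Security": 5, "Billing": 2, "Customer support": 3}

scores = {
    "Provider A": {"Reliability and availability": 4, "Performance SLAs": 3,
                   "Security": 4, "Billing": 5, "Customer support": 3},
    "Provider B": {"Reliability and availability": 5, "Performance SLAs": 4,
                   "Security": 3, "Billing": 3, "Customer support": 4},
}

for provider in sorted(scores):
    total = sum(weights[area] * scores[provider][area] for area in weights)
    print("%s: weighted score %d" % (provider, total))
```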

9.26.2013

Sorting out public cloud costs for budgeting

Sorting out all the cost elements involved in using a public cloud service from Amazon AWS, MS Azure or, for instance, Rackspace can be quite tedious, even though the main service elements are clearly labeled. There are usually many service elements and options that come into play, and cloud infra usage for a given service might vary from day to day or from one instance to another.

CloudVertical has developed a pretty comprehensive public cloud cost cheat sheet for the cloud services from

  • Amazon AWS - per AWS service region, broken down per hour, per day, week, month and so on
  • Rackspace - per availability region, and same as above
  • MS Windows Azure - same as above
  • Google compute
  • HP Cloud

for all the main IaaS service categories (i.e. compute/VMs, storage and networking), and service unit costs can be benchmarked against each other as well (see screenshot below).

Most of these providers change their VM and storage prices and configurations almost daily, but CloudVertical seems to follow updates and changes pretty thoroughly.
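
As a quick illustration of the kind of arithmetic the cheat sheet automates, here is a small Python sketch turning per-hour VM list prices into a monthly cost per provider; the rates are invented placeholders, not the actual cheat-sheet figures.

```python
# Hourly rates below are invented placeholders, not the actual
# cheat-sheet figures.
hourly_rates = {"Provider A": 0.120, "Provider B": 0.115, "Provider C": 0.130}
hours_per_month = 24 * 30  # a common billing approximation

for provider, rate in sorted(hourly_rates.items(), key=lambda kv: kv[1]):
    print("%s: $%.2f per VM per month" % (provider, rate * hours_per_month))
```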




9.23.2013

Cloud networking

One of the more promising areas of cloud infrastructure, besides the range of compute and storage services available, is cloud-based networking.  Or, more correctly, moving CPE-based network functionality onto VMs, or a virtualized environment, and into the cloud.

For years a number of companies have been utilizing what's been called net-centric service delivery for managed network services like IP proxies, web caches and Internet firewalls, often in dedicated set-ups tailored for a single customer.

Going from there and moving networking and traffic management functionality like IP proxy and reverse proxy, caching, load balancing, firewalling and application acceleration onto cloud-based VMs promises to relieve companies of dedicated installations of these specialised functions at smaller or regional business units; such office locations can be equipped with basic IP routers only, which can easily be managed remotely.

Another interesting twist on this approach is that on-prem VMs with this networking functionality, or private clouds with the same, can be paired with equivalent installations and configurations in Amazon AWS, MS Azure or HP Cloud. This means one can overlay advanced networking functionality and control on top of the basic, best-effort Internet and in between clouds, creating a virtualized networking environment that can be tailored, stretched and adapted to time-of-day, weekly or seasonal fluctuations in workloads.

Some examples of networking functionality being offered cloud style, here using Amazon AWS Marketplace listings (a launch sketch follows the list):


  • Check Point Virtual Appliance for AWS - R75.40: "Check Point Virtual Appliance for Amazon Web Services delivers a security cloud computing platform that enables customers to deploy flexible multilayer security in the cloud. It extends the latest security technology to Amazon's cloud, protects assets in the cloud from attacks, and enable security connectivity."
  • Riverbed Stingray Traffic Manager 1000L (10 Mbps 1000 SSL TPS) with AppFirewall: "Stingray traffic management solutions provide complete control over user traffic, allowing administrators to accelerate, optimize, and secure key business applications. Now it's possible to deliver these services more quickly and ensure the best possible performance across any deployment platform."
  • NetScaler VPX Standard Edition: "Citrix NetScaler is an all-in-one web application delivery controller that makes applications run five times better, reduces web application ownership costs, optimizes the user experience, and makes sure that applications are always available by using advanced L4-7 load balancing and traffic management; proven application acceleration such as HTTP compression and caching..."
  • NGINX Plus - Ubuntu AMI: "NGINX is a high performance, flexible, scalable, secure reverse proxy, load balancer, edge cache and origin server. NGINX features include: reverse proxy for HTTP, SMTP, IMAP and POP3"
  • Wowza Media Server 3: "Wowza Media Server® 3 is the high-performance, high-value infrastructure software for unified media streaming to virtually any screen"
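
As a rough sketch of how one of these listings could be launched programmatically, here is a minimal example using the boto 2.x EC2 API; the AMI ID, key pair and security group are placeholders, and the marketplace listing must be subscribed to first.

```python
# Assumes the boto 2.x EC2 API; the AMI ID, key pair and security group
# are placeholders, and the marketplace listing must be subscribed to
# before launch.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")  # credentials from boto config/env

reservation = conn.run_instances(
    "ami-00000000",               # placeholder: the appliance AMI from the listing
    instance_type="m1.medium",
    key_name="my-keypair",
    security_groups=["appliance-sg"])

print("Launched instance: %s" % reservation.instances[0].id)
```
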
I'll be looking more closely into cloud-based networking in a later post, including software defined networking (SDN) options.

9.19.2013

Netflix as a model for network operators and telecoms

(Following a conversation with Snorre Corneliussen)

Netflix has, in just the last few years, pulled off two industry transformations that are quite remarkable:
  1. Transforming their legacy, mail-order DVD rental business into an online, video-on-demand rental business
  2. Changing from an online content aggregator and distributor, or a software house if you will, into a content producer and provider of original TV programming.

Doing 1 inside a year or two, going from zero to 36 million subscribers, is remarkable.  Doing 2 as well, establishing an Emmy-level producer position inside a year, even more so.

The transformation into the world's leading online OTT TV distributor, coming from a DVD rental business, holds a number of lessons for network operators and telecoms in its own right, but the main focus of this post is how the Emmy-level producer role was achieved by, among other things, outsourcing all TV-series production and technical work.

A number of studies and articles detail the whys and hows of 1 and 2, although it might be said that it's too early to make the call on whether 1 is sustainable and profitable in the long run, and way too early to judge whether 2 will build subscription base and revenue for Netflix, or if 2 is mostly a tactical tool against rising content rights fees from the "legacy industry" for the bulk of the Netflix content portfolio (Netflix had to pay $1.355 billion (!) in licensing costs in the first quarter of this year alone, source: Reuters).

And it's interesting how Netflix has used big data analytics to create House of Cards and the other Netflix original TV programming aimed at specific audiences and user demographics, as opposed to the industry approach of doing 20 shows or movies per season based on some key themes perceived by marketeers and cancelling 16 of them after 2 weeks (or expecting 16 out of 20 movies to bomb at the box office) - and you might have a hit!

But let's look closer to see if the Netflix content producer role and TV-production outsourcing approach might say anything about the future role or possibilities of network operators and telecoms.  I think it does.

Put very simply, Netflix, currently mostly in an aggregation and distribution role for content provisioning towards the consumer market, has one of the largest databases or big data mining spaces for online, multi-terminal, multi-language, international content and TV consumption across all key user demographics and age groups.  All Netflix user portals and terminals produce "tons" of data sets that can be analyzed, cross-referenced and sampled to extract trends, usage patterns and viewing habits for current releases, archive and long-tail content, down to viewing habits per user and terminal for each one of those 36 million subscribers.  Or per ZIP zone, city, region, country etc.

A small but good illustration of Netflix data mining can be found in this NY Times article, cross-linking Netflix movie rentals with US ZIP codes. That in turn can be used for regional campaigns, promotions or bundles.
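
As a toy version of that kind of cross-linking, here is a minimal Python/pandas sketch joining viewing events with subscriber ZIP codes; the data is synthetic and for illustration only.

```python
# Synthetic viewing and subscriber data for illustration only.
import pandas as pd

views = pd.DataFrame({
    "subscriber_id": [1, 2, 2, 3, 4],
    "title": ["House of Cards", "House of Cards", "Arrested Development",
              "House of Cards", "Arrested Development"]})
subscribers = pd.DataFrame({
    "subscriber_id": [1, 2, 3, 4],
    "zip": ["10001", "10001", "94105", "94105"]})

# Join viewing events with subscriber ZIP codes, then count views per
# title per ZIP zone.
regional = (views.merge(subscribers, on="subscriber_id")
                 .groupby(["zip", "title"]).size()
                 .rename("views").reset_index())
print(regional)
```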

This in turn was used as key input into the creation and positioning of House of Cards and the other Netflix TV series towards specific demographics and subscriber trends.  None of these TV series were actually written, scripted or produced by Netflix themselves; they were made according to high-level business goals, key market or user themes and TV role models or archetypes if you like. All TV-series technical production was outsourced to independent production companies, with directors and actors hired for these one-off, unique production runs. But the TV shows were made according to Netflix marketing and positioning requirements, and their overall content portfolio plan.

If we look at today's network operators and telecoms as "legacy style TV studios and distributors" - very much doing their own TV production, i.e. network builds and service marketing, based on outdated marketing data and with in-house personnel - the Netflix model points towards a future where a network operator focuses on, and retains the ability to do, proper big data collection and analytics: understanding the customer base for the market in question and developing business and market requirements for new services and marketing. The technical development and build of networks, as well as specific service sets, increasingly OTT-based, is outsourced to specialist companies and builders who excel at that specific function across the industry.

Large-scale outsourcing of basic and core network operations and maintenance is already widespread in the industry, and certain to accelerate in the coming years to maintain margins against industry-wide reductions in top-line revenues and ARPU.  The next step is outsourcing the actual network and service development work as well, leaving telcos to focus on customer service and relationship expansion, market bundles using 3rd party (OTT-based) services and segmented campaigns.  I.e. sales and marketing, leaving network and back-office build to the specialists.