About this

Writings so far

10.30.2013

Cloud platforms for mobile services development

Now this headline and its subject could be the title and scope of a whole book (and there are a number of them available), but I wanted to get into the subject with some initial posts on the matter.  I also wanted to do some mobile "drag & drop" app development myself to try out a range of new app dev tools for non-programmers (see links later on).

To start with, Gartner predicts, with its usual assurance, that by 2016, "40 Percent of Mobile Application Development Projects Will Leverage Cloud Mobile Back-End Services".  And it doesn't stop there; this will be "causing development leaders to lose control of the pace and path of cloud adoption within their enterprises, predicts Gartner, Inc."

Mobile developers and apps using cloud-based platforms for their service management, processing and storage doesn't mean losing control per se, in my opinion, just as on-prem or in-house development and deployment platforms aren't more secure or insecure than cloud-based ones. It boils down to security policy and culture, and how one actually adheres to them. But using a cloud-based service delivery platform for mobile apps and services seems like a no-brainer if the app in question is Internet-facing or supposed to be used by a public audience, and not just internally in an enterprise.

Using a cloud-based development and service delivery platform, i.e. a Platform as a Service (PaaS) kind of cloud platform with a wide range of support for ready-to-go development environments, databases and tools, a step above basic IaaS platforms, gives a range of benefits and options, including:

  • A uniform development, test, piloting and launch environment and platform - one doesn't need to move code, databases, web servers and other service delivery components from a closed, limited-capacity dev environment to a more scalable test & pilot environment, and then onto a production set-up that supports the number of users and the traffic that might come in at peak
  • In other words, a cloud-based dev, test and production environment for mobile services gives built-in load balancing, scalability and capacity on demand that in-house platforms or IT departments typically struggle with
  • Most cloud platforms also have built-in functionality for server-side processing, off-loading clients or apps from this, as well as caching and static content serving
  • And most cloud service platforms have built-in security provisions, like DDoS protection and firewalling, as well as authentication services.
If one is using development platforms on Google App Engine or Mobile Backend Starter, MS Azure or Amazon AWS, one also typically gets (see the sketch after this list):
  • Access to industry-norm development environments like LAMP, Ruby or Node.js
  • Authentication of users and services against the vendor's shop, messaging or document store services
  • Integration of log and analytics tools for mobile apps and services, for instance Google Analytics for Mobile
  • Easier access to public app stores like Google Play
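
As a concrete illustration of the "ready-to-go environment" point, here is a minimal sketch of a server-side endpoint on Google App Engine's Python runtime, using the webapp2 framework that App Engine bundles. The /api/status route and its payload are made up for illustration; the takeaway is that routing, load balancing and scaling come from the platform, not from servers you manage yourself.

    # app.py - minimal App Engine (Python 2.7 runtime) backend sketch.
    # The /api/status route and its JSON payload are hypothetical.
    import json
    import webapp2

    class StatusHandler(webapp2.RequestHandler):
        def get(self):
            # App Engine handles load balancing and scaling; the app
            # only implements the request logic.
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps({'status': 'ok'}))

    app = webapp2.WSGIApplication([('/api/status', StatusHandler)])
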
With smartphones and tablets now becoming the clients of choice for most users, there's a race on between the dominant and wanna-be cloud service providers to be seen as the most attractive platform for mobile developers, recently highlighted by the Google Mobile Backend Starter launch earlier in October.  Here's a list of some cloud-based mobile development platforms and services:

Now, about the list of mobile "drag & drop" app development tools - new post coming up shortly!

Erik Jensen, 30.10.2013

10.29.2013

Great OpenStack overview

For anyone looking for an OpenStack introduction and an overview of the Havana release, Edgar Magana has a great slideshare at
http://www.slideshare.net/emaganap/open-stack-overview-meetups-oct-2013

10.28.2013

If you are not paying for it, you're the "big data" product

Andrew Lewis made a great comment on user-driven content back in 2010 in a metafilter.com exchange that has since become an Internet meme of sorts about the range of free Internet services offered by Facebook, Google, MS, Yahoo and lots of others: "If you are not paying for it, you're not the customer; you're the product being sold".

Great one-liner, but what does it mean? Well, it led to Mr Lewis, of course, starting his own online shop offering t-shirts, coffee mugs and aprons with that same slogan, going almost meta on his own meta.

But consider free or "free" Internet services like Facebook, Google, Yahoo and MS online services, which run on some of the largest server and application platforms ever developed and deployed, at significant platform and man-hour cost one must assume: why are they offered for free?

One obvious answer is advertising and the development of personalised or context-driven online advertising on the Internet and mobile devices, be it in the form of banner ads, splash ads etc, or product content adapted to the user's location or ZIP code, time of day, device type and earlier browsing history.  This gives advertisers much better ad targeting than the usual shotgun advertising of "manual" or analog media, and it's much easier to track ad hits, view times and conversion rates, even in real time, than with traditional media. So giving away IT services like storage, communication services like email and IP messaging, or content, is a way to attract users, get them registered, build a user base and attract more advertising dollars.

Another angle is collecting and aggregating user data per site, device type, ZIP code or region and using this aggregated user and usage data for business analytics, trend watching, and benchmarking new service offerings and competitors' services.  That then feeds into the continuous re-work and make-over that most large IT companies do all the time.  Analyzing customers and customer behaviour should lead to better service offerings.  And with that we are over in big data and analytics territory.

And one of the most fascinating stories of using big data analytics to understand customer behaviour and wants comes from Netflix and how the House of Cards TV series got created, partly at least, if we are to believe the backgrounder here.  Netflix was very open and explicit about its plans to exploit user data logging and its big data capabilities to influence its programming choices well before the House of Cards TV series was aired. Netflix has detailed viewer logs for every market it is in, broken down by content type, country, ZIP code, time of day, device type and more.  Knowledge of Netflix subscribers' viewing preferences pointed towards a political TV drama with a number of defined attributes, among them Kevin Spacey in the lead, that would ensure high engagement levels and viewership through the Netflix recommendation engine, which is claimed to influence the viewing choices of 75 percent of Netflix subscribers.  Big data logging and recommendation engines are a match seemingly made in heaven.

Other reasons for giving away IT and communications services for free are simply to stay competitive and to do service bundling and/or upgrades.  If some guy is selling 2 GB of storage for $5 per month - a price that doesn't cover cost anyway, and nobody actually uses 2 GB - why not give it away and attract more users, and then later on try to move them to more premium, paid service offers, presumably with better performance and higher service levels?

And that has been the approach for introducing Internet services or offers for the last 20 years or so.  The paid upgrade part hasn't worked all that well - best-effort free services worked too well in most cases - but the free services generated tons of user and usage data anyway, which at least kept the marketers and advertisers happy.

Erik Jensen, 28.10.2013

10.27.2013

The cloud, the NSA and some 450.000 private contractors

A recent article at forbes.com refers to an OVUM report about the increasing use of cloud IT services in financial services, due to "improvements in cloud security and a wider variety of applications, investment in cloud, by both the buy side and the sell side...".  Cloud-based IT services, on the infrastructure, platform/development and as-a-service side alike, sound like a natural step, with the on-demand and flexible nature of cloud IT service provisioning and workload management fitting the cyclic or periodic needs of the finance and banking industry very well. Especially as most customer interaction with banking and financial services will move to the Internet or to mobile terminals.

There are a range of operational security issues with cloud IT services, as well as exposure to Internet denial-of-service attacks and account break-ins, for most companies on the Internet.  But, as the article notes, there is also the aspect of the NSA listening in on or surveilling the cloud service platforms being utilized, following transactions and account movements for US-based cloud services (and most others, probably).

That is the subject for an article or an entire book in itself, but I wanted to touch upon another aspect of most US IT companies, main telcos and ISPs as well as cloud providers being part of various NSA programs (PRISM, XKEYSCORE etc.), namely the extensive use of private contractors within the NSA, like Edward Snowden himself, to perform many of the NSA's day-to-day operations for the programs in question.

According to many public articles (1, 2 and more), based on information publicized by the Office of the Director of National Intelligence this year, 1.2 million Americans hold top-secret clearances, and 38% of those clearances are held by private contractors. I.e. close to 500.000 contractors have top-secret clearance like Mr. Snowden.

The head of the NSA, Gen. Keith Alexander, has gone on record saying "reporters should be prevented from "selling" National Security Agency documents" (3). But the NSA was not aware of the document downloading Mr. Snowden had done before he made it public himself, which points to somewhat lackluster system logging, incident handling and security revision routines within the NSA, and there are reports of widespread use of NSA surveillance tools for private endeavours inside the agency. Isn't it then likely that, over the last years, some of these NSA contractors used NSA tools to spy on companies and extract information for personal use and financial gain (for instance early access to upcoming quarterly results or to upcoming acquisitions and mergers), or sold inside or critical information about one company to a competitor? Or alerted management at the company where they were employed about upcoming bids, performance reviews, competitors or management changes?

Social engineering has always been the easiest and cheapest way to get access to confidential information, and that specific kind of engineering is bound to have happened within the NSA and among some 450.000 private contractors as well.


EJ, 27.10.2013

10.22.2013

Cloud servers: Hypervised, virtual, becoming elastic

Besides clouds coming in different flavours (private, public, hybrid, as an infrastructure service, as an application delivery service), the basic cloud IT building block, the cloud virtual server or machine, also comes in many different flavours.  Or rather, it exhibits a great deal of elasticity, based on cloud servers now typically having multi-core CPUs and workload hypervisors that can span one or many CPU cores with different operating systems or OS images.

This has, of course, been a regular feature of mainframes for many years, and was brought into mainstream server computing, accessible for many more, with minicomputers like the DEC VAX series, UNIX-based workstations and, in the Nordics, Norsk Data SINTRAN-based computers for instance.  Looking at the Intel-based server architectures with MS Windows Server OS that overtook these, one initially had 1 CPU (with one CPU core) associated with one operating system, where the OS could multitask or shift jobs among databases and applications running on that one physical server. Thereafter Intel-based servers became multi-core, i.e. one server CPU having 2 or 4 processing cores, and the OS could work-shift or load balance more easily among the server CPU cores once the OS became fully multi-CPU capable.

Yet another "workload management" shift came when VMware introduced their first hypervisor in 1999 (2001 for servers), meaning that one workload monitor or janitor, introduced between the server CPU cores and the OS, could create virtual machines (i.e. VMs) and task-switch on the fly between different OS images and builds on the VMs of one physical server. The VMs on one server could have different numbers of CPU cores, memory sizes and hard disk volumes associated with them, as well as a mix of operating systems, images and configurations.  The VMs could also span CPU cores on many servers, leading to easier ways to do server load balancing and hot-cold or hot-hot fail-over configurations, and, not least to reduce IT TCO and man-hours, provided a way to migrate away from the previous 1 application or 1 database equals 1 physical server set-ups.

The introduction of workload and server virtualisation more or less paved the way for today's cloud VM servers and the move away from dedicated servers per application or per customer install.  Without the development and introduction of proper OS and workload hypervisors like VMware, Xen and KVM it wouldn't be possible to provision and multi-host customer servers and applications in a cost-effective way, and share available CPU cores and memory space among many customer workloads.
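
As a small illustration of what the hypervisor layer exposes, here is a sketch using the libvirt Python bindings (libvirt fronts KVM, Xen and others) to list the VMs on one physical host with their vCPU and memory allotments. The qemu:///system URI assumes a local KVM/QEMU host; treat this as a sketch, not a definitive recipe.

    # List the VMs ("domains") on one hypervisor host via libvirt.
    # Assumes the libvirt Python bindings and a local KVM/QEMU host.
    import libvirt

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        # info() returns [state, max mem (KiB), mem (KiB), vCPUs, CPU time (ns)]
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print('%s: %d vCPUs, %d MiB RAM' % (dom.name(), vcpus, mem / 1024))
    conn.close()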

After a rather lengthy historical background to cloud virtual servers or VMs, what makes up a cloud VM today?  It's not a fixed property for sure, as multi-core, hypervised server farms with enough memory can be configured and provisioned in a lot of ways.

Currently, the main service offerings for cloud VMs among cloud providers seem to be the following (a provisioning sketch follows the list):

  1. Fixed size, fixed price VMs: The standard fare of most cloud providers, offering fixed VM configurations with a set number of VM cores (typically 2, 4, 8 or 12 cores), a fixed amount of VM RAM and disk space, at a fixed monthly cost.
  2. Building on this, some cloud providers also support different ways of doing on-the-fly VM scaling, i.e. adding more CPU cores, memory or disk space for a certain time if certain traffic or capacity thresholds are met, or being able to load balance between VMs on different servers.
  3. Smart servers, i.e. dedicated, single-customer servers with a smallish hypervisor, giving the benefits of hypervised and virtualized workload management, but on a dedicated server for increased workload throughput or high-security environments.
  4. Cloud VMs can come with different service and availability levels, for instance best-effort (shared, best-effort throughput), reserved, protected or guaranteed VM capacity, and 99.5% towards 99.9999% availability.
  5. Increasingly, cloud VMs are offered in CPU core pools, where the customer signs up for a pool of CPU cores, for instance 8, 16, 32 or 48, and a given pool of VM RAM and hard disk space, and the customer can configure a number of VMs with different CPU core counts from this pool. Cloud VMs in this setting are typically billed by utilization hours or minutes per month, and can lead to some very cost-effective server or VM hours per month if managed properly and if one knows the cyclic workload that the pool VMs are expected to handle.
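
As promised above, here is a rough sketch of what offering 1 looks like from the buyer's side of the API, using Apache Libcloud, a vendor-neutral Python library with drivers for most of the providers mentioned in this post. The credentials are placeholders, and the choice of the first size and image is arbitrary.

    # Provision a fixed-size cloud VM through Apache Libcloud.
    # Credentials and the size/image choices are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.RACKSPACE)
    conn = Driver('my_username', 'my_api_key')

    sizes = conn.list_sizes()    # the provider's fixed VM configurations
    images = conn.list_images()  # the available OS images

    # Pick the smallest flavour and an image; billing is per VM hour.
    node = conn.create_node(name='demo-vm', size=sizes[0], image=images[0])
    print('%s is %s' % (node.name, node.state))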


With this basic overview of cloud VMs, I'll be looking at the different, or not so different, cloud VM offerings from Amazon AWS, Rackspace, MS Windows Azure, Softlayer and others in an upcoming blog post.

EJ, 22.10.2013

10.17.2013

Private cloud - in so many ways

Following my post about the 5-3-3 of cloud computing, I've spent some more time on the various ways one can build, manage and operate a private cloud solution.

Firstly, there are a number of definitions of private cloud, for instance:

  1. Wikipedia: "Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally"
  2. NIST: "The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises."
  3. Microsoft: "Private cloud is the implementation of cloud services on resources that are dedicated to your organization, whether they exist on-premises or off-premises. ... "
  4. Webopedia: "Private cloud is the phrase used to describe a cloud computing platform that is implemented within the corporate firewall, under the control of the IT department."
  5. Gartner: "Private cloud computing is a form of cloud computing that is used by only one organization, or that ensures that an organization is completely isolated from others".
Looking at this, there seems to be general acknowledgment that a private cloud solution needs to be or can be:
  • Provisioned for exclusive use by a single business organisation (that can, of course, have many business units)
  • Cloud resources or infrastructure are dedicated to the business organisation - or at least "completely isolated from others", i.e. a private cloud can run on shared infrastructure as long as there is complete resource, ID, usage, logging and management isolation between different business organisations
  • Hosted and managed internally or by 3rd party on internal or external DC or service platform
  • Doesn't need to be "inside the firewall" or on internal DC

Overall, resource control and service delivery isolation seem to be the key criteria, giving the appearance of "dedicated infrastructure and delivery", with internal vs 3rd party management and delivery, and internal vs 3rd party DC, taking a back seat. This in turn means, at least on paper, that reserved-capacity VMs on a public cloud can be used to create private cloud solutions, but given the SLAs even for reserved VMs or instances, this option is still far from bare-metal or single-tenant virtualized servers when it comes to creating private clouds with proper resource control and isolation.

Also, all the parts that make up the private cloud solution have to have resource control and resource utilization isolation according to the business requirements for a private cloud, including storage, VM and DC networking, firewalls, load balancers, VPNs or Internet access etc.

This leads to the following aspects of how a private cloud solution can come about, no doubt in many cases crossing over into hybrid cloud delivery territory:

  • On-demand and self-service: Yes, must have
  • Ubiquitous network access: Yes, must have
  • Location transparent resource pooling: Yes, must have
  • Rapid elasticity: Yes, must have
  • Measured service with pay per use: Yes, must have
  • SaaS-delivery: Private clouds can be used for SaaS delivery
  • PaaS-delivery: Private clouds can be used for PaaS delivery
  • IaaS-delivery: Private clouds can be used for IaaS delivery
  • Dedicated resources: Can use dedicated IT resources, or shared resource with resource control and service delivery isolation
  • Dedicated hardware: For the organization, but private cloud doesn't necessarily require dedicated hardware
  • Shared hardware/servers/infra: Can be used if resource control and isolation
  • On-prem DC (company internal): Can be used
  • 3rd party DC: Can be used
  • Cloud-based: Can use public cloud provider or solution as long as resource control and isolation meets business requirements
  • Internet access: No general, public Internet access to the private cloud solution, but Internet access can be used for secure access and log-in to the private cloud solution
  • VPN access: Yes, gives greater resource utilisation control
  • Private link access: Yes, same as VPN access


10.15.2013

The 5-3-2 definition of cloud computing. Or is it 5-3-3?

One of the benefits of cloud computing or cloud IT services is that it got a fairly good definition from quite early on, as opposed to a lot of other IT trends, developments and phenomena (Big Data, UGC, augmented reality anyone?).

The main definitions for cloud IT are based on the following 3 main principles or frameworks:


  1. The "5 Essential Characteristics of Cloud Computing" by the National Institute of Standards and Technology (NIST)  in the “Definition of Cloud Computing” publication, namely
    • On-demand and self-service
    • Ubiquitous network access
    • Location transparent resource pooling
    • Rapid elasticity, and 
    • Measured service with pay per use.
  2. The three service stacks or the three service delivery methods for cloud IT, namely: 
    • Software as a Service (SaaS): Applications delivered as-a-service to end-users in the fashion of the 5 main characteristics listed above
    • Platform as a Service (PaaS): System, development and service platform delivered as-a-service, again based on key principles listed in 1, and
    • Infrastructure as a Service (IaaS): Basic or fundamental IT services like processing, storage and networking delivered and utilized as-a-service, without the need for local HW installation, management and involvement by the IT department.
  3. The deployment or usage model for cloud IT, namely
    • Private cloud: Access to and use of cloud IT services for private use only, i.e. for company-internal or private home use only. Consumed from a public cloud provider, or based on internal or 3rd party DCs that are transparent towards the user, and not Internet-facing or exposed for general, public access.
    • Public cloud: General, Internet-facing and exposed cloud-based IT services, accessible for anyone. A public IaaS or PaaS can be used to create a private cloud solution, for instance in the SaaS area.
    • Hybrid cloud: For most companies it's hard to come by an IT solution that is strictly 100% private, internal only, or 100% public with no personal login or access.  This in turn led to the development of hybrid cloud IT services, where IT services hosted locally or by a 3rd party are combined with public cloud services, and one can gain access to private cloud or on-prem IT services through a public cloud gateway.
      And this leads to the "old" 5-3-2 cloud definition morphing into the 5-3-3 definition of cloud computing.
This 5-3-2, or now 5-3-3, definition was nicely formulated by Yung Chou of the Microsoft US Developer and Platform Evangelism Team, and illustrated by Chou in the figure below.




Some of the listed principles and definitions merit a closer look and discussion, besides the development of the hybrid cloud delivery model.

In many cases, one-company private cloud services evolved from IT departments that had developed and were running highly efficient server virtualization solutions on prem or in 3rd party DCs, and were adding self-service, compute billing towards internal business units, on-demand scaling etc to their service delivery.  As noted in an earlier post ("Where does cloud-based IT services and delivery come from?"), it was then easy to move to a 3rd party cloud service, with most server hypervisors supporting transparent VM migration, load balancing or fail-over between on-prem VMs and VMs living with a cloud provider.
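
For the technically curious, transparent live migration is a first-class hypervisor API call. A minimal sketch with the libvirt Python bindings, assuming two KVM hosts with shared storage (host and domain names are made up):

    # Live-migrate a running VM between two KVM hosts via libvirt.
    # Host names and the domain name are hypothetical; shared storage
    # between the hosts is assumed.
    import libvirt

    src = libvirt.open('qemu+ssh://host-a/system')
    dst = libvirt.open('qemu+ssh://host-b/system')

    dom = src.lookupByName('web01')
    # VIR_MIGRATE_LIVE keeps the guest running while memory pages move.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)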

But in many cases we also have IT departments boasting that they have already done the cloud exercise when they have moved their server platform to a virtualization platform, gaining better manageability, quicker server deployment and service delivery as well as lower TCO/OPEX towards their users.  Looking at the NIST definition, many such IT shops are still missing self-service support for business users, lack true cost-based IT accounting and pay-per-use billing, as well as location-transparent resource pooling - many company IT platforms are single-location DCs, and there are built-in location or access restrictions.

Also, the true nature of private cloud services seems to be up for debate.  While a public cloud solution is accessible and open for "anyone" based on shared, self-service, pay-as-you-go infrastructure, is a private cloud service dedicated to an organization inside a private data center, or can it be on prem or hosted off premises by a 3rd party DC or hoster?  The answer is probably that all three ways can be used to create a private cloud solution.  Also, as noted above, a public cloud IaaS or PaaS service can in turn be used to provision a private PaaS or SaaS solution, when using reserved VM instances for example.



10.11.2013

Asia Cloud Computing Association Cloud Assessment Tool - benchmark table

Based on the ACCA CAT, I've put the performance categories and associated service levels into an easy-to-use table that gives a "one-page" overview of the CAT, which in turn can be used for CAT benchmarking and presentations.




10.09.2013

Asia Cloud Computing Association Cloud Assessment Tool

Earlier this year, the Asia Cloud Computing Association (ACCA) released a Cloud Assessment Tool (CAT) that can be used to benchmark and compare different cloud providers, geared mostly towards the operational performance side of cloud IT service delivery.

An online version is available at the www.asiacloud.org site.

Benchmarking and comparing any IT service delivery or performance is tricky, be it for corporate IT or cloud-based service delivery, but the ACCA CAT provides a valuable tool and framework to help companies evaluate not only cloud service providers and offerings, but also data center providers, hosters and online service providers in general.

Short overview of the CAT:

The CAT is organised into eight performance categories spread over four service tiers, with the performance categories being:

  • Security: Privacy, information security, regulatory
  • Life Cycle: Long-term support impacting customer business processes
  • Performance: Runtime behavior of deployed application software
  • Access: Connectivity between the end user and cloud service provider
  • Data Center: Data Center physical infrastructure
  • Certification: Degree of quality assurance to the customer
  • Support: Deployment and maintenance of applications
  • Interoperability: Cloud hypervisor interfaces to applications

The four service tiers are based on the availability or uptime classification system used by the Uptime Institute, which defines 4 data center models, referred to as Tiers I-IV. Tier I defines a data center with quite basic reliability, whereas Tier IV defines a data center having a highly redundant architecture.  Level 4 is not necessarily better than Levels 3, 2 or 1; it's more a matter of suitability to the IT task or service delivery use case at hand, and whether a particular level is needed by a business unit, an application or Internet service or not.

The service levels:
  • Level 1: Typical enterprise cloud solution
  • Level 2: Stringent application
  • Level 3: Telecommunications grade
  • Level 4: Beyond telecommunications grade

Using the online CAT, it's then quite easy to grade cloud service providers for a specific project or service delivery, and take some of the guesswork and uncertainty out of choosing a cloud service provider. The CAT can, or should, of course be used together with other assessment tools for choosing a cloud service provider, for instance in the areas of cost and pricing, APIs, helpdesk and support, references and functionality.
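
To make the grading idea concrete, here is a toy Python sketch of how CAT-style results could be tabulated and checked against a project's required levels. The providers, scores and requirements are all invented for illustration.

    # Toy tabulation of CAT-style grading: the service level (1-4)
    # achieved per performance category. All numbers are invented.
    CATEGORIES = ['Security', 'Life Cycle', 'Performance', 'Access',
                  'Data Center', 'Certification', 'Support', 'Interoperability']

    scores = {
        'Provider A': [3, 2, 3, 4, 3, 2, 3, 2],
        'Provider B': [2, 3, 2, 3, 4, 3, 2, 3],
    }
    required = [3, 2, 2, 3, 3, 2, 2, 2]  # levels this project needs

    for provider, levels in scores.items():
        gaps = [cat for cat, got, need in zip(CATEGORIES, levels, required)
                if got < need]
        print('%s: %s' % (provider, 'meets requirements' if not gaps
                          else 'falls short on ' + ', '.join(gaps)))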








Leverhawk article: The Real Story Behind Cloud and Financial Transparency

There's a good article over at Leverhawk by Scott Bils, "The Real Story Behind Cloud and Financial Transparency", on corporate IT cost modelling and baselining versus cloud IT cost transparency.

It makes the point that corporate IT needs to expose IT costs down to the main and optional IT service elements for business IT, as we now have with public cloud services, and also that corporate IT needs to switch to a periodic, OPEX-based cost model like the one cloud providers support, instead of yearly CAPEX charges towards the business units they serve.

But in addition, the author makes the point that greater corporate IT cost transparency and the move to an OPEX cost model miss the bigger point, and that "the more significant impact that public cloud services have is that for the first time they expose corporate IT to the forces of market pricing."

This is of course a valid point, as it's now quite easy for internal business units to compare internal IT costs with more or less equal, and increasingly better, public cloud services.  Both in the areas of IT infrastructure (IaaS) and application delivery (SaaS), for instance (a toy comparison follows the list):


  • Monthly cost of corp IT storage vs public cloud storage (GB/month with different SLAs)
  • Monthly cost of server hours/month versus cloud VMs
  • Monthly cost of apps and app suites like MS Office, SAP, Oracle and MS Sharepoint vs cloud-based equivalents
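
As promised, a back-of-the-envelope example of the first comparison; every figure below is invented for illustration, and a fair comparison would of course also have to normalize for SLAs, redundancy and access patterns.

    # Toy storage cost comparison, corp IT vs public cloud.
    # All prices are invented placeholders, in USD per GB-month.
    internal_cost_per_gb_month = 0.25  # fully loaded corp IT cost
    cloud_cost_per_gb_month = 0.09     # public cloud list price

    volume_gb = 5000
    internal = volume_gb * internal_cost_per_gb_month
    cloud = volume_gb * cloud_cost_per_gb_month
    print('internal: $%.0f/month, cloud: $%.0f/month, delta: $%.0f/month'
          % (internal, cloud, internal - cloud))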

Initially it might seem that greater cost transparency and a move to an OPEX-based cost model for corp IT, as well as baselining and benchmarking against public cloud pricing, are both a good thing and a key driver for corporate IT cost efficiency and staying relevant, but there's also another angle here.

Public cloud pricing for application services in the SaaS domain also exposes and threatens the software licensing + yearly support model that most software companies have relied on for the last 10-20 years.  Besides cutting out many middlemen that currently run on the CD licensing model for software, doing on-site installs, support and integrations, a cloud-based delivery model also exposes the software vendors to the same pricing transparency and benchmarking opportunities that corp IT now has to live with.





10.08.2013

Towards a cloud IT utility marketplace

As noted in an earlier post, cloud IT infrastructure from different providers is rapidly being commoditized and made comparable through public pricing and T&Cs: VMs are rapidly approaching same-same pricing, performance and specifications, and it's more or less the same for basic file storage and IP networking.

As an aside, the term commodity is often thrown around in IT when a given IT service or piece of hardware has been in the market for some time, available from many providers, but as outlined by Wikipedia, "The more specific meaning of the term commodity is applied to goods only. It is used to describe a class of goods for which there is demand, but which is supplied without qualitative differentiation across a market".

So even if cloud IT services and IaaS aren't a "class of goods" in their own right, we are certainly seeing IT service delivery demand that can be "supplied without qualitative differentiation across a market", albeit with some differences in service and operational levels (i.e. SLAs and OLAs) that over time won't be critical.

For cloud IaaS compute or processing, a growing trend seems to be the development of IaaS marketplaces for CPU core or VM resources. This has come about in many ways, but some of the drivers to me seem to be:

  • Public IaaS pricing and T&Cs coupled with partner or re-seller programs enabled the birth of cloud aggregators, which could aggregate and offer cloud IaaS services across many IaaS providers and hosters, using different providers for different use cases, regions or application sets.  Customers still had to be, or were, aware of the underlying IaaS provider for their apps and had to go with that provider's IaaS service provisioning workflow and set-up.
  • Another set of companies, IT monitoring, TCO/pricing, security and compliance specialists like TÜV Rheinland, developed their IT service catalogs to include cost baselines for basic IT components like CPU compute, storage and networking.  These IT product catalogs with IT baseline pricing benchmarks can also be applied to cloud-based IT delivery.
  • Mature companies and cloud users adopted a multi-cloud business delivery strategy to avoid single-vendor lock-in.
  • IT vendors and cloud specialists developed proper cloud aggregation or marketplace service delivery platforms that made the hoster or cloud provider and the VM production platform and site in the marketplace transparent towards the cloud buyers.
  • Players from the broker, stock and derivatives market side realized cloud IT could be viewed as separate, atomized, billable utility units and are teaming up with cloud platform providers in the aggregation or marketplace area.


A special note can be made for Amazon and their dominating AWS cloud offering.  No doubt many of the cloud marketplace initiatives and offerings are established as a way to compete with Amazon AWS on pricing and cloud IT feature set, as currently nobody seems able to match the "default cloud provider" position of Amazon AWS (besides maybe Google).

Some early cloud IaaS "open marketplace" contenders are listed below (followed by a short price-comparison sketch):
  • ComputeNext: The ComputeNext platform makes it possible to "compare cloud services and find the best cloud provider to service a given geography, while factoring in price, uptime, and other performance factors such as provisioning consistency, speed, and machine reliability", working with a range of local hosters and cloud providers.
  • Deutsche Börse Cloud Exchange (DBCE)/Zimory: DBCE is using the IaaS cloud management software of Zimory for their vendor-neutral marketplace for compute and storage capacity in 2014, targeting corporate and medium-to-large enterprise companies, as well as organizations from the public sector, aiming to make it as easy to trade IaaS capacity as it is to trade energy or stocks.
  • CME Group/6fusion Marketplace: CME Group (Chicago Mercantile Exchange Group), one of the world's largest derivatives exchanges, has partnered with 6fusion, a company that specializes in the economic measurement and standardisation of IT infrastructure, to develop a spot and over-the-counter marketplace for trading computing resources and financial contracts.  This is a good example of a trading company using its electronic trading platform together with a cloud aggregation or marketplace platform.
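
As mentioned above the list, the comparison half of such marketplaces can be sketched with a vendor-neutral API like Apache Libcloud: pull the advertised VM sizes and list prices from several providers and line them up. The credentials are placeholders, and not every driver populates price data.

    # Sketch: compare advertised VM sizes and prices across providers
    # with Apache Libcloud. Credentials are placeholders, and some
    # drivers leave size.price unset.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    providers = [
        (Provider.RACKSPACE, ('my_username', 'my_api_key')),
        (Provider.EC2, ('my_access_id', 'my_secret_key')),
    ]

    for prov, creds in providers:
        conn = get_driver(prov)(*creds)
        for size in conn.list_sizes():
            print('%s %s: %s MB RAM, %s USD/h'
                  % (prov, size.name, size.ram, size.price))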

No doubt this is a developing market, and there are a range of players looking at early positioning and options, but for the cloud IT market and its development it's an encouraging sign that commercial market exchanges and börses are looking to engage their trading platforms with this market area, leading to increased standardization and hopefully increased supplier choice for cloud buyers.

10.07.2013

Twitter IPO and Twitter message streams for market analytics, part 2: Twitter streams for TV analytics

An example of using Twitter message feeds for market and consumer analytics will be highlighted later today, according to a WSJ article, when Nielsen releases their first ranking of TV programs with the greatest reach on Twitter, providing details on the size of TV audiences for TV shows and the number of tweets about them.

This shows some of the potential of using Twitter message streams for market analytics, and the Twitter Amplify program for content providers also looks to bring in additional Twitter and TV show "integrations" over time.

As pointed out in the article, there are barriers to overcome for general and large-scale use of Twitter for TV analytics - the audience is too small and segmented, there is skew between "ordinary" Nielsen TV ratings and Twitter mentions, integration with other market data analytics tools is lacking etc - but in due time Twitter TV mentions and trends will form part of TV programming analytics and scheduling, as well as market research.

Short note: A Guide to ‘Going Google’ for CIO’s and Enterprise Architects

The Cloud Best Practices Network has a nice paper or guide for ‘Going Google’ for CIO’s and Enterprise Architects that provides an overview of the Google Cloud suite of products, ranging from Google Apps through Compute IaaS, Big Data services, storage and more.

10.06.2013

Twitter IPO and Twitter message streams for market analytics: The real potential of Twitter for advertisers

Twitter, which filed for a public IPO with the US Securities and Exchange Commission back in June, a filing that wasn't made public until September, hopes to raise $1 billion with their public IPO. The SEC S-1 documents made public state that Twitter had $253.6 million in revenue for the first half of 2013, with net loss up 41 percent to $69.3 million, and some 215 million monthly active users.

The current and future revenue is said to come from three main ad-based sources:


  1. Promoted tweets that appear in users' message feeds
  2. Promoted accounts (i.e. brands, companies, events etc) that appear on Twitter landing pages
  3. Promoted trends, where advertisers can buy their way into trending lists, themes and developments
The Twitter "ad media universe" is somewhat limited, and the advertising tools available for advertisers might seem limited as well.  But with the Twitter IPO being a confidential or "secret" IPO, available for companies with sub $1 billion revenue, all the available or future ad channels for Twitter haven't been highlighted. To me there's one obvious one that is clearly missing (though I haven't read the full S-1 documents), and that is the big data mining and analytics opportunity in Twitter tweets and message flows for consumer tracking, audience sentiment tracking and overall market trends.

The Twitter streaming APIs give developers, or Twitter log collectors, applications or apps, "low latency access to Twitter's global stream of Tweet data", either collecting all Twitter messages in a continuous stream, or collecting single-user message streams or site streams.
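
As a sketch of what such a collector can look like, here is a minimal consumer of the public streaming API using the tweepy Python library (its interface as of this writing); the OAuth credentials and the tracked keywords are placeholders.

    # Minimal Twitter streaming API consumer with tweepy.
    # OAuth credentials and tracked keywords are placeholders.
    from tweepy import OAuthHandler, Stream
    from tweepy.streaming import StreamListener

    class TrendListener(StreamListener):
        def on_status(self, status):
            # One callback per tweet; log or aggregate it here.
            print('%s: %s' % (status.user.screen_name, status.text))

        def on_error(self, status_code):
            return False  # disconnect on errors, e.g. 420 rate limiting

    auth = OAuthHandler('consumer_key', 'consumer_secret')
    auth.set_access_token('access_token', 'access_secret')
    Stream(auth, TrendListener()).filter(track=['house of cards', 'netflix'])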

With more than 500 million tweets a day through Twitter, this gives market analysts, advertisers, companies, and Twitter itself of course, a unique view into:
  • Trending themes and developments, i.e. new phenomena of all sorts, Internet memes, things going viral, movie or TV show releases, new consumer brands, pop stars, new albums, books etc
  • Long-term development and standing of brands, products, product models and consumer sentiments
  • Developing news and events
  • National and regional break-downs of trends, developments and long-term standing
  • All cross-linked with mobile or PC access, client type, time of day, frequency of tweets or mentions etc
Utilizing Twitter message streams for near real-time market analysis and consumer views should be a no-brainer for advertisers, just as Netflix used their own data analytics to create House of Cards and other TV shows - how long before advertisers catch on?