Sam Lightstone

Distinguished Engineer at IBM
BIO: Sam Lightstone is a Distinguished Engineer for relational cloud data repositories and co-founder of IBM's technology incubation initiative. He is the author of several books, papers, and patents on computer science, careers, and data engineering.

Hybrid Cloud is a big buzzword lately, and it’s easy to run into the jargon every day if you work in high tech. Try finding someone who can tell you what it means – a bit harder. In this blog I’ll try to unveil the mystery, describing what it’s all about and how it can really help drive efficiencies. Since I spend my days, evenings, and weekends helping to develop a cloud product named dashDB, a scalable cloud data warehouse for very fast business intelligence and data science, I’ll use that as an example to illustrate the scenarios. The ideas apply broadly to many kinds of products, and I’m sure you’ll be able to transpose from the database domain to other kinds of products and services of interest to you.

Hybrid cloud defined

What is hybrid cloud? Many companies offer cloud services now, and at IBM we’re proud to have a large worldwide network of cutting-edge data centers thanks to our SoftLayer cloud division. When you use Netflix or Google you are using cloud computing (as an end user); there are massive collections of systems behind those seemingly simple web pages. The economies of scale of large data centers make the IT infrastructure lower cost for the company that owns it. Having IT and software on tap means that software development is much easier, faster, and lower cost as well. Best of all, everything is available on demand, so users of cloud computing don’t need to provision or purchase IT systems.

For example, if you want to set up a four-server data warehouse in your own company you would need to find or buy the servers, storage, and software and then string it all together. But with a cloud data warehouse like dashDB you can simply request the service, and presto – you have a database and all the necessary hardware and software ready to go with zero configuration or tuning. When your company owns the cloud it uses, that’s called “private cloud”. A private cloud can provide some of the same economies of scale and simplified provisioning for its users, with the benefit of keeping data secure inside your own organization. When you use another company’s cloud, it’s called “public cloud”. Hybrid cloud is the situation where some of your software and compute is in the public cloud and some is not – it’s either in a private cloud or in non-cloud on-premises IT. In short (for the purposes of this article), hybrid cloud is the mix of public cloud with either private cloud or non-cloud systems.

Exploiting hybrid cloud for business advantage

Here are nine ways that companies are exploiting hybrid cloud for business advantage:

#1: Rapid prototyping and agile development

When you are ramping up a new project the last thing you want to do is wait around for hardware or, worse, beg your boss for approvals to purchase new hardware. You want to get going fast, and you want to be agile. The cloud is great for concept development whether or not deployment ends up back on premises. It can be hard to procure IT infrastructure for a “concept” project, and with IT on tap the cloud lets organizations really go agile at low risk and low cost – renting cloud services only for the duration of the trial. Once proven, the production project can live in the cloud or on-premises, making it a hybrid choice. A lot of companies aren’t ready or comfortable placing sensitive production data in the cloud, and often have corporate policies against it – but a proof of concept with synthetic data? Have at it!

#2: Skills and component reuse

Every organization prefers to reuse skills when possible, and in many cases you’d like to reuse middleware and applications too. You want similar development and usage semantics across on-premises and cloud in order to leverage skills, tools, and experience regardless of where the system will eventually be placed. For example, customers may want DB2 for Cloud to have similar semantics to DB2 LUW on-premises, or dashDB on cloud to behave very much like the Pure Data for Analytics (a.k.a. Netezza) appliance, because they have existing skills with DB2 or Pure Data for Analytics. Most public clouds don’t offer this; only services designed for hybrid scenarios pay close attention to it. Without it, you’ll find that the majority of the time you end up learning too much that’s new, and existing applications can’t be moved, in whole or in part, without a lot of changes. Having a common set of programming paradigms and products means your company can reuse skills whether you are working on cloud projects or on-premises ones. The result is friendly, familiar technology, tools, and already proven systems that make it far easier to leverage cloud computing without risk.

#3: Place components optimally

It may be a lot more cost effective or practical to have some components of a system in the cloud and not others. For example, it’s common to have a dashDB warehouse in the cloud while using BI tools that are on-premises, in either a private cloud or non-cloud IT systems. There are loads of reasons why this happens. Here’s a common situation: your company has on-premises licenses for a BI product like Cognos or Tableau pushing BI queries to a cloud warehouse like dashDB. Creating the data warehouse in the cloud saves on administration and TCO and likely offers superior performance, but since you have those on-premises licenses for the BI tools you might as well use them – a classic hybrid scenario.

#4: Analytics in the cloud, with operational data on premises

This is very common at the moment. This use case is notable for the regular data movement it requires from the operational system on-premises into the analytics environment in the cloud. Your operational transaction processing systems (sometimes called “systems of record”) may have been around for a long time. They are working well on-premises and you see no need to change them. (“If it ain’t broke….”) Now you’re looking at adding analytics to this data, and want to stand up a data warehouse easily where you can do complex reporting, BI, and perhaps some machine learning and statistical analysis too. A cloud data warehouse like dashDB is a perfect fit – low cost, super easy, with oodles of processing power. Now you have the system of record for transactions on-premises, and the analytics system in a public cloud. This may sound a bit like my previous item, “place components optimally”, but there I was talking about components of a single system, while this case involves related but independent systems – a great hybrid solution.
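The heart of this pattern is a recurring incremental extract: copy only rows newer than a watermark from the system of record into the cloud warehouse, then advance the watermark. Here is a minimal sketch using SQLite stand-ins for both systems; in practice the source would be your on-premises operational database and the target a cloud warehouse such as dashDB, reached through their own drivers, and the table and column names here are invented for illustration:

```python
import sqlite3

# Stand-ins for the two systems. (SQLite is purely illustrative; swap in
# your operational database and cloud warehouse connections.)
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT)")
source.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, 19.99, "2016-01-01"),
    (2, 5.00,  "2016-01-02"),
    (3, 42.50, "2016-01-03"),
])
target.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT)")

def incremental_load(src, dst, watermark):
    """Copy only rows newer than the last-loaded watermark, then advance it."""
    rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (watermark,)).fetchall()
    dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    dst.commit()
    # The new watermark is the latest timestamp just moved (or the old one).
    return max([r[2] for r in rows], default=watermark)

# Nightly run: only the two rows after 2016-01-01 move to the cloud side.
watermark = incremental_load(source, target, "2016-01-01")
```

Scheduled nightly (or hourly), this keeps the cloud analytics copy trailing the on-premises system of record without ever re-copying old data.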

#5: Cloud as Development and Test environment (a.k.a “Dev/Test”)

In this scenario you have a production system on-premises, but you don’t want your developers and testers mucking about on the production system.

  • A few bad lines of code from components under development could disrupt the production systems.
  • Developers and testers are often not permitted access to the production data or production system by corporate policy.

I don’t blame you! Public cloud to the rescue. The public cloud can stand up systems for dev/test rapidly, and in many cases dev/test data volumes are small, so free IBM instances can be used. (IBM offers many cloud computing services, including dashDB, for FREE provided the data volume remains modest – less than 50 GB or so. Really, free, and no strings.) I believe there is huge untapped potential in the cloud for this. Dev/test teams should think about bringing the cloud into their daily continuous integration build/test cycles. For example, via a simple script, provision an instance of the dashDB entry plan, load sample data, and then run your daily analytics tests for queries and R scripts. Once you are done you can immediately de-provision it.
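The provision–load–test–de-provision cycle can be sketched as a small orchestration helper. The four step functions are placeholders for whatever your cloud’s API or CLI actually provides (all names here are hypothetical), but the try/finally shape is the point: the throwaway instance gets de-provisioned even when the tests blow up, so you never leave billable instances running.

```python
def run_cloud_test_cycle(provision, load_data, run_tests, deprovision):
    """Provision a throwaway cloud instance, test against it, and always
    de-provision afterwards -- even if the tests fail."""
    instance = provision()          # e.g. request a dashDB entry-plan instance
    try:
        load_data(instance)         # load sample/synthetic data
        return run_tests(instance)  # daily analytics queries, R scripts, etc.
    finally:
        deprovision(instance)       # guaranteed cleanup

# Stub steps standing in for real API calls, just to show the flow:
log = []
result = run_cloud_test_cycle(
    provision=lambda: (log.append("provision") or "db-1"),
    load_data=lambda inst: log.append("load " + inst),
    run_tests=lambda inst: (log.append("test " + inst) or "PASS"),
    deprovision=lambda inst: log.append("deprovision " + inst),
)
```

Dropped into a nightly CI job, the same shape works whether the steps shell out to a CLI or call a REST API.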

#6: Cloud as Disaster Recovery system

My team worked with a large US marketing analytics company to create a Disaster Recovery (DR) site in the cloud while they kept their very high-end production systems, with specialized hardware, on site. Their concern was that they wanted their operations to keep running even if a disaster compromised the home location where their private cloud lives (a hurricane, a flood, an earthquake, or the unlikely event of a zombie invasion). In that case there’s no point in having backup servers in the same building or even the same town, and they certainly did not want to go through the expense of buying and maintaining an entire new data center for DR purposes more than 500 miles away. Instead, they used the public cloud – IT on tap, at low cost and with oodles of compute power in case they needed it. The compelling point for this customer was that we were able to demonstrate that dashDB in the public cloud could perform great while providing excellent language (SQL, DDL, PL/SQL) and operational compatibility with the Pure Data for Analytics systems they currently use. If the zombies or a flood take down their production data center, shazam, the cloud systems take over. This empowers you to have effective off-site DR without actually owning or renting another site with your own IT. An aside – we’ve heard from several customers that this scenario is how they want to test the waters on cloud computing: start by using it for DR, and if the DR site impresses, consider moving production systems to the cloud in the future.
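On the application side, DR failover often reduces to trying endpoints in priority order: production first, then the cloud DR site. A minimal sketch of that pattern – the hostnames and the connector function are invented for illustration, and real deployments typically layer on health checks and DNS switching:

```python
def connect_with_failover(endpoints, try_connect):
    """Try each endpoint in priority order -- production first, then the
    cloud DR site -- returning the first connection that succeeds."""
    last_error = None
    for endpoint in endpoints:
        try:
            return endpoint, try_connect(endpoint)
        except ConnectionError as err:
            last_error = err      # this site is down; fall through to the next
    raise last_error              # everything down: surface the last failure

# Simulate the production data center being down (zombies, presumably):
def fake_connect(endpoint):
    if endpoint == "warehouse.on-prem.example.com":
        raise ConnectionError("production site unreachable")
    return "connected"

chosen, conn = connect_with_failover(
    ["warehouse.on-prem.example.com", "dashdb.cloud.example.com"],
    fake_connect,
)
```

Because the dashDB DR site speaks the same SQL/DDL/PL/SQL dialect, the application code above needs no changes when the fallback endpoint takes over.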

#7: Hot/Cold data

Cold (older) data in the cloud, and hot (recent) data on-prem: lower cost and lower fuss for less frequently accessed data, but with full application support and consistent semantics. This is a special variation of “place components optimally”, and we’ve seen interest in it at a few accounts. For example, a large UK retailer my team worked with had an on-premises data warehouse that was filling up, and wanted to move cold data into dashDB in the cloud, federating through the front end. In this scenario the frequently accessed hot data stayed live at the customer site on their high-end systems, while the older data lived on in the cloud at lower cost but still completely accessible. In another case, a large US retailer had cold (frozen solid) data they essentially never planned to access but needed to maintain programmatic access to for compliance and policy reasons for several years – cloud to the rescue, storing the ice-cold data at low cost with all the rich programming semantics needed for compliance and corporate policy, just in case it was ever needed.
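At its simplest, the federating front end routes each query by the age of the data it touches. A toy router – the cutoff date and store names are invented for illustration; a real federation layer would inspect predicates and may split one query across both stores:

```python
from datetime import date

HOT_CUTOFF = date(2015, 1, 1)   # illustrative: data newer than this stays on-prem

def route_query(requested_date, hot_store="onprem_warehouse",
                cold_store="dashdb_cloud"):
    """Send queries for recent (hot) data to the on-premises system and
    queries for older (cold) data to the cloud warehouse."""
    return hot_store if requested_date >= HOT_CUTOFF else cold_store
```

Applications keep issuing the same SQL against the front end; only the router knows which tier actually holds the rows.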

#8: On-premises systems drawing data from the cloud

In these cases, while the IT systems are on-premises, they draw data from new cloud sources. Twitter data and open/public data are examples of cloud data that on-premises systems are increasingly interested in drawing from. By joining your own corporate data with publicly available data sets you can add amazing insights to your analysis. For example, by leveraging Twitter data in dashDB with the push of a button, a retailer can not only explore which products are selling but also see how social media is reacting to products and brands. By leveraging open data (often published by governments, municipalities, police forces, etc.) a financial institution can easily analyze how its business correlates with government policy changes, state budgets, and even crime rates. I think of this as lighting up the database – adding truly interesting information to the foundational data you already have. That’s why we’ve added Twitter data to dashDB with pushbutton simplicity – you can really light up your data with other people’s data, and you can light up your data the same way on premises too.
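Under the hood, “lighting up” your data is an ordinary join between your corporate table and the external feed. A tiny sketch with made-up product and sentiment figures (in dashDB this would be a SQL join between your sales table and the ingested Twitter data):

```python
# Hypothetical corporate sales figures and social-sentiment scores per product;
# the sentiment side stands in for an ingested public/Twitter data set.
weekly_sales = {"widget": 1200, "gizmo": 300, "doodad": 860}
sentiment = {"widget": 0.8, "gizmo": -0.4}

# Inner join on product name: sales enriched with social-media reaction.
enriched = {
    product: (units, sentiment[product])
    for product, units in weekly_sales.items()
    if product in sentiment
}
```

The insight comes from the combination: a product can be selling well while sentiment is already turning negative, which neither data set reveals on its own.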

#9: Breaking the boundaries of performance

Today’s computers are very powerful.

We’re awash in (relatively) large multi-core systems with large memories. For many applications you’ll find that the IT infrastructure you have in-house is more than powerful enough. Then there are those special projects that need major horsepower. By leveraging public cloud infrastructure you can tap into some of the latest and greatest infrastructure that your own company may not yet have. High-I/O workloads. Big data. Fast data. Your cloud infrastructure needs to keep up with your application’s performance demands. For example, the IBM Cloud infrastructure where we run dashDB is based on SoftLayer, which outperforms the competition by up to 8.7x. The public cloud can offer a much broader and faster network and more powerful components, at the best performance for your cloud dollar. You can provision high-end components with huge compute (thousands of CPU cores if needed) and fast SSD storage while keeping costs low, thanks to the economies of scale that public clouds provide.

Conclusion

Net net: Hybrid offers real potential for more agile development, lower costs, better disaster recovery, improved insight, breakthrough performance, and skills reuse. Hybrid cloud allows you to blend your private cloud and IT with public cloud systems to achieve maximum business value. Even if the zombies don’t attack.
