On September 4 I participated in a Twitter chat about IBM’s in-memory database for optimized analytic reporting and business intelligence, BLU Acceleration (#IBMBLU), now available in DB2 10.5. At its peak the tweets were coming fast and furious – and trying to keep up with the pace was a bit like trying to catch 15 ping-pong balls every second for 45 minutes. Enough to drive one to neurosis. As Rita Rudner once quipped, “Neurotics build castles in the air, psychotics live in them. My mother cleans them.” But with the frenetic energy came some insightful questions and comebacks. Here are some highlights.
- What does in-memory optimized technology offer? Speed-of-thought analytics. Complex queries that used to run for minutes can now run in seconds or less.
- How super easy is it? Just load and go. DB2 takes care of all the tuning, no matter what size server you are using, how many queries are running at once, or how big the data set is.
- Tell me about actionable compression. This is IBM’s term for a compression strategy that not only compresses data down to phenomenally small sizes but also allows the database to operate on the data in its compressed form. Most customers are seeing storage savings in the range of 10x-24x. Since the database can perform scans, joins and grouping on the data while it is in compressed format, the CPU cycles that would have been spent decompressing the data are saved. The smaller the data is under compression, the faster BLU Acceleration runs. That’s the opposite of what happens with most database compression schemes, where tighter compression comes at the cost of more expensive query runtime.
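To make the idea concrete, here is a minimal sketch (illustrative only, not IBM’s actual implementation) of how an order-preserving dictionary encoding lets an engine evaluate a predicate directly on compressed codes, with no decompression on the scan path:

```python
# Illustrative sketch: an order-preserving dictionary encoding assigns
# codes in sorted order, so comparisons on codes mirror comparisons on
# the original values -- predicates run on the compressed column.

def build_dictionary(values):
    """Assign codes in sorted value order (order-preserving)."""
    return {v: code for code, v in enumerate(sorted(set(values)))}

values = ["apple", "pear", "banana", "pear", "apple", "cherry"]
dictionary = build_dictionary(values)
encoded = [dictionary[v] for v in values]  # the compressed column

# Predicate: value < "cherry". Encode the literal once...
threshold = dictionary["cherry"]
# ...then scan the compressed codes directly -- no decompression needed.
matches = [i for i, code in enumerate(encoded) if code < threshold]
print(matches)  # -> [0, 2, 4]  (row positions of "apple" and "banana")
```

Because the scan touches only small integer codes, less data moves through memory and the CPU never pays a decompression cost per row.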
- Is it safe? You bet. One of the key strategies for our engineering team was to build BLU Acceleration directly into the kernel of the DB2 database. It’s not an add-on. Not a separately sold product. Not an acceleration layer. It’s baked into the DNA of the database. Because of that, BLU Acceleration inherits all of the security, reliability and robustness qualities of DB2 – rock-solid database processing that we were able to bring huge test resources to bear on before making this technology available. How? Because BLU Acceleration is baked into the DNA of DB2, the external semantics of the system are identical to DB2 – just much faster, smaller and simpler. That allowed the engineering team to execute vast arrays of existing DB2 test cases against the new code.
- RAM is the new disk (or put differently, memory is too slow). In a world where we are accustomed to thinking of disks as the slow component that persistently holds data, it’s hard to think of RAM as being “slow”. But with BLU Acceleration, IBM has built an architecture based on the philosophy that RAM really is too slow! BLU Acceleration is designed to minimize memory latency (the time it takes for the CPU to access data in RAM). An extreme focus on cache-resident processing (L2 and L3 caches), as well as CPU prefetching and thread-localized processing, has made BLU Acceleration a game changer that can really live up to the statement “RAM is too slow”.
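One ingredient behind cache-resident processing is columnar layout. The sketch below (a generic illustration, not DB2 internals; Python also hides real cache effects) shows the access-pattern difference: aggregating one column from a columnar layout is a tight sequential pass that a hardware prefetcher can stream into L2/L3 cache, while a row layout strides across unrelated fields:

```python
# Illustrative sketch of why columnar layout is cache-friendly.
# (Python hides real cache behavior; this only shows the access pattern.)

# Row layout: each row is a tuple (id, amount, region)
rows = [(i, i * 2.0, "EMEA") for i in range(8)]

# Column layout: one contiguous sequence per column
amounts = [r[1] for r in rows]

# Aggregating `amount` in row layout strides over unrelated fields...
row_total = sum(r[1] for r in rows)

# ...while in column layout the scan is a dense, sequential pass over
# exactly the bytes the query needs.
col_total = sum(amounts)
print(col_total)  # -> 56.0
```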
- Combining analytics and transaction processing (OLTP): BLU Acceleration is one of the very few technologies in the world that allows OLTP and analytic processing in the same database and in the same memory while still offering load-and-go simplicity. IBM believes it is the first to do so with this level of performance per core.
- You don’t need enough RAM for all of your data to see these benefits. One of the best things about BLU Acceleration is that its powerful “dynamic in-memory processing” qualities enable it to draw into memory only the subset of data that is important to the active workload. We call this the “active data set”, and it’s the subset of columns and rows (in chunks) that the workload needs to access. Everything else can stay on disk, and will be automatically prefetched into RAM on demand if a query or workload needs to look at data that isn’t yet in memory. The net effect is that most customers only need enough RAM for a fraction of the data set.
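The “active data set” behavior can be pictured as a demand-loaded cache of column chunks. This is a hypothetical sketch (the names are illustrative, not a DB2 API): a chunk is read from disk on first touch and served from RAM on every touch after that.

```python
# Hypothetical sketch of "active data set" behavior: only the column
# chunks a query touches are pulled into memory; everything else stays
# on disk until first access.

class ChunkCache:
    def __init__(self, load_from_disk):
        self.load_from_disk = load_from_disk  # callable: chunk_id -> data
        self.in_memory = {}                   # the active data set

    def get(self, chunk_id):
        if chunk_id not in self.in_memory:        # cold chunk:
            self.in_memory[chunk_id] = self.load_from_disk(chunk_id)
        return self.in_memory[chunk_id]           # warm chunk: RAM hit

disk_reads = []
def fake_disk(chunk_id):
    disk_reads.append(chunk_id)
    return f"data-{chunk_id}"

cache = ChunkCache(fake_disk)
cache.get("sales.amount#0")   # first touch: read from disk
cache.get("sales.amount#0")   # repeat touch: served from RAM
print(len(disk_reads))        # -> 1
```

RAM sizing then tracks the hot fraction of the data, not the total data volume.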
- Do these in-memory benefits diminish when too many queries are running at once? Nope. Another great quality of BLU Acceleration is its built-in “automatic workload management” that internally constrains the number of SQL statements consuming working RAM and CPU, transparently to the applications. As far as the applications are concerned, hundreds if not thousands of queries can be submitted to the system at once. This is part of the ease-of-use “load-and-go” characteristics of BLU Acceleration. Yet another thing you don’t have to worry about.
- Stable response times. As Dr. Guy Lohman tweeted, one of the design points of BLU Acceleration is to reduce the variability of query execution times. Designed for efficient main-memory scanning, BLU Acceleration stabilizes and converges the time taken to execute different queries over the same data.
- Do we need more and more speed? Yes. Speed is power. Businesses always find new ways to put more speed to work, and industry will find ways to leverage more of it.
As Richard Lee tweeted during the chat: #IBMBLU A4 – Benefits of In-Memory are on multiple levels; <Time to Decision, >Risk Mitigation, >Customer SAT, >$’s Next Best Offer, etc.
You can see the highlights here: #ibmblu Twitterchat: Speeding Up Transactions & Analytics with In-Memory Processing