Tuesday, May 15, 2012

In-Memory Computing: Future Proof Scalability

At the Sapphire 2012 Conference in Orlando tomorrow, SAP Chairman and Co-founder Dr. Hasso Plattner will once again address the subject of in-memory computing. SAP’s attempt to dominate the in-memory computing discussion risks burying a serious conversation about computing in a pile of marketing drivel disguised as academic research. But that’s probably alright. Technology is not science, but it depends on it.

The SAP HANA solution has finally matured to the point that SAP has customers building prototypes with it today. It is both different from and less mature than, say, Hadoop or Gigaspaces.

All of these technologies present interesting possibilities: they are designed to take advantage of today’s more powerful and ubiquitous hardware, to enable multitenant cloud architectures, and to deliver massive scalability through parallelization.

We’ve been talking about the possibilities of grid computing for more than a decade. I remember a visit I made to IBM Research Labs in early 2001 and attending lectures by IBM research scientists on the subject of grid computing. At the time, IBM seemed poised to dominate the future of grid computing. But as always seems to happen, the dialog has evolved as technologies converge and paradigms shift. Now we’re talking more about in-memory computing and multitenancy than we are about grid computing.

As the technologies that provide the foundations for applications evolve, so too will application and integration platforms from Magic Software. As in-memory databases begin to replace traditional disk-based SQL databases, the physical writing of data to disk will become functionally akin to a tape backup system.

The benefit of these new approaches will be found in the hyper-scalability of enterprise systems and elasticity in the cloud. As the underlying physical servers evolve and the data layer moves from disk to memory, the Magic application and integration platforms will keep right on running. Companies that seek to take even greater advantage of multicore architectures and the affordability of massive amounts of main memory will be able to continue using the Magic application platform to develop and deploy their applications. At Magic, we sometimes call this approach “future-proofing,” meaning that the platform adapts to the underlying changes so that your applications need to change as little as possible to take advantage of the new possibilities.

Hadoop is a slightly older concept, and it has the distinction of being an open source project (depending on your bias or predisposition, you may read that as good, bad or in-between). Gigaspaces is a more mature, commercial solution with a very complete stack that sits between (or within) the application infrastructure and the application itself.

In its marketing and technical communications, SAP likes to focus on the difference between online transaction processing (OLTP) and online analytical processing (OLAP), pointing out that OLTP takes a row-based approach where data is read a row at a time, while OLAP looks at data a column at a time. The nature of physical disks makes the physical arrangement of the data on the disk important to performance, and therefore the data is read in either row order or column order. In-memory computing is silicon-based and makes physical location virtually irrelevant: the same database can now be read in either row order or column order with no latency penalty. In this regard, SAP is really just “selling the category”; there is nothing proprietary about its in-memory computing approach here.

The HANA appliance has been alternately praised as the future of the cloud and criticized as cloud-washing. The whole argument seems irrelevant to me, as in-memory computing is definitely making the transition to the cloud easier. Oracle’s market response is interesting: they seem to be treating it more as a BI skirmish, whereas SAP treats the debate as part of a full-spectrum IT paradigm shift. At the end of the day, it’s all just another good reason to align with the smarter, future-proof technology in Magic’s metadata-driven platforms.
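To make the row-versus-column distinction concrete, here is a minimal, hypothetical sketch in Python. It is not a description of HANA’s internals; the table, column names and values are invented for illustration. It simply holds the same small orders table in a row-oriented and a column-oriented layout, then runs an OLTP-style lookup against the former and an OLAP-style aggregation against the latter.

```python
# Illustrative sketch only: the same table in two layouts.

# Row store: each record is kept together. This suits OLTP,
# where a whole order is read or written at once.
row_store = [
    {"order_id": 1, "customer": "ACME",   "amount": 120.00},
    {"order_id": 2, "customer": "Globex", "amount":  75.50},
    {"order_id": 3, "customer": "ACME",   "amount": 310.25},
]

# Column store: each attribute is kept together. This suits OLAP,
# where a query scans one or two columns across many rows.
column_store = {
    "order_id": [1, 2, 3],
    "customer": ["ACME", "Globex", "ACME"],
    "amount":   [120.00, 75.50, 310.25],
}

# OLTP-style access: fetch one complete order by its key.
order = next(r for r in row_store if r["order_id"] == 2)
print(order)

# OLAP-style access: aggregate a single column without touching the others.
total = sum(column_store["amount"])
print(f"Total order amount: {total}")
```

On disk, only one of these layouts can be the physically contiguous one, so the database designer has to pick a winner between transactional and analytical access. In main memory the penalty for choosing “wrong” largely disappears, which is the point SAP is making.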
