By Tony Baer, Principal Analyst with Ovum

Another Vote for the Apache Hadoop Stack

The measure of success of an open-source stack is the degree to which the target remains intact

As we’ve noted previously, the measure of success of an open source stack is the degree to which the target remains intact. That comes either through a captive open source project, where a vendor unilaterally open sources its code (typically hosting the project) to promote adoption, or through a community model, where a neutral industry body hosts the project and draws support from a diverse cross section of vendors and advanced developers. In the community case, the goal is for the formal standard to also become the de facto standard.

The most successful open source projects are those that represent commodity software; otherwise, why would vendors forgo competing against software that anybody can freely license or consume? That’s been the secret behind the success of Linux, where there has been general agreement on where the kernel ends, and as a result, a healthy market of products that run atop (and license) Linux. For community open source projects, vendors obviously have to agree on where commodity ends and unique value-add begins.

And so we’ve argued that for Hadoop to reach fruition, there must be some informal agreement as to exactly which components make Hadoop, Hadoop. For a while, the answer appeared in doubt, as one of the obvious pillars, the file system, was being contested by proprietary alternatives such as MapR’s file system and IBM’s GPFS.

Retrenching

What’s interesting is that the two primary commercial providers that signed on for the proprietary file systems, IBM and EMC (via partnership with MapR), have retrenched. They still offer the proprietary file systems in question, but both now also offer purer Apache versions. IBM made the announcement today, buried below the fold after its announced intention to acquire the federated search player Vivisimo. The announcement had a bit of a grudging aspect to it: unlike Oracle, which has a full OEM agreement with Cloudera, IBM is simply stating that it will certify Cloudera’s Hadoop as one of the approved distributions for InfoSphere BigInsights; there’s no exchange of money or other skin in the game. If IBM also sees demand for the Hortonworks distro (or if it wants to keep Cloudera in its place), it will likely add Hortonworks to the approved list.

Against this background is a technology that is a moving target. The primary drawback, that there was no redundancy or failover for the HDFS NameNode (which acts as the file system’s directory), has been addressed in the latest versions of Hadoop. The other, the lack of POSIX compliance (which would let Hadoop be accessed through the NFS standard), matters only for very high, transactional-like (OK, not ACID) performance, which so far has not been an issue. If you want that kind of performance, Hadoop’s HBase offers more promise.
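
To make the failover point concrete, here’s a minimal sketch of how a client would read from an HA-enabled cluster using Hadoop’s standard FileSystem API. The nameservice ID ("mycluster") and the file path are hypothetical; the actual HA wiring (standby NameNode, failover proxy) would live in hdfs-site.xml, and the application code itself doesn’t change.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsHaReader {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // "mycluster" is a hypothetical logical nameservice ID; with HA
            // configured in hdfs-site.xml, the client resolves it to whichever
            // NameNode is currently active, so failover is transparent here.
            FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
            Path file = new Path("/data/example.txt"); // hypothetical path
            try (BufferedReader reader =
                    new BufferedReader(new InputStreamReader(fs.open(file)))) {
                System.out.println(reader.readLine());
            }
            fs.close();
        }
    }

The point of the logical URI is that applications are insulated from which physical NameNode holds the directory at any given moment.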

But even as the market has passed judgment on what comprises the Hadoop “kernel” (to borrow some Linuxspeak), that doesn’t rule out differences in implementation. Teradata Aster and Sybase IQ are promoting their analytic data stores as swappable, more refined replacements for HBase (Hadoop’s column store), while upstarts like Hadapt are proposing to hang SQL data nodes onto HDFS.
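
For context on what those vendors would be swapping out, here’s a minimal sketch of the classic HBase client API, writing and then random-access reading a single cell by row key, the low-latency pattern that batch-oriented HDFS does not serve well. The table name, column family, and row key are all hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // "metrics" and its column family "d" are hypothetical names;
            // the table would need to exist with that family beforehand.
            HTable table = new HTable(conf, "metrics");
            try {
                // Write one cell: row key, column family, qualifier, value.
                Put put = new Put(Bytes.toBytes("sensor-42"));
                put.add(Bytes.toBytes("d"), Bytes.toBytes("temp"),
                        Bytes.toBytes("21.5"));
                table.put(put);

                // Random-access read back by row key.
                Result result = table.get(new Get(Bytes.toBytes("sensor-42")));
                System.out.println(Bytes.toString(
                        result.getValue(Bytes.toBytes("d"), Bytes.toBytes("temp"))));
            } finally {
                table.close();
            }
        }
    }

Any “swappable” replacement has to preserve this kind of keyed read/write semantics while layering its own storage engine underneath.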

When it comes to Hadoop, you gotta reverse the old maxim: The more things stay the same, the more things are actually changing.

More Stories By Tony Baer

Tony Baer is Principal Analyst with Ovum, leading Ovum’s research on the software lifecycle. Working in concert with other members of Ovum’s software group, his research covers the full lifecycle from design and development to deployment and management. Areas of focus include application lifecycle management, software development methodologies (including agile), SOA, IT service management/ITIL, and IT management/governance.

Baer has been a noted authority on software development platforms and integration architecture for nearly 20 years. Prior to joining Ovum, he was an independent analyst whose company, onStrategies, served vendors of software development and integration tools with technology assessment and market positioning services. He also led Computerwire’s CIO Agenda and Computer Finance end-user best practices research services.

Follow him on Twitter @TonyBaer or read his blog at www.onstrategies.com/blog.