Informatica is within a year or two of becoming a $1 billion company, and the
CEO’s stretch goal is to get to $3 billion.
Informatica has been on a decent tear: roughly 30 consecutive quarters of
growth, growth averaging 20% over the last six years, and 2011 revenues
nearing $800 million. Abbasi took charge back in 2004, lifting Informatica
out of its midlife crisis by ditching an abortive foray into analytic
applications and instead expanding from the company’s data transformation
roots into data integration.
Getting the company to its current level came largely through a series of
acquisitions that then expanded the category of data integration itself.
While master data management (MDM) has been the headliner, other recent
acquisitions have targeted information lifecycle management (ILM), complex
event processing (CEP), low-latency messaging (Ultra Messaging), a... (more)
This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is
senior analyst at Ovum.
By Tony Baer
It was never a question of whether SAP would bring its flagship product,
Business Suite, to HANA, but when. And when I saw this while parking the car
at my physical therapist over the holidays, I should’ve suspected that
something was up: SAP at long last was about to announce … this.
From the start, SAP has made clear that its vision for HANA was not a
technical curiosity, positioned as some high-end niche product or sideshow.
In the long run, SAP was going to take HANA ... (more)
HP chose the occasion of its Q3 earnings call to drop the bomb. The company
that under Mark Hurd’s watch focused on Converged Infrastructure, spending
almost $7 billion to buy Palm, 3Com, and 3PAR, is now pulling a 180 in
ditching both the PC and Palm hardware business, and making an offer to buy
Autonomy, one of the last major independent enterprise content management
players, for roughly $11 billion.
At first glance, the deal makes perfect sense, given Leo Apotheker’s
enterprise software orientation. From that standpoint, Apotheker has made
some shrewd moves, putting aging ent... (more)
To date, Big Storage has been locked out of Big Data; it’s been all about
direct-attached storage, for several reasons. First, Advanced SQL players
have typically optimized their architectures with columnar data structures,
unique compression algorithms, and liberal use of caching to juice response
times over hundreds of terabytes. For the NoSQL side, it’s been about cheap,
cheap, cheap along the Internet data center model: have lots of commodity
stuff and scale it out. Hadoop was engineered exactly for such an
architecture; rather than speed, it was optimized for sheer linear scale.... (more)
Of the three “V’s” of Big Data – volume, variety, velocity (we’d add
“value” as the fourth V) – velocity has been the unsung “V.” With the
spotlight on Hadoop, the popular image of Big Data is large petabyte data
stores of unstructured data (which are the first two V’s). While Big Data
has been thought of as large stores of data at rest, it can also be about
data in motion.
"Fast Data” refers to processes that require lower latencies than would
otherwise be possible with optimized disk-based storage. Fast Data is not a
single technology, but a spectrum of approaches that process data t... (more)