Making Hadoop Work in More Places With Hadapt

A Yale computer science project has turned into a company that aims to combine the scalability of Hadoop with the ability to run analytics on both structured and unstructured data. Hadapt launches today at the Structure: Big Data conference in New York with an undisclosed amount of seed funding and the goal of making Hadoop more broadly applicable for analytics.

The company, which was founded last year, is based on technology commercialized from Yale University, said Justin Borgman, CEO and co-founder of Hadapt. The idea was to make Hadoop — the open-source, large-data analytics technology inspired by Google's MapReduce software — more enterprise-friendly. Borgman says the primary benefits are Hadapt's ability to work with both unstructured and structured data, and its SQL-like interface.
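To make that pitch concrete, here is a minimal sketch of the kind of query such an interface enables. Hadapt's actual API isn't public, so this uses SQLite purely as a stand-in: the point is an SQL aggregate over structured columns combined with a keyword filter on unstructured log text, which is the mix of workloads the company is describing.

```python
# Illustrative sketch only: SQLite stands in for Hadapt's SQL-like
# interface, which is not publicly documented. The table mixes
# structured columns with a free-text (unstructured) column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        user_id  INTEGER,   -- structured field
        region   TEXT,      -- structured field
        log_line TEXT       -- unstructured free text
    )
""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        (1, "us-east", "checkout failed: timeout"),
        (2, "us-east", "checkout ok"),
        (3, "eu-west", "checkout failed: card declined"),
        (4, "us-east", "checkout failed: timeout"),
    ],
)

# Aggregate over structured columns, filtered by a keyword match
# against the unstructured text.
rows = conn.execute("""
    SELECT region, COUNT(*) AS failures
    FROM events
    WHERE log_line LIKE '%failed%'
    GROUP BY region
    ORDER BY failures DESC
""").fetchall()
print(rows)  # [('us-east', 2), ('eu-west', 1)]
```

On a Hadoop-backed system the same statement would be compiled into distributed jobs across the cluster rather than run against a local file, but the analyst-facing query would look much like this.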

Companies running analytics applications in virtualized environments could also see speed gains with Hadapt. Many companies keep their databases on physical hardware, because running multiple databases on virtualized servers tends to slow them down; but for those with smaller database environments, or those that want to move everything into a cloud, Hadapt might offer a performance boost.

The software runs on any hardware and can be clustered in a heterogeneous environment. The bigger idea behind Hadapt is to replace the large-scale proprietary data-warehousing gear offered by EMC, Teradata and Netezza (now a division of IBM). In the last few months, many of the large computer-systems makers have spent billions buying up software for analyzing structured data: Greenplum went to EMC, HP bought Vertica, and Teradata scooped up Aster. Borgman looks at those deals and says, "The playing field is now cleared for companies like ours."