Founded by Ex-Uber Data Architect and Apache Hudi Creator, Onehouse Supercharges Data Lakes for AI and Machine Learning With $8 Million in Seed Funding From Greylock and Addition
Onehouse Combines the Ease-of-Use of a Data Warehouse With the Scale of a Data Lake Into a Fully-Managed Service on Top of the Popular Apache Hudi Open Source Project
Today Onehouse, the first managed lakehouse company, emerged from stealth with its cloud-native managed service based on Apache Hudi that makes data lakes easier, faster and cheaper.
Data has become the driving force of innovation across nearly every industry. Yet organizations still struggle to build and maintain data architectures that can scale economically with the rapid growth of their data. As data volumes and AI and machine learning (ML) workloads increase, costs rise steeply and organizations begin to outgrow their data warehouses. To scale further, they turn to a data lake, where they face a whole new set of complex challenges: constantly tuning data layouts, managing large-scale concurrency, ingesting data quickly, handling data deletions and more.
Onehouse founder Vinoth Chandar faced these very challenges while building one of the largest data lakes in the world at Uber. A rapidly growing Uber needed the performance of a warehouse and the scale of a data lake, in near real-time, to power AI/ML-driven features such as predicting ETAs, recommending eats and ensuring ride safety. He created Apache Hudi to implement a path-breaking architecture that added core warehouse and database functionality directly to the data lake, an architecture known today as the “lakehouse”. Apache Hudi brings a state-of-the-art data lakehouse to life with advanced indexes, streaming ingestion services and data clustering/optimization techniques.
Apache Hudi is now widely adopted across the industry, used by organizations from startups to large enterprises, including Amazon, Walmart, Disney+ Hotstar, GE Aviation, Robinhood and TikTok, to build exabyte-scale data lakes in near real-time at vastly improved price/performance. This broad adoption has battle-tested and proven the foundational benefits of the open source project. Thousands of organizations from across the world have contributed to Hudi, and the project has grown 7x in less than two years to nearly one million monthly downloads. At Uber, Hudi continues to ingest more than 500 billion records every day.
In the “Cost-Efficient Open Source Big Data Platform at Uber” blog post (https://eng.uber.com/cost-efficient-big-data-platform/), Zheng Shao and Mohammad Islam from Uber shared: “We started the Hudi project in 2016, and submitted it to the Apache Incubator in 2019. Apache Hudi is now a Top-Level Project, with the majority of our Big Data on HDFS in Hudi format. This has dramatically reduced the computing capacity needs at Uber.”
Even with transformative technology like Apache Hudi, building a high-quality data lake takes months of investment and scarce engineering talent; without both, organizations run a high risk of stale data, an unreliable lake or poor performance.
Onehouse founder and CEO Vinoth Chandar said: “While a warehouse can just be ‘used’, a lakehouse still needs to be ‘built’. Having worked with many organizations on that journey for four years in the Apache Hudi community, we believe Onehouse will enable easy adoption of data lakes and future-proof the data architecture for machine learning/data science down the line.”
Onehouse streamlines adoption of the lakehouse architecture by offering a fully-managed, cloud-native service that quickly ingests, self-manages and auto-optimizes data. Instead of creating yet another vertically integrated data and query stack, it provides one interoperable, truly open data layer that accelerates workloads across all popular data lake query engines, including Apache Spark, Trino and Presto, and even cloud warehouses via external tables.
Leveraging unique capabilities of Apache Hudi, Onehouse opens the door to incremental data processing, which is typically orders of magnitude faster than “old-school” batch processing. By combining this breakthrough technology with a fully-managed, easy-to-use service, Onehouse lets organizations build data lakes in minutes, not months, realize large cost savings and still own their data in open formats, not locked into any individual vendor.
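The incremental model described above can be illustrated with a small, self-contained sketch. This is plain Python, not the actual Hudi API; the `Table`, `Record` and method names are invented for illustration. The idea is that each write is tagged with a commit time, so a downstream job can pull only the records committed since its last checkpoint instead of rescanning the whole table.

```python
# Conceptual sketch of incremental processing (NOT the Apache Hudi API):
# every write is stamped with a monotonically increasing commit time,
# so consumers can ask "what changed since commit N?" instead of
# re-reading the entire table on every run.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Record:
    key: str
    value: str
    commit_time: int  # commit that produced this record


class Table:
    def __init__(self) -> None:
        self.records: List[Record] = []
        self.latest_commit = 0

    def commit(self, rows: Dict[str, str]) -> int:
        """Write a batch of key/value rows as a single commit."""
        self.latest_commit += 1
        for key, value in rows.items():
            self.records.append(Record(key, value, self.latest_commit))
        return self.latest_commit

    def full_scan(self) -> List[Record]:
        """Batch style: read everything, every time."""
        return list(self.records)

    def incremental_read(self, since: int) -> List[Record]:
        """Incremental style: read only records committed after `since`."""
        return [r for r in self.records if r.commit_time > since]


table = Table()
table.commit({"a": "1", "b": "2"})            # commit 1
checkpoint = table.latest_commit              # consumer saves its position
table.commit({"c": "3"})                      # commit 2

changed = table.incremental_read(checkpoint)  # only commit 2's records
```

The payoff is that `incremental_read` touches work proportional to what changed, while `full_scan` grows with total table size; at the scale of exabyte data lakes, that difference is where the "orders of magnitude" claim comes from.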
Industry Analysts on Onehouse
- “The complexity of building a data lake today is prohibitive for many organizations who want to quickly unlock analytics and AI from their data,” said Paul Nashawaty, Senior Analyst at Enterprise Strategy Group. “The team at Onehouse is building a fully-managed lakehouse infrastructure that automates away tedious data engineering chores and complex performance tuning. Built on an industry proven open source project, Apache Hudi, Onehouse ensures your data foundation is open and future proof.”
- “Data is the new oil and the driving force behind data economy and innovation. But it is very hard, and expensive to build real-time data lakes that can serve AI/ML model creation and model serving in real-time,” said Andy Thurai, Vice President and Principal Analyst at Constellation Research. “A good data ‘lakehouse’ solution should consider using a hybrid model as well as look into using a combination of commercial and open-source options (such as Apache Hudi) to strike a balance between cost vs ease of use.”
- “To unlock the power of machine learning, enterprises should invest in an open standards data lake that makes all enterprise data available for relevant models,” said Hyoun Park, Chief Analyst at Amalgam Insights. “Onehouse tackles this challenge head-on by providing a fully-managed lakehouse that will greatly accelerate the ability to translate massive and varied data sources into AI-guided insight.”
$8 Million in Seed Funding
Onehouse raised $8 million in seed funding co-led by Greylock and Addition. Onehouse plans to use the funding to build out its managed lakehouse product and to further research and development on Apache Hudi.
Greylock Partner Jerry Chen said: “The data lakehouse is the future of data lakes, providing customers the ease of use of a data warehouse with the cost and scale advantages of a data lake. Apache Hudi is already the de facto starting point for modern data lakes, and today Onehouse makes data lakes easily accessible and usable by all customers.”
Addition Investor Aaron Schildkrout said: “Onehouse is ushering in the next generation of data infrastructure, replacing expensive data ingestion and data warehousing solutions with a single lakehouse that’s dramatically less costly, faster, more open and – now – also easier to use. Onehouse is going to make broadly accessible what has to-date been a tightly held secret used by only the most advanced data teams.”