The Data Lakehouse has emerged as a new data management architecture across many organizations and use cases. This post describes the architecture and its advantages over earlier approaches.
In this article we will cover:
- Traditional Data Warehouses and Data Lakes
- What is a Lakehouse?
- Lakehouse for Business Intelligence
- Sesame Software Data Warehouse Platform
Traditional data warehouses have a long history in decision support and business intelligence applications. While data warehouses are great for structured data, many modern enterprises have to deal with unstructured and semi-structured data of high variety, velocity, and volume. Data warehouses are not suited for many of these use cases, and they are certainly not the most cost-efficient option.
As companies collected large amounts of raw data from many different sources, there was an increasing need for a single system to house data for many other analytic products and workloads. In response to this need, companies began building data lakes. While suitable for storing data, data lakes lack some critical features: they do not support transactions or enforce data quality, resulting in a lack of data consistency.
The need for a flexible, high-performance system hasn’t diminished. Companies require systems for diverse data applications, including SQL analytics, real-time monitoring, and data science.
Most recent advances have come from better models for processing unstructured data, yet these are precisely the types of data a data warehouse is not optimized for. A common approach is to use multiple systems: a data lake, one or more data warehouses, and other specialized systems. However, this introduces complexity and delays, because data professionals must move or copy data between the different systems.
Today, organizations that work with various data sets have yet another option for storage architecture: a hybrid architecture called the “data lakehouse” approach.
What is a Lakehouse?
Like a data lake, a data lakehouse is built to unify data, both structured and unstructured. Businesses that want to work with unstructured data then need only one data repository, rather than both warehouse and lake infrastructure.
Lakehouses are enabled by a new system design: implementing similar data structures and data management features to those in a data warehouse directly on top of low-cost storage in open formats.
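To make this design concrete, here is a minimal, purely illustrative sketch (not any real lakehouse engine): table data lives as plain files in cheap storage, while an append-only commit log layered on top supplies warehouse-style features such as atomic commits and schema enforcement. The `ToyLakehouseTable` class and its file layout are invented for this example.

```python
import json
import os
import tempfile

class ToyLakehouseTable:
    """Toy table: plain JSON data files plus an append-only commit log."""

    def __init__(self, root, schema):
        self.root = root
        self.schema = schema  # column name -> expected Python type
        os.makedirs(os.path.join(root, "_log"), exist_ok=True)

    def _validate(self, rows):
        # Warehouse-style data-quality check, enforced before any write.
        for row in rows:
            for col, typ in self.schema.items():
                if not isinstance(row.get(col), typ):
                    raise TypeError(f"column {col!r} must be {typ.__name__}")

    def append(self, rows):
        self._validate(rows)
        log_dir = os.path.join(self.root, "_log")
        version = len(os.listdir(log_dir))
        data_file = f"part-{version}.json"
        with open(os.path.join(self.root, data_file), "w") as f:
            json.dump(rows, f)
        # Writing the commit record last makes the append atomic:
        # readers only see data files that a commit references.
        with open(os.path.join(log_dir, f"{version}.json"), "w") as f:
            json.dump({"add": data_file}, f)

    def read(self):
        rows = []
        log_dir = os.path.join(self.root, "_log")
        for name in sorted(os.listdir(log_dir)):
            with open(os.path.join(log_dir, name)) as f:
                commit = json.load(f)
            with open(os.path.join(self.root, commit["add"])) as f:
                rows.extend(json.load(f))
        return rows

table = ToyLakehouseTable(tempfile.mkdtemp(), {"id": int, "event": str})
table.append([{"id": 1, "event": "login"}, {"id": 2, "event": "logout"}])
print(len(table.read()))  # 2
```

Real lakehouse formats apply the same idea at scale, using columnar open file formats and far richer transaction logs; the point here is only that the management layer lives on top of low-cost storage rather than inside a proprietary warehouse.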
When organizations use both, data in the warehouse generally feeds BI analytics, while data in the lake is used for data science, including artificial intelligence (AI) workloads such as machine learning, and serves as storage for future, still-undefined use cases.
Data lakehouses enable structure and schema like those used in a data warehouse to be applied to the unstructured data of the type typically stored in a data lake. This means that data users can access the information more quickly and start putting it to work. Those data users might be data scientists or, increasingly, workers in any number of other roles who see the benefits of augmenting their work with advanced analytics capabilities.
These data lakehouses might use intelligent metadata layers that act as a middle ground between the unstructured data and the data user, categorizing and classifying the data. By identifying and extracting features from the data, the layer can effectively structure it, allowing it to be cataloged and indexed just as if it were tidy structured data.
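The metadata-layer idea can be sketched in a few lines. This is a hypothetical illustration, not a production catalog: `extract_metadata` derives simple features (word count, keywords) from raw text, and `Catalog` records them so the unstructured documents can be searched like structured data.

```python
import re
from collections import defaultdict

def extract_metadata(doc_id, text):
    """Extract simple features from an unstructured text document."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "doc_id": doc_id,
        "word_count": len(words),
        "keywords": sorted(set(words)),
    }

class Catalog:
    """Toy metadata catalog: stores extracted features and a keyword index."""

    def __init__(self):
        self.entries = {}
        self.index = defaultdict(set)  # keyword -> set of doc ids

    def register(self, meta):
        self.entries[meta["doc_id"]] = meta
        for kw in meta["keywords"]:
            self.index[kw].add(meta["doc_id"])

    def search(self, keyword):
        return sorted(self.index.get(keyword.lower(), set()))

catalog = Catalog()
catalog.register(extract_metadata("note-1", "Quarterly revenue grew strongly"))
catalog.register(extract_metadata("note-2", "Revenue forecast for next quarter"))
print(catalog.search("revenue"))  # ['note-1', 'note-2']
```

A real metadata layer would extract far richer features (entities, classifications, lineage), but the mechanism is the same: once features are in the catalog, unstructured content becomes queryable.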
Lakehouse for Business Intelligence
So who is the data lakehouse architecture built for? One key group is organizations looking to graduate from BI to AI. Such organizations are increasingly turning to unstructured data to inform their data-driven operations and decision-making, simply because of the richness of the insights that can be extracted from it.
Yes, you could put all of that data into a data lake. However, there would be significant data governance issues to address, such as the fact that you are likely dealing with personal information.
A lakehouse architecture would address this by automating compliance procedures – perhaps even anonymizing data where needed.
Unlike data warehouses, data lakehouses are inexpensive to scale because integrating new data sources is automated: sources don't have to be manually fitted to the organization's existing data formats and schema. Data can then be queried from anywhere using any tool, rather than only through applications limited to structured data, such as SQL clients.
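One way this automation can work is schema inference: rather than hand-mapping each new feed onto a fixed warehouse schema, a lakehouse can derive a schema from sample records and merge it into what it already knows. The two helper functions below are a hedged sketch of that idea, not any particular product's API.

```python
def infer_schema(records):
    """Infer a column -> type-names schema from sample records."""
    schema = {}
    for rec in records:
        for col, val in rec.items():
            schema.setdefault(col, set()).add(type(val).__name__)
    return {col: sorted(types) for col, types in schema.items()}

def merge_schemas(existing, new):
    """Merge an inferred schema into an existing one, keeping all types seen."""
    merged = {col: sorted(types) for col, types in existing.items()}
    for col, types in new.items():
        merged[col] = sorted(set(merged.get(col, [])) | set(types))
    return merged

existing = {"id": ["int"], "name": ["str"]}
new_feed = [{"id": 1, "name": "a", "score": 0.9}, {"id": 2, "name": "b"}]
print(merge_schemas(existing, infer_schema(new_feed)))
```

Here the new feed's extra `score` column is absorbed automatically instead of requiring a manual schema change, which is the "schema-on-read" flexibility the lakehouse approach leans on.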
The data lakehouse approach is one that’s likely to become increasingly popular as more organizations begin to understand the value of using unstructured data together with AI and machine learning. In the data analytics journey, it’s a step up in maturity from the combined data lake and data warehouse model.
Over time, lakehouses will close the remaining gaps with mature warehouses while retaining the core properties of being simpler, more cost-efficient, and more capable of serving diverse data applications.
Sesame Software Lakehouse Platform
Sesame Software’s Lakehouse Platform combines the best elements of data warehouses and data lakes. Sesame Software delivers the data management and performance typically found in data warehouses with the low-cost, flexible object stores offered by data lakes. This unified platform simplifies your data architecture by eliminating the data silos that traditionally separate analytics and data science. Request a demo to learn more about how Sesame Software can provide you with a secure and scalable foundation for your Lakehouse.