The Analytics Engine for All Your Data with Justin Borgman @ Starburst
In this episode we speak with Justin Borgman, Chairman & CEO at Starburst, which is based on open source Trino (formerly PrestoSQL) and was recently valued at $3.35 billion after securing its Series D funding. We discuss the convergence of data warehouses and data lakes, why data lakes fail, and much more.

Top 3 takeaways:
1. The data mesh architecture is gaining adoption more quickly in Europe due to GDPR.
2. Data lakes historically had two main limitations compared to data warehouses: performance and CRUD operations. Performance has been resolved by query engines like Starburst, and table formats like Apache Iceberg, Apache Hudi, and Delta Lake are starting to close the gap on CRUD operations.
3. The principle of a single source of truth, storing everything in a single data lake or data warehouse, is not always feasible or even possible depending on regulations. Starburst bridges that gap and enables data mesh and data fabric architectures.
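To make the federated-query idea concrete, here is a minimal sketch in Python using the open source Trino client. The coordinator host, catalogs, schemas, and table names are assumptions for illustration only; they are not from the episode.

# Minimal sketch of a federated Trino query, joining a data lake table
# with an operational database table in a single SQL statement.
# Host, catalog, schema, and table names are illustrative assumptions.
from trino.dbapi import connect

conn = connect(
    host="trino.example.com",  # hypothetical Trino/Starburst coordinator
    port=8080,
    user="analyst",
    catalog="hive",            # data lake catalog (e.g. Hive/Iceberg tables)
    schema="analytics",
)

cur = conn.cursor()
cur.execute(
    """
    SELECT c.region, sum(o.amount) AS revenue
    FROM hive.analytics.orders o
    JOIN postgresql.crm.customers c ON o.customer_id = c.id
    GROUP BY c.region
    """
)
for region, revenue in cur.fetchall():
    print(region, revenue)

The point of the sketch is that one SQL statement can span multiple catalogs (here a hypothetical data lake and a hypothetical Postgres database), which is what makes data mesh and data fabric patterns practical without first centralizing all the data.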
Transform Your Object Storage Into a Git-like Repository With Paul Singman @ LakeFS
In this episode we speak with Paul Singman, Developer Advocate at Treeverse / LakeFS. LakeFS is an open source project that allows you to transform your object storage into a Git-like repository.

Top 3 takeaways:
1. LakeFS enables use cases like debugging, by quickly viewing historical versions of your data at a specific point in time, and running ML experiments over the same set of data using branching.
2. The current data landscape is very fragmented, with many tools available. Over the coming years there will most likely be a consolidation toward tools that are more open and integrated.
3. Data quality and observability, including visibility into job runs, continue to be key components of successful data lakes.
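As a rough illustration of the branching idea, here is a minimal Python sketch that reads the same object from two LakeFS branches through its S3-compatible gateway using boto3. The endpoint, credentials, repository, branch names, and object key are assumptions for the example.

# Minimal sketch: reading the same path from two LakeFS branches via the
# S3-compatible gateway. Endpoint, credentials, repository ("analytics"),
# branch names, and the object key are illustrative assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",  # hypothetical LakeFS endpoint
    aws_access_key_id="LAKEFS_ACCESS_KEY",
    aws_secret_access_key="LAKEFS_SECRET_KEY",
)

# In the LakeFS gateway, the bucket is the repository and the key is
# prefixed with the branch name, so "main" and an experiment branch can
# point at different versions of the same path.
for branch in ("main", "experiment-new-model"):
    obj = s3.get_object(Bucket="analytics", Key=f"{branch}/datasets/events.parquet")
    print(branch, obj["ContentLength"], "bytes")

The design point is that existing S3 tooling keeps working unchanged, while the branch prefix gives you Git-like isolation between production data and experiments.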
Enable Faster Data Processing and Access with Apache Arrow with Matt Topol @ FactSet
In this episode we speak with Matt Topol, Vice President, Principal Software Architect @ FactSet, and dive deep into how they are taking advantage of Apache Arrow for faster processing and data access.

Below are the top 3 value bombs:
1. Apache Arrow is an open-source in-memory columnar format that creates a standard way to share and process data structures.
2. Apache Arrow Flight eliminates serialization and deserialization, enabling faster access to query results compared to traditional JDBC and ODBC interfaces.
3. Don't put all your eggs in one basket: whether you're using commercial products or open source, design a modular architecture that does not tie you down to any one piece of technology.
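To ground the Arrow and Flight points, here is a minimal sketch using pyarrow. The Flight server location and the command payload are assumptions; the episode does not describe FactSet's actual endpoints.

# Minimal sketch: build an Arrow table in memory, then fetch query results
# from an Arrow Flight server as columnar record batches rather than
# row-by-row as with JDBC/ODBC. Server location and command are
# illustrative assumptions.
import pyarrow as pa
import pyarrow.flight as flight

# In-memory columnar table in the standard Arrow format.
table = pa.table({"symbol": ["AAPL", "MSFT"], "price": [190.5, 410.2]})
print(table.schema)

# Fetch results over Arrow Flight from a hypothetical server.
client = flight.FlightClient("grpc://flight.example.com:8815")
descriptor = flight.FlightDescriptor.for_command(b"SELECT symbol, price FROM quotes")
info = client.get_flight_info(descriptor)
reader = client.do_get(info.endpoints[0].ticket)
results = reader.read_all()  # a pyarrow.Table, already in columnar form
print(results.num_rows)

Because the data stays in the Arrow format end to end, the client never pays the serialization and deserialization cost that the episode calls out for traditional JDBC and ODBC interfaces.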
Implementing Amundsen @ Convoy with Chad Sanderson
In this episode we speak with Chad Sanderson, Head of Data @ Convoy and early-stage startup advisor focused on data innovation, and uncover their journey to implementing Amundsen, an open source data catalog.

Below are the top 3 value bombs:
1. Data scientists should not be spending the majority of their time trying to find the data they are interested in.
2. Amundsen is a powerful open source data catalog that integrates across your data landscape to provide visibility into your data assets and lineage.
3. Within data teams we often get lost in the features. It's important to take a step back and understand how you're impacting the bottom line of the business.
The Importance of Treating Your Data Initiatives as Products with Murali Bhogavalli
Your data team should not just be keeping the lights on; it should be building data products to support the business. In this episode we speak with Murali Bhogavalli, a data product manager, and explore what a data product manager is and how they differ from a traditional product manager.

Below are the top 3 value bombs:
1. Data should be looked at as a product and treated as such within the organization (i.e. agile methodologies, continuous improvement, etc.).
2. Organizations need to be more than just data driven; they also need to be data informed. For that to happen, you need to build data literacy into your ecosystem by helping everybody understand what the data means, where it is coming from, and its quality.
3. Product managers typically use data to deliver outcomes. For a data PM, data is both the deliverable and the outcome.