Transforming Data Pipelines at Xena Intelligence with Naseem Shah
The shift from simple cron jobs to orchestrated AI-powered workflows is reshaping how startups scale. For a small team, these transitions come with unique challenges and big opportunities.

In this episode, Naseem Shah, Head of Engineering at Xena Intelligence, shares how he built data pipelines from scratch, adopted Apache Airflow and transformed Amazon review analysis with LLMs.

Key Takeaways:
00:00 Introduction.
03:28 The importance of building initial products that support growth and investment.
06:16 The process of adopting new tools to improve reliability and efficiency.
09:29 Approaches to learning complex technologies through practice and fundamentals.
13:57 Trade-offs small teams face when balancing performance and costs.
18:40 Using AI-driven approaches to generate insights from large datasets.
22:38 How unstructured data can be transformed into actionable information (an illustrative sketch follows these notes).
25:55 Moving from manual tasks to fully automated workflows.
28:05 Orchestration as a foundation for scaling advanced use cases.

Resources Mentioned:
Naseem Shah
https://www.linkedin.com/in/naseemshah/
Xena Intelligence | LinkedIn
https://www.linkedin.com/company/xena-intelligence/
Xena Intelligence | Website
https://xenaintelligence.com/
Apache Airflow
https://airflow.apache.org/
Google Cloud Composer
https://cloud.google.com/composer
Techstars
https://www.techstars.com/
Docker
https://www.docker.com/
AWS SQS
https://aws.amazon.com/sqs/
PostgreSQL
https://www.postgresql.org/

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

#AI #Automation #Airflow #MachineLearning
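The episode itself doesn't include code, but a minimal sketch of the pattern behind 18:40 and 22:38, an Airflow task that sends raw review text to an LLM and keeps only the structured result, might look like this. The model, prompt and sample reviews are assumptions for illustration, not Xena Intelligence's actual pipeline.

```python
# Hedged sketch: turning unstructured Amazon reviews into structured rows
# with an LLM inside an Airflow DAG. Model, prompt and sample data are
# illustrative assumptions, not Xena Intelligence's actual pipeline.
import json
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def review_insights():
    @task
    def fetch_reviews() -> list[str]:
        # In a real pipeline these might arrive via AWS SQS or Postgres.
        return ["Arrived late but works great.", "Broke after two days."]

    @task
    def analyze(reviews: list[str]) -> list[dict]:
        from openai import OpenAI  # hypothetical choice of LLM client

        client = OpenAI()
        results = []
        for text in reviews:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": (
                        "Return JSON with keys sentiment and issue "
                        f"for this product review: {text}"
                    ),
                }],
                response_format={"type": "json_object"},
            )
            results.append(json.loads(resp.choices[0].message.content))
        return results

    analyze(fetch_reviews())

review_insights()
```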
Scaling Geospatial Workflows With Airflow at Overture Maps Foundation and Wherobots with Alex Iannicelli and Daniel Smith
Using Airflow to orchestrate geospatial data pipelines unlocks powerful efficiencies for data teams. The combination of scalable processing and visual observability streamlines workflows, reduces costs and improves iteration speed.

In this episode, Alex Iannicelli, Staff Software Engineer at Overture Maps Foundation, and Daniel Smith, Senior Solutions Architect at Wherobots, join us to discuss leveraging Apache Airflow and Apache Sedona to process massive geospatial datasets, build reproducible pipelines and orchestrate complex workflows across platforms.

Key Takeaways:
00:00 Introduction.
03:22 How merging multiple data sources supports comprehensive datasets.
04:20 The value of flexible configurations for running pipelines on different platforms.
06:35 Why orchestration tools are essential for handling continuous data streams.
09:45 The importance of observability for monitoring progress and troubleshooting issues.
11:30 Strategies for processing large, complex datasets efficiently (an illustrative sketch follows these notes).
13:27 Expanding orchestration beyond core pipelines to automate frequent tasks.
17:02 Advantages of using open-source operators to simplify integration and deployment.
20:32 Desired improvements in orchestration tools for usability and workflow management.

Resources Mentioned:
Alex Iannicelli
https://www.linkedin.com/in/atiannicelli/
Overture Maps Foundation | LinkedIn
https://www.linkedin.com/company/overture-maps-foundation/
Overture Maps Foundation | Website
https://overturemaps.org
Daniel Smith
https://www.linkedin.com/in/daniel-smith-analyst/
Wherobots | LinkedIn
https://www.linkedin.com/company/wherobots
Wherobots | Website
https://www.wherobots.com
Apache Airflow
https://airflow.apache.org/
Apache Sedona
https://sedona.apache.org/
GitHub repo
https://github.com/wherobots/airflow-providers-wherobots

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

#AI #Automation #Airflow #MachineLearning
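These notes don't include code, but the kind of spatial processing described at 11:30 can be sketched with Apache Sedona as a minimal Spark job of the sort an Airflow task would launch. The bucket paths, column names and join predicate are invented for illustration and aren't Overture's actual pipeline.

```python
# Hedged sketch: a Spark/Sedona spatial join of the kind such pipelines
# run. File paths, column names and the join predicate are assumptions.
from sedona.spark import SedonaContext

config = (
    SedonaContext.builder()
    .appName("places-in-admin-areas")
    .getOrCreate()
)
sedona = SedonaContext.create(config)  # registers the ST_* SQL functions

# Hypothetical inputs: points of interest and administrative boundaries.
sedona.read.parquet("s3://example-bucket/places/").createOrReplaceTempView("places")
sedona.read.parquet("s3://example-bucket/admins/").createOrReplaceTempView("admins")

# Assign each place to the boundary polygon that contains it.
joined = sedona.sql("""
    SELECT p.id, p.name, a.admin_name
    FROM places p
    JOIN admins a
      ON ST_Contains(ST_GeomFromWKB(a.geometry), ST_GeomFromWKB(p.geometry))
""")
joined.write.mode("overwrite").parquet("s3://example-bucket/places_enriched/")
```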
Scaling Airflow for Enterprise Data Platforms at PepsiCo with Kunal Bhattacharya
PepsiCo’s data platform drives insights across finance, marketing and data science. Delivering stability, scalability and developer delight is central to its success, and engineering leadership plays a key role in making this possible.

In this episode, Kunal Bhattacharya, Senior Manager of Data Platform Engineering at PepsiCo, shares how his team manages Airflow at scale while ensuring security, performance and cost efficiency.

Key Takeaways:
00:00 Introduction.
02:31 Enabling developer delight by extending platform capabilities.
03:56 Role of Snowflake, dbt and Airflow in PepsiCo’s data stack.
06:10 Local developer environments built using official Airflow Helm charts.
07:13 Pre-staging and PR environments as testing playgrounds.
08:08 Automating labeling and resource allocation via DAG factories (an illustrative sketch follows these notes).
12:16 Cost optimization through pod labeling and Datadog insights.
14:01 Isolating dbt engines to improve performance across teams.
16:12 Wishlist for Airflow 3: Improved role-based grants and database modeling.

Resources Mentioned:
Kunal Bhattacharya
https://www.linkedin.com/in/kunaljubce/
PepsiCo | LinkedIn
https://www.linkedin.com/company/pepsico/
PepsiCo | Website
https://www.pepsico.com
Apache Airflow
https://airflow.apache.org/
Snowflake
https://www.snowflake.com
dbt
https://www.getdbt.com
Kubernetes
https://kubernetes.io
Great Expectations
https://greatexpectations.io
Monte Carlo
https://www.montecarlodata.com

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

#AI #Automation #Airflow #MachineLearning
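The DAG-factory approach mentioned at 08:08 isn't shown in the episode; a minimal sketch of the general pattern follows, assuming a KubernetesExecutor deployment. Team names, labels and task contents are illustrative, not PepsiCo's code.

```python
# Hedged sketch of a DAG factory that stamps out per-team DAGs and
# labels their Kubernetes pods so spend can be grouped by team in a
# cost tool. Team names and labels are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from kubernetes.client import models as k8s

TEAMS = ["finance", "marketing", "data_science"]  # hypothetical teams

def build_dag(team: str) -> DAG:
    """Build one identically shaped DAG per team, tagged for cost tracking."""
    pod_labels = k8s.V1Pod(
        metadata=k8s.V1ObjectMeta(labels={"team": team, "cost-center": team})
    )
    dag = DAG(
        dag_id=f"{team}_daily_load",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        tags=[team],
        # executor_config is applied to every task, so each pod the
        # KubernetesExecutor launches carries the team labels.
        default_args={"executor_config": {"pod_override": pod_labels}},
    )
    with dag:
        EmptyOperator(task_id="extract_placeholder")
    return dag

# Expose each generated DAG at module level so the scheduler discovers it.
for _team in TEAMS:
    globals()[f"{_team}_daily_load"] = build_dag(_team)
```

Generating DAGs from one factory keeps the labels consistent, which is what makes per-team cost grouping in a tool like Datadog possible.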
Building a Unified Data Platform at Pattern with William Graham
The orchestration of data workflows at scale requires both flexibility and security. At Pattern, decoupling scheduling from orchestration has reshaped how data teams manage large-scale pipelines.

In this episode, we are joined by William Graham, Senior Data Engineer at Pattern, who explains how his team leverages Apache Airflow alongside their open-source tool Heimdall to streamline scheduling, orchestration and access management.

Key Takeaways:
00:00 Introduction.
02:44 Structure of Pattern’s data teams across acquisition, engineering and platform.
04:27 How Airflow became the central scheduler for batch jobs.
08:57 Credential management challenges that led to decoupling scheduling and orchestration.
12:21 Heimdall simplifies multi-application access through a unified interface.
13:15 Standardized operators in Airflow using Heimdall integration (an illustrative sketch follows these notes).
17:13 Open-source contributions and early adoption of Heimdall within Pattern.
21:01 Community support for Airflow and satisfaction with scheduling flexibility.

Resources Mentioned:
William Graham
https://www.linkedin.com/in/willgraham2/
Pattern | LinkedIn
https://www.linkedin.com/company/pattern-hq/
Pattern | Website
https://pattern.com
Apache Airflow
https://airflow.apache.org
Heimdall on GitHub
https://github.com/Rev4N1/Heimdall
Netflix Genie
https://netflix.github.io/genie/

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

#AI #Automation #Airflow #MachineLearning
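Heimdall's real interface isn't documented in these notes, so the following is only a sketch of the standardized-operator idea from 13:15: an operator that requests short-lived credentials from a central broker at execution time instead of storing secrets in every Airflow connection. The endpoint, payload and response fields are invented for illustration and are not Heimdall's actual API.

```python
# Hedged sketch of a "standardized operator" that fetches short-lived
# credentials from a central broker at execution time, in the spirit of
# Pattern's Heimdall. Endpoint, payload and response shape are invented.
import requests
from airflow.models.baseoperator import BaseOperator

class BrokeredQueryOperator(BaseOperator):
    """Run work against a target system using broker-issued credentials."""

    def __init__(self, *, broker_url: str, target_system: str, sql: str, **kwargs):
        super().__init__(**kwargs)
        self.broker_url = broker_url
        self.target_system = target_system
        self.sql = sql

    def execute(self, context):
        # Ask the broker for a credential at runtime (hypothetical endpoint).
        resp = requests.post(
            f"{self.broker_url}/v1/credentials",
            json={"system": self.target_system},
            timeout=30,
        )
        resp.raise_for_status()
        token = resp.json()["token"]  # invented response field
        self.log.info(
            "Running query on %s with brokered credential", self.target_system
        )
        # ...open a connection with `token` and execute self.sql...
        return None
```

Centralizing credential issuance this way means rotating a secret touches one service rather than every Airflow connection that uses it.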
How Astronomer Turns Proactive Monitoring Into Customer Success with Collin McNulty
The evolution of Airflow continues to shape data orchestration and monitoring strategies. Leveraging it beyond traditional ETL use cases opens powerful new possibilities for proactive support and internal operations.

In this episode, we are joined by Collin McNulty, Sr. Director of Global Support at Astronomer, who shares insights from his journey into data engineering and the lessons learned from leading Astronomer’s Customer Reliability Engineering (CRE) team.

Key Takeaways:
00:00 Introduction.
03:07 Lessons learned in adapting to major platform transitions.
05:18 How proactive monitoring improves reliability and customer experience (an illustrative sketch follows these notes).
08:10 Using automation to enhance internal support processes.
12:09 Why keeping systems current helps avoid unnecessary issues.
15:14 Approaches that strengthen system reliability and efficiency.
18:46 Best practices for simplifying complex orchestration dependencies.
23:24 Anticipated innovations that expand orchestration capabilities.

Resources Mentioned:
Collin McNulty
https://www.linkedin.com/in/collin-mcnulty/
Astronomer | LinkedIn
https://www.linkedin.com/company/astronomer/
Astronomer | Website
https://www.astronomer.io
Apache Airflow
https://airflow.apache.org/
Prometheus
https://prometheus.io/
Splunk
https://www.splunk.com/

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

#AI #Automation #Airflow #MachineLearning
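Proactive monitoring as discussed at 05:18 is a practice rather than a single tool, but one common Airflow expression of it is a canary DAG: a trivial pipeline run frequently so that failures or delays flag platform trouble before customer workloads feel it. The schedule and alert hook below are assumptions for illustration, not Astronomer's internal setup.

```python
# Hedged sketch of a "canary" DAG: a trivial pipeline run on a tight
# schedule so a failure or delay flags platform problems early. The
# schedule and alerting hook are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

def alert_on_failure(context):
    # Stand-in for a real pager or chat notification (e.g. a webhook).
    ti = context["task_instance"]
    print(f"CANARY FAILED: {ti.dag_id}.{ti.task_id} at {context['ts']}")

with DAG(
    dag_id="platform_canary",
    schedule=timedelta(minutes=5),  # frequent enough to catch issues fast
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"on_failure_callback": alert_on_failure},
    # If the scheduler itself is unhealthy, even this trivial run will be
    # late, which an external freshness check can surface.
) as dag:
    BashOperator(task_id="heartbeat", bash_command="echo canary ok")
```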