Confluent Platform 7.0 has launched and includes Apache Kafka® 3.0, plus new features introduced by KIP-630 (Kafka Raft snapshots), KIP-745 (Connect API to restart a connector and its tasks), and KIP-695 (further improved Kafka Streams timestamp synchronization). Reporting from Dubai, Tim Berglund (Senior Director, Developer Advocacy, Confluent) summarizes the new features, updates, and improvements in the 7.0 release, including the ability to create a real-time bridge from on-premises environments to the cloud with Cluster Linking.
Cluster Linking lets you connect environments with a single cluster link from Confluent Platform to Confluent Cloud (available on public clouds such as AWS, Google Cloud, and Microsoft Azure), removing the need for numerous point-to-point connections. Consumers reading from a topic in one environment can read from the same topic in a different environment without risk of reprocessing or missing critical messages. This gives operators the flexibility to migrate topics smoothly, replicating them byte for byte without data loss. Additionally, because offsets are preserved across the link, Cluster Linking eliminates the need to deploy MirrorMaker 2 for replication management.
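As a rough sketch of what setting up such a link can look like (a sketch only: the host names, link name, topic, and credentials below are invented, and it assumes the `kafka-cluster-links` and `kafka-mirrors` CLI tools that ship with Confluent Platform):

```properties
# source.properties — hypothetical connection details for the source cluster
bootstrap.servers=on-prem-kafka:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN

# A cluster link is created on the destination cluster and pulls from the
# source; a mirror topic then replicates the source topic byte for byte:
#
#   kafka-cluster-links --bootstrap-server cloud-kafka:9092 \
#     --create --link on-prem-link --config-file source.properties
#
#   kafka-mirrors --bootstrap-server cloud-kafka:9092 \
#     --create --mirror-topic orders --link on-prem-link
```

Because the mirror topic is replicated byte for byte with matching offsets, consumers can switch between environments without reprocessing.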
Furthermore, the release of Confluent for Kubernetes 2.2 allows you to build your own private-cloud Kafka service. It completes the declarative API by adding cloud-native management of connectors, schemas, and cluster links, reducing the operational burden and manual processes so that you can instead focus on high-level declarations. Confluent for Kubernetes 2.2 also enhances elastic scaling through the Shrink API.
Building on Apache Kafka 3.0's progress toward removing ZooKeeper (KIP-500), Confluent Platform 7.0 introduces KRaft in preview, making it easier to monitor and scale Kafka clusters to millions of partitions. There are also several ksqlDB enhancements in this release, including foreign-key table joins and support for two new data types, DATE and TIME, which represent time values that aren't TIMESTAMPs. This allows data to be ingested from the source consistently, without converting data types.
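A hedged sketch of the two ksqlDB features mentioned above (stream, table, and column names are invented for illustration):

```sql
-- Foreign-key table join: ORDERS joins CUSTOMERS on a non-key column of
-- the left table (customer_id) against the right table's primary key.
CREATE TABLE orders_enriched AS
  SELECT o.order_id, o.total, c.name
  FROM orders o
  JOIN customers c ON o.customer_id = c.id;

-- DATE and TIME columns are ingested as-is, with no TIMESTAMP conversion:
CREATE TABLE shifts (
  shift_id   STRING PRIMARY KEY,
  shift_date DATE,
  start_time TIME
) WITH (KAFKA_TOPIC = 'shifts', VALUE_FORMAT = 'JSON');
```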
EPISODE LINKS
Building Real-Time Data Governance at Scale with Apache Kafka ft. Tushar Thole
Handling 2 Million Apache Kafka Messages Per Second at Honeycomb
Why Data Mesh? ft. Ben Stopford
Serverless Stream Processing with Apache Kafka ft. Bill Bejeck
The Evolution of Apache Kafka: From In-House Infrastructure to Managed Cloud Service ft. Jay Kreps
What’s Next for the Streaming Audio Podcast ft. Kris Jenkins
On to the Next Chapter ft. Tim Berglund
Intro to Event Sourcing with Apache Kafka ft. Anna McDonald
Expanding Apache Kafka Multi-Tenancy for Cloud-Native Systems ft. Anna Povzner and Anastasia Vela
Apache Kafka 3.1 - Overview of Latest Features, Updates, and KIPs
Optimizing Cloud-Native Apache Kafka Performance ft. Alok Nikhil and Adithya Chandra
From Batch to Real-Time: Tips for Streaming Data Pipelines with Apache Kafka ft. Danica Fine
Real-Time Change Data Capture and Data Integration with Apache Kafka and Qlik
Modernizing Banking Architectures with Apache Kafka ft. Fotios Filacouris
Running Hundreds of Stream Processing Applications with Apache Kafka at Wise
Lessons Learned From Designing Serverless Apache Kafka ft. Prachetaa Raghavan
Using Apache Kafka as Cloud-Native Data System ft. Gwen Shapira
ksqlDB Fundamentals: How Apache Kafka, SQL, and ksqlDB Work Together ft. Simon Aubury
Explaining Stream Processing and Apache Kafka ft. Eugene Meidinger
Handling Message Errors and Dead Letter Queues in Apache Kafka ft. Jason Bell