Inheriting software in the banking sector can be challenging. Perhaps the only thing harder is inheriting software built by a committee of banks. How do you keep it running, while improving it, refactoring it, and planning a bigger future for it? In this episode, Jean-Francois Garet (Technical Architect, Symphony) shares his experience at Symphony as he helps it evolve from an inherited, monolithic, single-tenant architecture to an event mesh for seamless event-streaming microservices. He talks about the journey they’ve taken so far, and the foundations they’ve laid for a modern data mesh.
Symphony is the leading markets infrastructure and technology platform, providing a full communication stack (chat, voice and video meetings, file and screen sharing) for the financial industry. Jean-Francois shares that its initial system was inherited from one of the founding institutions. It was built with a high level of security to ensure the confidentiality of business conversations, along with compliance with regulations covering financial transactions. However, its stack is monolithic and single-tenant.
To modernize Symphony's architecture for real-time data, Jean-Francois and his team have been exploring various approaches over the last four years. They started breaking the monolith down into microservices and also moved towards multitenancy by setting up an event mesh. However, both attempts met with a mix of success and failure.
To continue evolving the system while maintaining business deliveries, the team focused on event streaming for asynchronous communication, as well as on connecting the microservices for real-time data exchange. Since the company already had experience with Apache Kafka®, the team chose managed Kafka in the cloud as its streaming platform.
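The episode doesn't go into implementation details, but as a rough sketch, a microservice publishing events asynchronously over Kafka might look like the following. The topic name, tenant key, event payload, and broker address are illustrative assumptions, not details from Symphony's actual setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ChatEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Address of the managed Kafka cluster (placeholder).
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical topic and tenant key: keying by tenant keeps each
            // tenant's events ordered within a partition, one common way to
            // carry multitenancy through a shared cluster.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("chat-events", "tenant-42",
                    "{\"type\":\"MESSAGE_SENT\",\"roomId\":\"abc\"}");

            // send() is asynchronous: the microservice keeps working while
            // the event is delivered in the background, and the callback
            // reports success or failure.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

A downstream microservice would consume the same topic with a Kafka consumer, decoupling the two services in time: either side can be deployed, scaled, or restarted independently.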
The team kept a set of guiding principles in mind while developing their event-streaming functionality.
Jean-Francois shares that a data mesh is ultimately what they hope to achieve with their platform: providing governance around data and making data available as a self-service product. For now, though, their focus is on achieving real-time event streams with the event mesh.
EPISODE LINKS
Apache Kafka 3.5 - Kafka Core, Connect, Streams, & Client Updates
A Special Announcement from Streaming Audio
How to use Data Contracts for Long-Term Schema Management
How to use Python with Apache Kafka
Next-Gen Data Modeling, Integrity, and Governance with YODA
Migrate Your Kafka Cluster with Minimal Downtime
Real-Time Data Transformation and Analytics with dbt Labs
What is the Future of Streaming Data?
What can Apache Kafka Developers learn from Online Gaming?
Apache Kafka 3.4 - New Features & Improvements
How to use OpenTelemetry to Trace and Monitor Apache Kafka Systems
What is Data Democratization and Why is it Important?
Git for Data: Managing Data like Code with lakeFS
Using Kafka-Leader-Election to Improve Scalability and Performance
Real-Time Machine Learning and Smarter AI with Data Streaming
The Present and Future of Stream Processing
Top 6 Worst Apache Kafka JIRA Bugs
Learn How Stream-Processing Works The Simplest Way Possible
Building and Designing Events and Event Streams with Apache Kafka
Rethinking Apache Kafka Security and Account Management