Luke Kysow is a software engineer at HashiCorp, and he's in conversation with host Robert Blumen. The subject of their discussion is the service mesh. As software architectures moved toward microservices, several reusable pieces of logic had to be configured for each application: on a macro scale, load balancers need to be configured to control where packets flow; on a micro level, things like authorization and rate limiting for data access need to be set up per application. This is where the service mesh came into being. As microservices began to call out to each other, shared logic was extracted and placed into a separate layer. Now, every inbound and outbound connection, whether between services or from external clients, goes through the same service mesh layer.
Extracting common functionality like this has several benefits. As containerization enables organizations to become more polyglot, a service mesh provides the opportunity to write operational logic once and reuse it everywhere, no matter the base application's language. Similarly, each application no longer needs its own bespoke dependency library for circuit breakers, rate limiting, authorization, and so on; the service mesh provides a single place for that logic to be configured and applied everywhere. Service meshes are also useful for metrics aggregation. If every packet must traverse the service mesh layer, that layer becomes the de facto location to set up counters and gauges for the actions you're interested in, rather than having each application emit its own duplicate instrumentation.
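The "write once, reuse everywhere" idea can be sketched in a few lines of toy code. This is purely illustrative, not a real mesh or any HashiCorp API; the names `SidecarProxy` and `billing_service` are hypothetical. Every call passes through one shared layer that handles rate limiting and metrics, so the application itself contains none of that logic:

```python
import time
from collections import Counter

class SidecarProxy:
    """Toy stand-in for a service-mesh sidecar (hypothetical name):
    every call to the wrapped service passes through shared
    rate-limiting and metrics logic, keeping the service free of it."""

    def __init__(self, service, max_calls_per_window, window_seconds=1.0):
        self.service = service
        self.max_calls = max_calls_per_window
        self.window = window_seconds
        self.calls = []            # timestamps of recent calls
        self.metrics = Counter()   # one shared place to scrape counters

    def request(self, *args, **kwargs):
        now = time.monotonic()
        # Drop timestamps that have aged out of the rate-limit window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            self.metrics["rate_limited"] += 1
            raise RuntimeError("rate limit exceeded")
        self.calls.append(now)
        self.metrics["requests"] += 1
        return self.service(*args, **kwargs)

# The application code knows nothing about rate limits or metrics:
def billing_service(amount):
    return f"charged {amount}"

proxy = SidecarProxy(billing_service, max_calls_per_window=2)
print(proxy.request(10))   # charged 10
print(proxy.request(20))   # charged 20
# A third call inside the same window would be rejected by the proxy,
# and the rejection shows up in proxy.metrics, not in billing_service.
```

In a real mesh this interception happens in a sidecar proxy process configured once for the whole fleet, which is why it works regardless of each service's implementation language.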
Luke notes that while it's important for engineers to understand the value of a service mesh, it's just as important to know whether such a layer is right for your application. That depends on how big your organization is and the challenges you're trying to solve; a service mesh is not an essential piece of every stack. Even a hybrid approach, where some logic is shared and some remains unique to each microservice, can be beneficial without extracting everything out.
Links from this episode
118. Why Writing Matters for Engineers
117. Open Source with Jim Jagielski
116. Success From Anywhere
115. Demystifying the User Experience with Performance Monitoring
114. Beyond Root Cause Analysis in Complex Systems
113. Principles of Pragmatic Engineering
112. Managing Public Key Infrastructure within an Enterprise
111. Gift Cards for Small Businesses
110. Scaling a Bernie Meme
109. Meditation for the Curious Skeptic
108. Building Community with the Wicked CoolKit
I Was There: Stories of Production Incidents II
107. How to Write Seriously Good Software
106. Growing a Self-Funded Company
105. Event Sourcing and CQRS
103. Chaos Engineering
102. Whether or Not to Repeat Yourself: DRY, DAMP, or WET
101. Cloud Native Applications
100. Math for Programmers