Microservices-style applications depend on fast, dependable network infrastructure to respond quickly and reliably, and a service mesh can be a powerful enabler.
At the same time, service-mesh infrastructure can be difficult to deploy and manage at scale and may be too complex for smaller applications, so enterprises need to weigh its potential upsides and downsides against their particular circumstances.
What is a service mesh?
A service mesh is infrastructure software that provides fast, reliable communications among the microservices that make up an application. Its networking features include application identification, load balancing, authentication, and encryption.
Network requests are routed between microservices via proxies that run alongside each service. Together these proxies form a mesh network connecting the individual microservices, while a central controller handles access control as well as network and performance management.
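The proxy-per-service pattern can be sketched in a few lines of Python. This is an illustrative, in-process toy only (real meshes such as Istio run out-of-process proxies like Envoy); the service names and replica functions are hypothetical. It shows the two jobs the paragraph describes: routing a caller's request to one of several replicas, and attaching the caller's identity so policy can be enforced on the receiving side.

```python
import itertools

class SidecarProxy:
    """Toy proxy that routes a caller's requests to one of several replicas."""

    def __init__(self, service_name, replicas):
        self.service_name = service_name
        # Round-robin load balancing across the replicas.
        self._replicas = itertools.cycle(replicas)

    def call(self, caller_id, payload):
        replica = next(self._replicas)
        # The proxy attaches the caller's identity, so the receiving
        # side can apply authentication/authorization policy.
        request = {"from": caller_id, "payload": payload}
        return replica(request)

# Two hypothetical replicas of an "inventory" microservice.
def replica_a(req):
    return f"replica-a handled {req['payload']} for {req['from']}"

def replica_b(req):
    return f"replica-b handled {req['payload']} for {req['from']}"

proxy = SidecarProxy("inventory", [replica_a, replica_b])
print(proxy.call("checkout", "SKU-123"))  # routed to replica-a
print(proxy.call("checkout", "SKU-456"))  # routed to replica-b
```

Because the routing logic lives in the proxy, load-balancing strategy or security policy can change without touching the service code, which is the abstraction the next paragraph describes.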
A service mesh provides logical isolation of microservices applications from the complexity of network routing and security requirements. The abstraction provided by a service mesh enables rapid and flexible deployment of microservices without constantly requiring the data-center networking team to intervene.
Why do microservices-style apps need service mesh?
Applications based on microservices have a different architecture from hypervisor-based applications. They consist of numerous services running in individual containers on different servers or cores, and the frequency of transactions among these microservices within a single application can demand low latency and significant bandwidth. In addition, more than one application may need to access the same microservices.
Container-based microservices often move from server to server, yet they provide only limited data about where they’ve moved and that their status has changed. This makes it difficult for IT professionals to “find” them when resolving application-performance issues.
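A mesh solves this with service discovery: each workload reports its current location, and callers resolve the address at request time instead of caching it. The sketch below is a hypothetical, minimal registry (in a real mesh the control plane maintains this mapping and pushes it to the proxies); the service name and addresses are made up.

```python
class ServiceRegistry:
    """Toy service registry: resolve a service name to its current address."""

    def __init__(self):
        self._locations = {}

    def register(self, service, address):
        # Each re-registration overwrites the old location,
        # so a rescheduled container is found at its new home.
        self._locations[service] = address

    def resolve(self, service):
        return self._locations[service]

registry = ServiceRegistry()
registry.register("payments", "10.0.1.5:8080")
print(registry.resolve("payments"))  # 10.0.1.5:8080

# The container is rescheduled onto another server...
registry.register("payments", "10.0.2.9:8080")
print(registry.resolve("payments"))  # 10.0.2.9:8080
```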
Meanwhile, DevOps teams want logical isolation from network complexity. They aim to develop and change applications rapidly, but without a service mesh they must wait for networking teams to make networking and security adjustments, such as provisioning VLANs, before they can do their work.
A service mesh delivers significant networking and security benefits for microservices applications. It abstracts the networking infrastructure, enabling applications to maintain networking and security policies without requiring the data-center networking team to intervene for each change.
Key requirements for networking microservices include:
- Network performance at scale
- Ease of provisioning networking, compute, and storage resources for new applications
- Ability to rapidly scale bandwidth by application
- Workload migration between internal data centers and public cloud
- Application isolation to enhance security and support multi-tenancy
To meet these requirements, IT organizations will need to integrate service-mesh automation and management information into a more comprehensive data-center network-management system, especially as container deployments become more numerous, complex, and strategic.
For applications that are well suited to service-mesh deployments, IT organizations will need to plan integration of the technology into their overall management and automation platforms. To prepare, IT teams should evaluate the range of service-mesh options (cloud, open source, vendor-supplied) as the technology continues to mature.
Service-mesh technology can be vendor-supported or open source. Istio, originally driven by Google, IBM, and Lyft, is a leading open-source option. Other open-source projects include Linkerd, HAProxy, NGINX, and Envoy. Leading IaaS providers have their own service-mesh offerings, as do major network and IT vendors and a number of start-ups.