We fully expect that customers will run service meshes that include both deployment models. We’ve even made it possible for a single gRPC client to call some services via the proxyless route and others via a sidecar proxy.
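In gRPC, which data path a client uses comes down to the target scheme it dials. A minimal sketch in Python (the service names below are hypothetical, and channel creation is lazy, so this runs without a live mesh):

```python
# Illustrative sketch: the target scheme selects the data path.
# Service names are placeholders, not from the original post.
import grpc

# Proxyless path: the "xds:///" scheme tells the gRPC client to resolve
# the service and load-balance across backends itself, using configuration
# pushed by Traffic Director over the xDS APIs.
proxyless_channel = grpc.insecure_channel("xds:///payments.example.internal")

# Sidecar path: an ordinary DNS target; a colocated Envoy sidecar
# intercepts the traffic and applies mesh policy on the client's behalf.
sidecar_channel = grpc.insecure_channel("dns:///inventory.example.internal:50051")
```

Because both channels expose the same gRPC API to application code, a single client can mix the two deployment models service by service.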
When to deploy Traffic Director with proxyless gRPC services
We see three main use cases for the proxyless gRPC approach—simplified gRPC adoption (thanks to a managed networking experience), high performance services in a service mesh, and bringing service mesh to environments where you can’t add sidecar proxies.
Managed networking for simplified gRPC adoption
We talk to customers all the time who are considering adopting gRPC as part of their efforts to modernize their application stack. The benefits of gRPC are clear but, on its own, gRPC doesn’t solve problems like client-side load balancing, service discovery and global failover. Traffic Director’s support for proxyless gRPC services was built to meet these needs, making it easier to adopt gRPC as part of a modernized deployment.
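Under the hood, a proxyless gRPC client finds Traffic Director through a bootstrap file referenced by the GRPC_XDS_BOOTSTRAP environment variable (the setup guides cover generating one for your deployment). The fragment below is an illustrative sketch of that file's shape; the field values are placeholders, not a working configuration:

```json
{
  "xds_servers": [
    {
      "server_uri": "trafficdirector.googleapis.com:443",
      "channel_creds": [{ "type": "google_default" }]
    }
  ],
  "node": {
    "id": "projects/PROJECT_NUMBER/networks/NETWORK_NAME/nodes/CLIENT_ID",
    "metadata": {}
  }
}
```

With this file in place, the client handles service discovery and load balancing itself instead of delegating them to a sidecar.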
Resource efficiency and performance
Proxies consume resources and those may start to add up as you scale to hundreds or thousands of proxies. Plus, high-performance applications may find it difficult to meet performance targets when sending requests through multiple sidecar proxies (client sidecar proxy, server sidecar proxy, and back again for request/response exchanges).
In our testing, we’ve found that proxyless gRPC can save on networking-related CPU costs compared to sidecar proxies, and benchmarks have shown that sidecar proxies add latency because of the extra network hops they introduce. The proxyless approach promises savings on both dimensions.
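To make the hop-count argument concrete, consider a single request/response exchange with sidecars on both ends. The per-traversal overhead below is a made-up parameter, not a benchmark result; the sketch only shows how the hops accumulate:

```python
# Illustrative hop counting, not benchmark data. PER_HOP_MS is a
# hypothetical per-proxy-traversal overhead, chosen only for illustration.
PER_HOP_MS = 0.25

# Proxyless: client talks to the server directly; no proxy traversals.
proxyless_traversals = 0

# Sidecar model: the request crosses the client sidecar and the server
# sidecar, and the response crosses both again on the way back.
sidecar_traversals = 4

added_latency_ms = (sidecar_traversals - proxyless_traversals) * PER_HOP_MS
print(f"extra proxy traversals per RPC: {sidecar_traversals}")
print(f"hypothetical added latency: {added_latency_ms:.2f} ms")
```

Whatever the actual per-hop cost in a given deployment, it is paid four times per RPC in the dual-sidecar model and zero times in the proxyless one.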
Finally, we believe that this performance gain will be important for emerging use cases, such as service mesh deployments for telco network functions and 5G/edge computing.
Service mesh for environments where you can’t add sidecar proxies
We’ve talked to customers who can’t necessarily add sidecar proxies to deployments. Some managed compute environments don’t let you spin up multiple processes (one for the application, one for the proxy) or make changes to an instance’s network stack (for example, using iptables). In such cases, proxyless gRPC applications provide a great way to get the benefits of service mesh.
Enterprise networks are heterogeneous. We built Traffic Director to be flexible so that we can support deployment options that meet your needs. Supported deployment options include Envoy sidecar proxies, Envoy middle/gateway proxies (including our Internal HTTP(S) Load Balancer, which uses Traffic Director under the hood) and, now, proxyless gRPC applications.
This initial release is focused on service discovery and load balancing. We know that service mesh promises a lot more than that—layer 7-based traffic management and security, for example—but we’re excited about this first step. The traffic management capabilities that we’re announcing today, alongside new GCP-managed gRPC health checks, are just one step in making it easy to bring service mesh to your gRPC applications.
We hope you’ll join us and check out the setup guides for Traffic Director with proxyless gRPC services on Compute Engine and Google Kubernetes Engine. To learn more and see Traffic Director’s support for proxyless gRPC services in action, watch our breakout session NET206 on NextOnAir, starting July 28, 2020.