linkerd
Louis Ryan <lryan@...>
FWIW I kicked off a thread about service meshes on the Network-SIG mailing list to gauge how those folks feel about making improvements to the k8s config model to facilitate solutions like Linkerd.
On Thu, Oct 6, 2016 at 2:41 PM, Louis Ryan <lryan@...> wrote:
alexis richardson
thanks Louis, that is helpful
On Thu, Oct 6, 2016 at 10:41 PM, Louis Ryan via cncf-toc <cncf-toc@...> wrote:
Louis Ryan <lryan@...>
@Alexis,

For taxation overheads, as a strawman:
- a simple HTTP/1.1 round-trip latency percentile distribution at 10^n (n = 2..5) qps with a 4k response payload
- CPU time measurement of the proxy under these loads
- RSS size under load

Bonus points for doing the same with HTTP/2. As William mentions, having good LB & other network function control improves utilization & latency more holistically throughout the cluster, but I do think these numbers will be helpful for folks.
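Something like the sketch below is the shape of measurement I have in mind -- very rough, single-threaded, with localhost:4140 and a PID passed on the command line standing in for whatever proxy is actually under test:

import os
import sys
import time
import urllib.request

def proxy_usage(pid):
    # CPU time (utime + stime) and resident set size of the proxy, read from /proc.
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    cpu_s = (int(fields[13]) + int(fields[14])) / os.sysconf("SC_CLK_TCK")
    rss_kb = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                rss_kb = int(line.split()[1])
    return cpu_s, rss_kb

def measure(url, qps, duration_s):
    # Closed-loop pacing at roughly `qps`; collect per-request round-trip latencies.
    interval = 1.0 / qps
    latencies = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        start = time.time()
        with urllib.request.urlopen(url) as resp:
            resp.read()                      # ~4k response payload from the backend
        latencies.append(time.time() - start)
        spare = interval - (time.time() - start)
        if spare > 0:
            time.sleep(spare)
    latencies.sort()
    for p in (50, 95, 99):
        v = latencies[int(p / 100 * (len(latencies) - 1))]
        print(f"p{p}: {v * 1000:.2f} ms")

if __name__ == "__main__":
    pid = int(sys.argv[1])                   # PID of the proxy under test
    cpu0, _ = proxy_usage(pid)
    for n in range(2, 6):                    # 10^2 .. 10^5 qps (a single thread won't hold the high end)
        print(f"--- target {10 ** n} qps ---")
        measure("http://localhost:4140/", qps=10 ** n, duration_s=10)
    cpu1, rss_kb = proxy_usage(pid)
    print(f"proxy CPU time: {cpu1 - cpu0:.1f}s, RSS under load: {rss_kb} kB")

A real harness would use an open-loop load generator to hold the target rates, but the three numbers it reports are the ones I'd want in the comparison.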
On Thu, Oct 6, 2016 at 12:06 PM, William Morgan <william@...> wrote:
William Morgan
We have HTTP/2 in alpha. Gory details here: https://github.com/BuoyantIO/linkerd/issues/174

Performance: a heady topic. That post is still accurate. Note that it's focused on a sidecar / low-mem configuration--as I'm sure you know, the JVM exhibits a, shall we say, "complex" relationship between memory footprint, CPU, throughput, and latency, and changing constraints can dramatically change the resource profile. (This complexity is not ideal for us, of course, but it's a result of a conscious decision to go with production-tested Finagle as opposed to starting with fresh code--just to give you context.)

E.g. the K8s configs in our recent Kubernetes post deploy linkerd as a DaemonSet--allowing us to amortize resource cost per node rather than per process, which gives us more breathing room--at the expense of a Horrible Hack to determine the node-local linkerd.

Finally, our assertion is that, with better load-balancing and flow control, end-to-end tail latencies can actually be *reduced* even when introducing a per-hop cost--though I'll admit that we don't have a nice experimental setup to back this up yet.

HTH,
-William
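The hack itself isn't worth dwelling on; the general pattern is to inject the node's IP into each pod via the downward API (status.hostIP) and have application clients send their outbound HTTP through the node-local linkerd. Roughly, on the client side (the env var name, placeholder service name, and port 4140 below are illustrative, not our exact config):

import os
import urllib.request

# HOST_IP is assumed to be injected into each pod via the downward API
# (fieldRef: status.hostIP), so every pod can find the linkerd running
# on its own node. 4140 stands in for the proxy's HTTP router port;
# substitute whatever your router config exposes.
linkerd = "http://{}:4140".format(os.environ["HOST_IP"])

# Send outbound HTTP through the node-local linkerd, which then handles
# naming, load balancing, and retries on the application's behalf.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": linkerd})
)
resp = opener.open("http://hello/greeting")   # "hello" is a placeholder service name
print(resp.status, len(resp.read()))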
On Thu, Oct 6, 2016 at 10:59 AM, Brian Grant <briangrant@...> wrote:
alexis richardson
"standardized assessment" meaning "assessment"? or do you have something in mind?
On Thu, 6 Oct 2016, 19:26 Louis Ryan via cncf-toc, <cncf-toc@...> wrote:
Louis Ryan <lryan@...>
I think it would be useful to have some standardized assessment of the CPU, latency & memory footprint tax that products in this space have. A feature comparison matrix would be nice too.
On Thu, Oct 6, 2016 at 10:59 AM, Brian Grant <briangrant@...> wrote:
Brian Grant
Thanks for that additional information, William. Are there plans for linkerd to support HTTP/2? And is the performance and overhead data at https://blog.buoyant.io/ the most recent available?
On Wed, Oct 5, 2016 at 6:38 PM, William Morgan <william@...> wrote:
alexis richardson
Sidebar:
Quite a long list of related projects can be found in the 'traffic management' column of the landscape doc: https://docs.google.com/spreadsheets/d/1ify0vCXxum_TKtA99neDWhciZeUbnp8dSNqVGXsQEeA/edit#gid=1990705469
On Thu, Oct 6, 2016 at 2:38 AM, William Morgan via cncf-toc <cncf-toc@...> wrote:
William Morgan
(adding Oliver, linkerd's maintainer)

Hi Brian,

Good point about Amalgam8, I should have mentioned them. Also maybe Netflix's Prana? In my mind Traefik, Vulcand, etc. are largely targeted at ingress / "API gateway" use cases, whereas linkerd, Envoy, Amalgam8, and Prana are focused explicitly on service-to-service communication. I'm not 100% familiar with all of those projects, so I hope that's not a gross mischaracterization.

Differentiating features / advantages of linkerd:
Current real-life prod users that we know of (and can talk about--some we can't) are Monzo (UK), Quid (US), Douban (CN), CentralApp (EU), NCBI (US). Lots of other companies in various staging / QA environments that we're tracking and helping to productionize. Of course open source is a funny thing and people will occasionally drop into the Slack and say "so, I've been running linkerd in prod for a few weeks and I noticed this one thing..." so we don't have a complete picture. Let me know if that's helpful. Happy to go into lots more detail. Also, thanks for having me this morning, was really fun to present and you have all been super friendly and supportive! -William
On Wed, Oct 5, 2016 at 5:40 PM, Brian Grant <briangrant@...> wrote:
Brian Grant
Hi, William. You briefly mentioned other similar proxies, such as Envoy. What do you see as linkerd's differentiating features or advantages compared to the others (Envoy, Traefik, Amalgam8)? Also, are any of your users using linkerd in production?