peanut-gallery thoughts about GRPC


alexis richardson
 

Jayant

On Tue, Nov 1, 2016 at 10:54 PM, Jayant Kolhe <jkolhe@...> wrote:
> Hi Alexis,
>
> > but I do agree that getting in front of some users would help greatly.
>
> That sounds great. I was not planning to be at KubeCon. Varun is planning to be at KubeCon and would love to meet more users. If needed, I can be there for one of the two days.

I propose that Varun meet Julius and Tom, whose use case I posted from the Prometheus GitHub. Please email me offline and let's arrange.

a

> Thanks,
>
> - Jayant


On Tue, Nov 1, 2016 at 8:48 AM, Alexis Richardson <alexis@...> wrote:
jayant

thanks again for this, which I have now read about N times 

I think your approach and argument is sound -- but I do agree that getting in front of some users would help greatly.  to start with: will you be at kubecon?

alexis


On Tue, Oct 25, 2016 at 9:26 PM, Jayant Kolhe <jkolhe@...> wrote:

Thanks Alexis.


> Can you please list which ones are notable?  The main one seems to be "streaming" replies.  


Streaming is certainly notable, but multiplexing and flow control are also very important here. Without HTTP/2 flow-control features, deployments can easily experience OOMs; we have seen this in practice, and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e. avoiding additional TLS negotiations) are very important for mobile clients and embedded devices.
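
To make the flow-control point concrete, here is a deliberately simplified, stdlib-only sketch (not gRPC or HTTP/2 code; the class and sizes are invented for illustration) of the credit-based idea behind HTTP/2 flow control: the receiver grants the sender a byte budget, so buffered memory stays bounded no matter how fast the sender produces.

```python
from collections import deque

class CreditWindow:
    """Toy model of credit-based flow control in the spirit of HTTP/2:
    the sender may not exceed the receiver's granted byte credit, which
    caps the receiver's buffered memory."""
    def __init__(self, window_size):
        self.credit = window_size
        self.buffer = deque()

    def try_send(self, chunk):
        # Sender side: transmit only if enough credit remains;
        # otherwise it must wait (HTTP/2: wait for a WINDOW_UPDATE).
        if len(chunk) > self.credit:
            return False
        self.credit -= len(chunk)
        self.buffer.append(chunk)
        return True

    def consume(self):
        # Receiver side: processing a chunk returns credit to the sender.
        chunk = self.buffer.popleft()
        self.credit += len(chunk)
        return chunk

win = CreditWindow(window_size=64)
sent = rejected = 0
for _ in range(100):
    if win.try_send(b"x" * 16):
        sent += 1
    else:
        rejected += 1
# Buffered bytes never exceed the 64-byte window, however fast we produce.
```

Without the credit check, a fast sender and a slow receiver would grow `buffer` without bound, which is exactly the OOM failure mode described above.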


> This shouldn't prevent someone building an HTTP 1.x compatibility layer, if that is what helps people with adoption.


We are not opposed to it; we have not built it yet, though we definitely considered it. However, a compatibility layer has significant complexity costs. The matrix of testing and verifying across all language implementations, while maintaining good performance, looked very expensive when we evaluated it. And it is not just HTTP/1.x you need to consider if seamless compatibility with the existing ecosystem is the goal; you also have to consider which HTTP/1.x features actually work well in that ecosystem. For example, to support streaming, the gRPC protocol relies on trailers. While trailers are an HTTP/1.1 feature, many proxies and libraries do not support them. To be compatible with the existing ecosystem we would then need to consider alternate schemes, and multiple schemes that work in different scenarios.
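
For readers unfamiliar with the trailer issue: gRPC carries the final RPC status in HTTP trailers (`grpc-status`/`grpc-message`), which in HTTP/1.1 ride after the terminating zero-length chunk of a chunked body. A stdlib-only sketch of what that looks like on the wire (the serializer is hypothetical; the trailer names are the real gRPC ones) shows the section that many HTTP/1.x proxies and client libraries simply drop:

```python
def chunked_body_with_trailers(chunks, trailers):
    """Serialize an HTTP/1.1 chunked body whose terminating zero-length
    chunk is followed by a trailer section (RFC 7230, section 4.1)."""
    out = b""
    for chunk in chunks:
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    out += b"0\r\n"                      # zero-length chunk ends the body
    for name, value in trailers.items():
        out += name.encode() + b": " + value.encode() + b"\r\n"
    out += b"\r\n"                       # blank line ends the trailer section
    return out

wire = chunked_body_with_trailers(
    [b"hello"],
    {"grpc-status": "0", "grpc-message": "OK"},  # gRPC puts status here
)
```

An intermediary that understands chunked bodies but discards trailers will deliver `hello` intact while silently losing the RPC status, which is why "works over HTTP/1.x" is not a single yes/no question.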



>> While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.


> How much does that matter to real-world users for the practical cases that such an implementation would facilitate?


When we talked to users, their biggest concern was migration from their existing systems, which have different payloads and different conventions. So an HTTP/1.x transport alone did not satisfy their use cases. Hence, many folks preferred a proxy solution that let them keep supporting the existing system while building a new system that interoperated with it. It would be good to understand the specific use cases.



The protocol was designed with this in mind, so we are certainly not opposed to it; it is a subject we have heard a lot about from the community. It's probably worth pointing out that we have tested this with curl and with HTTP/1-to-HTTP/2 converting proxies, and it functions just fine. The decision point is whether the conversion/compatibility layer is built into the gRPC libraries, into a separate set of compatibility libraries, or into proxies.


Three primary motivations for this desire have come up in our conversations with users: browser support, the lack of upstream HTTP/2 support in nginx and cloud layer-7 load balancers, and library/platform coverage.


We are working on a community proposal for a protocol adaptation to enable browser support, allowing as much of gRPC to work as the limitations of the browser platform (specifically, the limitations of XHR) permit. We have also been working with the browser vendors to improve their networking APIs so that gRPC can be supported natively, at which point we should be able to phase out this adaptation. The gRPC-to-JSON-REST gateway pattern has also served the browser use case quite well and is something we will continue to invest in.
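
One reason a browser adaptation is needed at all: gRPC frames every message with a 5-byte prefix (a 1-byte compressed flag plus a 4-byte big-endian length), and an XHR-based transport has to carry that binary framing somehow, e.g. base64-encoded. A minimal stdlib-only sketch of the framing (the `frame`/`unframe` helpers are illustrative, not an official API):

```python
import base64
import struct

def frame(message, compressed=False):
    # gRPC length-prefixed message: 1-byte flag + 4-byte big-endian length.
    return struct.pack(">BI", int(compressed), len(message)) + message

def unframe(data):
    # Yield each message payload from a stream of length-prefixed frames.
    while data:
        flag, length = struct.unpack(">BI", data[:5])
        yield data[5:5 + length]
        data = data[5 + length:]

stream = frame(b"ping") + frame(b"pong")
# An XHR-based adaptation might ship the same bytes base64-encoded:
wire_text = base64.b64encode(stream)
decoded = list(unframe(base64.b64decode(wire_text)))
```

Because the length prefix delimits messages, several RPC messages can share one response body and still be split apart cleanly on the client.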


We are actively working on getting HTTP/2 upstream support into nginx. This has taken longer than we would like. In the meantime, a number of other proxy solutions of decent quality are available. We are working with the large cloud vendors on this too.


With regard to library coverage, gRPC now scores quite well against the large language × platform matrix of potential users, so there are very few deployment scenarios that are not covered. The built-in HTTP APIs on many platforms are quite poor, both in their ability to express the HTTP protocol and in their efficiency. There are many attempts to improve these APIs (the Fetch API for browsers and Node, the new HTTP API proposal for Java 10, etc.), but they are some ways off. The protocol adaptation we intend for browsers can be used in the interim.



> I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?


Based on our initial analysis, the complexity lies in ensuring support across all implementations; our experience shows that the interop matrix of features, testing, and performance takes a lot of effort. We would be happy to support a community effort, but we should first have a design conversation about whether such support makes more sense in the gRPC libraries or in a set of separate accompanying libraries or proxies, and verify it against users' different use cases. We believe the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.


> I'm interested to understand how you measure this.


I have not seen comprehensive data on how much traffic is HTTP/2 vs. SPDY vs. HTTP/1.x. Major browsers and most high-traffic sites support HTTP/2 well. Some charts I have seen (https://www.keycdn.com/blog/http2-statistics/) indicate good adoption, but I do not know how comprehensive they are. Internally at Google, we see the largest share of traffic being HTTP/2, followed by SPDY and QUIC.


> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.

> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone

> should try it, but (b) it has compatibility problems that may not be resolved.


I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed more important to users than adding a compatibility feature, unless that feature gave them a complete migration path from their existing system. That may be due to the set of users we talked to; I would love more input/data here.




On Tue, Oct 25, 2016 at 9:19 AM, Alexis Richardson <alexis@...> wrote:
Jayant

Many thanks.



On Tue, Oct 25, 2016 at 4:50 PM, Jayant Kolhe <jkolhe@...> wrote:

The gRPC protocol was designed to support high-performance, cross-platform, usable libraries for building microservices. It was designed on top of HTTP/2 explicitly to make use of:
  • Full-duplex streams, to support bi-directional streaming
  • HPACK-compressed headers, for efficiently transmitting metadata/side-channel information (for example, reducing the cost of authentication tokens)
  • Connection multiplexing, which reduces the per-RPC connection cost for TLS and high-latency connections
  • A binary framing layer with good flow control
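
As a concrete illustration of the HPACK point (a much-simplified toy in the spirit of RFC 7541, not the real algorithm; the class is invented for this sketch): once a header such as a large auth token has been sent, later requests on the same connection can refer to it by a small index instead of resending the bytes.

```python
class TinyHeaderTable:
    """Toy indexing scheme: a repeated header is replaced by a one-byte
    index into a table shared by sender and receiver."""
    def __init__(self):
        self.table = []

    def encode(self, header):
        if header in self.table:
            # Already in the table: emit a one-byte index reference.
            return bytes([self.table.index(header)])
        # First occurrence: emit a literal (0x80 marker + name:value)
        # and remember it. (A real codec also bounds the table size.)
        self.table.append(header)
        name, value = header
        return b"\x80" + name.encode() + b":" + value.encode()

enc = TinyHeaderTable()
token = ("authorization", "Bearer " + "x" * 200)
first = enc.encode(token)    # full literal, > 200 bytes on the wire
second = enc.encode(token)   # a single byte on every later request
```

This is why header compression matters most for mobile and embedded clients: headers like auth tokens repeat on every RPC, and the savings compound per request.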


Many gRPC features work well only on top of HTTP/2 semantics.


Can you please list which ones are notable?  The main one seems to be "streaming" replies.  This shouldn't prevent someone building an HTTP 1.x compatibility layer, if that is what helps people with adoption.





 
While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.

How much does that matter to real-world users for the practical cases that such an implementation would facilitate?


 
It would add significant complexity across multiple language implementations, and would raise the bar and the complexity for implementing and testing interoperability across those implementations.

I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?




 
We have also relied on the adoption of HTTP/2, which has been very rapid; hence the ecosystem is also evolving rapidly to support HTTP/2 features.

I'm interested to understand how you measure this.


 
We have also relied on proxies to provide this functionality, to allow the HTTP/1.x-only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, Envoy) and is coming to others like nginx. Hence we have not implemented gRPC on HTTP/1.x.

Please don't take this the wrong way -- I like gRPC and am excited about it!

But: expecting proxies to solve this stuff kind of undermines the whole approach.  Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.  It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness and that everyone should try it, but (b) it has compatibility problems that may not be resolved.

I vividly recall Brad Fitz telling me back in 2009 (or thereabouts) that, for HTTP, it is prudent to assume the worst when it comes to widespread adoption.  He pointed out that many servers & proxies still spoke 0.9 at the time.  


 


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x


On Fri, 21 Oct 2016, 18:45 Ben Sigelman via cncf-toc, <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the lynchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...

_______________________________________________
cncf-toc mailing list
cncf-toc@...
https://lists.cncf.io/mailman/listinfo/cncf-toc










Jayant Kolhe
 

Hi Alexis,

but I do agree that getting in front of some users would help greatly. 

That sounds great. I was not planning to be at KubeCon. Varun is planning to be at KubeCon and would love to meet more users. If needed, I can be there for one of the two days..

Thanks,

- Jayant


On Tue, Nov 1, 2016 at 8:48 AM, Alexis Richardson <alexis@...> wrote:
jayant

thanks again for this, which I have now read about N times 

I think your approach and argument is sound -- but I do agree that getting in front of some users would help greatly.  to start with: will you be at kubecon?

alexis


On Tue, Oct 25, 2016 at 9:26 PM, Jayant Kolhe <jkolhe@...> wrote:

Thanks Alexis.


> Can you please list which ones are notable?  The main one seems to be "streaming" replies.  


Streaming is certainly notable but multiplexing and flow-control are also very important here. Without http2 flow-control features, deployments can easily experience OOMs, we have seen this in practice and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e avoiding additional TLS negotiation) are very important for mobile clients and embedded devices.


> This shouldn't prevent someone building a http 1 compatibility layer, if that what helps people with adoption.


We are not opposed to it. We have not built it yet. We definitely considered it. However, a compatibility layer has significant complexity costs. The matrix of testing and verifying across all language implementations while having good performance seemed very expensive when we looked at it. It is not just http1.x you need to consider if seamless compatibility with existing ecosystem is needed. You have to consider what features of http 1.x work well in existing ecosystem. For example: to build streaming solution, gRPC protocol relies on trailers. While it is http 1.x feature, many proxies/libraries do not support trailers. To make it compatible with existing ecosystem, we need to then consider alternate schemes and have multiple schemes that work in different scenarios.



While implementing gRPC on top of HTTP1.1 downgrade is feasible, such implementation would lose many of gRPC advantages of efficient usage of network resources and high performance.


How much does that matter to real world users for the practical cases that such an implementation would facilitate?


When we talked to users, their biggest concern is migration from their existing system. That existing system has different payloads, different conventions. So having just http1.x transport did not suffice their usecases. Hence, many folks preferred a proxy solution that allowed them to support existing system and build a new system that interoperated with such existing system. It would be good to understand specific use-cases.



The protocol was designed with this in mind so we are certainly not opposed to it, it is certainly a subject we have heard a lot about from the community. It’s probably worth pointing out that we have tested this with curl and HTTP1 to HTTP2 converting proxies and it functions just fine. Whether the conversion/compatibility layer is built into gRPC libraries or a separate set of compatibility libraries or proxies is the decision point.


There are three primary things motivating this desire have come up in our conversations with users: browser support, the lack of upstream HTTP2 support in nginx and Cloud layer-7 LBs, and library platform coverage


We are working on a community proposal for a protocol adaptation to enable browser support that will allow as much of GRPC to work as can be enabled within the limitations of the browser platform, specifically the limitations of XHR. We have also been working with the browser vendors to improve their networking APIs so that GRPC can be supported natively at which point we should be able to phase out this adaptation. The GRPC to JSON-REST gateway pattern has also served the browser use-case quite well and is something we will continue to invest in.


We are actively working on getting HTTP2 upstream support into nginx. This has taken longer than we would like. In the meantime there are a large number of other proxy solutions of decent quality that are available to folks. We are working with the large Cloud vendors on this too.


With regard to library coverage GRPC now scores quite well against the large language X platform matrix of potential users so there are very few deployment scenarios which are not covered. The built-in HTTP APIs in many platforms in many cases are quite poor in terms of their ability to express the HTTP protocol and in terms of efficiency. There are many attempts to improve these APIs (Fetch API for browser & node, new HTTP APi proposal for Java 10 etc.) but they are some ways off. The protocol adaptation we intend for browsers can be used in the interim.



> I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?


Based on our initial analysis, the complexity was to ensure support across all implementations. Our experience shows that interop matrix with features/testing and performance takes a lot of effort. We would be happy to support a community effort, but we should have a design conversation whether such support makes more sense in gRPC libraries or a set of separate accompanying libraries or proxies and verify it against different use-cases of users. We believe that the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.


> I'm interested to understand how you measure this.


I have not seen a comprehensive information on how much traffic is http2 vs how much is spdy vs how much is http1.x. Major browsers and most high traffic sites support http2 well. Some charts (https://www.keycdn.com/blog/http2-statistics/) I saw indicate good adoption, but I do not know how comprehensive those are. Internally at Google, we have seen largest traffic being http2 followed by spdy and quic.


> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.

> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone

> should try it, but (b) it has compatibility problems that may not be resolved.


I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed to be more important to users than adding compatibility feature unless it gave them complete migration from their existing system. It may be due to set of users we talked to. I would love more input/data here.




On Tue, Oct 25, 2016 at 9:19 AM, Alexis Richardson <alexis@...> wrote:
Jayant

Many thanks.



On Tue, Oct 25, 2016 at 4:50 PM, Jayant Kolhe <jkolhe@...> wrote:

gRPC protocol was designed to build high performance, cross platform and usable libraries for building microservices. It was designed on top of HTTP2 to explicitly make use of
  • Full duplex streams to support bi-directional streaming
  • HPACK compressed headers for efficiently transmitting metadata/sidechannel information. For example, reduces cost for authentication tokens
  • Connection multiplexing. Reduces the per-RPC connection cost for TLS and high latency connections
  • Binary Framing layer with good flow control


Many of gRPC features work well only on HTTP2 semantics.


Can you please list which ones are notable?  The main one seems to be "streaming" replies.  This shouldn't prevent someone building a http 1 compatibility layer, if that what helps people with adoption.





 
While implementing gRPC on top of HTTP1.1 downgrade is feasible, such implementation would lose many of gRPC advantages of efficient usage of network resources and high performance.

How much does that matter to real world users for the practical cases that such an implementation would facilitate?


 
It would add significant complexity across multiple language implementations and would have higher bar and complexity for implementing and testing interoperability across these implementations.

I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?




 
We have also relied on adoption of HTTP2 which has been very rapid and hence the ecosystem is also evolving rapidly to support HTTP2 features.

I'm interested to understand how you measure this.


 
We have also relied on proxies to provide this functionality to allow http1.x only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, envoy) and is coming to others like nginx..Hence we have not implemented gRPC on HTTP 1.x.

Please don't take this the wrong way -- I like gRPC and am excited about it!

But: expecting proxies to solve this stuff, kind of undermines the whole approach.  Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.  It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone should try it, but (b) it has compatibility problems that may not be resolved.

I vividly recall Brad Fitz telling me back in 2009 (or thereabouts) that, for HTTP, it is prudent to assume the worst when it comes to widespread adoption.  He pointed out that many servers & proxies still spoke 0.9 at the time.  


 


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x


On Fri, 21 Oct 2016, 18:45 Ben Sigelman via cncf-toc, <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the lynchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...

_______________________________________________
cncf-toc mailing list
cncf-toc@...
https://lists.cncf.io/mailman/listinfo/cncf-toc

_______________________________________________
cncf-toc mailing list
cncf-toc@...
https://lists.cncf.io/mailman/listinfo/cncf-toc








alexis richardson
 

jayant

thanks again for this, which I have now read about N times 

I think your approach and argument is sound -- but I do agree that getting in front of some users would help greatly.  to start with: will you be at kubecon?

alexis


On Tue, Oct 25, 2016 at 9:26 PM, Jayant Kolhe <jkolhe@...> wrote:

Thanks Alexis.


> Can you please list which ones are notable?  The main one seems to be "streaming" replies.  


Streaming is certainly notable but multiplexing and flow-control are also very important here. Without http2 flow-control features, deployments can easily experience OOMs, we have seen this in practice and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e avoiding additional TLS negotiation) are very important for mobile clients and embedded devices.


> This shouldn't prevent someone building a http 1 compatibility layer, if that what helps people with adoption.


We are not opposed to it. We have not built it yet. We definitely considered it. However, a compatibility layer has significant complexity costs. The matrix of testing and verifying across all language implementations while having good performance seemed very expensive when we looked at it. It is not just http1.x you need to consider if seamless compatibility with existing ecosystem is needed. You have to consider what features of http 1.x work well in existing ecosystem. For example: to build streaming solution, gRPC protocol relies on trailers. While it is http 1.x feature, many proxies/libraries do not support trailers. To make it compatible with existing ecosystem, we need to then consider alternate schemes and have multiple schemes that work in different scenarios.



While implementing gRPC on top of HTTP1.1 downgrade is feasible, such implementation would lose many of gRPC advantages of efficient usage of network resources and high performance.


How much does that matter to real world users for the practical cases that such an implementation would facilitate?


When we talked to users, their biggest concern is migration from their existing system. That existing system has different payloads, different conventions. So having just http1.x transport did not suffice their usecases. Hence, many folks preferred a proxy solution that allowed them to support existing system and build a new system that interoperated with such existing system. It would be good to understand specific use-cases.



The protocol was designed with this in mind so we are certainly not opposed to it, it is certainly a subject we have heard a lot about from the community. It’s probably worth pointing out that we have tested this with curl and HTTP1 to HTTP2 converting proxies and it functions just fine. Whether the conversion/compatibility layer is built into gRPC libraries or a separate set of compatibility libraries or proxies is the decision point.


There are three primary things motivating this desire have come up in our conversations with users: browser support, the lack of upstream HTTP2 support in nginx and Cloud layer-7 LBs, and library platform coverage


We are working on a community proposal for a protocol adaptation to enable browser support that will allow as much of GRPC to work as can be enabled within the limitations of the browser platform, specifically the limitations of XHR. We have also been working with the browser vendors to improve their networking APIs so that GRPC can be supported natively at which point we should be able to phase out this adaptation. The GRPC to JSON-REST gateway pattern has also served the browser use-case quite well and is something we will continue to invest in.


We are actively working on getting HTTP2 upstream support into nginx. This has taken longer than we would like. In the meantime there are a large number of other proxy solutions of decent quality that are available to folks. We are working with the large Cloud vendors on this too.


With regard to library coverage GRPC now scores quite well against the large language X platform matrix of potential users so there are very few deployment scenarios which are not covered. The built-in HTTP APIs in many platforms in many cases are quite poor in terms of their ability to express the HTTP protocol and in terms of efficiency. There are many attempts to improve these APIs (Fetch API for browser & node, new HTTP APi proposal for Java 10 etc.) but they are some ways off. The protocol adaptation we intend for browsers can be used in the interim.



> I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?


Based on our initial analysis, the complexity was to ensure support across all implementations. Our experience shows that interop matrix with features/testing and performance takes a lot of effort. We would be happy to support a community effort, but we should have a design conversation whether such support makes more sense in gRPC libraries or a set of separate accompanying libraries or proxies and verify it against different use-cases of users. We believe that the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.


> I'm interested to understand how you measure this.


I have not seen a comprehensive information on how much traffic is http2 vs how much is spdy vs how much is http1.x. Major browsers and most high traffic sites support http2 well. Some charts (https://www.keycdn.com/blog/http2-statistics/) I saw indicate good adoption, but I do not know how comprehensive those are. Internally at Google, we have seen largest traffic being http2 followed by spdy and quic.


> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.

> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone

> should try it, but (b) it has compatibility problems that may not be resolved.


I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed to be more important to users than adding compatibility feature unless it gave them complete migration from their existing system. It may be due to set of users we talked to. I would love more input/data here.




On Tue, Oct 25, 2016 at 9:19 AM, Alexis Richardson <alexis@...> wrote:
Jayant

Many thanks.



On Tue, Oct 25, 2016 at 4:50 PM, Jayant Kolhe <jkolhe@...> wrote:

The gRPC protocol was designed to provide high-performance, cross-platform, usable libraries for building microservices. It was built on top of HTTP/2 explicitly to make use of:
  • Full-duplex streams to support bi-directional streaming
  • HPACK-compressed headers for efficiently transmitting metadata/side-channel information; for example, this reduces the cost of sending authentication tokens
  • Connection multiplexing, which reduces the per-RPC connection cost over TLS and high-latency links
  • A binary framing layer with good flow control
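To make the framing bullet concrete: on the wire, each gRPC message is length-prefixed (a 1-byte compressed flag plus a 4-byte big-endian length) and carried inside HTTP/2 DATA frames, per the gRPC-over-HTTP/2 wire format. A minimal sketch (illustrative only, not gRPC's actual implementation):

```python
import struct

def frame_message(payload: bytes, compressed: bool = False) -> bytes:
    # gRPC length-prefixed message: 1-byte compressed flag,
    # 4-byte big-endian length, then the (possibly compressed) payload.
    return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

def unframe_messages(data: bytes):
    # Split a concatenation of length-prefixed messages back into
    # (compressed, payload) pairs -- this framing is what lets many
    # messages stream over a single HTTP/2 stream.
    out = []
    while data:
        flag, length = struct.unpack(">BI", data[:5])
        out.append((bool(flag), data[5:5 + length]))
        data = data[5 + length:]
    return out
```

For example, `frame_message(b"hello")` produces a 5-byte prefix (`\x00\x00\x00\x00\x05`) followed by the payload, and a concatenation of such frames round-trips through `unframe_messages`.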


Many gRPC features work well only with HTTP/2 semantics.
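One such semantic is HTTP/2's credit-based flow control, which is what bounds receiver memory on slow consumers (the OOM concern raised earlier in the thread). A toy model of the window accounting, purely illustrative:

```python
class FlowControlWindow:
    # Toy model of HTTP/2 credit-based flow control: the sender may only
    # transmit while it holds credit; the receiver returns credit via
    # WINDOW_UPDATE as it consumes data. A slow consumer therefore
    # back-pressures the sender instead of buffering without bound.
    def __init__(self, initial: int = 65535):  # 65535 is HTTP/2's default window
        self.credit = initial

    def try_send(self, nbytes: int) -> bool:
        if nbytes > self.credit:
            return False          # sender must wait for a WINDOW_UPDATE
        self.credit -= nbytes
        return True

    def window_update(self, nbytes: int) -> None:
        self.credit += nbytes     # receiver consumed nbytes; grant credit back
```

HTTP/1.1 has no equivalent mechanism: back-pressure there is only whatever the TCP socket buffer provides, per connection rather than per stream.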


Can you please list which ones are notable?  The main one seems to be "streaming" replies.  This shouldn't prevent someone building an HTTP/1 compatibility layer, if that is what helps people with adoption.





 
While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.

How much does that matter to real world users for the practical cases that such an implementation would facilitate?


 
It would add significant complexity across multiple language implementations and would raise the bar and the cost of implementing and testing interoperability across those implementations.

I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?




 
We have also relied on the adoption of HTTP/2, which has been very rapid; hence the ecosystem is also evolving rapidly to support HTTP/2 features.

I'm interested to understand how you measure this.


 
We have also relied on proxies to provide this functionality and allow an HTTP/1.x-only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, envoy) and is coming to others like nginx. Hence we have not implemented gRPC on HTTP/1.x.

Please don't take this the wrong way -- I like gRPC and am excited about it!

But: expecting proxies to solve this stuff kind of undermines the whole approach.  Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.  It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone should try it, but (b) it has compatibility problems that may not be resolved.

I vividly recall Brad Fitz telling me back in 2009 (or thereabouts) that, for HTTP, it is prudent to assume the worst when it comes to widespread adoption.  He pointed out that many servers & proxies still spoke 0.9 at the time.  


 


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x


On Fri, 21 Oct 2016, 18:45 Ben Sigelman via cncf-toc, <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the lynchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.
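The in-process subsystem Ben describes bundles concerns like client-side load balancing. A minimal round-robin picker over a resolved address list (a hypothetical sketch, not any particular library's API; the addresses are made up):

```python
import itertools

class RoundRobinPicker:
    # Minimal client-side load balancing sketch: rotate through the
    # addresses that service discovery resolved for a target name.
    # Real RPC systems layer health checking, re-resolution, and
    # weighting on top of a picker like this.
    def __init__(self, addresses):
        if not addresses:
            raise ValueError("no addresses resolved")
        self._cycle = itertools.cycle(list(addresses))

    def pick(self) -> str:
        return next(self._cycle)

picker = RoundRobinPicker(["10.0.0.1:50051", "10.0.0.2:50051"])
# picker.pick() alternates between the two backends on successive calls
```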

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...

_______________________________________________
cncf-toc mailing list
cncf-toc@...
https://lists.cncf.io/mailman/listinfo/cncf-toc








Louis Ryan <lryan@...>
 



On Thu, Oct 27, 2016 at 10:56 AM, Martin Nally <mnally@...> wrote:
Thanks, Louis,

Disappointingly, it seems like we don't actually disagree on much. I'm guessing that the set of applications where you would choose to use GRPC is much larger than mine, and the set of applications where I would choose to use HTTP+JSON is much larger than yours, but we would both acknowledge that both sets are important. 

OpenAPI is just another RPC IDL in my eyes, but since its most common use is just documentation, it doesn't bother me as much. JSON-LD has a different set of goals and seems to me part of a different discussion.

My other point is that you need the same qualities of service—service discovery, client-side load balancing, well-factored monitoring, context propagation—for HTTP+JSON that you need for RPC/messaging. Maybe we could work together on a "G+HTTP+JSON" to provide that :) It would have to be IDL-free.

No argument here; improving the experience for HTTP+JSON is something Google is actively investing in. We see these as complementary and often entirely overlapping (GRPC is based on HTTP/2, after all).


Martin

On Thu, Oct 27, 2016 at 10:30 AM, Louis Ryan <lryan@...> wrote:
Martin,

Perhaps your understanding of GRPC is based on the name (I do bemoan this choice from time to time) rather than the feature set and intent. I would encourage you to read 


Specifically the parts about being message-oriented, not object- or function-oriented. GRPC is nothing like the solutions you mention, for a wide variety of reasons. I see the comparison with CORBA a lot because we use an IDL with protobuf, but that's missing the point.

If by loose coupling you mean the ability to dynamically compose systems based on metadata and linking across a diverse ecosystem of APIs, AND you need to do this in a browser, then I would agree GRPC is not for you (the K8S config API comes to mind). I would note, however, that there are relatively few APIs that fall into that category and none that are infrastructural; most APIs are use-case specific, and their composition is done by application code with foreknowledge of the use-case. Mobile application development overwhelmingly falls into this category.

OpenAPI does a better job than JSON-LD and AtomPub in supporting this dynamic composition model, but this is in no small part because it standardizes an IDL that can be processed by toolchains, as opposed to AtomPub, which expected runtimes to make magic happen. The OpenAPI approach is no different from GRPC+Protobuf other than the physical representation.

I wouldn't advocate using GRPC when HTTP+JSON is a better fit for the use-case, far from it, but for many use-cases I do think it is a better tool.

- Louis

P.S Happy to get in the weeds over a beer the next time I see you 


On Thu, Oct 27, 2016 at 9:55 AM, Brian Grant <briangrant@...> wrote:
+adding back the grpc folks to comment

On Wed, Oct 26, 2016 at 5:57 PM, Martin Nally via cncf-toc <cncf-toc@...> wrote:
I'm not personally a fan of RPC, at least for the sort of applications I write. I have lived through several generations of RPC—DCE, CORBA, Java RMI, and so on. I'm sure GRPC is the best of the RPCs, but it is still RPC. The problem with RPC for me has always been that it results in wide interfaces that couple clients and servers tightly together. It might be the right choice for some applications, but if you value loose coupling over implementation efficiency, I think there are better options. I like all the non-functional properties—service discovery, client-side load balancing, well-factored monitoring, context propagation—can I have all that without the RPC interface model, please?

Martin





--

Martin Nally, Apigee Engineering


















Brian Grant
 

On Fri, Oct 21, 2016 at 10:44 AM, Ben Sigelman via cncf-toc <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

Thanks.
 

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the lynchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

And, if you're building a new microservice-based application, you need to use something. What are the choices?
What should you choose if you need:
  • Features Ben mentions above, such as monitoring and tracing?
  • Clients in multiple languages?
  • Mobile clients as well as microservice clients?
  • 10x greater efficiency than REST+JSON? (Who needs that? Most infrastructure components.)
These benefits are also called out on the GRPC FAQ:
 

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.


So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...

Yes, service fabrics and grpc are complementary.
 





alexis richardson
 

Fascinating.  Thank you.  I shall need at least a day to digest this and respond, so please be patient with me :-)


On Tue, Oct 25, 2016 at 9:26 PM, Jayant Kolhe <jkolhe@...> wrote:

Thanks Alexis.


> Can you please list which ones are notable?  The main one seems to be "streaming" replies.  


Streaming is certainly notable, but multiplexing and flow control are also very important here. Without HTTP/2 flow-control features, deployments can easily experience OOMs; we have seen this in practice, and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e. avoiding additional TLS negotiations) are very important for mobile clients and embedded devices.
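To make the OOM point concrete, here is a hypothetical sketch (class and method names are mine, not gRPC's API) of the HTTP/2-style flow-control window the paragraph above refers to: a sender may only transmit bytes it has window for, and the receiver grants more window only as it drains its buffer, which is what bounds memory use.

```python
# Illustrative sketch of HTTP/2-style stream flow control (in the spirit of
# RFC 7540 section 5.2). Real gRPC implementations do this inside their
# HTTP/2 transports; this toy class only shows the bookkeeping.

class FlowControlWindow:
    """Tracks how many bytes a sender may still transmit on a stream."""

    def __init__(self, initial=65_535):  # HTTP/2's default initial window
        self.available = initial

    def try_send(self, nbytes):
        """Consume window for a DATA frame; refuse if it would overrun."""
        if nbytes > self.available:
            return False              # sender must wait for WINDOW_UPDATE
        self.available -= nbytes
        return True

    def window_update(self, nbytes):
        """Receiver has drained its buffer and grants more window."""
        self.available += nbytes


window = FlowControlWindow(initial=100)
for _ in range(5):
    window.try_send(10)               # 50 bytes sent, 50 bytes of window left
assert not window.try_send(60)        # blocked: backpressure, not buffering
window.window_update(30)              # slow receiver frees some buffer space
assert window.try_send(60)            # now allowed (50 + 30 >= 60)
```

Without this mechanism a fast sender just keeps writing and a slow receiver buffers without bound, which is the OOM failure mode described above.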


> This shouldn't prevent someone building a http 1 compatibility layer, if that what helps people with adoption.


We are not opposed to it; we have not built it yet, though we definitely considered it. However, a compatibility layer has significant complexity costs. The matrix of testing and verifying across all language implementations while maintaining good performance seemed very expensive when we looked at it. And it is not just HTTP/1.x itself you need to consider if seamless compatibility with the existing ecosystem is the goal; you have to consider which HTTP/1.x features actually work well in that ecosystem. For example: to build a streaming solution, the gRPC protocol relies on trailers. While trailers are an HTTP/1.1 feature, many proxies and libraries do not support them. To be compatible with the existing ecosystem we would then need to consider alternate schemes, and maintain multiple schemes that work in different scenarios.
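A small illustrative sketch of the trailers problem above (the helper functions are hypothetical, but the framing follows HTTP/1.1 chunked encoding): gRPC reports the final RPC status in trailers because, for a streaming response, the status is only known after the last message, and an intermediary that drops trailers silently loses it.

```python
# Why gRPC's reliance on trailers is fragile over HTTP/1.1: the grpc-status
# trailer travels after the last chunk, and many real-world HTTP/1.1
# intermediaries discard the trailer section.

def build_chunked_response(chunks, trailers):
    """Serialize an HTTP/1.1 chunked body with trailing headers."""
    out = b""
    for c in chunks:
        out += f"{len(c):x}\r\n".encode() + c + b"\r\n"
    out += b"0\r\n"                        # last-chunk marker
    for k, v in trailers.items():
        out += f"{k}: {v}\r\n".encode()    # trailer section
    out += b"\r\n"
    return out

def naive_proxy(raw):
    """A proxy that understands chunking but discards trailers --
    behavior seen in many HTTP/1.1 proxies and client libraries."""
    body = raw.split(b"0\r\n", 1)[0] + b"0\r\n\r\n"
    return body

resp = build_chunked_response([b"msg1", b"msg2"], {"grpc-status": "0"})
assert b"grpc-status: 0" in resp                 # status rides in the trailers
assert b"grpc-status" not in naive_proxy(resp)   # ...and a proxy can drop it
```

This is the kind of ecosystem gap that forces the "alternate schemes" mentioned above once you leave HTTP/2, where trailers are first-class frames.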



While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.


How much does that matter to real world users for the practical cases that such an implementation would facilitate?


When we talked to users, their biggest concern was migration from their existing system. That existing system has different payloads and different conventions, so a plain HTTP/1.x transport did not suffice for their use cases. Hence many folks preferred a proxy solution that allowed them to keep supporting the existing system while building a new system that interoperates with it. It would be good to understand the specific use cases.



The protocol was designed with this in mind, so we are certainly not opposed to it; it is certainly a subject we have heard a lot about from the community. It is probably worth pointing out that we have tested this with curl and with HTTP/1-to-HTTP/2 converting proxies, and it functions just fine. Whether the conversion/compatibility layer is built into the gRPC libraries, into a separate set of compatibility libraries, or into proxies is the decision point.


Three primary motivations for this desire have come up in our conversations with users: browser support, the lack of upstream HTTP/2 support in nginx and in cloud layer-7 load balancers, and library platform coverage.


We are working on a community proposal for a protocol adaptation to enable browser support, allowing as much of gRPC to work as the limitations of the browser platform (specifically the limitations of XHR) permit. We have also been working with the browser vendors to improve their networking APIs so that gRPC can be supported natively, at which point we should be able to phase out this adaptation. The gRPC-to-JSON-REST gateway pattern has also served the browser use case quite well and is something we will continue to invest in.
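For context on what such an adaptation has to carry: gRPC messages are length-prefixed on the wire (a 1-byte compressed flag plus a 4-byte big-endian length, then the payload), and a text-only transport like XHR would need to carry those frames in, for example, base64. A sketch (helper names are mine):

```python
# Sketch of gRPC's length-prefixed message framing per the gRPC-over-HTTP/2
# wire format: 1-byte compressed flag + 4-byte big-endian length + payload.
# A browser adaptation must carry these frames over XHR, e.g. base64-encoded.

import base64
import struct

def frame(payload: bytes, compressed: bool = False) -> bytes:
    """Wrap one message in gRPC's length-prefixed framing."""
    return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

def unframe(data: bytes):
    """Split a byte stream back into (compressed, payload) messages."""
    msgs, i = [], 0
    while i < len(data):
        flag, length = struct.unpack_from(">BI", data, i)
        msgs.append((bool(flag), data[i + 5:i + 5 + length]))
        i += 5 + length
    return msgs

stream = frame(b"hello") + frame(b"world")
assert unframe(stream) == [(False, b"hello"), (False, b"world")]

text_safe = base64.b64encode(stream)   # what a text-only XHR transport carries
assert unframe(base64.b64decode(text_safe)) == unframe(stream)
```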


We are actively working on getting upstream HTTP/2 support into nginx. This has taken longer than we would like; in the meantime there are a large number of other proxy solutions of decent quality available to folks. We are working with the large cloud vendors on this too.


With regard to library coverage, gRPC now scores quite well against the large language × platform matrix of potential users, so there are very few deployment scenarios that are not covered. The built-in HTTP APIs in many platforms are quite poor, both in their ability to express the HTTP protocol and in efficiency. There are many attempts to improve these APIs (the Fetch API for browser & Node, the new HTTP API proposal for Java 10, etc.), but they are some ways off. The protocol adaptation we intend for browsers can be used in the interim.



> I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?


Based on our initial analysis, the complexity lies in ensuring support across all implementations; our experience shows that an interop matrix covering features, testing, and performance takes a lot of effort. We would be happy to support a community effort, but we should first have a design conversation about whether such support makes more sense in the gRPC libraries or in a set of separate accompanying libraries or proxies, and verify it against users' different use cases. We believe the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.


> I'm interested to understand how you measure this.


I have not seen comprehensive information on how much traffic is HTTP/2 versus SPDY versus HTTP/1.x. Major browsers and most high-traffic sites support HTTP/2 well. Some charts I saw (https://www.keycdn.com/blog/http2-statistics/) indicate good adoption, but I do not know how comprehensive they are. Internally at Google, we have seen the largest share of traffic being HTTP/2, followed by SPDY and QUIC.


> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.

> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone

> should try it, but (b) it has compatibility problems that may not be resolved.


I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed more important to users than adding a compatibility feature, unless that feature gave them a complete migration path from their existing system. That may be due to the set of users we talked to; I would love more input/data here.




On Tue, Oct 25, 2016 at 9:19 AM, Alexis Richardson <alexis@...> wrote:
Jayant

Many thanks.



On Tue, Oct 25, 2016 at 4:50 PM, Jayant Kolhe <jkolhe@...> wrote:

The gRPC protocol was designed to support high-performance, cross-platform, usable libraries for building microservices. It was designed on top of HTTP/2 explicitly to make use of:
  • Full-duplex streams, to support bi-directional streaming
  • HPACK-compressed headers, for efficiently transmitting metadata/side-channel information (for example, reducing the cost of authentication tokens)
  • Connection multiplexing, which reduces the per-RPC connection cost for TLS and high-latency connections
  • A binary framing layer with good flow control
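A hypothetical back-of-envelope model of why the multiplexing bullet above matters (the function and all numbers are mine, purely illustrative): with one TLS connection per RPC you pay connection setup per call, while multiplexed streams amortize one setup across all calls.

```python
# Toy latency model: N unary RPCs, each one round trip, with connection
# setup costing roughly 2 RTTs (TCP + TLS 1.2 handshakes). Illustrative
# only -- real costs depend on TLS version, resumption, congestion, etc.

def total_latency_ms(n_rpcs, rtt_ms, multiplexed):
    TLS_HANDSHAKE_RTTS = 2
    handshakes = 1 if multiplexed else n_rpcs   # one setup vs. one per call
    setup = handshakes * TLS_HANDSHAKE_RTTS * rtt_ms
    calls = n_rpcs * rtt_ms                     # one round trip per RPC
    return setup + calls

per_call_conns = total_latency_ms(100, rtt_ms=50, multiplexed=False)  # 15000
multiplexed    = total_latency_ms(100, rtt_ms=50, multiplexed=True)   #  5100
assert multiplexed < per_call_conns
```

On a high-latency mobile link the setup term dominates, which is why the reply singles out mobile clients and embedded devices.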


Many gRPC features work well only on HTTP/2 semantics.


Can you please list which ones are notable?  The main one seems to be "streaming" replies.  This shouldn't prevent someone building an HTTP/1 compatibility layer, if that is what helps people with adoption.





 
While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.

How much does that matter to real world users for the practical cases that such an implementation would facilitate?


 
It would add significant complexity across multiple language implementations, and would raise the bar and complexity for implementing and testing interoperability across those implementations.

I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?




 
We have also relied on the adoption of HTTP/2, which has been very rapid, and hence the ecosystem is also evolving rapidly to support HTTP/2 features.

I'm interested to understand how you measure this.


 
We have also relied on proxies to provide this functionality and allow an HTTP/1.x-only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, Envoy) and is coming to others like nginx. Hence we have not implemented gRPC on HTTP/1.x.

Please don't take this the wrong way -- I like gRPC and am excited about it!

But: expecting proxies to solve this stuff kind of undermines the whole approach.  Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.  It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness and that everyone should try it, but (b) that it has compatibility problems that may not be resolved.

I vividly recall Brad Fitz telling me back in 2009 (or thereabouts) that, for HTTP, it is prudent to assume the worst when it comes to widespread adoption.  He pointed out that many servers & proxies still spoke 0.9 at the time.


 


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x
















Jayant Kolhe
 


gRPC protocol was designed to build high performance, cross platform and usable libraries for building microservices. It was designed on top of HTTP2 to explicitly make use of
  • Full duplex streams to support bi-directional streaming
  • HPACK compressed headers for efficiently transmitting metadata/sidechannel information. For example, reduces cost for authentication tokens
  • Connection multiplexing. Reduces the per-RPC connection cost for TLS and high latency connections
  • Binary Framing layer with good flow control


Many of gRPC features work well only on HTTP2 semantics. While implementing gRPC on top of HTTP1.1 downgrade is feasible, such implementation would lose many of gRPC advantages of efficient usage of network resources and high performance. It would add significant complexity across multiple language implementations and would have higher bar and complexity for implementing and testing interoperability across these implementations. We have also relied on adoption of HTTP2 which has been very rapid and hence the ecosystem is also evolving rapidly to support HTTP2 features. We have also relied on proxies to provide this functionality to allow http1.x only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, envoy) and is coming to others like nginx..Hence we have not implemented gRPC on HTTP 1.x.


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x






alexis richardson
 

+1, protocols FTW ;-)


On Tue, Oct 25, 2016 at 11:36 AM, Matt T. Proud ⚔ <matt.proud@...> wrote:
Since you asked the peanut gallery:

I would be delighted to see gRPC supersede Thrift and Finagle, for a laundry list of reasons.  The crux: being burned by Thrift's and Finagle's cross-language and cross-runtime interoperability problems.  gRPC was built for this interoperability from day one, whereas it felt like an afterthought in Thrift.  Further, Finagle's operational metrics — last I looked at them in 2013 — were pretty incomprehensible (frankly, I felt a deep sense of pity for anyone on call for a system built on top of it).

gRPC is, on its own merits, a natural addition to a reference implementation's portfolio.  My only regret is that it did not arrive "on the block" a year or two sooner, lest another generation's minds be wasted on a substandard technology.  ;)



Matt T. Proud
 

Since you asked the peanut gallery:

I would be delighted to see gRPC supersede Thrift and Finagle, for a laundry list of reasons.  The crux: being burned by Thrift's and Finagle's cross-language and cross-runtime interoperability problems.  gRPC was built for this interoperability from day one, whereas it felt like an afterthought in Thrift.  Further, Finagle's operational metrics — last I looked at them in 2013 — were pretty incomprehensible (frankly, I felt a deep sense of pity for anyone on call for a system built on top of it).

gRPC is, on its own merits, a natural addition to a reference implementation's portfolio.  My only regret is that it did not arrive "on the block" a year or two sooner, lest another generation's minds be wasted on a substandard technology.  ;)

On Tue, Oct 25, 2016 at 12:16 PM Alexis Richardson via cncf-toc <cncf-toc@...> wrote:
Brandon,

Thank you.  It may help if I mention why I raised the question about HTTP 1.x.  Overall, we are fans of gRPC at Weaveworks, but we stumbled into some issues when trying to use it in this case:


alexis




alexis richardson
 

Brandon,

Thank you.  It may help if I mention why I raised the question about HTTP 1.x.  Overall, we are fans of gRPC at Weaveworks, but we stumbled into some issues when trying to use it in this case:


alexis


On Mon, Oct 24, 2016 at 9:35 PM, Brandon Philips <brandon.philips@...> wrote:
On gRPC and HTTP 1.x: I think the best way to bring gRPC to the HTTP 1.x world is via OpenAPI (formerly Swagger) and JSON; see the blog post here: http://www.grpc.io/blog/coreos

We do this in etcd v3: we provide endpoints for both HTTP 2.x + gRPC and HTTP 1.x + JSON.



Brandon Philips <brandon.philips@...>
 

On gRPC and HTTP 1.x: I think the best way to bring gRPC to the HTTP 1.x world is via OpenAPI (formerly Swagger) and JSON; see the blog post here: http://www.grpc.io/blog/coreos

We do this in etcd v3: we provide endpoints for both HTTP 2.x + gRPC and HTTP 1.x + JSON.
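For flavor, the grpc-gateway approach described here is driven by HTTP annotations in the proto file itself. A minimal sketch, with a made-up service and paths (not etcd's actual API):

```proto
syntax = "proto3";

package example;

import "google/api/annotations.proto";

// Hypothetical service: the google.api.http option tells a
// grpc-gateway generated reverse proxy how to expose this RPC
// to HTTP 1.x + JSON clients alongside native gRPC.
service KV {
  rpc Range (RangeRequest) returns (RangeResponse) {
    option (google.api.http) = {
      post: "/v3/kv/range"
      body: "*"
    };
  }
}

message RangeRequest { bytes key = 1; }
message RangeResponse { bytes value = 1; }
```

A generated gateway then accepts a JSON POST on the annotated path, translates it into the gRPC call, and translates the response back into JSON, so HTTP 1.x clients never need to speak HTTP/2.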



Brian Grant
 

+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x





alexis richardson
 

I'd like to understand why gRPC doesn't work with HTTP 1.x




Ben Sigelman
 

Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the linchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...