
Re: peanut-gallery thoughts about GRPC

Martin Nally <mnally@...>
 

Thanks, Louis,

Disappointingly, it seems like we don't actually disagree on much. I'm guessing that the set of applications where you would choose to use GRPC is much larger than mine, and the set of applications where I would choose to use HTTP+JSON is much larger than yours, but we would both acknowledge that both sets are important. 

OpenAPI is just another RPC IDL in my eyes, but since its most common use is just documentation, it doesn't bother me as much. JSON-LD has a different set of goals and seems to me part of a different discussion.

My other point is that you need the same qualities-of-service—service discovery, client-side load balancing, well-factored monitoring, context propagation—for HTTP+JSON that you need for RPC/messaging. Maybe we could work together on a "G+HTTP+JSON" to provide that :) It would have to be IDL-free.
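A minimal sketch of what that "G+HTTP+JSON" might look like in Go, layering the qualities-of-service onto plain HTTP as middleware rather than an IDL. The resolver, service name, and trace header here are all hypothetical, for illustration only:

    package main

    import (
        "fmt"
        "net/http"
    )

    // qosTransport wraps a RoundTripper with service discovery and context
    // propagation; client-side load balancing and monitoring would hook in
    // the same way.
    type qosTransport struct {
        next    http.RoundTripper
        resolve func(service string) (string, error) // hypothetical registry lookup
    }

    func (t *qosTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        req = req.Clone(req.Context()) // RoundTrippers must not mutate the original
        if hostport, err := t.resolve(req.URL.Host); err == nil {
            req.URL.Host = hostport // map a logical service name to a backend
        }
        req.Header.Set("X-Trace-Id", "hypothetical-trace-id") // context propagation
        return t.next.RoundTrip(req)
    }

    func main() {
        client := &http.Client{Transport: &qosTransport{
            next: http.DefaultTransport,
            resolve: func(service string) (string, error) {
                return "orders.internal:8080", nil // stand-in for a real registry
            },
        }}
        resp, err := client.Get("http://orders/v1/orders") // "orders" is a logical name
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }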

Martin


Re: peanut-gallery thoughts about GRPC

Louis Ryan <lryan@...>
 

Martin,

Perhaps your understanding of GRPC is based on the name (I do bemoan this choice from time to time) rather than the feature set and intent. I would encourage you to read 


Specifically the parts about being message oriented, not object or function oriented. GRPC is nothing like the solutions you mention, for a wide variety of reasons. I see the comparison with CORBA a lot because we use an IDL with protobuf, but that's missing the point.
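To make the distinction concrete, a compilable Go illustration (not gRPC's actual API; every name here is made up): in the function-oriented style each operation is another remote signature for clients to bind to, while in the message-oriented style the contract is the message schema, which evolves by adding fields rather than methods:

    package main

    import "fmt"

    // Function-oriented RPC: the interface widens as the service grows,
    // and every client binds to every signature.
    type AccountClient interface {
        GetAccount(id string) (string, error)
        UpdateEmail(id, email string) error
    }

    // Message-oriented: endpoints exchange self-describing messages (in
    // gRPC's case, protobuf messages over HTTP/2 streams). New needs are
    // met by adding fields, which older peers simply ignore.
    type Request struct {
        Op    string // e.g. "get", "update_email"
        ID    string
        Email string // only meaningful for ops that need it
    }

    type Response struct {
        OK   bool
        Body string
    }

    func handle(req Request) Response {
        switch req.Op {
        case "get":
            return Response{OK: true, Body: "account " + req.ID}
        default:
            return Response{OK: false}
        }
    }

    func main() {
        fmt.Printf("%+v\n", handle(Request{Op: "get", ID: "42"}))
    }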

If by loose coupling you mean the ability to dynamically compose systems based on metadata and linking across a diverse ecosystem of APIs, AND you need to do this in a browser, then I would agree GRPC is not for you (the K8S config API comes to mind). I would note, however, that there are relatively few APIs that fall into that category, and none that are infrastructural; most APIs are use-case specific, and their composition is done by application code with foreknowledge of the use-case. Mobile application development overwhelmingly falls into this category.

OpenAPI does a better job than JSON-LD and AtomPub in supporting this dynamic composition model, but this is in no small part because it standardizes an IDL that can be processed by toolchains, as opposed to AtomPub, which expected runtimes to make magic happen. The OpenAPI approach is no different from GRPC+Protobuf other than the physical representation.

I wouldn't advocate using GRPC when HTTP+JSON is a better fit for the use-case, far from it, but for many use-cases I do think it is a better tool.

- Louis

P.S. Happy to get in the weeds over a beer the next time I see you.



Re: peanut-gallery thoughts about GRPC

Brian Grant
 

+adding back the grpc folks to comment




Re: [VOTE] End User Reference Architecture v1.0

Doug Davis <dug@...>
 

Overall it looks good. Just two things that jumped out at me as missing:
1 - multi-tenancy. I don't think we need to say much, other than that it's an issue, and while it could appear on several charts, perhaps the "Orchestration & Management Layer" one might be the best single spot for it.
2 - clustering. Probably on the same slide too. While it's implied, I just want to be clear that people need to think about how to scale and cluster many, many nodes, and that this architecture isn't just for small/single-node environments.


thanks
-Doug
_______________________________________________________
STSM | IBM Open Source, Cloud Architecture & Technology
(919) 254-6905 | IBM 444-6905 | dug@...
The more I'm around some people, the more I like my dog




Re: peanut-gallery thoughts about GRPC

Martin Nally <mnally@...>
 

I'm not personally a fan of RPC, at least for the sort of applications I write. I have lived through several generations of RPC—DCE, CORBA, Java RMI, and so on. I'm sure GRPC is the best of the RPCs, but it is still RPC. The problem with RPC for me has always been that it results in wide interfaces that couple clients and servers tightly together. It might be the right choice for some applications, but if you value loose coupling over implementation efficiency, I think there are better options. I like all the non-functional properties—service discovery, client-side load balancing, well-factored monitoring, context propagation—so can I have all that without the RPC interface model, please?

Martin

On Fri, Oct 21, 2016 at 10:44 AM, Ben Sigelman via cncf-toc <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.
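For example, in grpc-go those concerns are all wired in at the dial site. A minimal sketch, assuming the google.golang.org/grpc module; the target name and trace header are hypothetical:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/metadata"
    )

    // traceInterceptor is the "well-factored monitoring, context propagation"
    // part: it forwards a trace ID on every call and records latency.
    func traceInterceptor(ctx context.Context, method string, req, reply interface{},
        cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
        ctx = metadata.AppendToOutgoingContext(ctx, "x-trace-id", "hypothetical-id")
        start := time.Now()
        err := invoker(ctx, method, req, reply, cc, opts...)
        log.Printf("%s took %v err=%v", method, time.Since(start), err)
        return err
    }

    func main() {
        conn, err := grpc.Dial(
            "dns:///orders.internal:50051", // service discovery via the DNS resolver
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            // Client-side load balancing across all resolved backends.
            grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
            grpc.WithUnaryInterceptor(traceInterceptor),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // Generated protobuf stubs would be created from conn here.
    }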

In that way, a high-quality RPC system is arguably the linchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to the JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...






--

Martin Nally, Apigee Engineering


Re: [VOTE] End User Reference Architecture v1.0

alexis richardson
 

A global service catalogue is an interesting idea. If it is "like DNS but for cloud native apps" then it presupposes an "HTTP but for cloud native apps", such as a service broker protocol for discovery, binding, and monitoring.




Re: [VOTE] End User Reference Architecture v1.0

Ram, J <j.ram@...>
 

From: Brian Grant [mailto:briangrant@...]
Sent: Monday, October 24, 2016 8:18 PM
To: Ram, J [Tech]
Cc: Chris Aniszczyk; cncf-toc@...
Subject: Re: [cncf-toc] [VOTE] End User Reference Architecture v1.0

 

On Mon, Oct 24, 2016 at 5:28 AM, Ram, J via cncf-toc <cncf-toc@...> wrote:

 

Sorry, I missed that last call. So apologies if this was discussed.

Two thoughts/questions that come to mind when looking through the slides:

 

a) Emphasis on security seems to be missing. It might be implicit, but being explicit might be useful. So calling out some aspects of it in application definition, orchestration, and runtime would change that. I suspect that orchestration and runtime would get more interesting if complex security policies are modelled in the application definition.

Given that security spans all the layers and is a complex topic, I'm not sure what we'd add at the current level of detail. 

 

b) Not sure if this is the group to address this, but I feel that no consistent implementation or standard for a service directory exists. The most consistent yellow pages we seem to have is DNS. For the new generation of applications, is that enough? Should we call out a service directory under service management?

Service naming, discovery, load balancing, and routing (service fabric/mesh approaches) are intended to be covered by slide 6. Is there a specific terminology clarification that you'd like to see? Or would you like us to merge the "Coordination" and "Service Management" sub-bullets into a single list?

 

What exactly do you mean by "service directory"?

[JRAM] To reiterate, this may be outside the scope of this discussion. My observation is that there is no consistent standard for a client to search, look up, and find service providers in the global network. DNS is the closest adopted standard, and it is not really designed for the level of dynamism we need in this new cloud-based model. The lack of this is clearly emphasized by the trickery played in the networking and DNS stacks. Another observation is that there is no global catalogue of all the services available in the network at internet scale. Everyone seems to have their own version of a "directory" implementation. In our case, we have DNS, ZooKeeper, a URL router, etc., to name just a few…
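As a concrete baseline, DNS SRV records are about as far as the adopted standard goes. A small Go sketch against a hypothetical zone, showing both what DNS gives you and what it cannot:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Looks up SRV records for _orders._tcp.internal.example.com.
        _, addrs, err := net.LookupSRV("orders", "tcp", "internal.example.com")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        for _, a := range addrs {
            fmt.Printf("%s:%d (priority %d, weight %d)\n", a.Target, a.Port, a.Priority, a.Weight)
        }
        // What DNS alone cannot tell you: whether an instance is healthy, what
        // protocol it speaks, or when the set changes; hence the ZooKeepers,
        // URL routers, and the rest.
    }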

 

 

The question for us to answer minimally is: do we want to address this problem architecturally and as a standard?

From: cncf-toc-bounces@... [mailto:cncf-toc-bounces@...] On Behalf Of Chris Aniszczyk via cncf-toc
Sent: Monday, October 24, 2016 7:15 AM
To: cncf-toc@...
Subject: [cncf-toc] [VOTE] End User Reference Architecture v1.0

 

Last week at the CNCF TOC meeting, we discussed issues with the CNCF Reference Architecture and felt it was ready to finalize (and much better than what we had before):

http://drive.google.com/open?id=1uMw2wkK0ubmc3khxqIuxK_rLK_wN89tNCnK7gDmTGR8

This is a call to formalize the reference architecture, so TOC members please vote!

 

--

Chris Aniszczyk (@cra) | +1-512-961-6719



 


Re: peanut-gallery thoughts about GRPC

Brian Grant
 

On Fri, Oct 21, 2016 at 10:44 AM, Ben Sigelman via cncf-toc <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

Thanks.
 

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the linchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

And, if you're building a new microservice-based application, you need to use something. What are the choices?
What should you choose if you need:
  • Features Ben mentions above, such as monitoring and tracing?
  • Clients in multiple languages?
  • Mobile clients as well as microservice clients?
  • 10x greater efficiency than REST+JSON? (Who needs that? Most infrastructure components.)
These benefits are also called out on the GRPC FAQ:
 

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to the JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.


So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...

Yes, service fabrics and gRPC are complementary.
 





Re: peanut-gallery thoughts about GRPC

alexis richardson
 

Fascinating. Thank you. I shall need at least a day to digest this and respond, so please be patient with me :-)




Re: peanut-gallery thoughts about GRPC

Jayant Kolhe <jkolhe@...>
 

Thanks Alexis.


> Can you please list which ones are notable?  The main one seems to be "streaming" replies.  


Streaming is certainly notable, but multiplexing and flow control are also very important here. Without HTTP/2 flow-control features, deployments can easily experience OOMs; we have seen this in practice, and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e., avoiding additional TLS negotiation) are very important for mobile clients and embedded devices.
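For instance, grpc-go exposes those flow-control knobs directly as server options. A small sketch, assuming the google.golang.org/grpc module; the numbers are illustrative, not recommendations:

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        srv := grpc.NewServer(
            grpc.InitialWindowSize(1<<20),     // per-stream flow-control window (1 MiB)
            grpc.InitialConnWindowSize(4<<20), // per-connection window shared by all streams
            grpc.MaxConcurrentStreams(128),    // bound multiplexing so one client can't exhaust memory
        )
        // Service registration would go here; the knobs above are the point.
        log.Fatal(srv.Serve(lis))
    }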


> This shouldn't prevent someone building an HTTP/1 compatibility layer, if that's what helps people with adoption.


We are not opposed to it. We have not built it yet, though we definitely considered it. However, a compatibility layer has significant complexity costs. The matrix of testing and verifying across all language implementations while keeping good performance seemed very expensive when we looked at it. And it is not just HTTP/1.x you need to consider if seamless compatibility with the existing ecosystem is needed: you have to consider which HTTP/1.x features actually work well in that ecosystem. For example, to build a streaming solution, the gRPC protocol relies on trailers. While trailers are an HTTP/1.x feature, many proxies/libraries do not support them. To be compatible with the existing ecosystem, we would then need to consider alternate schemes, and have multiple schemes that work in different scenarios.
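Trailers are easy to see in isolation. A self-contained Go demo of an HTTP trailer carrying a status after the body, which is the same mechanism gRPC uses for its final "grpc-status"; the handler here is a toy, not gRPC itself, and any proxy that drops trailers would lose the trailing field:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "net/http/httptest"
    )

    func main() {
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Trailer", "Grpc-Status") // declare the trailer before the body
            io.WriteString(w, "streamed body...")
            w.Header().Set("Grpc-Status", "0") // set after the body; sent as a trailer
        }))
        defer srv.Close()

        resp, err := http.Get(srv.URL)
        if err != nil {
            log.Fatal(err)
        }
        io.ReadAll(resp.Body) // trailers only arrive once the body is fully consumed
        resp.Body.Close()
        fmt.Println("trailer Grpc-Status =", resp.Trailer.Get("Grpc-Status"))
    }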



> While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages of efficient usage of network resources and high performance.

> How much does that matter to real world users for the practical cases that such an implementation would facilitate?


When we talked to users, their biggest concern was migration from their existing systems. Those existing systems have different payloads and different conventions, so having just an HTTP/1.x transport did not suffice for their use-cases. Hence, many folks preferred a proxy solution that allowed them to support the existing system and build a new system that interoperated with it. It would be good to understand specific use-cases.



The protocol was designed with this in mind, so we are certainly not opposed to it; it is certainly a subject we have heard a lot about from the community. It's probably worth pointing out that we have tested this with curl and with HTTP/1-to-HTTP/2 converting proxies, and it functions just fine. Whether the conversion/compatibility layer is built into the gRPC libraries or into a separate set of compatibility libraries or proxies is the decision point.


Three primary things motivating this desire have come up in our conversations with users: browser support, the lack of upstream HTTP/2 support in nginx and cloud layer-7 LBs, and library/platform coverage.


We are working on a community proposal for a protocol adaptation to enable browser support, one that will allow as much of GRPC to work as can be enabled within the limitations of the browser platform, specifically the limitations of XHR. We have also been working with the browser vendors to improve their networking APIs so that GRPC can be supported natively, at which point we should be able to phase out this adaptation. The GRPC-to-JSON-REST gateway pattern has also served the browser use-case quite well and is something we will continue to invest in.


We are actively working on getting upstream HTTP/2 support into nginx. This has taken longer than we would like. In the meantime, there are a large number of other proxy solutions of decent quality available to folks. We are working with the large cloud vendors on this too.


With regard to library coverage, GRPC now scores quite well against the large language × platform matrix of potential users, so there are very few deployment scenarios that are not covered. The built-in HTTP APIs on many platforms are in many cases quite poor, both in their ability to express the HTTP protocol and in their efficiency. There are many attempts to improve these APIs (the Fetch API for browsers & Node, the new HTTP API proposal for Java 10, etc.), but they are some ways off. The protocol adaptation we intend for browsers can be used in the interim.



> I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?


Based on our initial analysis, the complexity is in ensuring support across all implementations. Our experience shows that an interop matrix of features, testing, and performance takes a lot of effort. We would be happy to support a community effort, but we should first have a design conversation about whether such support makes more sense in the gRPC libraries or in a set of separate accompanying libraries or proxies, and verify it against users' different use-cases. We believe that the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.


> I'm interested to understand how you measure this.


I have not seen comprehensive information on how much traffic is HTTP/2 vs. SPDY vs. HTTP/1.x. Major browsers and most high-traffic sites support HTTP/2 well. Some charts (https://www.keycdn.com/blog/http2-statistics/) I saw indicate good adoption, but I do not know how comprehensive those are. Internally at Google, we have seen the largest share of traffic being HTTP/2, followed by SPDY and QUIC.


> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.

> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone

> should try it, but (b) it has compatibility problems that may not be resolved.


I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed more important to users than adding a compatibility feature, unless it gave them a complete migration path off their existing system. That may be due to the set of users we talked to. I would love more input/data here.






Re: peanut-gallery thoughts about GRPC

alexis richardson
 

Jayant

Many thanks.



On Tue, Oct 25, 2016 at 4:50 PM, Jayant Kolhe <jkolhe@...> wrote:

The gRPC protocol was designed to enable high-performance, cross-platform, usable libraries for building microservices. It was designed on top of HTTP/2 explicitly to make use of:
  • Full-duplex streams, to support bidirectional streaming
  • HPACK-compressed headers, for efficiently transmitting metadata/side-channel information (for example, reducing the cost of authentication tokens)
  • Connection multiplexing, which reduces the per-RPC connection cost for TLS and high-latency connections
  • A binary framing layer with good flow control


Many gRPC features work well only on HTTP/2 semantics.


Can you please list which ones are notable? The main one seems to be "streaming" replies. This shouldn't prevent someone building an HTTP/1 compatibility layer, if that is what helps people with adoption.





 
While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance.

How much does that matter to real-world users for the practical cases that such an implementation would facilitate?


 
It would add significant complexity across the multiple language implementations and raise the bar for implementing and testing interoperability across them.

I'm sure we can find CNCF community engineers who would be willing and able to have a go.  How hard can it be?




 
We have also relied on the adoption of HTTP/2, which has been very rapid; hence the ecosystem is also evolving rapidly to support HTTP/2 features.

I'm interested to understand how you measure this.


 
We have also relied on proxies to provide this functionality and allow the HTTP/1.x-only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, Envoy) and is coming to others like NGINX. Hence we have not implemented gRPC on HTTP/1.x.

Please don't take this the wrong way -- I like gRPC and am excited about it!

But: expecting proxies to solve this kind of undermines the whole approach. Not having an obvious Joe User adoption path will impede gRPC from becoming, in some sense, universal. It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness and that everyone should try it, but (b) that it has compatibility problems that may not be resolved.

I vividly recall Brad Fitz telling me back in 2009 (or thereabouts) that, for HTTP, it is prudent to assume the worst when it comes to widespread adoption. He pointed out that many servers & proxies still spoke HTTP/0.9 at the time.


 


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x


On Fri, 21 Oct 2016, 18:45 Ben Sigelman via cncf-toc, <cncf-toc@...> wrote:
Hi all,

"I am not on the TOC, but" I did want to share a few thoughts about GRPC per the call the other day.

I was recently at one of those moderated VC dinners where everyone gets put on the spot to say something "insightful" (sic) – I'm sure we all know the scenario. Anyway, we had to go around the table and talk about "the one OSS project that's poised to change the way the industry functions". There were lots of mentions of Docker, k8s, etc, and for good reason. I had the bad luck of being last and felt like it wasn't useful to just +1 someone else's comment, and I realized that GRPC was in many ways an excellent answer.

Varun alluded to this in his presentation, but to restate it in different words: the value of an RPC system is mostly not actually about the RPC... it's the service discovery, client-side load balancing, well-factored monitoring, context propagation, and so on.

In that way, a high-quality RPC system is arguably the lynchpin of the "user-level OS" that sits just below the application code but above the actual (kernel) syscalls. An alternative approach moves things like RPC into its own process (a la linkerd(*)) and I think that makes sense in certain situations... but when the RPC system depends on data from its host process beyond the RPC payload and peer identity (which is often the case for more sophisticated microservice deployments), OR when "throughput matters" and extra copies are unacceptable, an in-process RPC subsystem is the right approach.

As for whether GRPC is the right in-process RPC system to incubate: I think that's a no-brainer. It has good momentum, the code is of a much higher quality and works in more languages than the alternatives, and Google's decision to adopt it internally will help to ensure that it works within scaled-out systems (both architecturally and in terms of raw performance). Apache Thrift moves quite slowly in my experience and has glaring problems in many languages; Finagle is mature but is limited to JVM (and perhaps bites off more than it can chew at times); other entrants that I'm aware of don't have a strong community behind them.

So yes, this is just an enthusiastic +1 from me. Hope the above makes sense and isn't blindingly obvious. :)

Comments / disagreements welcome –
Ben

(*): re linkerd specifically: I am a fan, and IMO this is a "both/and" situation, not "either/or"...






Re: peanut-gallery thoughts about GRPC

Jayant Kolhe <jkolhe@...>
 


The gRPC protocol was designed to enable high-performance, cross-platform, usable libraries for building microservices. It was designed on top of HTTP/2 explicitly to make use of:
  • Full-duplex streams, to support bidirectional streaming
  • HPACK-compressed headers, for efficiently transmitting metadata/side-channel information (for example, reducing the cost of authentication tokens)
  • Connection multiplexing, which reduces the per-RPC connection cost for TLS and high-latency connections
  • A binary framing layer with good flow control


Many gRPC features work well only on HTTP/2 semantics. While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages in efficient use of network resources and high performance. It would add significant complexity across the multiple language implementations and raise the bar for implementing and testing interoperability across them. We have also relied on the adoption of HTTP/2, which has been very rapid; hence the ecosystem is also evolving rapidly to support HTTP/2 features. We have also relied on proxies to provide this functionality and allow the HTTP/1.x-only ecosystem to work with gRPC. Such support exists in many proxies (nghttpx, linkerd, Envoy) and is coming to others like NGINX. Hence we have not implemented gRPC on HTTP/1.x.
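To make the full-duplex point concrete, below is a minimal sketch of a bidirectional-streaming handler in Go, the kind of API that maps directly onto an HTTP/2 stream and has no natural HTTP/1.1 equivalent. The proto service and the generated pb package are hypothetical, names for illustration only:

// Sketch of a bidirectional-streaming gRPC service in Go.
// Assumes a hypothetical chat.proto compiled into package pb:
//
//   service Chat {
//     rpc Converse(stream Msg) returns (stream Msg);
//   }
package main

import (
    "io"
    "log"
    "net"

    "google.golang.org/grpc"

    pb "example.com/chat/pb" // hypothetical generated code
)

type chatServer struct{}

// Converse receives and sends on the same HTTP/2 stream concurrently;
// this full-duplex behavior is what an HTTP/1.1 downgrade would lose.
func (s *chatServer) Converse(stream pb.Chat_ConverseServer) error {
    for {
        in, err := stream.Recv() // next message from the client
        if err == io.EOF {
            return nil // client closed its send side
        }
        if err != nil {
            return err
        }
        // Reply immediately, without waiting for the client
        // to finish sending.
        if err := stream.Send(in); err != nil {
            return err
        }
    }
}

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatal(err)
    }
    s := grpc.NewServer()
    pb.RegisterChatServer(s, &chatServer{})
    log.Fatal(s.Serve(lis))
}

An HTTP/1.1 mapping would have to emulate the two directions with chunked responses plus a second request (or polling), which is where the lost efficiency and extra complexity come from.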


On Fri, Oct 21, 2016 at 11:42 AM, Brian Grant <briangrant@...> wrote:
+Varun and Jayant to answer that

On Fri, Oct 21, 2016 at 10:57 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:

I'd like to understand why gRPC doesn't work with HTTP 1.x






Re: peanut-gallery thoughts about GRPC

alexis richardson
 

+1, protocols FTW ;-)


On Tue, Oct 25, 2016 at 11:36 AM, Matt T. Proud ⚔ <matt.proud@...> wrote:
Since you asked the peanut gallery:

I would be delighted to see gRPC supersede Thrift and Finagle, for a laundry list of reasons.  The crux: being burned by Thrift's and Finagle's cross-language and cross-runtime interoperability problems.  gRPC was motivated by this interoperability from day one, whereas it felt like an afterthought in Thrift.  Further, Finagle's operational metrics — last I looked at them in 2013 — were pretty incomprehensible (frankly, I felt a deep sense of pity for anyone on call for a system built on top of it).

gRPC stands on its own as a natural addition to a reference implementation's portfolio.  My only regret is that it didn't arrive "on the block" a year or two sooner — lest another generation's minds be wasted on a substandard technology.  ;)



Re: peanut-gallery thoughts about GRPC

Matt T. Proud
 

Since you asked the peanut gallery:

I would be delighted to see gRPC supersede Thrift and Finagle, for a laundry list of reasons.  The crux: being burned by Thrift's and Finagle's cross-language and cross-runtime interoperability problems.  gRPC was motivated by this interoperability from day one, whereas it felt like an afterthought in Thrift.  Further, Finagle's operational metrics — last I looked at them in 2013 — were pretty incomprehensible (frankly, I felt a deep sense of pity for anyone on call for a system built on top of it).

gRPC stands on its own as a natural addition to a reference implementation's portfolio.  My only regret is that it didn't arrive "on the block" a year or two sooner — lest another generation's minds be wasted on a substandard technology.  ;)



Re: peanut-gallery thoughts about GRPC

alexis richardson
 

Brandon,

Thank you.  It may help if I mention why I raised the question about HTTP 1.x.  Overall we are fans of gRPC at Weaveworks, but we stumbled into some issues when trying to use it in this case:


alexis


On Mon, Oct 24, 2016 at 9:35 PM, Brandon Philips <brandon.philips@...> wrote:
On gRPC and HTTP 1.x: I think the best way to bring gRPC to the HTTP 1.x world is via OpenAPI (formerly Swagger) and JSON; see the blog post here: http://www.grpc.io/blog/coreos

We do this in etcd v3: we provide endpoints for both HTTP 2.x + gRPC and HTTP 1.x + JSON.



Re: [VOTE] End User Reference Architecture v1.0

Brian Grant
 

On Mon, Oct 24, 2016 at 5:28 AM, Ram, J via cncf-toc <cncf-toc@...> wrote:

 

Sorry, I missed the last call, so apologies if this was discussed.

Two thoughts/questions come to mind when looking through the slides:

a) Emphasis on security seems to be missing. It might be implicit, but being explicit might be useful, so calling out some aspects of it in application definition, orchestration, and runtime would change that. I suspect that orchestration and runtime would get more interesting if complex security policies were modelled in the application definition.

Given that security spans all the layers and is a complex topic, I'm not sure what we'd add at the current level of detail. 

 

b) Not sure if this is the group to address this: I feel that no consistent implementation of, or standard for, a service directory exists. The most consistent yellow pages we seem to have is DNS. For the new generation of applications, is that enough? Should we call out a service directory under service management?
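
For concreteness, "DNS as yellow pages" usually means SRV lookups, as in this minimal Go sketch (the service and zone names are illustrative):

// Sketch: DNS SRV records as a bare-bones service directory.
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // Find instances of a hypothetical "api" service over TCP in
    // the example.com zone, i.e. _api._tcp.example.com.
    _, addrs, err := net.LookupSRV("api", "tcp", "example.com")
    if err != nil {
        log.Fatal(err)
    }
    for _, srv := range addrs {
        // DNS yields host:port plus static priority/weight, but no
        // health, metadata, or change notification -- arguably the
        // gap a richer service directory would fill.
        fmt.Printf("%s:%d (priority %d, weight %d)\n",
            srv.Target, srv.Port, srv.Priority, srv.Weight)
    }
}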

Service naming, discovery, load balancing, and routing (service fabric/mesh approaches) are intended to be covered by slide 6. Is there a specific terminology clarification that you'd like to see? Or would you like us to merge the "Coordination" and "Service Management" sub-bullets into a single list?

What exactly do you mean by "service directory"?

From: cncf-toc-bounces@... [mailto:cncf-toc-bounces@lists.cncf.io] On Behalf Of Chris Aniszczyk via cncf-toc
Sent: Monday, October 24, 2016 7:15 AM
To: cncf-toc@...
Subject: [cncf-toc] [VOTE] End User Reference Architecture v1.0

 

Last week at the CNCF TOC meeting, we discussed issues with the CNCF Reference Architecture and felt it was ready to finalize (and much better than what we had before):

This is a call to formalize the reference architecture, so TOC members please vote!

 

--

Chris Aniszczyk (@cra) | +1-512-961-6719





Re: [VOTE] End User Reference Architecture v1.0

Brian Grant
 

YES

On Mon, Oct 24, 2016 at 4:16 AM, Alexis Richardson via cncf-toc <cncf-toc@...> wrote:
YES

On Mon, Oct 24, 2016 at 12:15 PM Chris Aniszczyk via cncf-toc <cncf-toc@...> wrote:
Last week at the CNCF TOC meeting, we discussed issues with the CNCF Reference Architecture and felt it was ready to finalize (and much better than what we had before):


This is a call to formalize the reference architecture, so TOC members please vote!

--
Chris Aniszczyk (@cra) | +1-512-961-6719



Re: Re: [VOTE] End User Reference Architecture v1.0

Brian Grant
 

On Mon, Oct 24, 2016 at 6:04 AM, Yaron Haviv via cncf-toc <cncf-toc@...> wrote:
Chris, 

Probably too late to comment, but looking at the charts it seems like we are missing a notion of resource binding/dependency, similar to Cloud Foundry.

E.g., a web microservice binds to (or depends on) a database microservice with a certain URL; this helps orchestration determine the provisioning steps, and helps the app find the resources it builds on.
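
As a minimal sketch of what such a binding can look like from the application's side, assuming the orchestrator resolves the declared dependency at deploy time and injects it as an environment variable (the DATABASE_URL name is a common convention, hypothetical here):

// Sketch: a web microservice consuming a database binding injected
// by the orchestrator. The DATABASE_URL variable name is hypothetical.
package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    // The app does not hard-code where the database lives; the
    // binding declared in the application definition surfaces here.
    dbURL := os.Getenv("DATABASE_URL")
    if dbURL == "" {
        log.Fatal("missing DATABASE_URL binding")
    }
    fmt.Println("connecting to", dbURL)
    // ... open the connection with the driver of choice ...
}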

Hi, Yaron.

This layer diagram is extremely high-level, and the explanations of the levels are intended to be descriptive rather than exhaustive, so it doesn't preclude the service broker model. 


Yaron 



Sent from my Samsung device


-------- Original message --------
From: Chris Aniszczyk via cncf-toc <cncf-toc@...>
Date: 24/10/2016 14:15 (GMT+02:00)
To: cncf-toc@...
Subject: [cncf-toc] [VOTE] End User Reference Architecture v1.0

Last week at the CNCF TOC meeting, we discussed issues with the CNCF Reference Architecture and felt it was ready to finalize (and much better than what we had before):


This is a call to formalize the reference architecture, so TOC members please vote!

--
Chris Aniszczyk (@cra) | +1-512-961-6719




Re: peanut-gallery thoughts about GRPC

Brandon Philips <brandon.philips@...>
 

On gRPC and HTTP 1.x: I think the best way to bring gRPC to the HTTP 1.x world is via OpenAPI (formerly Swagger) and JSON; see the blog post here: http://www.grpc.io/blog/coreos

We do this in etcd v3: we provide endpoints for both HTTP 2.x + gRPC and HTTP 1.x + JSON.
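
As a sketch of that pattern, here is roughly what the grpc-gateway approach from the linked post looks like in Go. The pb package and RegisterEchoHandlerFromEndpoint stand in for whatever the proto toolchain generates for a given annotated service; the names are illustrative:

// Sketch: a JSON/HTTP 1.x front end for a gRPC service, in the style
// of grpc-gateway. The generated registration function and pb package
// are hypothetical; the gateway emits one per annotated service.
package main

import (
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/runtime"
    "golang.org/x/net/context"
    "google.golang.org/grpc"

    pb "example.com/echo/pb" // hypothetical generated gateway code
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // The gateway translates REST+JSON requests into gRPC calls
    // against the real gRPC server listening on :50051.
    mux := runtime.NewServeMux()
    opts := []grpc.DialOption{grpc.WithInsecure()}
    err := pb.RegisterEchoHandlerFromEndpoint(ctx, mux, "localhost:50051", opts)
    if err != nil {
        log.Fatal(err)
    }

    // HTTP 1.x clients speak JSON to :8080, while gRPC clients keep
    // speaking HTTP/2 to :50051 -- the etcd v3 arrangement.
    log.Fatal(http.ListenAndServe(":8080", mux))
}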



[devroom-managers] "Cloud and Monitoring" and "Containers and Microservices" devrooms Joint Call for Proposals

Chris Aniszczyk
 

FYI: CNCF is also sponsoring FOSDEM this year

Begin forwarded message:

From: Josh Berkus <jberkus@...>
Date: October 24, 2016 at 8:29:38 PM GMT+2
To: Luna Duclos <luna@...>, fosdem@...
Subject: [devroom-managers] "Cloud and Monitoring" and "Containers and Microservices" devrooms Joint Call for Proposals

Folks:

We're all about the new container cloud this FOSDEM.  The "Linux
Containers and Microservices" devroom will cover Linux containers,
orchestration, management, CI/CD, and container security.  "Monitoring
and Cloud" will cover monitoring microservices and containers, cloud
networking, metrics, and more.

https://cncf.io/news/events/2017-02-04/fosdem-2017

As the two devrooms are being organized by different teams in the Cloud
Native community, we have decided to issue a joint CfP.  Please express
your preference for the devroom you want; the organizers will pick
talks which are appropriate for each devroom and offer you a swap if
required.

Topics we are interested in include:

   Monitoring containerized services
   Automating cloud deployments
   Developing and administering microservices
   Container orchestration
   Continuous Integration & Deployment
   Prometheus, Kubernetes, OpenTracing, Docker, CRIO, etc.
   New projects and technology
   Other container and cloud native talks

...but if your talk or project involves new cloud technologies and/or
containerized microservices, give us a pitch!

We are also looking for Lunchtime Lightning Talks.

Submit by November 26th:
https://cncf.io/news/events/2017-02-04/fosdem-2017

--
--
Josh Berkus
Project Atomic
Red Hat OSAS
_______________________________________________
devroom-managers mailing list
devroom-managers@...
https://lists.fosdem.org/listinfo/devroom-managers