> Can you please list which ones are notable? The main one seems to be "streaming" replies.
Streaming is certainly notable, but multiplexing and flow control are also very important here. Without HTTP/2's flow-control features, deployments can easily experience OOMs; we have seen this in practice, and so have community users. This is particularly important for infrastructural APIs. Header compression and multiplexing (i.e., avoiding additional TLS negotiations) are very important for mobile clients and embedded devices.
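To make the flow-control point concrete, here is a toy model of HTTP/2-style credit-based flow control. This is an illustration only, not gRPC's actual implementation (real gRPC delegates this to its HTTP/2 transport); the class and window size are invented for the sketch. The key idea is that a sender may only transmit as many bytes as the receiver has advertised, so a slow consumer back-pressures the producer instead of forcing unbounded buffering (the OOM scenario above):

```python
# Toy sketch of credit-based flow control, loosely modeled on HTTP/2's
# stream windows (RFC 7540). Illustrative only; names are hypothetical.
class FlowControlledStream:
    def __init__(self, window=65535):
        self.window = window          # bytes the sender may still send
        self.buffered = []            # frames held back until credit arrives

    def send(self, frame: bytes) -> bool:
        """Transmit only if the receiver has advertised enough credit."""
        if len(frame) <= self.window:
            self.window -= len(frame)
            return True
        # Back-pressure: queue locally instead of overwhelming the peer.
        self.buffered.append(frame)
        return False

    def window_update(self, increment: int) -> None:
        """Receiver grants more credit (a WINDOW_UPDATE); flush what now fits."""
        self.window += increment
        still_blocked = []
        for frame in self.buffered:
            if len(frame) <= self.window:
                self.window -= len(frame)
            else:
                still_blocked.append(frame)
        self.buffered = still_blocked
```

Without the `window` check, a fast producer streaming to a slow consumer would accumulate frames without bound, which is exactly the failure mode described above.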
> This shouldn't prevent someone building a http 1 compatibility layer, if that what helps people with adoption.
We are not opposed to it. We have not built it yet, though we definitely considered it. However, a compatibility layer has significant complexity costs: when we looked at it, the matrix of testing and verifying across all language implementations, while maintaining good performance, seemed very expensive. And it is not just HTTP/1.x you need to consider if seamless compatibility with the existing ecosystem is the goal; you also have to consider which HTTP/1.x features actually work well in that ecosystem. For example: to build a streaming solution, the gRPC protocol relies on trailers. While trailers are an HTTP/1.x feature, many proxies and libraries do not support them. To stay compatible with the existing ecosystem, we would then need to consider alternate schemes, and maintain multiple schemes that work in different scenarios.
While implementing gRPC on top of an HTTP/1.1 downgrade is feasible, such an implementation would lose many of gRPC's advantages: efficient use of network resources and high performance.
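The trailer dependency mentioned above can be sketched in a few lines. Per the gRPC-over-HTTP/2 spec, each message on the wire is length-prefixed (a 1-byte compressed flag plus a 4-byte big-endian length), and the final RPC status travels in HTTP trailers (`grpc-status`, `grpc-message`) because a streaming response's outcome is only known after the body has already started. The helper names and the `"OK"` message below are illustrative:

```python
import struct

def frame_message(payload: bytes, compressed: bool = False) -> bytes:
    """gRPC length-prefixed message framing: 1-byte compressed flag,
    4-byte big-endian length, then the payload bytes."""
    return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

def response_stream(messages, status: int = 0):
    """Sketch of a streaming response: any number of framed messages in the
    body, then the status as trailers -- the part many HTTP/1.x proxies
    and libraries silently drop."""
    body = b"".join(frame_message(m) for m in messages)
    trailers = {
        "grpc-status": str(status),
        "grpc-message": "OK" if status == 0 else "error",
    }
    return body, trailers
```

If an intermediary strips the trailers, the client receives a complete-looking body but can never learn whether the RPC actually succeeded, which is why a plain HTTP/1.x downgrade needs alternate status-delivery schemes.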
> How much does that matter to real-world users for the practical cases that such an implementation would facilitate?
When we talked to users, their biggest concern was migration from their existing systems, which have different payloads and different conventions. So having just an HTTP/1.x transport did not satisfy their use cases. Hence, many folks preferred a proxy solution that allowed them to keep supporting the existing system while building a new system that interoperated with it. It would be good to understand the specific use cases here.
The protocol was designed with this in mind, so we are certainly not opposed to it; it is a subject we have heard a lot about from the community. It is probably worth pointing out that we have tested this with curl and with HTTP/1-to-HTTP/2 converting proxies, and it functions just fine. The decision point is whether the conversion/compatibility layer is built into the gRPC libraries themselves, or into a separate set of compatibility libraries or proxies.
Three primary motivations for this desire have come up in our conversations with users: browser support, the lack of upstream HTTP/2 support in nginx and Cloud layer-7 LBs, and library/platform coverage.
We are working on a community proposal for a protocol adaptation to enable browser support, which will allow as much of gRPC to work as the limitations of the browser platform (specifically, the limitations of XHR) permit. We have also been working with the browser vendors to improve their networking APIs so that gRPC can be supported natively, at which point we should be able to phase out this adaptation. The gRPC-to-JSON-REST gateway pattern has also served the browser use case quite well and is something we will continue to invest in.
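As one way the gateway pattern is commonly wired up, a service definition can carry an HTTP mapping annotation that a transcoding gateway (grpc-gateway, for instance) uses to expose the RPC as plain REST for browsers. The service and message names below are hypothetical; only the `google.api.http` annotation mechanism is the real, existing piece:

```proto
syntax = "proto3";

import "google/api/annotations.proto";

// Illustrative service: the annotation tells a JSON-REST gateway how to
// map an HTTP request onto this RPC.
service Greeter {
  rpc SayHello(HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/v1/greeter/{name}"  // a browser can simply GET /v1/greeter/Jane
    };
  }
}

message HelloRequest { string name = 1; }
message HelloReply { string message = 1; }
```

The gateway translates the JSON request/response at the edge, so browser clients need no gRPC support at all while backend services keep speaking native gRPC.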
We are actively working on getting upstream HTTP/2 support into nginx. This has taken longer than we would like; in the meantime, there are a large number of other proxy solutions of decent quality available. We are working with the large Cloud vendors on this too.
With regard to library coverage, gRPC now scores quite well against the large language × platform matrix of potential users, so there are very few deployment scenarios that are not covered. The built-in HTTP APIs in many platforms are quite poor, both in their ability to express the HTTP protocol and in their efficiency. There are many attempts to improve these APIs (the Fetch API for browser and Node, the new HTTP API proposal for Java 10, etc.), but they are some way off. The protocol adaptation we intend for browsers can be used in the interim.
> I'm sure we can find CNCF community engineers who would be willing and able to have a go. How hard can it be?
Based on our initial analysis, the complexity lies in ensuring support across all implementations; our experience shows that covering the interop matrix of features, testing, and performance takes a lot of effort. We would be happy to support a community effort, but we should first have a design conversation about whether such support makes more sense in the gRPC libraries or in a set of separate accompanying libraries or proxies, and verify it against users' different use cases. We believe the fundamental adoption issues can be addressed without inheriting a significant amount of protocol baggage.
> I'm interested to understand how you measure this.
I have not seen comprehensive data on how much traffic is HTTP/2 vs. SPDY vs. HTTP/1.x. Major browsers and most high-traffic sites support HTTP/2 well. Some charts I have seen (https://www.keycdn.com/blog/http2-statistics/) indicate good adoption, but I do not know how comprehensive they are. Internally at Google, we have seen the largest share of traffic being HTTP/2, followed by SPDY and QUIC.
> Not having an obvious Joe User adoption path will impede gRPC from being in some sense universal.
> It may also lead to people feeling let down if they hear (a) that gRPC is the new hotness, and that everyone
> should try it, but (b) it has compatibility problems that may not be resolved.
I fully agree with this concern and would love solutions/help. Whenever we considered it, adding more useful features and improving performance seemed more important to users than adding a compatibility feature, unless that feature gave them a complete migration path from their existing system. That may be due to the set of users we talked to; I would love more input/data here.