On Fri, Mar 31, 2017 at 11:37 AM, Anthony Skipper wrote:
We would like to see a separate group working on serverless as well. At
Galactic Fog we have had a serverless implementation on DC/OS for about 6
months, and we plan to release our Kubernetes-native implementation in the
next couple of weeks in the run-up to DockerCon.
From our perspective we would like the following things:
Agreement on marketing terms. (Call it Serverless or Lambda; everyone
hates FaaS, but serverless is problematic as well.)
Agreement on these terms is probably a bit much to expect. For some
time I was hoping we'd settle on "Jeff". While I'm not a lawyer,
Lambda seems like the kind of thing that will turn into a trademark
issue at some point. I think we're stuck with serverless, and when
offering components that fit in a serverless stack we'll have to stick
with things like "serverless function runtime," FaaS, and similar with
a mind to two different audiences.
Audience A: Technical audience, knows serverless well, and wants to
know exactly what piece your project is providing. So you can say
things like "event router" and function runtime to explain where it
fits exactly. This audience also has some potential contributors in it
if the project is OSS.
Audience B: Thinks of serverless-the-concept as it relates to
developer experience, and would be looking to figure out what they can
do with it generally. The focus for those materials has to be on
distinguishing it from plain containers, PaaS, etc., more than on the
underlying thing your project is going to provide. It's already
getting kind of muddy, since Amazon and others are rebranding other
aaS offerings as "serverless," such as DynamoDB.
Agreement on core capabilities, from our perspective they are:
API Gateway Support
Config / Secret Capabilities
Performance/Scalability Capabilities (e.g. Gestalt and Fission are a couple
of orders of magnitude faster than Amazon, and that changes the art of the
possible)
I agree with these, but I'd put performance as non-core because there
are plenty of workloads where it doesn't matter all that much. Think
about the class of back-office examples that are common: transforming
streams, resizing images, propagating changes to other systems. As
long as they get done, the difference between 100ms and 1000ms can
pass unnoticed, since each event eventually spawns a new function
invocation and the queue/event system handles backpressure transparently.
Then there's the category of user-facing synchronous workloads that
you'd see an API Gateway used for, where perf matters and users just
abandon anything that's perceivably slow.
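To make the first two capabilities above concrete, here's a minimal, hedged sketch of what a function behind an API gateway might look like, with config/secrets injected via environment variables. The handler name, event shape, and `GREETING` variable are all hypothetical illustrations, not any particular platform's API:

```python
import json
import os

def handler(event, context=None):
    """Toy function behind an API gateway: the gateway maps an HTTP
    request to an `event` dict, and config/secret values arrive via
    environment variables injected by the platform (assumed here)."""
    greeting = os.environ.get("GREETING", "hello")  # hypothetical config value
    name = event.get("name", "world")
    # Gateway-style response: status code plus a JSON-serialized body.
    return {"statusCode": 200,
            "body": json.dumps({"message": f"{greeting}, {name}"})}
```

The point is only that the gateway and config/secret plumbing sit outside the function, so the same handler can run unchanged wherever those capabilities are provided.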
Non-Core Capabilities
Ability to interoperate between serverless implementations (e.g. migration
between them, including up to and back from public cloud)
Data management capabilities (exposing filesystems or other services in)
Making the implementation of the serverless solution portable across
data-layer integration approaches.
I'd probably bump chaining up to core, since all but the very simplest
projects end up with a series of functions that either call each
other, or create events that invoke others.
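The chaining pattern described above can be sketched in a few lines: functions subscribe to event types, and each function's output may carry follow-up events that invoke further functions. Everything here (the router, the `image.uploaded` event, the function names) is a hypothetical in-process illustration, not any real platform's API:

```python
# Toy event router: maps event types to subscribed functions.
SUBSCRIPTIONS = {}

def subscribe(event_type):
    def register(fn):
        SUBSCRIPTIONS.setdefault(event_type, []).append(fn)
        return fn
    return register

def dispatch(event):
    """Invoke every subscriber; recursively dispatch any follow-up
    events a function emits, chaining functions together."""
    results = []
    for fn in SUBSCRIPTIONS.get(event["type"], []):
        out = fn(event)
        results.append(out)
        for follow_up in out.get("events", []):
            results.extend(dispatch(follow_up))
    return results

@subscribe("image.uploaded")
def make_thumbnail(event):
    return {"thumbnail": event["key"] + ".thumb",
            "events": [{"type": "thumbnail.created", "key": event["key"]}]}

@subscribe("thumbnail.created")
def notify(event):
    return {"notified": event["key"], "events": []}

results = dispatch({"type": "image.uploaded", "key": "cat.png"})
```

A real platform would make `dispatch` asynchronous and durable, but the core-versus-non-core question is the same: once two functions exist, something has to route the event from one to the other.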
I wouldn't worry too much about the other big-vendor stuff right now.
Serverless is at such an early stage that any R&D done by anyone is really
helpful and not really competitive or problematic. (e.g. OpenWhisk has
really cool ideas, and Amazon's attempts to standardize Lambda portability
show an approach that is helpful for discussion)
On Fri, Mar 31, 2017 at 11:17 AM, Ryan S. Brown via cncf-toc wrote:
If you haven't heard Amazon & others raising a general ruckus about serverless
lately, I sincerely hope your vacation to the backwoods was relaxing.
I'm Ryan, and I've been interested in FaaS/serverless for a while now.
Also CC'd on this message are Ben Kehoe (iRobot) and Peter Sbarski
(ServerlessConf/A Cloud Guru). Lately, it seems the open-source interest has
been picking up significantly in addition to all the use in the public
cloud. Just to name a few FaaS/serverless provider projects: Fission &
Funktion on Kubernetes, FaaS on Swarm, and standalone OpenWhisk
(primarily IBM-driven). Even Microsoft's Azure Functions is OSS.
A cynical observer might say that the MS/IBM efforts are open to help
compensate for them starting so late relative to Lambda, but either way the
result is a lot of open or nominally open projects in the FaaS/serverless
area. And cloud providers are looking to embed their various FaaS offerings
deeper into their clouds by integrating them with cloud-specific events,
making FaaS the way to customize how their infrastructure reacts to events.
So why am I writing this email? Well, I've been thinking about serverless
as the next step in "cloud native" developer tooling. Look back to the state
of the art in the '00s and you'll see the beginnings of
autoscaling/immutable infrastructure, then move ahead a bit to containerized
applications, then container schedulers, and you can see a trend towards
shorter and shorter lifespans of persistent machines/processes.
Function-as-a-Service is another step in that direction where containers
live for seconds rather than persistently listening. This trajectory seems
pretty intuitive as a developer: as lower layers of the stack become more
standard I should be able to automate/outsource management of them.
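That "containers live for seconds" shift can be sketched as follows: instead of a process that listens persistently, the platform invokes a handler once per event and may discard the whole environment afterwards. The `faas_runtime` stand-in and event shapes are hypothetical, just to illustrate the lifecycle contrast:

```python
def handler(event):
    """One invocation per event: all state must come from the event or
    from external services, since nothing can rely on the process
    outliving this single call."""
    return {"processed": event["id"]}

def faas_runtime(events):
    """Toy stand-in for the platform: spawn, invoke, tear down.
    On a real platform each call might land in a freshly started
    container that is recycled seconds later."""
    for event in events:
        yield handler(event)
```

The developer-facing consequence is exactly the trend described above: the lower layers (process lifetime, scaling, scheduling) become the platform's problem rather than the application's.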
I'd like to help the TOC think about where (or whether) serverless/FaaS
should fit into the CNCF's plans for the future. Do you want to talk about
what serverless actually is? Figure out how various OSS fits into a
serverless ecosystem? Compare how FaaS provided in the public cloud differs
from what users need in a hybrid/on-prem environment? Ask away - Ben, Pete,
and I are all here to help out.
Ryan Brown / Senior Software Engineer / Red Hat, Inc.