
Re: [VOTE] CNCF Sandbox proposal

Richard Hartmann
 

On Thu, Mar 1, 2018 at 4:04 PM, alexis richardson <alexis@...> wrote:

As a co-author of this doc I want to endorse the direction as strongly
as possible. If you feel you may want to vote -1, please do so, but
ideally tell us why, so that we can improve the doc.

I think it's a good direction to take.

As per the discussion in the doc, the name has connotations which are
contrary to the intended meaning. That being said, I couldn't come up
with a better name, either back then or in the last two days; neither
could others.


Long story short: No need to block on this; the improved process far
outweighs any potential naming confusion.


Richard


Re: RexRay follow up

alexis richardson
 

Clint,

That's promising. What do the CSI people think?

BTW, the name "REX-Ray" seems designed to direct the layperson's
attention as far as possible from the stated purpose of the project.
Might a more descriptive name help here?

a

On Sat, Mar 3, 2018 at 5:28 PM, Kitson, Clinton <clinton.kitson@...> wrote:
Alexis,

I asked the same question ahead of time and got a positive response.

https://github.com/rexray/rexray/issues/1167





Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson
________________________________
From: cncf-toc@... [cncf-toc@...] on behalf of alexis
richardson [alexis@...]
Sent: Friday, March 2, 2018 11:57 PM
To: cncf-toc@...
Cc: cncf-toc@...

Subject: Re: [cncf-toc] RexRay follow up

If rexray and CSI benefit from "co-evolution" then that might make sense.
Is that the case? What does the community think?

On Sat, 3 Mar 2018, 05:05 Bassam Tabbara, <bassam@...> wrote:

Thanks Dan, I think having the spec and implementation(s) in the same
foundation makes sense.

In this case, Rex-Ray is not an implementation. If I understood it
correctly, it's a set of tools, packaging, and libraries that aid in
writing CSI plugins. So it feels a bit different.

It's almost like saying there is OpenTracing,
OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two
communities are open to that.

On Mar 2, 2018, at 4:48 PM, Dan Kohn <dan@...> wrote:

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net,
none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems
feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com

On Fri, Mar 2, 2018 at 5:36 PM, Mueller, Garrett
<Garrett.Mueller@...> wrote:

I see where you’re coming from Clint, but in this case I agree with
Bassam. To follow on to what he said, I’m very concerned that this would
become yet another place where the interface to storage would need to be
discussed, and I think that’s a really bad move right now.



As a community, we already have at least three different regular storage
meetings: the CNCF storage WG, the k8s storage SIG, and CSI. You have to
track at least those to maintain even a basic idea of what's going on. And
if you really want to be involved with k8s, there's already a lot more than
that to deal with. As the other orchestrators become more CSI-aware, there
will likely be storage meetings for each of them as well.



And the line between the orchestrators and CSI moves all the time.
For about a year we've been talking about snapshots in k8s, and in just the
past week there's been discussion about moving that into CSI itself. If we
make that move, it isn't just done on paper; it materially changes the
interfaces and how they need to be implemented.



Adding another thing into the mix in between makes the lines even blurrier
than they already are, and makes an already difficult problem untenable.



For this reason, I think CSI needs to reconsider its "spec only"
stance and provide some basic enablement, as well as mechanisms that make it
easy for different people to experiment in and around it instead. Each
orchestrator is going to have its own CSI implementation to deal with too.
Please, no more! :)



..Garrett

Technical Director @ NetApp

https://netapp.io/



From: cncf-toc@... <cncf-toc@...> On Behalf Of
Kitson, Clinton
Sent: Friday, March 2, 2018 12:28 PM
To: cncf-toc@...


Subject: Re: [cncf-toc] RexRay follow up




https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37



The best place to start here is to refer to the slide above. CSI, by its
name, is an interface specification. It has always been the intent of the
group to keep it CO-agnostic and focused on the key primitives that will
enable volume storage. CSI tackles one fragmentation problem: a single
spec to implement. But there are many other aspects to creating production-grade
plugins that up to this point have intentionally been kept out of the
CSI spec. So I think an implementation framework that the storage
ecosystem can work on together will be important for the other aspects of our
fragmentation problem. REX-Ray is this framework for CSI, abstracted from any
particular storage platform, and it will 1) provide CO-centric deployment and
documentation, 2) offer a great user experience through common packaging, docs,
and configuration, which is important to operators and to trusting CSI, and
3) be a placeholder for proving out functionality that may or may not
eventually end up in the CSI spec.







Clint Kitson

Technical Director for {code}

CNCF Governing Board Member

---

email: Clinton.Kitson@...

mobile: "+1 424 645 4116"

team: theCodeTeam.com

twitter: "@clintkitson"

github: github.com/clintkitson

________________________________

From: cncf-toc@... [cncf-toc@...] on behalf of Gou
Rao [grao@...]
Sent: Thursday, March 1, 2018 10:21 AM
To: cncf-toc@...
Cc: CNCF TOC
Subject: Re: [cncf-toc] RexRay follow up

I think Clint should chime in here, but I had seen RexRay as more than
just a CSI implementation... as a multi-platform storage orchestrator?
Maybe some clarification on what that means could help. For example,
with Mesosphere, we use frameworks for complex stateful applications (take
Cassandra, for example). Would RexRay help orchestrate storage provisioning
(via CSI) to a framework like that?



On Thu, Mar 1, 2018 at 9:59 AM, alexis richardson <alexis@...>
wrote:

Yes I think so. But really I am a storage idiot. Who else could we ask?



On Thu, 1 Mar 2018, 17:53 Bassam Tabbara, <bassam@...> wrote:

I’m glad to see the strong alignment between Rex-Ray and CSI.



Would it make sense for Rex-Ray to be even more closely aligned with CSI,
i.e. as a set of libraries and tools for people wanting to build CSI
implementations? For example, CSI has a placeholder repo (see
https://github.com/container-storage-interface/libraries) for libraries and
tools, similar to the Kubernetes incubator repo. Could RexRay become part of
that, or does it need to be its own top-level project?



I worry about confusing developers and end-users with another CNCF
project that attempts to achieve the same goal as CSI.

On Mar 1, 2018, at 8:48 AM, alexis richardson <alexis@...> wrote:



All - questions?

(thanks Clint, this is super helpful)


On Tue, Feb 20, 2018 at 6:00 PM, Kitson, Clinton
<clinton.kitson@...> wrote:


Correct Brian, REX-Ray should be transparent to end users in this space
and provides an important service by helping connect apps to storage.
Operators of clusters are the ones that should be very aware of it, as it
would provide trusted, higher-quality plugins that are built on top of the
existing CSI spec.

REX-Ray stats: Recently REX-Ray went through some refactoring to
accommodate the CSI architecture changes that needed to take place. This
meant rolling in the libStorage functionality, which unfortunately skews
the numbers a bit. The {code} team has been the primary maintainer of the
framework, while collaborators have mainly focused on building drivers.
Other storage companies who understand the complexity involved in building
a solid CSI implementation see the value and commonality that can be
addressed by REX-Ray and are interested in collaborating if supported via a
foundation.

Production users: Yes, REX-Ray is being used in production by some of the
users listed in the slides. Up to this point, usage levels have been tied
closely to production deployments of Mesos & Docker.

Sandbox: I believe the numbers and history justify incubation, but we can
discuss it.

Control plane: REX-Ray used to have its own control plane (the libStorage
API) prior to CSI. Most recently, we have made architectural changes to
adhere to CSI. When libStorage was its control plane, there was integration
work performed to expose libStorage as a volume plugin and, additionally,
to integrate it with Cloud Foundry. Today, anyone who implements CSI on the
cluster orchestrator side can talk with any REX-Ray plugin.

Data plane: REX-Ray is not involved in the data plane of storage
operations. It is an orchestrator and simply gets two components
(local/remote storage & an OS) connected. It essentially performs the exact
same steps that someone would manually perform to get these two components
communicating, and the reverse on teardown.

Persistent state: It is completely stateless today.




Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson










Re: RexRay follow up

Kitson, Clinton <clinton.kitson@...>
 

Alexis,

I asked the same question ahead of time and got a positive response.

https://github.com/rexray/rexray/issues/1167





Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
--- 
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson


Re: RexRay follow up

Yuri Shkuro
 

It's almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

#2 is actually https://github.com/opentracing-contrib/, which contains instrumentation of popular frameworks using the OpenTracing APIs. I think it's a grey area whether contrib is a part of CNCF.

Re: RexRay follow up

alexis richardson
 

If rexray and CSI benefit from "co-evolution" then that might make sense. Is that the case? What does the community think?


Re: RexRay follow up

Bassam Tabbara
 

Thanks Dan, I think having the spec and implementation(s) in the same foundation makes sense.

In this case, Rex-Ray is not an implementation. If I understood it correctly, it's a set of tools, packaging, and libraries that aid in writing CSI plugins. So it feels a bit different.

It's almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two communities are open to that. 

Re: RexRay follow up

Dan Kohn <dan@...>
 

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net, none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com

Re: RexRay follow up

Mueller, Garrett <Garrett.Mueller@...>
 

I see where you’re coming from Clint, but in this case I agree with Bassam. To follow on to what he said, I’m very concerned that this would become yet another place where the interface to storage would need to be discussed, and I think that’s a really bad move right now.

 

As a community, we already have at least three different regular storage meetings: the CNCF storage WG, the k8s storage SIG, and CSI. You have to track at least those to maintain even a basic idea of what's going on. And if you really want to be involved with k8s, there's already a lot more than that to deal with. As the other orchestrators become more CSI-aware, there will likely be storage meetings for each of them as well.

 

And the line between the orchestrators and CSI moves all the time. For about a year we've been talking about snapshots in k8s, and in just the past week there's been discussion about moving that into CSI itself. If we make that move, it isn't just done on paper; it materially changes the interfaces and how they need to be implemented.

 

Adding another thing into the mix in between makes the lines even blurrier than they already are, and makes an already difficult problem untenable.

 

For this reason, I think CSI needs to reconsider its "spec only" stance and provide some basic enablement, as well as mechanisms that make it easy for different people to experiment in and around it instead. Each orchestrator is going to have its own CSI implementation to deal with too. Please, no more! :)

 

..Garrett

Technical Director @ NetApp

https://netapp.io/

 

Re: RexRay follow up

mueller@...
 

I see where you're coming from, Clint, but in this case I agree with Bassam. To follow on from what he said, I'm very concerned that this would become yet another place where the interface to storage needs to be discussed, and I think that's a really bad move right now.

 

As a community, we already have at least three different regular storage meetings: the CNCF Storage WG, the Kubernetes storage SIG, and CSI. You have to track at least those to maintain even a basic idea of what's going on. And if you really want to be involved with k8s, there's already a lot more than that to deal with. As the other orchestrators become more CSI-aware, there will likely be storage meetings for each of them as well.

 

And the line between the orchestrators and CSI moves all the time. For about a year we've been talking about snapshots in k8s, and in just the past week there's been discussion about moving that into CSI itself. If we make that move, it isn't just done on paper; it materially changes interfaces and how they need to be implemented.

 

Adding another thing into the mix in between makes the lines blurrier than they already are, and an already difficult problem much worse.

 

For this reason, I think CSI needs to reconsider its "spec only" stance and provide some basic enablement, as well as mechanisms that make it easy for different people to experiment in and around it instead.

Please, no more! :)

 

..Garrett

Technical Director @ NetApp

https://netapp.io/


Re: [VOTE] CNCF Sandbox proposal

Sam Lambert <samlambert@...>
 

+1 binding.

On Thu, Mar 1, 2018 at 12:00 PM, Daniel Bryant <db@...> wrote:
+1 (non-binding)

On Wed, Feb 28, 2018 at 1:42 PM, Chris Aniszczyk <caniszczyk@linuxfoundation.org> wrote:
There's been a desire within the CNCF TOC and community to provide further clarity around project maturity levels in CNCF, and after a month of discussion this has resulted in the CNCF Sandbox proposal: https://github.com/cncf/toc/pull/92

When we initially created the Inception project level, it was intended to provide an avenue for technically interesting early-stage projects that were beneficial to the cloud-native community. We are transitioning Inception projects to the Sandbox. When we say that Sandbox projects are "early stage," this covers the following examples:

- New projects that are designed to extend one or more CNCF projects with functionality or interoperability libraries. In the case of Kubernetes, the Sandbox is intended as a home for projects that would previously have started in the Kubernetes Incubator.
- Independent projects that fit the CNCF mission and offer the potential of a novel approach to an existing functional area (or an attempt to meet an unfulfilled need)
- Projects commissioned or sanctioned by the CNCF, including initial code for CNCF WG collaborations, and "experimental" projects
- Any project that realistically intends to join CNCF Incubation in the future and wishes to lay the foundations for that

Please vote (+1/0/-1) by replying to this thread; the full proposal is located here: https://github.com/cncf/toc/pull/92

Remember that only TOC votes are binding, but we do appreciate non-binding votes from the community as a sign of support! Note that if the proposal passes, CNCF staff will update the website and all other marketing collateral to reflect this change.

--
Chris Aniszczyk (@cra) | +1-512-961-6719




Re: RexRay follow up

Kitson, Clinton <clinton.kitson@...>
 

https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37

The best place to start here is to refer to the slide above. CSI is, by its name, an interface specification. It has always been the intent of the group to keep it CO-agnostic and focused on the key primitives that enable volume storage. CSI tackles the fragmentation problem by giving everyone a single spec to implement. But there are many other aspects to creating production-grade plugins that have, up to this point, intentionally been kept out of the CSI spec. For these, I think an implementation framework that the storage ecosystem can collaborate on will be important in addressing the other aspects of our fragmentation problem. REX-Ray is this framework for CSI, abstracted from any particular storage platform, and it will 1) provide CO-centric deployment and documentation, 2) deliver a great user experience through common packaging, docs, and configuration, which is important to operators and to trust in CSI, and 3) serve as a proving ground for functionality that may or may not eventually end up in the CSI spec.
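
As a rough illustration of the scaffolding such a framework can factor out:
below is a minimal sketch of the gRPC boilerplate every CSI plugin needs
before any storage-specific logic is written. It assumes the CSI spec's Go
bindings (package paths and interfaces have shifted between spec versions);
the plugin name and socket path are invented.

    // identity.go: bare-bones CSI Identity service.
    package main

    import (
        "context"
        "log"
        "net"
        "os"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
    )

    type identity struct{}

    func (identity) GetPluginInfo(ctx context.Context, _ *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
        // Name and version identify the driver to the CO.
        return &csi.GetPluginInfoResponse{Name: "com.example.csi.demo", VendorVersion: "0.1.0"}, nil
    }

    func (identity) GetPluginCapabilities(ctx context.Context, _ *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
        return &csi.GetPluginCapabilitiesResponse{}, nil
    }

    func (identity) Probe(ctx context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
        return &csi.ProbeResponse{}, nil
    }

    func main() {
        sock := "/var/run/csi/demo.sock"
        _ = os.Remove(sock) // clear a stale socket from a previous run

        lis, err := net.Listen("unix", sock)
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        srv := grpc.NewServer()
        csi.RegisterIdentityServer(srv, identity{})
        // A real plugin registers Controller and Node services too; that
        // per-driver scaffolding is what a shared framework would provide.
        log.Fatal(srv.Serve(lis))
    }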



Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
--- 
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson



Re: [VOTE] CNCF Sandbox proposal

Daniel Bryant
 

+1 (non-binding)




Re: [VOTE] CNCF Sandbox proposal

Mark Coleman <mark@...>
 

+1 non-binding







--
+31 652134960
Marketing Chair www.cncf.io


Re: RexRay follow up

Gou Rao <grao@...>
 

I think Clint should chime in here, but I had seen RexRay as more than just a CSI implementation... as a multi-platform storage orchestrator?  Maybe some clarification on what that means could help. For example, with Mesosphere we use frameworks for complex stateful applications (take Cassandra, for example).  Would RexRay help orchestrate storage provisioning (via CSI) to a framework like that?


Re: RexRay follow up

alexis richardson
 

Yes please do chime in!



Re: RexRay follow up

alexis richardson
 

Yes I think so.  But really I am a storage idiot. Who else could we ask?


On Thu, 1 Mar 2018, 17:53 Bassam Tabbara, <bassam@...> wrote:
I’m glad to see the strong alignment between Rex-Ray and CSI.

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations? For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools, similar to the Kubernetes incubator repo. Could RexRay become part of that, or does it need to be its own top-level project?

Re: RexRay follow up

Bassam Tabbara
 

I’m glad to see the strong alignment between Rex-Ray and CSI.

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations? For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools, similar to the Kubernetes incubator repo. Could RexRay become part of that, or does it need to be its own top-level project?

I worry about confusing developers and end-users with another CNCF project that attempts to achieve the same goal — CSI.



Re: [VOTE] CNCF Sandbox proposal

Ruben Orduz <ruben@...>
 

'sandbox' has the connotation of 1) the island of broken toys, and 2) incomplete, disposable, non-serious ideas or projects.

My 2c

On Thu, Mar 1, 2018 at 10:04 AM, Brian Grant via Lists.Cncf.Io <briangrant=google.com@...> wrote:
What do you not like about the name, and what other name would you prefer?

On Wed, Feb 28, 2018 at 11:13 AM, Richard Hartmann <richih@...> wrote:
On Wed, Feb 28, 2018 at 7:46 PM, Ruben Orduz <ruben@...> wrote:
> +1 non-binding on the spirit, -1 non-binding on the naming.

Same.






Re: [VOTE] CNCF Sandbox proposal

Yong Tang <ytang@...>
 

+1 non-binding




Re: RexRay follow up

alexis richardson
 

All - questions?

(thanks Clint, this is super helpful)

