RexRay follow up


mueller@...
 

No worries. My recommendation is simple: don't pull in REX-Ray as a separate CNCF project. Instead, allow the CSI community to decide how best to deliver the tooling necessary to make it easier for everyone to build, support, and extend CSI artifacts like storage plugins, and deliver all of that under the umbrella of CSI. Perhaps we will decide to pull in REX-Ray to do that, in whole or in part.

After all, CSI itself isn't even a CNCF project yet. The community believes it's heading in that direction, though. And once that happens, it will be better for everyone if there's one-stop shopping for all things CSI. Even before then, it's better for the community if we limit the number of places where CSI-related things are discussed and developed.

..Garrett


alexis richardson
 

Garrett,

Forgive me, I am not quite sure I follow.  What is your recommendation for rexray in the context of your description below?

A


On Sun, 4 Mar 2018, 15:18 Mueller, Garrett, <Garrett.Mueller@...> wrote:
If I’m taking your meaning correctly, co-evolution (where there are two independent projects that rely on each other) is precisely the situation I was arguing against.

If we do this right, I think there should be three things. CSI itself, orchestrator implementations of CSI, and CSI storage plugins. Each of those three things already co-evolve somewhat independently. Many of us are already working on all three.

A project that helps people write CSI things shouldn’t be a 4th thing, in my opinion. It should be something the CSI community creates and evolves together based directly on the spec that it’s writing.

Thanks,
..Garrett
Technical Director @ NetApp

From: cncf-toc@... <cncf-toc@...> on behalf of alexis richardson <alexis@...>
Sent: Saturday, March 3, 2018 2:57:24 AM
To: cncf-toc@...
Cc: cncf-toc@...

Subject: Re: [cncf-toc] RexRay follow up
If rexray and CSI benefit from "co evolution" then that might make sense.  Is that the case?  What does the community think?

On Sat, 3 Mar 2018, 05:05 Bassam Tabbara, <bassam@...> wrote:
Thanks Dan, I think having the spec and implementation(s) in the same foundation makes sense.

In this case, Rex-Ray is not an implementation. If I understood it correctly, it's a set of tools, packaging, and libraries that aid in writing CSI plugins. So it feels a bit different.

It's almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two communities are open to that.

On Mar 2, 2018, at 4:48 PM, Dan Kohn <dan@...> wrote:

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net, none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com

On Fri, Mar 2, 2018 at 5:36 PM, Mueller, Garrett <Garrett.Mueller@...> wrote:

I see where you’re coming from Clint, but in this case I agree with Bassam. To follow on to what he said, I’m very concerned that this would become yet another place where the interface to storage would need to be discussed, and I think that’s a really bad move right now.

 

As a community, we already have at least three different regular storage meetings: the CNCF Storage WG, the k8s Storage SIG, and CSI. You have to track at least those to maintain even a basic idea of what's going on. And if you really want to be involved with k8s, there's already a lot more than that to deal with. As the other orchestrators become more CSI-aware, there will likely be storage meetings for each of them as well.

 

And the line between the orchestrators and CSI moves all the time. For about a year we've been talking about snapshots in k8s, and in just the past week there's been discussion about moving that into CSI itself. If we make that move it isn't just done on paper; it has a material impact on interfaces and how they need to be implemented.

 

Adding another thing into the mix in between makes the lines blurrier than they already are, and it makes an already difficult problem untenable.

 

For this reason, I think CSI needs to reconsider its “spec only” stance and provide some basic enablement, as well as mechanisms that make it easy for different people to experiment in and around it. Each orchestrator is going to have its own CSI implementation to deal with too. Please, no more! :)

 

..Garrett

Technical Director @ NetApp

https://netapp.io/

 

From: cncf-toc@... <cncf-toc@...> On Behalf Of Kitson, Clinton
Sent: Friday, March 2, 2018 12:28 PM
To: cncf-toc@...
Subject: Re: [cncf-toc] RexRay follow up

 

https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37

 

The best place to start here is the slide linked above. CSI, as its name says, is an interface specification. It has always been the intent of the group to keep it agnostic of the container orchestrator (CO) and focused on the key primitives that enable volume storage. CSI tackles the fragmentation problem by giving everyone a single spec to implement. But there are many other aspects to creating production-grade plugins that have, up to this point, been intentionally kept out of the CSI spec. So I think an implementation framework that the storage ecosystem can work on together will be important for the other aspects of our fragmentation problem. REX-Ray is that framework for CSI, abstracted from any particular storage platform, and it will 1) provide CO-centric deployment and documentation, 2) deliver a consistent user experience through common packaging, docs, and configuration, which is important to operators and to building trust in CSI, and 3) serve as a proving ground for functionality that may or may not eventually end up in the CSI spec.
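
To make that division of labor concrete, here is a minimal, illustrative Go sketch of the kind of volume primitives the spec standardizes. The method names follow the CSI Controller and Node services, but the simplified signatures are assumptions for discussion, not the real gRPC-generated API:

// Illustrative sketch only: simplified Go interfaces mirroring the volume
// primitives the CSI spec defines as gRPC services (Controller and Node).
// The real definitions live in github.com/container-storage-interface/spec;
// the signatures below are abridged assumptions, not the generated API.
package main

import "fmt"

// Controller primitives: provision volumes and attach them to nodes.
type Controller interface {
    CreateVolume(name string, capacityBytes int64) (volumeID string, err error)
    DeleteVolume(volumeID string) error
    ControllerPublishVolume(volumeID, nodeID string) error   // attach
    ControllerUnpublishVolume(volumeID, nodeID string) error // detach
}

// Node primitives: make an attached volume usable by a workload.
type Node interface {
    NodePublishVolume(volumeID, targetPath string) error // mount at targetPath
    NodeUnpublishVolume(volumeID, targetPath string) error
}

func main() {
    // The spec stops at interfaces like these; packaging, deployment, docs,
    // and configuration are what a framework like REX-Ray layers on top.
    fmt.Println("CSI defines the interface; a framework supplies the rest")
}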

 

 

 

Clint Kitson

Technical Director for {code}

CNCF Governing Board Member

--- 

mobile: "+1 424 645 4116"

twitter: "@clintkitson"


From: cncf-toc@... [cncf-toc@...] on behalf of Gou Rao [grao@...]
Sent: Thursday, March 1, 2018 10:21 AM
To: cncf-toc@...
Cc: CNCF TOC
Subject: Re: [cncf-toc] RexRay follow up

I think Clint should chime in here, but I had seen RexRay as more than just a CSI implementation... as a multi-platform storage orchestrator? Maybe some clarification on what that means could help, but for example, with Mesosphere we use frameworks for complex stateful applications (take Cassandra, for example). Would RexRay help orchestrate storage provisioning (via CSI) to a framework like that?

 

On Thu, Mar 1, 2018 at 9:59 AM, alexis richardson <alexis@...> wrote:

Yes I think so.  But really I am a storage idiot. Who else could we ask?

 

On Thu, 1 Mar 2018, 17:53 Bassam Tabbara, <bassam@...> wrote:

I’m glad to see the strong alignment between Rex-Ray and CSI.

 

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations? For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools, similar to the Kubernetes incubator repo. Could RexRay become part of that, or does it need to be its own top-level project?

 

I worry about confusing developers and end users with another CNCF project that attempts to achieve the same goal as CSI.

On Mar 1, 2018, at 8:48 AM, alexis richardson <alexis@...> wrote:

 

All - questions?

(thanks Clint, this is super helpful)


On Tue, Feb 20, 2018 at 6:00 PM, Kitson, Clinton <clinton.kitson@...> wrote:


Correct Brian, REX-Ray should be transparent to end users in this space, and it provides an important service by helping connect apps to storage. Operators of clusters are the ones who should be very aware of it, as it would provide trusted, higher-quality plugins built on top of the existing CSI spec.

REX-Ray stats: REX-Ray recently went through some refactoring to accommodate the CSI architecture changes that needed to take place. This meant rolling in the libStorage functionality, which unfortunately skews the numbers a bit. The {code} team has been the primary maintainer of the framework, while collaborators have mainly focused on building drivers. Other storage companies who understand the complexity involved in building a solid CSI implementation see the value and commonality that REX-Ray can address, and they are interested in collaborating if it is supported via a foundation.

Production users: Yes, REX-Ray is being used in production by some of the users listed in the slides. Up to this point, usage levels have been tied closely to production deployment of Mesos & Docker.

Sandbox: I believe the numbers and history justify incubation, but we can discuss it.

Control plane: REX-Ray had its own control plane (the libStorage API) prior to CSI. Most recently, we have made architectural changes to adhere to CSI. When libStorage was its control plane, integration work was done to make libStorage a volume plugin, and additionally for Cloud Foundry. Today, anyone who implements CSI on the cluster orchestrator side can talk with any REX-Ray plugin.

Data plane: REX-Ray is not involved in the data plane of storage operations. It is an orchestrator and simply gets two components (local/remote storage and an OS) connected. It essentially performs the exact same steps that someone would perform manually to get these two components communicating, and the reverse on teardown.
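
As a concrete illustration of those "manual steps" on the node side, here is a minimal Go sketch that formats a device if it has no filesystem and then mounts it. The device path, mount point, and choice of blkid/mkfs.ext4/mount are hypothetical assumptions for discussion, not REX-Ray's actual code:

// Illustrative sketch only; the device, mount point, and commands below are
// assumptions, not REX-Ray source. It shows the manual steps a node-side
// storage orchestrator automates once a device is attached: create a
// filesystem if none exists, then mount it. Teardown is the reverse.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

// run executes a command and wraps any failure with its combined output.
func run(name string, args ...string) error {
    out, err := exec.Command(name, args...).CombinedOutput()
    if err != nil {
        return fmt.Errorf("%s %v failed: %v: %s", name, args, err, out)
    }
    return nil
}

func main() {
    device, target := "/dev/xvdf", "/mnt/data" // hypothetical examples

    // blkid exits non-zero when the device carries no filesystem yet.
    if exec.Command("blkid", device).Run() != nil {
        if err := run("mkfs.ext4", device); err != nil {
            log.Fatal(err)
        }
    }
    if err := run("mount", device, target); err != nil {
        log.Fatal(err)
    }
    log.Printf("mounted %s at %s", device, target)
}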

Persistent state: It is completely stateless today.




Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson

alexis richardson
 

Clint

That's promising. What do the CSI people think?

BTW, the name "REX-Ray" seems designed to direct the layperson's attention as far as possible from the stated purpose of the project. Might a more descriptive name help here?

a


Kitson, Clinton <clinton.kitson@...>
 

Alexis,

I asked the same question ahead of time and got a positive response.

https://github.com/rexray/rexray/issues/1167





Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
--- 
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson


From: cncf-toc@... [cncf-toc@...] on behalf of alexis richardson [alexis@...]
Sent: Friday, March 2, 2018 11:57 PM
To: cncf-toc@...
Cc: cncf-toc@...
Subject: Re: [cncf-toc] RexRay follow up

If rexray and CSI benefit from "co evolution" then that might make sense.  Is that the case?  What does the community think?

On Sat, 3 Mar 2018, 05:05 Bassam Tabbara, <bassam@...> wrote:
Thanks Dan, I think having spec and implementation(s) in the same foundation make sense.

In this case, Rex-Ray is not an implementation. If I understood it correctly, its a  a set of tools, packaging, and libraries that aid in writing CSI plugins. So it feels a bit different.

It almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two communities are open to that. 

On Mar 2, 2018, at 4:48 PM, Dan Kohn <dan@...> wrote:

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net, none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com

On Fri, Mar 2, 2018 at 5:36 PM, Mueller, Garrett <Garrett.Mueller@...> wrote:

I see where you’re coming from Clint, but in this case I agree with Bassam. To follow on to what he said, I’m very concerned that this would become yet another place where the interface to storage would need to be discussed, and I think that’s a really bad move right now.

 

As a community, we already have at least three different regular storage meetings: the CNCF storage WG, the k8s storage-sig and CSI. You have to track at least those to maintain even a basic idea of what’s going on. And if you really want to be involved with k8s, there’s already a lot more than that to deal with. As the other orchestrators become more CSI aware, there will likely be storage meetings for each of them as well.

 

And the line between the orchestrators and the CSI moves all the time. For about a year we’ve been talking about snapshots in k8s, and in just the past week there’s been discussion about moving that into CSI itself. If we make that move it isn’t just done on paper, it has a material change on interfaces and how it needs to be implemented.

 

Adding another thing in the mix in-between makes the lines more blurry than they already are, and an already difficult problem untenable.

 

For this reason, I think the CSI needs to re-consider its “spec only” stance and provide some basic enablement as well as mechanisms that make it easy for different people to experiment in and around it instead. Each orchestrator is going to have its CSI implementation to deal with too. Please, no more! :)

 

..Garrett

Technical Director @ NetApp

https://netapp.io/

 

From: cncf-toc@... <cncf-toc@...> On Behalf Of Kitson, Clinton
Sent: Friday, March 2, 2018 12:28 PM
To: cncf-toc@...


Subject: Re: [cncf-toc] RexRay follow up

 

https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37

 

The best place to start here is to refer to the slide above. CSI by its name is an interface specification. It has always been the intent of the group to keep it CO agnostic and focused on the key primitives that will enable volume storage. CSI tackles the fragmentation problem of a single spec to implement. But there are many other aspects to creating a production grade plugins that up to this point has intentionally been avoided in the CSI spec. So for this I think a implementation framework for the storage eco-system to work together on will be important to the other aspects of our fragmentation problem. REX-Ray is this framework for CSI, abstract of storage platform, that will 1) CO centric deployment and documentation 2) a great user experience from common packaging, docs, configuration which is important to operators and trusting CSI 3) be a placeholder for proving functionality that may or may not eventually end up in the CSI spec. 

 

 

 

Clint Kitson

Technical Director for {code}

CNCF Governing Board Member

--- 

mobile: "+1 424 645 4116"

twitter: "@clintkitson"


From: cncf-toc@... [cncf-toc@...] on behalf of Gou Rao [grao@...]
Sent: Thursday, March 1, 2018 10:21 AM
To: 
cncf-toc@...
Cc: CNCF TOC
Subject: Re: [cncf-toc] RexRay follow up

I think Clint should chime in here, but I had seen RexRay as more than just a CSI implementation... as a multi platform storage orchestrator?  Maybe some clarification on what that means could help, but for example, with Mesosphere, we use frameworks for complex stateful applications (take Cassandra for example).  Would RexRay help orchestrate storage provisioning (via CSI) to a framework like that?

 

On Thu, Mar 1, 2018 at 9:59 AM, alexis richardson <alexis@...> wrote:

Yes I think so.  But really I am a storage idiot. Who else could we ask?

 

On Thu, 1 Mar 2018, 17:53 Bassam Tabbara, <bassam@...> wrote:

I’m glad to see the strong alignment between Rex-Ray and CSI.

 

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations. For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools. Similar to the Kubernetes incubator repo. Could RexRay become part of that or does it need to be its own top-level project?

 

I worry about confusing developers and end-users with another CNCF project that attempt to achieve the same goal — CSI.

On Mar 1, 2018, at 8:48 AM, alexis richardson <alexis@...> wrote:

 

All - questions?

(thanks Clint, this is super helpful)


On Tue, Feb 20, 2018 at 6:00 PM, Kitson, Clinton
<
clinton.kitson@...> wrote:


Correct Brian, REX-Ray should be transparent to end users in this space and
provides an important service by helping connect apps to storage. Operators
of clusters are the ones that should be very aware of it as it would provide
trusted and more quality plugins that are built on top of the existing CSI
spec.

REX-Ray stats: Recently REX-Ray went through some refactoring to accommodate
the CSI architecture changes that needed to take place. This meant rolling
in the libStorage functionality which unfortunately skews the numbers a bit.
The {code} team has been primary maintainers on the framework where
collaborators have mainly focused on building drivers. Other storage
companies who understand the complexity involved in building a solid CSI
implementation see the value and commonality that can be addressed by
REX-Ray and are interested in collaborating if supported via a foundation.

Production users: Yes, REX-Ray is being used in production by some of the
users listed in the slides. Up to this point, usage levels have been tied
closely to production deployment of Mesos & Docker.

Sandbox: I believe the numbers and history justify incubation, but we can
discuss it.

Control plane: REX-Ray used to have its own control plane (libStorage API)
prior to CSI. In most recent we have made architectural changes to be adhere
to CSI.  When libStorage was its control-plane, there was integration work
performed to make libStorage a volume plugin and additionally to Cloud
Foundry. Today, anyone who implements CSI on the cluster orchestrator side
can talk with any REX-Ray plugin.

Data plane: REX-Ray is not involved in the data-plane of storage operations.
It is an orchestrator and simply gets two components (local/remote storage &
an OS) connected. It essentially performs the exact same steps that someone
would manually perform to get these two components communicating and the
reverse on tear down.

Persistent state: It is completely stateless today.




Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "
+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: 
github.com/clintkitson



 

 







Yuri Shkuro
 

It's almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

#2 is actually https://github.com/opentracing-contrib/, which contains instrumentation of popular frameworks using the OpenTracing APIs. I think it's a grey area whether contrib is part of CNCF.


alexis richardson
 

If rexray and CSI benefit from "co evolution" then that might make sense.  Is that the case?  What does the community think?


On Sat, 3 Mar 2018, 05:05 Bassam Tabbara, <bassam@...> wrote:
Thanks Dan, I think having spec and implementation(s) in the same foundation make sense.

In this case, Rex-Ray is not an implementation. If I understood it correctly, its a  a set of tools, packaging, and libraries that aid in writing CSI plugins. So it feels a bit different.

It almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two communities are open to that. 

On Mar 2, 2018, at 4:48 PM, Dan Kohn <dan@...> wrote:

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net, none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com

On Fri, Mar 2, 2018 at 5:36 PM, Mueller, Garrett <Garrett.Mueller@...> wrote:

I see where you’re coming from Clint, but in this case I agree with Bassam. To follow on to what he said, I’m very concerned that this would become yet another place where the interface to storage would need to be discussed, and I think that’s a really bad move right now.

 

As a community, we already have at least three different regular storage meetings: the CNCF storage WG, the k8s storage-sig and CSI. You have to track at least those to maintain even a basic idea of what’s going on. And if you really want to be involved with k8s, there’s already a lot more than that to deal with. As the other orchestrators become more CSI aware, there will likely be storage meetings for each of them as well.

 

And the line between the orchestrators and the CSI moves all the time. For about a year we’ve been talking about snapshots in k8s, and in just the past week there’s been discussion about moving that into CSI itself. If we make that move it isn’t just done on paper, it has a material change on interfaces and how it needs to be implemented.

 

Adding another thing in the mix in-between makes the lines more blurry than they already are, and an already difficult problem untenable.

 

For this reason, I think the CSI needs to re-consider its “spec only” stance and provide some basic enablement as well as mechanisms that make it easy for different people to experiment in and around it instead. Each orchestrator is going to have its CSI implementation to deal with too. Please, no more! :)

 

..Garrett

Technical Director @ NetApp

https://netapp.io/

 

From: cncf-toc@... <cncf-toc@...> On Behalf Of Kitson, Clinton
Sent: Friday, March 2, 2018 12:28 PM
To: cncf-toc@...


Subject: Re: [cncf-toc] RexRay follow up

 

https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37

 

The best place to start here is to refer to the slide above. CSI by its name is an interface specification. It has always been the intent of the group to keep it CO agnostic and focused on the key primitives that will enable volume storage. CSI tackles the fragmentation problem of a single spec to implement. But there are many other aspects to creating a production grade plugins that up to this point has intentionally been avoided in the CSI spec. So for this I think a implementation framework for the storage eco-system to work together on will be important to the other aspects of our fragmentation problem. REX-Ray is this framework for CSI, abstract of storage platform, that will 1) CO centric deployment and documentation 2) a great user experience from common packaging, docs, configuration which is important to operators and trusting CSI 3) be a placeholder for proving functionality that may or may not eventually end up in the CSI spec. 

 

 

 

Clint Kitson

Technical Director for {code}

CNCF Governing Board Member

--- 

mobile: "+1 424 645 4116"

twitter: "@clintkitson"


From: cncf-toc@... [cncf-toc@...] on behalf of Gou Rao [grao@...]
Sent: Thursday, March 1, 2018 10:21 AM
To: 
cncf-toc@...
Cc: CNCF TOC
Subject: Re: [cncf-toc] RexRay follow up

I think Clint should chime in here, but I had seen RexRay as more than just a CSI implementation... as a multi platform storage orchestrator?  Maybe some clarification on what that means could help, but for example, with Mesosphere, we use frameworks for complex stateful applications (take Cassandra for example).  Would RexRay help orchestrate storage provisioning (via CSI) to a framework like that?

 

On Thu, Mar 1, 2018 at 9:59 AM, alexis richardson <alexis@...> wrote:

Yes I think so.  But really I am a storage idiot. Who else could we ask?

 

On Thu, 1 Mar 2018, 17:53 Bassam Tabbara, <bassam@...> wrote:

I’m glad to see the strong alignment between Rex-Ray and CSI.

 

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations. For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools. Similar to the Kubernetes incubator repo. Could RexRay become part of that or does it need to be its own top-level project?

 

I worry about confusing developers and end-users with another CNCF project that attempt to achieve the same goal — CSI.

On Mar 1, 2018, at 8:48 AM, alexis richardson <alexis@...> wrote:

 

All - questions?

(thanks Clint, this is super helpful)


On Tue, Feb 20, 2018 at 6:00 PM, Kitson, Clinton
<
clinton.kitson@...> wrote:


Correct Brian, REX-Ray should be transparent to end users in this space and
provides an important service by helping connect apps to storage. Operators
of clusters are the ones that should be very aware of it as it would provide
trusted and more quality plugins that are built on top of the existing CSI
spec.

REX-Ray stats: Recently REX-Ray went through some refactoring to accommodate
the CSI architecture changes that needed to take place. This meant rolling
in the libStorage functionality which unfortunately skews the numbers a bit.
The {code} team has been primary maintainers on the framework where
collaborators have mainly focused on building drivers. Other storage
companies who understand the complexity involved in building a solid CSI
implementation see the value and commonality that can be addressed by
REX-Ray and are interested in collaborating if supported via a foundation.

Production users: Yes, REX-Ray is being used in production by some of the
users listed in the slides. Up to this point, usage levels have been tied
closely to production deployment of Mesos & Docker.

Sandbox: I believe the numbers and history justify incubation, but we can
discuss it.

Control plane: REX-Ray used to have its own control plane (libStorage API)
prior to CSI. In most recent we have made architectural changes to be adhere
to CSI.  When libStorage was its control-plane, there was integration work
performed to make libStorage a volume plugin and additionally to Cloud
Foundry. Today, anyone who implements CSI on the cluster orchestrator side
can talk with any REX-Ray plugin.

Data plane: REX-Ray is not involved in the data-plane of storage operations.
It is an orchestrator and simply gets two components (local/remote storage &
an OS) connected. It essentially performs the exact same steps that someone
would manually perform to get these two components communicating and the
reverse on tear down.

Persistent state: It is completely stateless today.




Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "
+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: 
github.com/clintkitson



 

 







Bassam Tabbara
 

Thanks Dan, I think having spec and implementation(s) in the same foundation make sense.

In this case, Rex-Ray is not an implementation. If I understood it correctly, its a  a set of tools, packaging, and libraries that aid in writing CSI plugins. So it feels a bit different.

It almost like saying there is OpenTracing, OpenTracing-Packaging-and-Tools, and Jaeger as three separate projects.

I think it would make more sense to make Rex-Ray part of CSI if the two communities are open to that. 

Dan Kohn <dan@...>
 

CNCF has three precedents of separate specs and implementations:

+ CNI and the CNI plugins (most prominently Calico, Flannel and Weave Net, none of which are yet CNCF projects)
+ OpenTracing and Jaeger
+ TUF and Notary

So, the example of CSI as the spec and REX-Ray as an implementation seems feasible. Whether it is advisable is, of course, up to the TOC.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Mueller, Garrett <Garrett.Mueller@...>

I see where you're coming from, Clint, but in this case I agree with Bassam. To follow on to what he said, I'm very concerned that this would become yet another place where the interface to storage would need to be discussed, and I think that's a really bad move right now.

As a community, we already have at least three regular storage meetings: the CNCF storage WG, the k8s storage SIG, and CSI. You have to track at least those to maintain even a basic idea of what's going on. And if you really want to be involved with k8s, there's already a lot more than that to deal with. As the other orchestrators become more CSI-aware, there will likely be storage meetings for each of them as well.

And the line between the orchestrators and CSI moves all the time. For about a year we've been talking about snapshots in k8s, and in just the past week there's been discussion about moving that into CSI itself. That move wouldn't just happen on paper; it would materially change the interfaces and how they need to be implemented. (A sketch of what that looks like follows this message.)

Adding another thing into the mix in between makes the lines blurrier than they already are, and makes an already difficult problem untenable.

For this reason, I think CSI needs to reconsider its "spec-only" stance and instead provide some basic enablement, as well as mechanisms that make it easy for different people to experiment in and around it. Each orchestrator is going to have its own CSI implementation to deal with too. Please, no more! :)

..Garrett

Technical Director @ NetApp

https://netapp.io/
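To make Garrett's snapshot example concrete: moving snapshots "into CSI itself" means the spec's Controller service grows snapshot RPCs that every plugin author must then implement and every orchestrator must call. Snapshot RPCs did land in later revisions of the spec; the sketch below follows the shape of the published Go bindings, but the exact package path, message fields, and the toy in-memory logic are illustrative assumptions, not the spec's final word.

```go
// Illustrative only: the RPC a plugin takes on when snapshots move into
// the CSI spec. Identifiers follow the Go bindings published later at
// github.com/container-storage-interface/spec; treat names and fields
// as assumptions, since this thread predates their final form.
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// controller stands in for a plugin's Controller service implementation.
type controller struct {
	// snapshots keyed by request name keep repeated calls idempotent,
	// as the spec requires.
	snapshots map[string]*csi.Snapshot
}

// CreateSnapshot is the new obligation "snapshots in CSI" places on plugins.
func (c *controller) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
	if snap, ok := c.snapshots[req.Name]; ok {
		return &csi.CreateSnapshotResponse{Snapshot: snap}, nil // replayed request
	}
	snap := &csi.Snapshot{
		SnapshotId:     "snap-" + req.Name, // a real driver asks the storage backend
		SourceVolumeId: req.SourceVolumeId,
		ReadyToUse:     true,
	}
	c.snapshots[req.Name] = snap
	return &csi.CreateSnapshotResponse{Snapshot: snap}, nil
}

func main() {
	c := &controller{snapshots: map[string]*csi.Snapshot{}}
	resp, err := c.CreateSnapshot(context.Background(),
		&csi.CreateSnapshotRequest{Name: "nightly", SourceVolumeId: "vol-1"})
	fmt.Println(resp.GetSnapshot().GetSnapshotId(), err)
}
```

Idempotency on the request name is the kind of cross-cutting requirement Garrett is pointing at: it lands on every plugin, whichever side of the orchestrator/CSI line the feature ends up on.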

 

Kitson, Clinton <clinton.kitson@...>

https://docs.google.com/presentation/d/1BthkP9OftIICEn1h9ML1F0TKXhaUxXILlcDpuz3Sjzg/edit#slide=id.g31eff88140_0_37

The best place to start is the slide above. CSI, as its name says, is an interface specification. It has always been the group's intent to keep it CO-agnostic and focused on the key primitives that enable volume storage. CSI tackles one part of the fragmentation problem by giving everyone a single spec to implement. But there are many other aspects to creating production-grade plugins that have, up to this point, intentionally been kept out of the CSI spec. For those, I think an implementation framework the storage ecosystem can work on together will be important to addressing the rest of our fragmentation problem. REX-Ray is that framework for CSI, abstracted from any particular storage platform. It will 1) provide CO-centric deployment and documentation, 2) offer a great user experience through common packaging, docs, and configuration, which is important to operators and to trusting CSI, and 3) serve as a proving ground for functionality that may or may not eventually end up in the CSI spec. (A sketch of the plugin boilerplate in question follows this message.)

Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson
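Since much of this thread turns on what "implementing CSI" actually costs a storage vendor, here is a minimal sketch of the plugin-side boilerplate a framework like REX-Ray would factor out: a gRPC server on a local socket exposing the spec's mandatory Identity service. The package path and message names follow the spec's published Go bindings, but the socket path and plugin name are made up, and the thread predates CSI 1.0, so treat the exact identifiers as illustrative assumptions.

```go
// A minimal CSI plugin skeleton: a gRPC server on a local socket serving
// the spec's mandatory Identity service. Identifiers follow the published
// Go bindings at github.com/container-storage-interface/spec; the socket
// path and plugin name are illustrative assumptions.
package main

import (
	"context"
	"log"
	"net"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identity answers the discovery/health RPCs every plugin must serve.
type identity struct{}

func (identity) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{
		Name:          "example.vendor.plugin", // reverse-domain name, made up here
		VendorVersion: "0.1.0",
	}, nil
}

func (identity) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	// An empty capability list: this sketch advertises no controller service.
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identity) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	// An empty response is treated as "ready".
	return &csi.ProbeResponse{}, nil
}

func main() {
	// COs reach plugins over a node-local UNIX domain socket.
	lis, err := net.Listen("unix", "/tmp/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identity{})
	log.Fatal(srv.Serve(lis))
}
```

Everything around this skeleton (packaging, configuration, per-CO deployment, docs) is the undifferentiated work Clint argues the ecosystem should share rather than rebuild per vendor.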



Gou Rao <grao@...>

I think Clint should chime in here, but I had seen RexRay as more than just a CSI implementation: as a multi-platform storage orchestrator. Maybe some clarification on what that means would help. For example, with Mesosphere we use frameworks for complex stateful applications (take Cassandra, for example). Would RexRay help orchestrate storage provisioning (via CSI) to a framework like that?
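For the consuming side Gou asks about: a CO or framework dials the plugin's socket and issues Controller RPCs such as CreateVolume, which is what "storage provisioning via CSI" amounts to on the wire. A minimal sketch, again using the spec's published Go bindings; the endpoint, volume name, and size are made-up values, and a real framework would also negotiate capabilities and handle errors and retries.

```go
// The CO/framework side of CSI provisioning: dial the plugin's socket and
// issue CreateVolume. Endpoint, volume name, and size are illustrative
// assumptions; identifiers follow the spec's published Go bindings.
package main

import (
	"context"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	// Plaintext is conventional here: CSI traffic stays on a local socket.
	conn, err := grpc.Dial("unix:///var/run/csi/plugin.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := csi.NewControllerClient(conn)
	resp, err := client.CreateVolume(context.Background(), &csi.CreateVolumeRequest{
		Name: "cassandra-data-0", // idempotency key chosen by the caller
		CapacityRange: &csi.CapacityRange{
			RequiredBytes: 10 << 30, // 10 GiB
		},
		// Declare how the volume will be used so the plugin can validate it.
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("provisioned volume %s", resp.GetVolume().GetVolumeId())
}
```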


alexis richardson
 

Yes please do chime in!



alexis richardson
 

Yes I think so.  But really I am a storage idiot. Who else could we ask?



Bassam Tabbara

I’m glad to see the strong alignment between Rex-Ray and CSI.

Would it make sense for Rex-Ray to be even more closely aligned with CSI, i.e. as a set of libraries and tools for people wanting to build CSI implementations? For example, CSI has a placeholder repo (see https://github.com/container-storage-interface/libraries) for libraries and tools, similar to the Kubernetes incubator repo. Could RexRay become part of that, or does it need to be its own top-level project?

I worry about confusing developers and end-users with another CNCF project that attempts to achieve the same goal — CSI.


alexis richardson
 

All - questions?

(thanks Clint, this is super helpful)



Kitson, Clinton <clinton.kitson@...>

Correct, Brian. REX-Ray should be transparent to end users in this space, and it provides an important service by helping connect apps to storage. Operators of clusters are the ones who should be very aware of it, as it would provide trusted, higher-quality plugins built on top of the existing CSI spec.

REX-Ray stats: Recently REX-Ray went through some refactoring to accommodate the CSI architecture changes that needed to take place. This meant rolling in the libStorage functionality, which unfortunately skews the numbers a bit. The {code} team has been the primary maintainer of the framework, while collaborators have mainly focused on building drivers. Other storage companies who understand the complexity involved in building a solid CSI implementation see the value and commonality that can be addressed by REX-Ray, and they are interested in collaborating if it is supported via a foundation.

Production users: Yes, REX-Ray is being used in production by some of the users listed in the slides. Up to this point, usage levels have been tied closely to production deployments of Mesos & Docker.

Sandbox: I believe the numbers and history justify incubation, but we can discuss it.

Control plane: REX-Ray used to have its own control plane (the libStorage API) prior to CSI. Most recently, we have made architectural changes to adhere to CSI. When libStorage was its control plane, integration work was performed to make libStorage a volume plugin and, additionally, to integrate with Cloud Foundry. Today, anyone who implements CSI on the cluster-orchestrator side can talk with any REX-Ray plugin.

Data plane: REX-Ray is not involved in the data plane of storage operations. It is an orchestrator and simply gets two components (local/remote storage & an OS) connected. It essentially performs the exact same steps that someone would perform manually to get these two components communicating, and the reverse on teardown. (A sketch of those steps follows this message.)

Persistent state: It is completely stateless today.

Clint Kitson
Technical Director for {code}
CNCF Governing Board Member
---
email: Clinton.Kitson@...
mobile: "+1 424 645 4116"
team: theCodeTeam.com
twitter: "@clintkitson"
github: github.com/clintkitson
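Clint's data-plane answer is concrete enough to sketch. Assuming a block device that has already been attached to the node, the "manual steps" are roughly: probe for a filesystem, format if absent, mount at the target path, and unmount on teardown. The device path, target path, filesystem choice, and the blkid/mkfs/mount shell-outs below are illustrative assumptions, not REX-Ray's actual code.

```go
// A sketch of the node-side "manual steps" an orchestrator like REX-Ray
// automates: probe, format if blank, mount, and the reverse on teardown.
// Paths and tool choices are illustrative; this is not REX-Ray's code.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// publish makes an already-attached block device usable at target.
func publish(device, target string) error {
	// blkid exits non-zero when it finds no filesystem signature, which is
	// our cue to format. A real implementation is far more careful before
	// running mkfs on anything.
	if err := exec.Command("blkid", device).Run(); err != nil {
		if out, err := exec.Command("mkfs.ext4", device).CombinedOutput(); err != nil {
			return fmt.Errorf("mkfs %s: %v: %s", device, err, out)
		}
	}
	if out, err := exec.Command("mount", device, target).CombinedOutput(); err != nil {
		return fmt.Errorf("mount %s on %s: %v: %s", device, target, err, out)
	}
	return nil
}

// unpublish is the teardown: just unmount; the device itself stays intact.
func unpublish(target string) error {
	if out, err := exec.Command("umount", target).CombinedOutput(); err != nil {
		return fmt.Errorf("umount %s: %v: %s", target, err, out)
	}
	return nil
}

func main() {
	if err := publish("/dev/xvdf", "/mnt/data"); err != nil {
		log.Fatal(err)
	}
	log.Println("volume published at /mnt/data")
	if err := unpublish("/mnt/data"); err != nil {
		log.Fatal(err)
	}
}
```

Nothing here touches the bytes the application reads and writes, which is the sense in which REX-Ray stays out of the data plane; it only wires the two ends together and can therefore remain stateless.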


Brian Grant
 

Thanks to Clinton for presenting.


A response to Alexis's question/point about end-user benefit: I would expect it to be similar to the end-user benefit of CNI: more high-quality infrastructure options available in cloud-native environments. Not all of our projects will be used directly by users. Some may be used by other projects as components, libraries, frameworks, APIs, and so on.

Some questions about RexRay:
  • On contributors: My reading of the GitHub stats is that there have been primarily 2 contributors over the past 6 months, both from Dell/EMC.
  • Are any of the listed users using RexRay in production?
  • Would the proposed Sandbox meet the project's need for a neutral home, at least initially?
  • Does RexRay itself include a control plane? Persistent state? If so, has it been adopted by any other project that has its own control plane?