
Re: first CSI example implementation

alexis richardson
 



On Sun, Jul 2, 2017 at 7:11 AM, Bernstein, Joshua <Joshua.Bernstein@...> wrote:
Of course. Wouldn't leave them out! 

:-)




 

-Josh



Re: first CSI example implementation

Bernstein, Joshua <Joshua.Bernstein@...>
 

Of course. Wouldn't leave them out! 

-Josh



Re: first CSI example implementation

alexis richardson
 

Josh

How about people from docker and kubernetes?  Are they in the loop?

Alexis




Re: first CSI example implementation

Bernstein, Joshua <Joshua.Bernstein@...>
 

Hi Alexis,

I'm not sure exactly the number of people, but it's been a large effort from everyone around the community between the folks at Mesosphere, those of us at Dell, and all of the rest of the folks in the community and companies that provided feedback and made the 0.1 spec come together so quickly. 

-Josh



Re: first CSI example implementation

alexis richardson
 

this looks like very quick progress; impressive

how many people are involved?





first CSI example implementation

Chris Aniszczyk
 

FYI

---------- Forwarded message ----------
From: Clinton Kitson <clintonskitson@...>
Date: Sat, Jul 1, 2017 at 12:56 AM
Subject: first CSI example implementation
To: cncf-wg-storage <cncf-wg-storage@...>


Hello team,

Lots of great progress is being made on the CSI specification.

On today's call, Andrew Kutz of {code} by Dell EMC presented a first working example of the CSI specification, written in Go. At present there is a single pull request that includes the components needed to make both sides (server and client) work. This PR, and future submitted code, will evolve as discussions continue, but for now it is a partially working example backed by AWS EBS!


There were specific goals in generating this:
- Build a simple, pure CSI server-side plugin endpoint in Go
- Work pragmatically through the specification to identify gaps
- Create tools and tests that can assist with CI against the specification
- Discover whether anything can be done to make things easier for plugin implementers
- Collect feedback on the project's needs as compared to the examples and tools

He was able to show a few really cool things today.

1) a standalone AWS EBS plugin process (csp) that serves the CSI endpoints. The implementation contains minimal code and is intended to be a pure CSI implementation.

2) the client (csc), a tool for speaking with the CSI endpoints. It provides functionality similar to what a container orchestrator (CO) would implement, which makes it practical to truly exercise the specification and understand its limitations.

3) the daemon (csd), which could make things easier for plugin developers. We have identified input validation, logging, etc. as common features that could be shared across plugins. This was implemented using Go plugins: the csd dynamically loads the standalone csp packages (the same packages mentioned in #1). The csd exposes gRPC upward, and uses gRPC to speak in-memory, via Go's io.Pipe, to any of the pure CSI plugin packages.

Both #1 and #3 provide a mechanism for further testing, and make it possible to have concrete discussions about packaging and the build process for plugins, since a real process now serves the endpoints.

Looking forward to feedback on how the project sees these examples and tools fitting in going forward.


--
You received this message because you are subscribed to the Google Groups "cncf-wg-storage" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cncf-wg-storage+unsubscribe@googlegroups.com.
To post to this group, send email to cncf-wg-storage@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cncf-wg-storage/5c0bd689-fff0-4b81-8faf-edd389ce7d98%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Chris Aniszczyk (@cra) | +1-512-961-6719


Re: Infrakit Questions

Solomon Hykes <solomon.hykes@...>
 

Rob, Zach, to clarify: do you have specific concerns about InfraKit that we should ask Dave to address?

My understanding is that 1) InfraKit is OS-agnostic and does not require LinuxKit, and 2) InfraKit does not impose an immutable operating-system pattern.


Rob, you suggest more review: what specifically should we review? I think we should aim to get to "yes" or "no" in a timely fashion.




Re: Infrakit Questions

Zachary Smith
 

I'd agree with Rob here.  LinuxKit is certainly a component, but I think that the full hardware and network lifecycle associated with booting "all the things" is a pretty broad and messy space right now, particularly across private datacenters vs. public clouds.

-Zac

--
Zachary Smith, CEO of Packet
+1 212.812.4178


Re: Infrakit Questions

Rob Hirschfeld
 

Responding to request from TOC meeting last week...

I think that Day 1 and Day 2 provisioning is a key area for CNCF to cover; however, the space is transforming in several different ways, so I would suggest more review by the TOC.  Obviously I have an interest in this, since I'm a lead on Digital Rebar; for that reason, I'm reluctant to push against, or pull for, related projects.

For LinuxKit specifically, I think the emphasis on immutable operating systems should be considered carefully.  There are many benefits to this approach, but they cannot be applied generally to legacy workloads and management tooling.  I believe operational adoption accelerates when tooling fits well with both new and existing ops models.

Again - I'm happy to show how we solve this problem with Digital Rebar at a TOC meeting.  It's not just about physical provisioning: managing the server life-cycle across multiple infrastructures is a key design requirement.  Tooling that does not address the full life-cycle may actually make management harder over time.

Rob
____________________________
Rob Hirschfeld, 512-773-7522
RackN CEO/Founder (rob@...)

I am in CENTRAL (-6) time
http://robhirschfeld.com
twitter: @zehicle, github: zehicle

On Tue, Jun 6, 2017 at 8:56 AM, Alex Baretto <axbaretto@...> wrote:
+1 to Alexis and Rob.

I'd really like to see a good breakdown comparison between InfraKit and Digital Rebar, BOSH, CloudFormation, fog, and others.

Alex Baretto



On Tue, Jun 06, 2017 at 08:51, Rob Hirschfeld via cncf-toc <cncf-toc@...> wrote:
All,

I'd be happy to present / demo Digital Rebar to offer another cloud native perspective on how to address hybrid infrastructure automation.  I believe that would provide a helpful perspective on operational concerns and how to address them in a way that fits the CNCF community.  As you know, we've been heavily involved in the Kubernetes community and have been showing an approach that uses the community Ansible for Kubernetes.  We've also done demos showing LinuxKit integration.

Rob

_______________________________________________
cncf-toc mailing list
cncf-toc@...
https://lists.cncf.io/mailman/listinfo/cncf-toc


Rob
____________________________
Rob Hirschfeld, 512-773-7522
RackN CEO/Founder (rob@...)

I am in CENTRAL (-6) time
http://robhirschfeld.com
twitter: @zehicle, github: zehicle

On Tue, Jun 6, 2017 at 8:41 AM, Alexis Richardson <alexis@...> wrote:
Thanks David, Patrick et al., for Infrakit pres today!

https://docs.google.com/presentation/d/1Lzy94UNzdSXkqZCvrwjkcChKpU8u2waDqGx_Sjy5eJ8/edit#slide=id.g22ccd21963_2_0


Per Bryan's Q re Terraform, it would also be good to hear about BOSH &
Infrakit feature comparison.  And other related tech you see in the
space.




CSI regular community sync

Chris Aniszczyk
 

FYI

---------- Forwarded message ----------
From: Jie Yu <jie@...>
Date: Tue, Jun 27, 2017 at 6:57 AM
Subject: CSI regular community sync
To: container-storage-interface-community@...
Cc: cncf-wg-storage@...


Hi folks,

We'll be starting a regular community sync on CSI. The goal is to use that forum for open issue discussions and for getting feedback from the community. All the details about the meeting can be found here:

Feel free to suggest agenda items in the doc! Our first meeting will be 7/13/2017 (see details in the doc). Let us know if you have any questions!

- Jie

--
You received this message because you are subscribed to the Google Groups "cncf-wg-storage" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cncf-wg-storage+unsubscribe@googlegroups.com.
To post to this group, send email to cncf-wg-storage@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cncf-wg-storage/CAMHxVRFvSshhDv23N6KZY0bdd-jOttgWNQPdYmqaiHQaOV9Kdg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



--
Chris Aniszczyk (@cra) | +1-512-961-6719


HUP HUP - CNCF TOC Goals and Operating Principles - v0.3

alexis richardson
 

Last call for comments.

TOC vote to follow.

On Mon, Jun 12, 2017 at 9:44 PM, Alexis Richardson <alexis@...> wrote:
Broadening beyond TOC to add CNCF GB & Marketing.


CNCF community,

PLEASE review this doc whose purpose is to summarise the thinking of
the TOC concerning project selection, governance, and other frequently
requested topics.

https://docs.google.com/document/d/1Yl3IPpZnEWJRaXSBsTQ22ymQF57N5x_nHVHvmJdAj9Y/edit

This is important - please do engage.  Currently this document is a
draft.  Since the TOC operates by vote, these principles may in future
become written precedent.

alexis



On Mon, May 15, 2017 at 4:43 PM, Alexis Richardson <alexis@...> wrote:
> Hi
>
> Out of a desire to start writing down more about how CNCF works, and what
> our principles are, Brian, Ken and I pulled some ideas into a doc:
>
>
https://docs.google.com/document/d/1Yl3IPpZnEWJRaXSBsTQ22ymQF57N5x_nHVHvmJdAj9Y/edit
>
> Comments are solicited.
>
> Please don't be too harsh - this is just the first iteration.
>
> alexis


Re: Notary/TuF & GPG (& Harbor)

Evan Cordell
 

Just wanted to weigh in from CoreOS. We are using Notary for signing packages, as well as for the Quay container registry running at Quay.io.

Signing packages is tricky and TUF seems to get things right. I would also add that there's nothing preventing GPG integration in the future if that's desirable (for key management and signing operations, not as a replacement for TUF metadata). I believe rust-tuf has that as a goal.


Re: Notary/TuF & GPG (& Harbor)

alexis richardson
 

Thanks Justin, that is very helpful & certainly length-appropriate.



On Thu, Jun 22, 2017 at 3:50 AM, Justin Cappos via cncf-toc <cncf-toc@...> wrote:
I didn't do a deep dive, but it looks like the "simple signing" design from Fedora would enable an attacker that has compromised the signing server to compromise user devices (even with HSMs, etc.).  I also wasn't sure if there was a secure way to do key revocation in the case where an incident did occur.  These sorts of issues happen more often than one would expect [1-5]; see [6] for dozens of other incidents.

TUF is designed to handle exactly these kinds of incidents while still retaining a high degree of security.  Actually, many ideas in TUF came out of security issues we found in YUM, APT, and other package managers [7,8].  We integrated ideas from an earlier system of ours into YUM, APT, YaST, Pacman, etc. back around 2009.

I'd be happy to talk more if there are any questions or thoughts, but I want to keep this from being too long or from rambling too far off-topic...

Thanks,




Notary/TuF & GPG (& Harbor)

Justin Cappos
 

I didn't do a deep dive, but it looks like the "simple signing" design from Fedora would enable an attacker that has compromised the signing server to compromise user devices (even with HSMs, etc.).  I also wasn't sure if there was a secure way to do key revocation in the case where an incident did occur.  These sorts of issues happen more often than one would expect [1-5]; see [6] for dozens of other incidents.

TUF is designed to handle exactly these kinds of incidents while still retaining a high degree of security.  Actually, many ideas in TUF came out of security issues we found in YUM, APT, and other package managers [7,8].  We integrated ideas from an earlier system of ours into YUM, APT, YaST, Pacman, etc. back around 2009.
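[As an illustrative aside: the two properties discussed in this thread, a signature threshold ("quorum") and expiring metadata, can be sketched in a few lines. The metadata layout and field names below are simplified assumptions, not the real TUF wire format, and HMAC stands in for real public-key signatures.]

```python
import hashlib
import hmac
import json
import time

def verify_metadata(metadata, trusted_keys, now=None):
    """Accept metadata only if it has not expired and enough distinct
    trusted keys signed it.  Toy sketch only; field names are invented."""
    now = time.time() if now is None else now
    signed = metadata["signed"]

    # Expiring signatures: stale metadata is rejected outright, so an
    # attacker cannot replay old, once-valid metadata indefinitely.
    if signed["expires"] < now:
        return False

    payload = json.dumps(signed, sort_keys=True).encode()
    valid = set()
    for sig in metadata["signatures"]:
        key = trusted_keys.get(sig["keyid"])
        if key is not None:
            expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
            if hmac.compare_digest(sig["sig"], expected):
                valid.add(sig["keyid"])

    # Threshold ("quorum"): with threshold > 1, compromising a single
    # signing key is not enough to make clients accept forged metadata.
    return len(valid) >= signed["threshold"]
```

[With a threshold of 2, metadata signed by only one key, or metadata past its expiry date, is rejected, which is the incident-containment behavior described above.]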

I'd be happy to talk more if there are any questions or thoughts, but I want to keep this from being too long or from rambling too far off-topic...

Thanks,


Re: Notary/TuF & GPG (& Harbor)

alexis richardson
 

Scott

What are your thoughts on Notary?

a


On Wed, Jun 21, 2017 at 6:41 PM, Scott McCarty via cncf-toc <cncf-toc@...> wrote:
Per the comments on GnuPG - the ubiquitous use of GPG is what drove Red Hat to work on what we call "simple signing" [1][2]. We would love to partner on more of this work.


[1]: http://www.projectatomic.io/blog/2016/07/working-with-containers-image-made-easy/

[2]: https://access.redhat.com/articles/2750891

Best Regards

Scott M


On 06/20/2017 05:23 PM, Alexis Richardson via cncf-toc wrote:
Thanks Richard.  +1 on .debs.  My 2c is that signing functionality used to be quite inhumane, and any project seeking to do better could certainly focus on being "pleasant".  Although the Notary presentation didn't highlight this specifically, it sounded like they haven't ignored it either.


On Tue, Jun 20, 2017 at 7:38 PM, Richard Hartmann <richih@...> wrote:

    On Tue, Jun 20, 2017 at 6:03 PM, Alexis Richardson via cncf-toc
    <cncf-toc@...> wrote:

    > Thanks Patrick & Docker people for Notary pres. I personally found it
    > very useful & educational, having avoided package signing myself as
    > much as possible ;-)
    >
    > I would love to understand how a GPG person would make the case for
    > sticking with just that.

    Speaking as a Debian Developer, most of my work in that regard is
    underpinned by GnuPG. A lot of the functionality mentioned could be
    built with GnuPG, and installed base and integration in many, many
    workflows and systems is a huge advantage in potential adoption. That
    being said, features like built-in quorum, expiring signatures, and
    other mechanisms can't easily be replicated with GnuPG, or its
    brethren, in their current form.

    I can see merit in both extending the PGP world to cover these aspects
    and in creating a new infrastructure.

    I am willing to bet that feature velocity will be higher outside of
    the PGP ecosystem as the installed base could be a disadvantage in
    this context. Also, some mechanisms are not designed for anything
    exceeding a certain scale.


    While this is not an endorsement of any particular project or path
    forward, I can say that the general functionality is highly needed.
    Years ago, I implemented a data store for a financial customer with
    third-party commercial hashsum timestamping services; that was not
    very pleasant at all. The functionality in and of itself would be
    useful in a _lot_ of regards.


    Richard





--

Scott McCarty, RHCA

Technical Product Marketing: Containers

Email: smccarty@...

Phone: 312-660-3535

Cell: 330-807-1043

Web: http://crunchtools.com

When should you split your application into multiple containers? http://red.ht/22xKw9i



Re: Notary/TuF & GPG (& Harbor)

Scott McCarty
 

Per the comments on GnuPG - the ubiquitous use of GPG is what drove Red Hat to work on what we call "simple signing" [1][2]. We would love to partner on more of this work.


[1]: http://www.projectatomic.io/blog/2016/07/working-with-containers-image-made-easy/

[2]: https://access.redhat.com/articles/2750891

Best Regards

Scott M

On 06/20/2017 05:23 PM, Alexis Richardson via cncf-toc wrote:
Thanks Richard. +1 on .debs. My 2c is that signing functionality used to be quite inhumane, and any project seeking to do better could certainly focus on being "pleasant". Although the Notary presentation didn't highlight this specifically, it sounded like they haven't ignored it either.


On Tue, Jun 20, 2017 at 7:38 PM, Richard Hartmann <richih@...> wrote:

On Tue, Jun 20, 2017 at 6:03 PM, Alexis Richardson via cncf-toc
<cncf-toc@...> wrote:

> Thanks Patrick & Docker people for Notary pres. I personally found it very
> useful & educational, having avoided package signing myself as much as
> possible ;-)
>
> I would love to understand how a GPG person would make the case for sticking
> with just that.

Speaking as a Debian Developer, most of my work in that regard is
underpinned by GnuPG. A lot of the functionality mentioned could be
built with GnuPG, and installed base and integration in many, many
workflows and systems is a huge advantage in potential adoption. That
being said, features like built-in quorum, expiring signatures, and
other mechanisms can't easily be replicated with GnuPG, or its
brethren, in their current form.

I can see merit in both extending the PGP world to cover these aspects
and in creating a new infrastructure.

I am willing to bet that feature velocity will be higher outside of
the PGP ecosystem as the installed base could be a disadvantage in
this context. Also, some mechanisms are not designed for anything
exceeding a certain scale.


While this is not an endorsement of any particular project or path
forward, I can say that the general functionality is highly needed.
Years ago, I implemented a data store for a financial customer with
third-party commercial hashsum timestamping services; that was not
very pleasant at all. The functionality in and of itself would be
useful in a _lot_ of regards.


Richard




--

Scott McCarty, RHCA

Technical Product Marketing: Containers

Email: smccarty@...

Phone: 312-660-3535

Cell: 330-807-1043

Web: http://crunchtools.com

When should you split your application into multiple containers? http://red.ht/22xKw9i


Re: Notary/TuF & GPG (& Harbor)

alexis richardson
 

Thanks Richard.  +1 on .debs.  My 2c is that signing functionality used to be quite inhumane, and any project seeking to do better could certainly focus on being "pleasant".  Although the Notary presentation didn't highlight this specifically, it sounded like they haven't ignored it either.


On Tue, Jun 20, 2017 at 7:38 PM, Richard Hartmann <richih@...> wrote:
On Tue, Jun 20, 2017 at 6:03 PM, Alexis Richardson via cncf-toc
<cncf-toc@...> wrote:

> Thanks Patrick & Docker people for Notary pres.  I personally found it very
> useful & educational, having avoided package signing myself as much as
> possible ;-)
>
> I would love to understand how a GPG person would make the case for sticking
> with just that.

Speaking as a Debian Developer, most of my work in that regard is
underpinned by GnuPG. A lot of the functionality mentioned could be
built with GnuPG, and installed base and integration in many, many
workflows and systems is a huge advantage in potential adoption. That
being said, features like built-in quorum, expiring signatures, and
other mechanisms can't easily be replicated with GnuPG, or its
brethren, in their current form.

I can see merit in both extending the PGP world to cover these aspects
and in creating a new infrastructure.

I am willing to bet that feature velocity will be higher outside of
the PGP ecosystem as the installed base could be a disadvantage in
this context. Also, some mechanisms are not designed for anything
exceeding a certain scale.


While this is not an endorsement of any particular project or path
forward, I can say that the general functionality is highly needed.
Years ago, I implemented a data store for a financial customer with
third-party commercial hashsum timestamping services; that was not
very pleasant at all. The functionality in and of itself would be
useful in a _lot_ of regards.


Richard


Re: Notary/TuF & GPG (& Harbor)

Richard Hartmann
 

On Tue, Jun 20, 2017 at 6:03 PM, Alexis Richardson via cncf-toc
<cncf-toc@...> wrote:

> Thanks Patrick & Docker people for Notary pres. I personally found it very
> useful & educational, having avoided package signing myself as much as
> possible ;-)
>
> I would love to understand how a GPG person would make the case for sticking
> with just that.
Speaking as a Debian Developer, most of my work in that regard is
underpinned by GnuPG. A lot of the functionality mentioned could be
built with GnuPG, and installed base and integration in many, many
workflows and systems is a huge advantage in potential adoption. That
being said, features like built-in quorum, expiring signatures, and
other mechanisms can't easily be replicated with GnuPG, or its
brethren, in their current form.

I can see merit in both extending the PGP world to cover these aspects
and in creating a new infrastructure.

I am willing to bet that feature velocity will be higher outside of
the PGP ecosystem as the installed base could be a disadvantage in
this context. Also, some mechanisms are not designed for anything
exceeding a certain scale.


While this is not an endorsement of any particular project or path
forward, I can say that the general functionality is highly needed.
Years ago, I implemented a data store for a financial customer with
third-party commercial hashsum timestamping services; that was not
very pleasant at all. The functionality in and of itself would be
useful in a _lot_ of regards.


Richard


Re: Zoom

Camille Fournier
 

To be clear, I dialed in but it was totally unclear how to unmute myself. I own a phone with a mute button; perhaps there's a default setting we could fix so that phone participants aren't muted by default.

On Jun 20, 2017 11:58 AM, "Eduardo Silva" <eduardo@...> wrote:
actually there is a phone-only option. Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)

On Tue, Jun 20, 2017 at 9:55 AM, Camille Fournier via cncf-toc <cncf-toc@...> wrote:
Zoom is cool but I need something phone-only that doesn't mute me in a way I can't control myself. Can we fix the config default or move to something else?

C





--
Eduardo Silva
Open Source, Treasure Data
http://www.treasuredata.com/opensource

 


Re: Notary/TuF & GPG (& Harbor)

alexis richardson
 

That's good info.

Keen to learn more from the community about this use case and project!


On Tue, 20 Jun 2017, 18:05 Solomon Hykes, <solomon.hykes@...> wrote:
Notary has also been shipping to enterprise customers as part of Docker EE. Good to know VMware has followed suit. If enterprise adoption is a point of evaluation, we can put together a few case studies.

On Tuesday, June 20, 2017, Mark Peek via cncf-toc <cncf-toc@...> wrote:

Harbor is an open source enterprise registry built on top of Docker Distribution. It adds enterprise features such as RBAC, LDAP/AD support, auditing, and Notary integration, among others (follow the link below). While standalone, it is also shipped with the vSphere Integrated Containers product.

 

https://github.com/vmware/harbor

 

My apologies if there was confusion on my Notary/Harbor comment on the call. The Notary team was asked about the number of GitHub stars and/or the broader community. The point I was trying to make in support is that, since Notary is included in Harbor (with over 2k stars) and shipping to enterprise customers, the Notary project has more scope than just its own repo.

 

Mark

 

From: Alexis Richardson <alexis@...>
Date: Tuesday, June 20, 2017 at 9:03 AM
To: Alexis Richardson via cncf-toc <cncf-toc@...>
Cc: Patrick Chanezon <patrick.chanezon@...>
Subject: Notary/TuF & GPG (& Harbor)

 

Hi all 

 

Thanks Patrick & Docker people for Notary pres.  I personally found it very useful & educational, having avoided package signing myself as much as possible ;-)

 

I would love to understand how a GPG person would make the case for sticking with just that.

 

I would love to hear more from Mark about Harbor as a broader use case for Notary.

 

alexis