TOC-Members,
This email is a follow-up to a conversation I started with dims.
I am asking you to rethink the current Sandbox process, mostly regarding speed. The idea of having a way to collaborate across organisations under a neutral foundation is a key ideal of the CNCF. However, this is proving to get harder, mostly because of the speed of acceptance.
Let me share an example. We have put together an industry consortium to define a common, vendor-neutral standard for feature flagging (https://openfeature.dev), spanning key industry players and end users (see interested parties: https://github.com/open-feature/community/blob/main/interested-parties.md). We submitted for sandbox early this year, and given the current backlog and the pace of evaluating projects, it is very likely that the project will not be accepted until late this year.
Pulling these activities off, getting buy-in from key stakeholders, and driving momentum to move them forward is a major effort. A key part of the process is being able to operate under a neutral entity like the CNCF. If this process takes a very long time, it has a negative impact on these initiatives. People might lose interest, change roles or jobs, etc.
The sandbox process is an essential part of evolving the cloud native landscape, but it is broken and needs to evolve.
I am proposing to make this a continuous process, perhaps involving TAGs (again), who could help handle the workload and define a set of criteria allowing projects to prepare for being accepted quickly. Just some ideas on criteria:
- Obviously, the project being cloud native
- Clear goals and a roadmap
- A community engagement plan
- A team/consortium that can deliver on the project's goals
Immediate steps should then be to get the current backlog down and to define a "service level" by when projects should be able to expect a response.
I am willing to help improve the process, if needed/wanted.
Davanum Srinivas
Alois,
Big thanks for bringing this up!
Some observations from me:
- There are 21 applications currently in the queue; a bunch of them are resubmissions where the TOC had a set of questions and they came back with answers.
- The process needs to be consistent (irrespective of when a project was submitted or who is evaluating it) to prevent angst/worry on the part of the submitters.
- TAGs are not particularly healthy as they are understaffed; we do push some of the submissions to talk to TAGs or k8s SIGs, for example, and come back with written responses and endorsements.
- Getting time on calendars for everyone on the TOC is a challenge, so we have run into (and will run into) quorum issues trying to schedule additional calls to get through the backlog.
- "People might lose interest, change roles or jobs, etc." << I get it, but I'd rather not accept something that is ephemeral and doesn't really have folks who can drive things for the longer term. Sorry.
- On a good day, the TOC is able to handle about 10-12 submissions. Trying to do more would be just rubber-stamping instead of actually looking through, reading, and watching the material the submitters have provided.
I do get the need for speed, and I agree that we need to do better. So let's have this conversation and see how we can proceed next.
Alois, I am happy to add this to the TOC agenda and walk through the issues and work through possible solutions,
thanks, Dims
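To make the queue pressure dims describes concrete, here is a rough back-of-the-envelope sketch; only the 21-deep queue and the 10-12 reviews per meeting come from his note, while the review cadence and resubmission rate are assumptions:

```python
# Rough queue math for the Sandbox backlog.
# From the note above: ~21 applications queued, ~10-12 handled per good meeting.
# Assumptions: roughly one review session per month, and some share of reviews
# bounce back with questions and re-enter the queue as resubmissions.

queue_depth = 21          # applications currently waiting
reviews_per_meeting = 11  # midpoint of the 10-12 figure above
resubmit_rate = 0.3       # assumed share that comes back with follow-ups

# Resubmissions re-enter the queue, so the effective drain per meeting is lower.
effective_drain = reviews_per_meeting * (1 - resubmit_rate)

meetings_to_clear = queue_depth / effective_drain
print(f"~{meetings_to_clear:.1f} review sessions to drain the current queue, "
      "ignoring new submissions")
# ~2.7 sessions; any growth in new submissions pushes this out further,
# which is why a published "service level" would help set expectations.
```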
Chris Short
Just a thought: could the Sandbox process be amended to be run by a yet-to-be-created committee/council/board designed to be nimble?
Chris Short
He/Him/His
Sr. Developer Advocate, AWS Kubernetes (GitOps)
TZ=America/Detroit
Davanum Srinivas
Chris,
I would prefer to first beef up our existing bodies rather than spread folks who are already doing too much even thinner. My personal preference.
However, I am not taking it off the table.
It would be wonderful if folks on this thread who are not yet active in a TAG picked one or more TAGs and actively participated in their activities.
-- Dims
Josh Berkus
On 4/30/22 09:18, Davanum Srinivas wrote:
> there are 21 applications currently in the queue, a bunch of them are resubmissions where the TOC has a set of questions and they came back with answers.
FWIW, this part of the process could be made considerably more efficient for both the projects and the TOC.
Right now, when sandbox projects are sent out and come back with answers, there's no paper trail for the questions and the answers. This forces TOC members to re-evaluate the project from scratch.
My suggestion for a simple process that would solve this: if a project needs to answer questions or get inspected by a TAG, then
1. The TOC writes the questions in an issue in the TOC repo.
2. The project or the TAG (depending) answers those questions in the TOC repo.
3. Evaluation resumes whenever the questions are answered.
This would spare TOC members from effectively starting over each time a project re-applies.
--
Josh Berkus
Kubernetes Community Architect
OSPO, OCTO
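Josh's three steps are simple enough that they could even be automated. A minimal sketch of step 1 using PyGithub, assuming a token with write access; the label name, issue text, and project placeholder are invented for illustration:

```python
# Sketch: open a tracking issue in the TOC repo for review questions, so the
# Q&A has a paper trail between evaluation rounds (step 1 of the proposal).
from github import Github

gh = Github("YOUR_GITHUB_TOKEN")  # hypothetical token with repo access
repo = gh.get_repo("cncf/toc")

questions = [
    "How does the project relate to existing CNCF projects in this space?",
    "Which organisations are actively contributing today?",
]

body = (
    "Questions from the TOC review of <project>:\n\n"
    + "\n".join(f"- [ ] {q}" for q in questions)
    + "\n\nPlease answer inline; evaluation resumes once all boxes are checked."
)

issue = repo.create_issue(
    title="[sandbox-review] <project>: TOC questions",
    body=body,
    labels=["sandbox-review"],  # hypothetical label
)
print(f"Tracking issue: {issue.html_url}")
```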
Davanum Srinivas
I like that, Josh! A bit more work on the TOC side, but the paper trail is for sure a good idea.
Amye, TOC members, please chime in as well.
-- Dims
PS: we'll keep talking, when ready we can PR the changes to existing process(es) and then make it official (and then roll it out).
Josh Gavant
+1 to pushing more to TAGs. Perhaps each proposed project can be assigned to a TAG and a member of the TAG can lead a technical review and guide the project's leads on criteria for acceptance. That could help TOC reviews go more smoothly, make them more likely to succeed, and ensure projects and contributors don't get lost or feel unsupported along the journey.
It could also give first-time CNCF/TAG contributors an idea of where to start - they could pick an open project for the TAG and review and present it to the group.
As a start in this direction a couple months ago I created a label in TAG App Delivery to track project review requests from TOC: https://github.com/cncf/tag-app-delivery/issues?q=label%3Atoc-review
Matt Young
We have been thinking about this in TAG Observability as well, and have work in flight that's related:
* Form Program: Annual Sandbox Review [1]
* Create summary slides [2]
Will have details Tuesday as part of the TAG update.
Matt
Matt Farina
I, for one, would love to see the sandbox process get faster and improve.
With regard to moving more work to the TAGs, two things come to mind.
First, when TAGs did more in the past, they were inconsistent with each other and added their own criteria. This was a problem I don't want to see again. For example, I remember when one project lead was proposing a project for sandbox. He was frustrated because his project was criticized for not meeting a graduation criterion and for a criterion of the TAG's own making. I don't think we want this to happen again.
Second, sandbox projects don't get or need an in-depth technical analysis. That shows up for incubation. I'm wondering: what would a TAG do here that wouldn't be repeated by the TOC when they go to look at it?
Having been through sandbox reviews twice now, and having given advice to some projects that wanted to go for sandbox, I've learned a few areas that could use some improvement...
- I've answered a lot of questions about things not in the docs: things that provide context on the CNCF, what sandbox is, what I think the TOC is looking for, and how to communicate well to the TOC. I think this could be better documented.
- TAGs sit at a unique intersection: they have experts in an area and they work with the projects. I (and the rest of the TOC) don't scale on advising projects for sandbox. The TAGs may be able to do that. While I wouldn't require it, it could be useful for those who want to submit a sandbox project to present to the appropriate TAG and get guidance from them. For those who need it, getting some mentoring from a TAG could be useful.
These are just my initial thoughts. Happy to hear agreement, disagreement, or things built upon this.
Cheers,
Matt
Alexis Richardson
+1
Liz Rice
A little history: the current process was supposed to be super-lightweight, to reflect the very, very low bar for Sandbox projects - essentially, "is it cloud native?". I don't remember the exact number, but I'm pretty sure we got through a lot more than 12 applications in the first meeting.
Maybe it's worth the TOC revisiting what that low bar really should be, so that it's easier and quicker to assess? Here's a suggestion that would make it super lightweight but, I think, still in line with the CNCF mission.
One of our reasons to exist is to enable multiple organisations to have a neutral place to collaborate, even if the project is little more than at the paper-napkin stage. Based on this, we could define the bar for Sandbox as: a project needs support from a minimum of two CNCF member organisations who consider themselves stakeholders in the project. That could mean they're involved in building it, or interested in using it. The onus is on the project to find those stakeholders before applying. The TOC's approval would simply be a check that they agree it's a cloud native project and that they don't have any other objection to it being included.
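One attraction of this bar is that it is almost mechanical to check. A sketch of what that check could look like; the application format and the membership roster here are hypothetical, and only the two-stakeholder rule itself comes from the suggestion above:

```python
# Sketch: the proposed Sandbox bar expressed as a mechanical check.
# The application structure and member roster are invented for illustration.

CNCF_MEMBERS = {"Dynatrace", "AWS", "ExampleCorp"}  # stand-in for the real roster

def meets_sandbox_bar(application: dict, min_stakeholders: int = 2) -> bool:
    """True if at least `min_stakeholders` distinct CNCF member organisations
    declare themselves stakeholders (building the project or wanting to use it)."""
    stakeholders = {
        org
        for org in application.get("stakeholder_orgs", [])
        if org in CNCF_MEMBERS
    }
    return len(stakeholders) >= min_stakeholders

app = {"name": "openfeature", "stakeholder_orgs": ["Dynatrace", "AWS", "SomeStartup"]}
print(meets_sandbox_bar(app))  # True: two member organisations back it
```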
Liz Rice
I should add that's not intended as a criticism - the number of very-early-stage applications from individuals and single vendors has increased, which over time opened up the question for the TOC of whether it's really right to commit CNCF resources to these projects.
Those discussions naturally move us away from the original intention that the process should involve very little assessment or subjective judgement (e.g. the intention was to avoid a complicated definition of what is "mature enough" for sandbox).
What resources do sandbox projects consume? How is that resource consumption justified?
IMO the main effort of a sandbox project should be getting into a position to apply for incubation, keep going a bit longer, or shut down. This pruning should be pretty good at keeping out bad projects.
Richard Hartmann
Replying top-level as my thoughts jump across the thread.
I didn't run the numbers, yet I believe the pace of submissions has picked up. That alone can increase the backlog.
We tried SIGs (now TAGs) doing due diligence for projects. The level of scrutiny, and the closeness to the available guidance material, differed across TAGs. In effect, this meant inconsistent processes, which is arguably unfair. And in cases of disagreement, the TOC is pulled in automatically anyway. A clear delegation from the TOC might be possible, yet project advancement is one of the main tasks of the TOC and arguably what voters expect the TOC to do. In any case, it does not change any of the underlying desires.
What TAGs could provide is an initial proving ground, though: projects could give a presentation and go through questions and feedback in a more limited scope, allowing them to polish their submission.
While I know that the current sandbox process is designed to be very low-barrier, I am still not convinced that this is an obviously desirable design goal. It is true that a neutral playing field is good and helps some projects grow. It is also true that "CNCF project" holds immense marketing value and that many efforts are ephemeral, in particular if largely driven by perf & marketing. Back when the sandbox criteria were relaxed, I was of the opinion that they should remain more stringent. I have come to wonder whether four levels wouldn't be more appropriate: an initial runway on which projects can be put, but also pruned more aggressively if they do not show growth/adoption/the usual. E.g. once submitted, they have three? six? twelve? months to show certain progress, or they are removed outright. Medium term, this might also allow for a smaller jump towards Incubating, which is currently significant.
Orthogonally, I believe we can manage expectations better. One possible approach would be to create dashboards and reports of the underlying data, to keep ourselves honest and set expectations: what are the average and median times a project takes from stage X to stage Y? How has this changed over time? Another would be to rework the process & documentation; e.g. Incubation had distinct requirement docs, which TAGs copied together and deduplicated back during the DD trials.
Having seen things from both sides now, and since the CNCF started, I can understand both the frustration about some timelines and how a few dedicated people are trying to do their best with the time they have. On all sides.
Best, Richard
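Richard's dashboard idea is straightforward to prototype once stage transitions are recorded somewhere. A sketch, assuming a hypothetical CSV export with one project/stage/date row per transition; the file name and column names are invented:

```python
# Sketch: average/median time between maturity stages, from a hypothetical
# CSV of transitions such as:  project,stage,date
#                              openfeature,applied,2022-01-15
#                              openfeature,sandbox,2022-11-01
import csv
from datetime import date
from statistics import mean, median

def days_between_stages(path: str, frm: str, to: str) -> list[int]:
    """Days each project spent between entering `frm` and entering `to`."""
    entered: dict[tuple[str, str], date] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            entered[(row["project"], row["stage"])] = date.fromisoformat(row["date"])
    return [
        (entered[(project, to)] - entered[(project, frm)]).days
        for (project, stage) in entered
        if stage == frm and (project, to) in entered
    ]

durations = days_between_stages("transitions.csv", "applied", "sandbox")
if durations:
    print(f"applied -> sandbox: mean {mean(durations):.0f} days, "
          f"median {median(durations):.0f} days")
```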
Alexis Richardson
Richard, how would you formalise this? The goal, IMO, is to reduce the subjective judgement on entry to sandbox, and increase the quantitative aspects.
Richard Hartmann
On Thu, May 5, 2022 at 3:57 PM Alexis Richardson <alexis@...> wrote:
> Richard how would you formalise this?
Which parts, specifically? I think we need consensus on a direction before we, potentially, start new or update existing processes.
> The goal, IMO, is to reduce the subjective judgment on entry to sandbox, and increase the quantitative aspects
Agreed. At the same time, we need to take Goodhart's law [1] into account. A more quantitative approach to inform project progression is an obvious target for project optimization. At the same time, a more quantitative tally of the TOC's input and work would help make processes more transparent and thus predictable. Put differently, I am not convinced that we can optimize human judgement away, and I would rather try to optimize on the side of transparent processes.
Best,
Richard
[1] https://en.wikipedia.org/wiki/Goodhart%27s_law
Bob Killen
I agree on quite a few points :) Replying inline with some thoughts.
> We tried SIGs (now TAGs) doing due diligence for projects. The level of scrutiny, and the closeness to the guidance material available, was different across TAGs. In effect, this meant inconsistent processes which is arguably unfair. And in cases of disagreements, TOC is pulled in automatically anyway.
The TOC is the approval body and should be involved in DD, but I do think delegating portions of it to the TAGs is still a good idea and could play a large role in scaling the process. If there have been issues with varying levels of scrutiny in the past, this could be a mentorship and/or documentation opportunity. Think "ride-alongs" for reviewing DD, calling out what to look for, etc. I also don't necessarily want to volunteer them, but TAG Contributor Strategy would be an excellent resource to pull in to review areas of governance and community health.
> What TAGs could provide is an initial proving ground, though: Projects could give a presentation and go through questions and feedback in a more limited scope, allowing them to polish their submittal.
+1 to involving them early; an initial consult would likely help with firming up applications before applying to Sandbox.
> While I know that the current sandbox process is designed to be very low barrier, I am still not convinced that this is an obviously desirable design goal. It is true that a neutral playing field is good and helps some projects grow. It is also true that "CNCF project" holds immense marketing value and many efforts are ephemeral, in particular if largely driven by perf & marketing.
> Back when sandbox criteria were relaxed, I was of the opinion that they should remain more stringent.
I have held the same opinion - I thought they should, to a degree, remain more stringent. While Sandbox does not have any formal marketing support from the CNCF, that doesn't mean companies or other groups can't market them as a "CNCF Project." Smaller or independent projects that might not have those sorts of resources will have a harder time climbing the ladder.
> I have come to wonder if four levels wouldn't be more appropriate: An initial runway on which projects can be put; but also pruned more aggressively if they do not show growth/adoption/the usual. E.g. once submitted they have three? six? twelve? months to show certain progress or are removed outright.
I was literally talking with a co-worker about this thought yesterday as a potential idea :) I don't know if it's the answer, but I do really like the idea of a timebox with explicit criteria for exiting. It should not require a deep dive into the project to determine if they are ready to move up to sandbox. I'd also like to see restrictions on the branding/marketing of "CNCF Project" at this level. A potential alternative might be "Cloud Native Inception Project" or something along those lines.
> Another would be to rework the process & documentation; e.g. Incubation had distinct requirement docs which TAGs copied together and deduplicated back during the DD trials.
+1 to firming up requirements/docs. While I think there needs to be some room for TOC discretion, I think being more explicit with requirements will help reduce the toil involved in the DD process.
I have a slew more thoughts, but this subject might be a good discussion during a TOC meeting :)
- Bob