Sandbox process needs to evolve to support cross-industry collaboration


Reitbauer, Alois
 

+1

 

From: cncf-toc@... <cncf-toc@...> on behalf of alexis richardson via lists.cncf.io <alexis=weave.works@...>
Date: Thursday, May 5, 2022 at 18:30
To: Liz Rice <liz@...>
Cc: Alexis Richardson via cncf-toc <cncf-toc@...>, Bob Killen <killen.bob@...>, Richard Hartmann <richih@...>
Subject: Re: [cncf-toc] Sandbox process needs to evolve to support cross-industry collaboration


Remember, the point of CNCF is not to create ways for committees to sit in judgment over projects. It is to make great projects that enable end-user success. That is all.

 

 

On Thu, 5 May 2022, 17:19 Liz Rice, <liz@...> wrote:

Four levels would increase the total work required to assess a project through its life cycle. There might be good reasons to do it, but I don't see that it would solve the initial problem raised on this thread: speeding up the response to the first application at the earliest stage.

 

The original point of Sandbox was to provide a neutral place for experimentation, for projects that wouldn't meet incubation criteria. A project only needs neutrality if and when there's more than one organisation keen to get involved; that's why I'm suggesting that could be the criterion for Sandbox inclusion. I'm further suggesting those organisations should be CNCF members so that they have "skin in the game".

 

(Of course the TOC might decide there are other reasons to support early-stage projects that don't need neutrality - I'm just recalling the original intent.)

 

On Thu, May 5, 2022 at 4:02 PM alexis richardson <alexis@...> wrote:

Stringent implies work, judgement, and value. It seems that scaling wall has been hit already.

 

 

On Thu, 5 May 2022, 15:44 Bob Killen, <killen.bob@...> wrote:

I agree on quite a few points :) Replying inline with some thoughts.

 

> We tried SIGs (now TAGs) doing due diligence for projects. The level
> of scrutiny, and the closeness to the guidance material available, was
> different across TAGs. In effect, this meant inconsistent processes
> which is arguably unfair. And in cases of disagreements, TOC is pulled
> in automatically anyway.

 

The TOC is the approval body and should be involved in DD, but I do think delegating portions of it to the TAGs is still a good idea and could play a large role in scaling the process. If there have been issues with varying levels of scrutiny in the past, this could be a mentorship and/or documentation opportunity. Think "ride-alongs" for reviewing DD, calling out what to look for, etc.  I also don't necessarily want to volunteer them, but TAG Contributor Strategy would be an excellent resource to pull in to review areas of governance and community health.

 

> What TAGs could provide is an initial proving ground, though: Projects
> could give a presentation and go through questions and feedback in a
> more limited scope, allowing them to polish their submittal.

 

+1 to involving them early; an initial consult would likely help with firming up applications before applying to Sandbox.

 

> While I know that the current sandbox process is designed to be very
> low barrier, I am still not convinced that this is an obviously
> desirable design goal. It is true that a neutral playing field is good
> and helps some projects grow. It is also true that "CNCF project"
> holds immense marketing value and many efforts are ephemeral, in
> particular if largely driven by perf & marketing.

> Back when sandbox criteria were relaxed, I was of the opinion that
> they should remain more stringent.

 

I have held the same opinion - I thought they should, to a degree, remain more stringent. While Sandbox does not have any formal marketing support from the CNCF, that doesn't mean companies or other groups can't market them as a "CNCF Project." Smaller or independent projects that might not have those sorts of resources will have a harder time climbing the ladder.

 

> I have come to wonder if four
> levels wouldn't be more appropriate: An initial runway on which
> projects can be put; but also pruned more aggressively if they do not
> show growth/adoption/the usual. E.g. once submitted they have three?
> six? twelve? months to show certain progress or are removed outright.

 

I was literally talking with a co-worker yesterday about this as a potential idea :)

I don't know if it's the answer, but I do really like the idea of a timebox with explicit criteria for exiting. It should not require a deep dive into the project to determine if they are ready to move up to Sandbox. I'd also like to see restrictions on the branding/marketing of "CNCF Project" at this level. A potential alternative might be "Cloud Native Inception Project" or something along those lines.

 

 

> Another would be to rework the process & documentation; e.g.
> Incubation had distinct requirement docs which TAGs copied together
> and deduplicated back during the DD trials.

 

+1 to firming up requirements/docs. While I think there needs to be some room for TOC discretion, I think being more explicit with requirements will help reduce the toil involved with the DD process.

 

 

I have a slew more thoughts, but this subject might be a good discussion during a TOC meeting :)

 

- Bob

 

 

 

On Thu, May 5, 2022 at 7:38 AM Richard Hartmann <richih@...> wrote:

Replying top-level as my thoughts jump across the thread.


I didn't run the numbers, yet I believe that the pace of submissions
has picked up. That alone can increase backlog.

We tried SIGs (now TAGs) doing due diligence for projects. The level
of scrutiny, and the closeness to the guidance material available, was
different across TAGs. In effect, this meant inconsistent processes
which is arguably unfair. And in cases of disagreements, TOC is pulled
in automatically anyway.
A clear delegation from TOC might be possible, yet project advancement
is one of the main tasks of TOC and arguably what voters expect TOC to
do. In any case, it does not change any of the underlying desires.

What TAGs could provide is an initial proving ground, though: Projects
could give a presentation and go through questions and feedback in a
more limited scope, allowing them to polish their submittal.


While I know that the current sandbox process is designed to be very
low barrier, I am still not convinced that this is an obviously
desirable design goal. It is true that a neutral playing field is good
and helps some projects grow. It is also true that "CNCF project"
holds immense marketing value and many efforts are ephemeral, in
particular if largely driven by perf & marketing.
Back when sandbox criteria were relaxed, I was of the opinion that
they should remain more stringent. I have come to wonder if four
levels wouldn't be more appropriate: an initial runway on which
projects can be put, but also pruned more aggressively if they do not
show growth/adoption/the usual. E.g. once submitted, they have three?
six? twelve? months to show certain progress or are removed outright.
Medium term, this might also allow for a smaller jump towards
Incubating, which is currently significant.


Orthogonally, I believe we can manage expectations better. One
possible approach would be to create dashboards and reports of the
underlying data to help manage expectations and keep ourselves honest.
What are the average and median times a project takes from stage X to
stage Y? How has this changed over time?
Another would be to rework the process & documentation; e.g.
Incubation had distinct requirement docs which TAGs copied together
and deduplicated back during the DD trials.



Having seen things from both sides now, and since CNCF started, I can
understand both the frustrations about some timelines better and also
understand how a few dedicated people are trying to do their best with
the time they have. On all sides.


Best,
Richard






Liz Rice <liz@...>
 

There is a different sandbox logo, and projects are required to explicitly say they are Sandbox whenever they mention that they are CNCF projects. The staff are pretty good at chasing up if folks report projects not complying with that.

On Thu, 5 May 2022 at 19:10, alexis richardson <alexis@...> wrote:
We need VCs to sit on their hands until Incubation


On Thu, 5 May 2022 at 19:01, Brendan Burns <bburns@...> wrote:
Just for a historical perspective: when we did this discussion the last time, we identified that there are fundamentally two divergent goals that we have to balance:

Projects Goal #1) Bring multiple, potentially competing parties together in a neutral space so they can collaborate and innovate in open source without worrying about ownership. This goal means that the bar for Sandbox should be as low as possible to facilitate as much collaboration and innovation as possible.

Projects Goal #2) Get the CNCF 'label' for their project from a marketing perspective to spur interest, growth and (potentially) venture capital. This goal means that the bar for Sandbox should be rigorous so that we don't dilute CNCF brand/resources for random projects.

No matter how many lowest levels you add (4 instead of 3, 5 instead of 4, etc.), none of this will go away. At the lowest level you always have to balance these two different, divergent goals.

Where we landed was to try to make the Sandbox bar pretty low, but also to try to manage (and enforce) the usage of the CNCF logo/imprimatur for Sandbox projects.

At the time, we suggested crafting a separate 'sandbox' logo that looked like it was drawn with crayons (and perhaps even had toddlers in a sandbox) so that people really understood that there was no CNCF endorsement implied by being in Sandbox.

AFAIK, this never happened, but I think the important lesson is that adding additional levels will not solve the problem; it just moves it.

And also, the problem is fundamentally unsolvable. All you can hope for is achieving some sort of balance (and adjusting from time to time based on experience to retain that balance).

--brendan



From: cncf-toc@... <cncf-toc@...> on behalf of alexis richardson via lists.cncf.io <alexis=weave.works@...>
Sent: Thursday, May 5, 2022 9:26 AM
To: Liz Rice <liz@...>
Cc: Alexis Richardson via cncf-toc <cncf-toc@...>; Bob Killen <killen.bob@...>; Richard Hartmann <richih@...>
Subject: [EXTERNAL] Re: [cncf-toc] Sandbox process needs to evolve to support cross industry collaboation
 
Remember, the point of cncf is not to create ways for committees to sit in judgment over projects.  It is to make great projects that enable end user success.  That is all.


On Thu, 5 May 2022, 17:19 Liz Rice, <liz@...> wrote:
Four levels would increase the total work required to assess a project through their life cycle. There might be good reasons to do it, but I don't see that it would solve the initial problem raised on this thread: speeding up the response to the first application at the earliest stage. 

The original point of Sandbox was to enable a neutral place for experimentation, for projects that wouldn't meet incubation criteria. A project only needs neutrality if and when there's more than one organisation keen to get involved; that's why I'm suggesting that could be the criteria for Sandbox inclusion. I'm further suggesting those organizations should be CNCF members so that they have "skin in the game"

(Of course the TOC might decide there are other reasons to support early stage projects that don't need neutrality - I'm just reminding the original intent.)

On Thu, May 5, 2022 at 4:02 PM alexis richardson <alexis@...> wrote:
Stringent implies work, judgement, and value.  It seems that scaling wall has been hit already..


On Thu, 5 May 2022, 15:44 Bob Killen, <killen.bob@...> wrote:
I agree on quite a few points :)  Replying in line with some thoughts

> We tried SIGs (now TAGs) doing due diligence for projects. The level
> of scrutiny, and the closeness to the guidance material available, was
> different across TAGs. In effect, this meant inconsistent processes
> which is arguably unfair. And in cases of disagreements, TOC is pulled
> in automatically anyway.

The TOC is the approval body and should be involved in DD, but I do think delegating portions of it to the TAGs is still a good idea and could play a large role in scaling the process. If there have been issues with varying levels of scrutiny in the past, this could be a mentorship and/or documentation opportunity. Think "ride-alongs" for reviewing DD, calling out what to look for, etc.  I also don't necessarily want to volunteer them, but TAG Contributor Strategy would be an excellent resource to pull in to review areas of governance and community health.

> What TAGs could provide is an initial proving ground, though: Projects
> could give a presentation and go through questions and feedback in a
> more limited scope, allowing them to polish their submittal.

+1 to involving them early, an initial consult would likely help with firming up applications before applying to Sandbox.

> While I know that the current sandbox process is designed to be very
> low barrier, I am still not convinced that this is an obviously
> desirable design goal. It is true that a neutral playing field is good
> and helps some projects grow. It is also true that "CNCF project"
> holds immense marketing value and many efforts are ephemeral, in
> particular if largely driven by perf & marketing.
> Back when sandbox criteria were relaxed, I was of the opinion that
> they should remain more stringent.

I have held the same opinion - I thought they should, to a degree, remain more stringent. While Sandbox does not have any formal marketing support from the CNCF, that doesn't mean companies or other groups can't market them as a "CNCF Project." Smaller or independent projects that might not have those sorts of resources will have a harder time climbing the ladder.

> I have come to wonder if four
> levels wouldn't be more appropriate: An initial runway on which
> projects can be put; but also pruned more aggressively if they do not
> show growth/adoption/the usual. E.g. once submitted they have three?
> six? twelve? months to show certain progress or are removed outright.

I was literally talking with a co-worker about this thought yesterday as a potential idea :)
I don't know if it's the answer, but I do really like the idea of a timebox with explicit criteria for exiting. It should not require a deep dive into the project to determine if they are ready to move up to sandbox. I'd also like to see restrictions on the branding/marketing of "CNCF Project" at this level. A potential alternative might be "Cloud Native Inception Project" or something along those lines.


> Another would be to rework the process & documentation; e.g.
> Incubation had distinct requirement docs which TAGs copied together
> and deduplicated back during the DD trials.

+1 to firming up requirements/docs. While I think there needs to be some room for TOC discretion, I think being more explicit with requirements will help reduce the toil involved with the DD process.


I have a slew more thoughts, but this subject might be a good discussion during a TOC meeting :)

- Bob



On Thu, May 5, 2022 at 7:38 AM Richard Hartmann <richih@...> wrote:
Replying top-level as my thoughts jump across the thread.


I didn't run the numbers, yet I believe that the pace of submissions
has picked up. That alone can increase backlog.

We tried SIGs (now TAGs) doing due diligence for projects. The level
of scrutiny, and the closeness to the guidance material available, was
different across TAGs. In effect, this meant inconsistent processes
which is arguably unfair. And in cases of disagreements, TOC is pulled
in automatically anyway.
A clear delegation from TOC might be possible, yet project advancement
is one of the main tasks of TOC and arguably what votees expect TOC to
do. In any case, it does change any of the underlying desires.

What TAGs could provide is an initial proving ground, though: Projects
could give a presentation and go through questions and feedback in a
more limited scope, allowing them to polish their submittal.


While I know that the current sandbox process is designed to be very
low barrier, I am still not convinced that this is an obviously
desirable design goal. It is true that a neutral playing field is good
and helps some projects grow. It is also true that "CNCF project"
holds immense marketing value and many efforts are ephemeral, in
particular if largely driven by perf & marketing.
Back when sandbox criteria were relaxed, I was of the opinion that
they should remain more stringent. I have come to wonder if four
levels wouldn't be more appropriate: An initial runway on which
projects can be put; but also pruned more aggressively if they do not
show growth/adoption/the usual. E.g. once submitted they have three?
six? twelve? months to show certain progress or are removed outright.
Medium term, this might also allow for a smaller jump towards
Incubating, which is currently significant.


Orthogonally, I believe we can manage expectations better. One
possible approach would be to create dashboards and reports of the
underlying data to help manage expectations and keep ourselves honest.
What are the average and median times a project takes from stage X to
stage Y? How has this changed over time?
Another would be to rework the process & documentation; e.g.
Incubation had distinct requirement docs which TAGs copied together
and deduplicated back during the DD trials.



Having seen things from both sides now, and since CNCF started, I can
understand both the frustrations about some timelines better and also
understand how a few dedicated people are trying to do their best with
the time they have. On all sides.


Best,
Richard






alexis richardson
 

We need VCs to sit on their hands until Incubation


On Thu, 5 May 2022 at 19:01, Brendan Burns <bburns@...> wrote:
Just for a historic perspective. When we did this discussion the last time, we identified that there are fundamentally two divergent goals that we have to balance:

Projects Goal #1) Bring multiple, potentially competing parties together in a neutral space so they can collaborate and innovate in open source without worrying about ownership. This goal means that the bar for Sandbox should be as low as possible to facilitate as much collaboration and innovation as possible.

Projects Goal #2) Get the CNCF 'label' for their project from a marketing perspective to spur interest, growth and (potentially) venture capital. This goal means that the bar for Sandbox should be rigorous so that we don't dilute CNCF brand/resources for random projects.

No matter how many lowest levels you add (4 instead of 3, 5 instead of 4, etc) none of this will go away. At the lowest level you always have to balance these two different, divergent goals.

Where we landed was that to try to make the Sandbox bar pretty low, but also try to make (and enforce) the usage of the CNCF logo/imprimatur for Sandbox projects.

At the time, we suggested crafting a separate 'sandbox' logo that looked like it was drawn with crayons (and perhaps even had toddlers in a sandbox) so that people really understood that there was no CNCF endorsement implied by being in Sandbox.

Afaik, this never happened, but I think the important lesson is that adding additional levels will not solve the problem, it just moves it.

And also, the problem is fundamentally unsolveable. All you can hope for is achieving some sort of balance (and adjusting from time to time based on experience to retain this balance)

--brendan



From: cncf-toc@... <cncf-toc@...> on behalf of alexis richardson via lists.cncf.io <alexis=weave.works@...>
Sent: Thursday, May 5, 2022 9:26 AM
To: Liz Rice <liz@...>
Cc: Alexis Richardson via cncf-toc <cncf-toc@...>; Bob Killen <killen.bob@...>; Richard Hartmann <richih@...>
Subject: [EXTERNAL] Re: [cncf-toc] Sandbox process needs to evolve to support cross industry collaboation
 
Remember, the point of cncf is not to create ways for committees to sit in judgment over projects.  It is to make great projects that enable end user success.  That is all.


On Thu, 5 May 2022, 17:19 Liz Rice, <liz@...> wrote:
Four levels would increase the total work required to assess a project through their life cycle. There might be good reasons to do it, but I don't see that it would solve the initial problem raised on this thread: speeding up the response to the first application at the earliest stage. 

The original point of Sandbox was to enable a neutral place for experimentation, for projects that wouldn't meet incubation criteria. A project only needs neutrality if and when there's more than one organisation keen to get involved; that's why I'm suggesting that could be the criteria for Sandbox inclusion. I'm further suggesting those organizations should be CNCF members so that they have "skin in the game"

(Of course the TOC might decide there are other reasons to support early stage projects that don't need neutrality - I'm just reminding the original intent.)

On Thu, May 5, 2022 at 4:02 PM alexis richardson <alexis@...> wrote:
Stringent implies work, judgement, and value.  It seems that scaling wall has been hit already..


On Thu, 5 May 2022, 15:44 Bob Killen, <killen.bob@...> wrote:
I agree on quite a few points :)  Replying in line with some thoughts

> We tried SIGs (now TAGs) doing due diligence for projects. The level
> of scrutiny, and the closeness to the guidance material available, was
> different across TAGs. In effect, this meant inconsistent processes
> which is arguably unfair. And in cases of disagreements, TOC is pulled
> in automatically anyway.

The TOC is the approval body and should be involved in DD, but I do think delegating portions of it to the TAGs is still a good idea and could play a large role in scaling the process. If there have been issues with varying levels of scrutiny in the past, this could be a mentorship and/or documentation opportunity. Think "ride-alongs" for reviewing DD, calling out what to look for, etc.  I also don't necessarily want to volunteer them, but TAG Contributor Strategy would be an excellent resource to pull in to review areas of governance and community health.

> What TAGs could provide is an initial proving ground, though: Projects
> could give a presentation and go through questions and feedback in a
> more limited scope, allowing them to polish their submittal.

+1 to involving them early, an initial consult would likely help with firming up applications before applying to Sandbox.

> While I know that the current sandbox process is designed to be very
> low barrier, I am still not convinced that this is an obviously
> desirable design goal. It is true that a neutral playing field is good
> and helps some projects grow. It is also true that "CNCF project"
> holds immense marketing value and many efforts are ephemeral, in
> particular if largely driven by perf & marketing.
> Back when sandbox criteria were relaxed, I was of the opinion that
> they should remain more stringent.

I have held the same opinion - I thought they should, to a degree, remain more stringent. While Sandbox does not have any formal marketing support from the CNCF, that doesn't mean companies or other groups can't market them as a "CNCF Project." Smaller or independent projects that might not have those sorts of resources will have a harder time climbing the ladder.

> I have come to wonder if four
> levels wouldn't be more appropriate: An initial runway on which
> projects can be put; but also pruned more aggressively if they do not
> show growth/adoption/the usual. E.g. once submitted they have three?
> six? twelve? months to show certain progress or are removed outright.

I was literally talking with a co-worker about this thought yesterday as a potential idea :)
I don't know if it's the answer, but I do really like the idea of a timebox with explicit criteria for exiting. It should not require a deep dive into the project to determine if they are ready to move up to sandbox. I'd also like to see restrictions on the branding/marketing of "CNCF Project" at this level. A potential alternative might be "Cloud Native Inception Project" or something along those lines.


> Another would be to rework the process & documentation; e.g.
> Incubation had distinct requirement docs which TAGs copied together
> and deduplicated back during the DD trials.

+1 to firming up requirements/docs. While I think there needs to be some room for TOC discretion, I think being more explicit with requirements will help reduce the toil involved with the DD process.


I have a slew more thoughts, but this subject might be a good discussion during a TOC meeting :)

- Bob



On Thu, May 5, 2022 at 7:38 AM Richard Hartmann <richih@...> wrote:
Replying top-level as my thoughts jump across the thread.


I didn't run the numbers, yet I believe that the pace of submissions
has picked up. That alone can increase backlog.

We tried SIGs (now TAGs) doing due diligence for projects. The level
of scrutiny, and the closeness to the guidance material available, was
different across TAGs. In effect, this meant inconsistent processes
which is arguably unfair. And in cases of disagreements, TOC is pulled
in automatically anyway.
A clear delegation from TOC might be possible, yet project advancement
is one of the main tasks of TOC and arguably what votees expect TOC to
do. In any case, it does change any of the underlying desires.

What TAGs could provide is an initial proving ground, though: Projects
could give a presentation and go through questions and feedback in a
more limited scope, allowing them to polish their submittal.


While I know that the current sandbox process is designed to be very
low barrier, I am still not convinced that this is an obviously
desirable design goal. It is true that a neutral playing field is good
and helps some projects grow. It is also true that "CNCF project"
holds immense marketing value and many efforts are ephemeral, in
particular if largely driven by perf & marketing.
Back when sandbox criteria were relaxed, I was of the opinion that
they should remain more stringent. I have come to wonder if four
levels wouldn't be more appropriate: An initial runway on which
projects can be put; but also pruned more aggressively if they do not
show growth/adoption/the usual. E.g. once submitted they have three?
six? twelve? months to show certain progress or are removed outright.
Medium term, this might also allow for a smaller jump towards
Incubating, which is currently significant.


Orthogonally, I believe we can manage expectations better. One
possible approach would be to create dashboards and reports of the
underlying data to help manage expectations and keep ourselves honest.
What are the average and median times a project takes from stage X to
stage Y? How has this changed over time?
Another would be to rework the process & documentation; e.g.
Incubation had distinct requirement docs which TAGs copied together
and deduplicated back during the DD trials.



Having seen things from both sides now, and since CNCF started, I can
understand both the frustrations about some timelines better and also
understand how a few dedicated people are trying to do their best with
the time they have. On all sides.


Best,
Richard






Brendan Burns
 

Just for a historic perspective. When we did this discussion the last time, we identified that there are fundamentally two divergent goals that we have to balance:

Projects Goal #1) Bring multiple, potentially competing parties together in a neutral space so they can collaborate and innovate in open source without worrying about ownership. This goal means that the bar for Sandbox should be as low as possible to facilitate as much collaboration and innovation as possible.

Projects Goal #2) Get the CNCF 'label' for their project from a marketing perspective to spur interest, growth and (potentially) venture capital. This goal means that the bar for Sandbox should be rigorous so that we don't dilute CNCF brand/resources for random projects.

No matter how many lowest levels you add (4 instead of 3, 5 instead of 4, etc) none of this will go away. At the lowest level you always have to balance these two different, divergent goals.

Where we landed was that to try to make the Sandbox bar pretty low, but also try to make (and enforce) the usage of the CNCF logo/imprimatur for Sandbox projects.

At the time, we suggested crafting a separate 'sandbox' logo that looked like it was drawn with crayons (and perhaps even had toddlers in a sandbox) so that people really understood that there was no CNCF endorsement implied by being in Sandbox.

Afaik, this never happened, but I think the important lesson is that adding additional levels will not solve the problem, it just moves it.

And also, the problem is fundamentally unsolveable. All you can hope for is achieving some sort of balance (and adjusting from time to time based on experience to retain this balance)

--brendan



From: cncf-toc@... <cncf-toc@...> on behalf of alexis richardson via lists.cncf.io <alexis=weave.works@...>
Sent: Thursday, May 5, 2022 9:26 AM
To: Liz Rice <liz@...>
Cc: Alexis Richardson via cncf-toc <cncf-toc@...>; Bob Killen <killen.bob@...>; Richard Hartmann <richih@...>
Subject: [EXTERNAL] Re: [cncf-toc] Sandbox process needs to evolve to support cross industry collaboation
 
Remember, the point of cncf is not to create ways for committees to sit in judgment over projects.  It is to make great projects that enable end user success.  That is all.


On Thu, 5 May 2022, 17:19 Liz Rice, <liz@...> wrote:
Four levels would increase the total work required to assess a project through their life cycle. There might be good reasons to do it, but I don't see that it would solve the initial problem raised on this thread: speeding up the response to the first application at the earliest stage. 

The original point of Sandbox was to enable a neutral place for experimentation, for projects that wouldn't meet incubation criteria. A project only needs neutrality if and when there's more than one organisation keen to get involved; that's why I'm suggesting that could be the criteria for Sandbox inclusion. I'm further suggesting those organizations should be CNCF members so that they have "skin in the game"

(Of course the TOC might decide there are other reasons to support early stage projects that don't need neutrality - I'm just reminding the original intent.)

On Thu, May 5, 2022 at 4:02 PM alexis richardson <alexis@...> wrote:
Stringent implies work, judgement, and value.  It seems that scaling wall has been hit already..


On Thu, 5 May 2022, 15:44 Bob Killen, <killen.bob@...> wrote:
I agree on quite a few points :)  Replying in line with some thoughts

> We tried SIGs (now TAGs) doing due diligence for projects. The level
> of scrutiny, and the closeness to the guidance material available, was
> different across TAGs. In effect, this meant inconsistent processes
> which is arguably unfair. And in cases of disagreements, TOC is pulled
> in automatically anyway.

The TOC is the approval body and should be involved in DD, but I do think delegating portions of it to the TAGs is still a good idea and could play a large role in scaling the process. If there have been issues with varying levels of scrutiny in the past, this could be a mentorship and/or documentation opportunity. Think "ride-alongs" for reviewing DD, calling out what to look for, etc.  I also don't necessarily want to volunteer them, but TAG Contributor Strategy would be an excellent resource to pull in to review areas of governance and community health.

> What TAGs could provide is an initial proving ground, though: Projects
> could give a presentation and go through questions and feedback in a
> more limited scope, allowing them to polish their submittal.

+1 to involving them early, an initial consult would likely help with firming up applications before applying to Sandbox.

> While I know that the current sandbox process is designed to be very
> low barrier, I am still not convinced that this is an obviously
> desirable design goal. It is true that a neutral playing field is good
> and helps some projects grow. It is also true that "CNCF project"
> holds immense marketing value and many efforts are ephemeral, in
> particular if largely driven by perf & marketing.
> Back when sandbox criteria were relaxed, I was of the opinion that
> they should remain more stringent.

I have held the same opinion - I thought they should, to a degree, remain more stringent. While Sandbox does not have any formal marketing support from the CNCF, that doesn't mean companies or other groups can't market them as a "CNCF Project." Smaller or independent projects that might not have those sorts of resources will have a harder time climbing the ladder.

> I have come to wonder if four
> levels wouldn't be more appropriate: An initial runway on which
> projects can be put; but also pruned more aggressively if they do not
> show growth/adoption/the usual. E.g. once submitted they have three?
> six? twelve? months to show certain progress or are removed outright.

I was literally talking with a co-worker about this thought yesterday as a potential idea :)
I don't know if it's the answer, but I do really like the idea of a timebox with explicit criteria for exiting. It should not require a deep dive into the project to determine if it is ready to move up to Sandbox. I'd also like to see restrictions on the branding/marketing of "CNCF Project" at this level. A potential alternative might be "Cloud Native Inception Project" or something along those lines.
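A timebox with explicit exit criteria could be as mechanical as a small lookup, which is exactly what would keep it from requiring a deep dive. A minimal Python sketch, assuming entirely hypothetical criteria and thresholds (the contributor/adopter counts, the six-month window, and the function name are illustrative only, not CNCF policy):

```python
from datetime import date, timedelta

# Hypothetical timebox length; the six-month option floated in this thread.
TIMEBOX = timedelta(days=180)

def timebox_status(submitted: date, today: date,
                   new_contributors: int, adopters: int) -> str:
    """Return a coarse status for an 'inception'-stage project:
    'promote', 'in-window', or 'remove'. Criteria are illustrative."""
    meets_criteria = new_contributors >= 5 and adopters >= 2
    if meets_criteria:
        return "promote"      # ready to apply for Sandbox
    if today - submitted <= TIMEBOX:
        return "in-window"    # still has time to show progress
    return "remove"           # timebox expired without progress

print(timebox_status(date(2022, 1, 1), date(2022, 5, 5), 7, 3))
print(timebox_status(date(2021, 9, 1), date(2022, 5, 5), 1, 0))
```

The point of the sketch is the design choice: if every criterion is a number anyone can read off a dashboard, no reviewer judgement is needed at this stage.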


> Another would be to rework the process & documentation; e.g.
> Incubation had distinct requirement docs which TAGs copied together
> and deduplicated back during the DD trials.

+1 to firming up requirements/docs. While I think there needs to be some room for TOC discretion, I think being more explicit with requirements will help reduce the toil involved with the DD process.


I have a slew more thoughts, but this subject might be a good discussion during a TOC meeting :)

- Bob



On Thu, May 5, 2022 at 7:38 AM Richard Hartmann <richih@...> wrote:
Replying top-level as my thoughts jump across the thread.


I didn't run the numbers, yet I believe that the pace of submissions
has picked up. That alone can increase backlog.

We tried SIGs (now TAGs) doing due diligence for projects. The level
of scrutiny, and the closeness to the guidance material available, was
different across TAGs. In effect, this meant inconsistent processes
which is arguably unfair. And in cases of disagreements, TOC is pulled
in automatically anyway.
A clear delegation from TOC might be possible, yet project advancement
is one of the main tasks of TOC and arguably what voters expect TOC to
do. In any case, it does not change any of the underlying desires.

What TAGs could provide is an initial proving ground, though: Projects
could give a presentation and go through questions and feedback in a
more limited scope, allowing them to polish their submittal.


While I know that the current sandbox process is designed to be very
low barrier, I am still not convinced that this is an obviously
desirable design goal. It is true that a neutral playing field is good
and helps some projects grow. It is also true that "CNCF project"
holds immense marketing value and many efforts are ephemeral, in
particular if largely driven by perf & marketing.
Back when sandbox criteria were relaxed, I was of the opinion that
they should remain more stringent. I have come to wonder if four
levels wouldn't be more appropriate: An initial runway on which
projects can be put; but also pruned more aggressively if they do not
show growth/adoption/the usual. E.g. once submitted they have three?
six? twelve? months to show certain progress or are removed outright.
Medium term, this might also allow for a smaller jump towards
Incubating, which is currently significant.


Orthogonally, I believe we can manage expectations better. One
possible approach would be to create dashboards and reports of the
underlying data to help manage expectations and keep ourselves honest.
What are the average and median times a project takes from stage X to
stage Y? How has this changed over time?
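The average/median dashboard idea could be prototyped with very little code. A sketch using only the standard library, assuming a hypothetical record of stage-entry dates (the project names, field names, and dates below are invented for illustration, not real CNCF data):

```python
from datetime import date
from statistics import mean, median

# Hypothetical stage-entry dates per project (illustrative data only).
transitions = {
    "project-a": {"sandbox": date(2019, 3, 1), "incubating": date(2021, 6, 15)},
    "project-b": {"sandbox": date(2020, 1, 10), "incubating": date(2021, 2, 1)},
    "project-c": {"sandbox": date(2020, 9, 5)},  # still in Sandbox
}

def days_between(stages, src, dst):
    """Days each project took to move from stage `src` to stage `dst`,
    skipping projects that have not reached `dst` yet."""
    return [
        (s[dst] - s[src]).days
        for s in stages.values()
        if src in s and dst in s
    ]

durations = days_between(transitions, "sandbox", "incubating")
print(f"sandbox -> incubating: mean {mean(durations):.0f} days, "
      f"median {median(durations):.0f} days, n={len(durations)}")
```

Re-running this over snapshots of the real data would answer the "how has this changed over time" question directly.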
Another would be to rework the process & documentation; e.g.
Incubation had distinct requirement docs which TAGs copied together
and deduplicated back during the DD trials.



Having seen things from both sides now, and since CNCF started, I can
understand both the frustrations about some timelines better and also
understand how a few dedicated people are trying to do their best with
the time they have. On all sides.


Best,
Richard






Richard Hartmann
 

On Thu, May 5, 2022 at 3:57 PM Alexis Richardson <alexis@...> wrote:

> Richard how would you formalise this?
Which parts, specifically? I think we need consensus on a direction
before we, potentially, start new or update existing processes.


> The goal, IMO, is to reduce the subjective judgment on entry to sandbox, and increase the quantitative aspects
Agreed. At the same time, we need to take Goodhart's law[1] into
account: a more quantitative approach to inform project progression is
an obvious target for projects to optimize against. On the other hand,
a more quantitative tally of TOC's input and work would help make
processes more transparent and thus predictable.

Put differently, I am not convinced that we can optimize human
judgement away and would rather try to optimize on the side of
transparent processes.


Best,
Richard


[1] https://en.wikipedia.org/wiki/Goodhart%27s_law


alexis richardson
 

Richard how would you formalise this?  The goal, IMO, is to reduce the subjective judgment on entry to sandbox, and increase the quantitative aspects


On Thu, 5 May 2022, 13:38 Richard Hartmann, <richih@...> wrote:






Richard Hartmann
 

Replying top-level as my thoughts jump across the thread.


I didn't run the numbers, yet I believe that the pace of submissions
has picked up. That alone can increase backlog.

We tried SIGs (now TAGs) doing due diligence for projects. The level
of scrutiny, and the closeness to the guidance material available, was
different across TAGs. In effect, this meant inconsistent processes
which is arguably unfair. And in cases of disagreements, TOC is pulled
in automatically anyway.
A clear delegation from TOC might be possible, yet project advancement
is one of the main tasks of TOC and arguably what voters expect TOC to
do. In any case, it does not change any of the underlying desires.

What TAGs could provide is an initial proving ground, though: Projects
could give a presentation and go through questions and feedback in a
more limited scope, allowing them to polish their submittal.


While I know that the current sandbox process is designed to have a
very low barrier to entry, I am still not convinced that this is an obviously
desirable design goal. It is true that a neutral playing field is good
and helps some projects grow. It is also true that "CNCF project"
holds immense marketing value and many efforts are ephemeral, in
particular if largely driven by perf & marketing.
Back when sandbox criteria were relaxed, I was of the opinion that
they should remain more stringent. I have come to wonder if four
levels wouldn't be more appropriate: an initial runway onto which
projects can be put, but from which they are pruned more aggressively
if they do not show growth/adoption/the usual. E.g. once submitted,
they have three? six? twelve? months to show certain progress or are
removed outright.
Medium term, this might also allow for a smaller jump towards
Incubating, which is currently significant.


Orthogonally, I believe we can manage expectations better. One
possible approach would be to create dashboards and reports of the
underlying data to help manage expectations and keep ourselves honest.
What are the average and median times a project takes from stage X to
stage Y? How has this changed over time?
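These numbers could be pulled straight from the stage-change history. As a rough sketch of the calculation, using made-up projects and dates (nothing below is real CNCF data):

```python
from datetime import date
from statistics import mean, median

# Hypothetical stage-change records (project, stage, date entered);
# the layout and figures are illustrative only.
records = [
    ("proj-a", "sandbox",    date(2019, 3, 1)),
    ("proj-a", "incubating", date(2021, 6, 1)),
    ("proj-b", "sandbox",    date(2020, 1, 15)),
    ("proj-b", "incubating", date(2021, 9, 1)),
]

def transition_days(records, src, dst):
    """Days each project took to move from stage `src` to stage `dst`."""
    entered = {}
    for project, stage, when in records:
        entered.setdefault(project, {})[stage] = when
    return [
        (stages[dst] - stages[src]).days
        for stages in entered.values()
        if src in stages and dst in stages
    ]

days = transition_days(records, "sandbox", "incubating")
print(f"mean: {mean(days):.0f} days, median: {median(days):.0f} days")
```

The same per-transition lists would feed a dashboard directly, e.g. as a histogram per stage pair, recomputed over time.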
Another would be to rework the process & documentation; e.g.
Incubation had distinct requirement docs, which TAGs merged and
deduplicated back during the DD trials.



Having seen things from both sides now, and since CNCF started, I can
understand both the frustrations about some timelines better and also
understand how a few dedicated people are trying to do their best with
the time they have. On all sides.


Best,
Richard


alexis richardson
 

What resources do sandbox projects consume? How is that resource consumption justified?

IMO the main effort of a sandbox project should be getting into a position to apply for incubation, to keep going a bit longer, or to shut down. This pruning should be pretty good at keeping out bad projects.


On Mon, 2 May 2022, 18:58 Liz Rice, <liz@...> wrote:





Liz Rice
 

I should add, that’s not intended as a criticism - the number of very early-stage applications from individuals and single vendors has increased, which over time opened up the question for the TOC of whether it’s really right to commit CNCF resources for these projects.

Those discussions naturally move us away from the original intention that the process should involve very little assessment or subjective judgement (e.g. the intention was to avoid a complicated definition of what is “mature enough” for sandbox).


On Mon, 2 May 2022 at 18:48, Liz Rice <liz@...> wrote:





Liz Rice <liz@...>
 

A little history: the current process was supposed to be super-lightweight, to reflect the very, very low bar for Sandbox projects - essentially: is it cloud native? I don’t remember the exact number, but I’m pretty sure we got through a lot more than 12 applications in the first meeting.

Maybe it’s worth the TOC revisiting what that low bar really should be, so that it’s easier and quicker to assess? Here’s a suggestion that would make it super-lightweight but, I think, still in line with the CNCF mission.

One of our reasons to exist is to enable multiple organisations to have a neutral place to collaborate, even if the project is little more than at the paper-napkin stage. Based on this, we could define the bar for Sandbox as: a project needs to have support from a minimum of two CNCF member organisations who consider themselves stakeholders in the project. That could mean they’re involved in building it, or interested in using it. The onus is on the project to find those stakeholders before applying. The TOC’s approval would simply be a check that they agree it’s a cloud native project and that they don’t have any other objection to it being included.
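For illustration only, this proposed bar is mechanical enough to sketch in a few lines; the member list and the shape of an application below are invented:

```python
# Hypothetical member list; a real check would consult CNCF's membership roll.
CNCF_MEMBERS = {"acme corp", "example cloud", "widgets inc"}

def meets_sandbox_bar(stakeholder_orgs, members=CNCF_MEMBERS, minimum=2):
    """True if at least `minimum` distinct CNCF member orgs back the project."""
    backing = {org.lower() for org in stakeholder_orgs} & members
    return len(backing) >= minimum

print(meets_sandbox_bar(["Acme Corp", "Example Cloud"]))     # → True
print(meets_sandbox_bar(["Acme Corp", "Unaffiliated LLC"]))  # → False
```

Everything above the TOC's "is it cloud native / any other objection" check would then be a set lookup rather than a judgement call.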


On Sun, 1 May 2022 at 19:29, alexis richardson <alexis@...> wrote:





alexis richardson
 

+1 


On Sun, 1 May 2022, 19:27 Matt Farina, <matt@...> wrote:





Matt Farina
 

I, for one, would love to see the sandbox process get faster and improve.

With regard to moving more work to the TAGs, two things come to mind.

First, when TAGs did more in the past, they were inconsistent with one another and added their own criteria. This was a problem I don't want to see again. For example, I remember when one project lead was proposing a project for sandbox. He was frustrated because his project was criticized for not meeting a graduation criterion and a criterion of the TAG's own making. I don't think we want this to happen again.

Second, sandbox projects don't get or need an in-depth technical analysis. That shows up for incubation. I'm wondering: what would a TAG do here that wouldn't be repeated by the TOC when they go to look at it?

Having been through sandbox reviews twice now, and having given advice to some projects that wanted to go for sandbox, I've identified a few areas that could use some improvement:
  1. I've answered a lot of questions about things not in the docs: things that provide context on the CNCF, what sandbox is, what I think the TOC is looking for, and how to communicate well to the TOC. I think this could be better documented.
  2. TAGs have a unique intersection where they have experts in an area and they work with the projects. I (and the rest of the TOC) don't scale on advising projects for sandbox. The TAGs may be able to do that. While I wouldn't require it, it could be useful for those who want to submit a sandbox project to present to the appropriate TAG and get guidance from them. For those who need it, getting some mentoring from a TAG could be valuable.
These are just my initial thoughts. Happy to hear agreement, disagreement, or things built upon this.

Cheers,
Matt

On Sun, May 1, 2022, at 11:09 AM, Matt Young wrote:





Matt Young
 

We have been thinking about this in TAG Observability as well, and have work in flight that’s related:

* Form Program: Annual Sandbox Review [1]
    * Create summary slides [2]


[2] 

Will have details Tuesday as part of TAG update.

Matt


On Sun, May 1, 2022 at 9:41 AM Josh Gavant <joshgavant@...> wrote:



Josh Gavant
 
Edited

+1 to pushing more to TAGs. Perhaps each proposed project can be assigned to a TAG and a member of the TAG can lead a technical review and guide the project's leads on criteria for acceptance. That could help TOC reviews go more smoothly, make them more likely to succeed, and ensure projects and contributors don't get lost or feel unsupported along the journey.

It could also give first-time CNCF/TAG contributors an idea of where to start - they could pick an open project for the TAG and review and present it to the group.

As a start in this direction a couple months ago I created a label in TAG App Delivery to track project review requests from TOC: https://github.com/cncf/tag-app-delivery/issues?q=label%3Atoc-review


Davanum Srinivas
 

I like that Josh! A bit more work on the TOC side, but the paper trail is for sure a good idea.

Amye, TOC members,
Please chime in as well.

-- Dims

PS: we'll keep talking, when ready we can PR the changes to existing process(es) and then make it official (and then roll it out).


On Sat, Apr 30, 2022 at 5:38 PM Josh Berkus <jberkus@...> wrote:



--
Davanum Srinivas :: https://twitter.com/dims


Josh Berkus
 

On 4/30/22 09:18, Davanum Srinivas wrote:
> - there are 21 applications currently in the queue, a bunch of them are resubmissions where the TOC has a set of questions and they came back with answers.
FWIW, this part of the process could be made considerably more efficient for both the projects and the TOC.

Right now, when sandbox projects are sent out and come back with answers, there's no paper trail for the questions and the answers. This forces TOC members to re-evaluate the project from scratch.

My suggestion for a simple process that would solve this: if a project needs to answer questions or get inspected by a TAG, what happens is:

1. The TOC writes the questions in an issue in the TOC repo
2. The project or the TAG (depending) answer those questions in the TOC repo.
3. Evaluation is resumed whenever the questions are answered.

This would spare TOC members from effectively starting over each time a project re-applies.
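A minimal sketch of what step 1 could produce, assuming a plain markdown tracking issue (the helper and its format below are hypothetical, not an existing TOC tool):

```python
# Drafts the markdown body of a per-application tracking issue; the heading,
# checkbox format, and wording are made up for illustration.
def draft_review_issue(project, questions):
    """Render an issue body that keeps the TOC's open questions on record."""
    lines = [f"## Sandbox review questions: {project}", ""]
    lines += [f"- [ ] {q}" for q in questions]
    lines += [
        "",
        "Answers from the project or the reviewing TAG go in the comments;",
        "evaluation resumes once every box above is checked.",
    ]
    return "\n".join(lines)

body = draft_review_issue("example-project", [
    "Who are the current maintainers, and what are their affiliations?",
    "Is there adoption beyond the originating company?",
])
print(body)
```

Because the questions and answers live in one issue, a resubmission would reopen that issue instead of restarting the evaluation.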


--
-- Josh Berkus
Kubernetes Community Architect
OSPO, OCTO