Thoughts on KubeCon


Bryan Cantrill <bryan@...>
 


On the call yesterday, Dan asked me to send out my thoughts on double-blind reviewing.  My e-mail quickly turned into a blog entry:


Something that I probably didn't highlight well enough in there is Kathryn McKinley's excellent piece on double-blind review:


There are certainly lots of ways to attack this problem, but I view double-blind as an essential piece -- though probably not sufficient on its own.

         - Bryan


Anthony Skipper <anthony@...>
 

I would agree with double-blind.  But a max of 1 talk per vendor might also go a long way.




Nick Chase
 

Even a max of 3-5 from one vendor would be a significant difference from the 68 from one company and 41 from another...

----  Nick



--
Nick Chase, Head of Technical and Marketing Content, Mirantis
Editor in Chief, Open Cloud Digest; Author, Machine Learning for Mere Mortals


Matt Farina
 

If we talk about vendor limits, should we exclude SIG-specific and project-specific sessions? That is, the intros and deep dives. Those account for some orgs' high numbers (like 18 of the 41 from one vendor).

Do existing sponsorship levels include numbers of speaking slots?

-- 
Matt Farina
mattfarina.com




Dan Kohn <dan@...>
 

On Wed, Oct 3, 2018 at 3:20 PM Matt Farina <matt@...> wrote:
If we talk about vendor limits, should we exclude SIG-specific and project-specific sessions? That is, the intros and deep dives. Those account for some orgs' high numbers (like 18 of the 41 from one vendor).

This has useful context for how talks are selected: https://www.cncf.io/blog/2018/05/29/get-your-kubecon-talk-accepted/

At a high level, the Intro/Deep Dive tracks are separate from the CFP tracks, and we calculate statistics separately.
 
Do existing sponsorship levels include numbers of speaking slots?

We have 6 diamond sponsors that each get a 5-minute sponsored keynote. All other keynotes and CFP talks are rated by the program committee and then selected into tracks by the co-chairs.
--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Bryan Cantrill <bryan@...>
 


One per vendor might be too acute, as some vendors are doing much more than others.  But having some system that limits the number of submissions per vendor (and therefore forces the vendors to adopt some process to determine their best submissions) would probably help -- and would also help address the too-low acceptance rate...

        - Bryan
 



Shannon Williams <shannon@...>
 

+1

Best Regards,

Shannon Williams
+1 (650) 521-6902




William Morgan
 

I was a reviewer for both China and North America this year, and a double-blind academic reviewer earlier in life. I sent some feedback to the program committee which mostly echoed Bryan's blog post:

Primarily, I think that there should be two distinct decisions: 1) is this going to be a "good" presentation, in isolation; and 2) is this presentation going to be good in the context of everything else that's happening at the conference. I think that third-party reviewers such as myself should only try to evaluate #1, and in a second step the conference organizers should evaluate #2. Then #1 could even be a blind review process where the presenter's name and company are hidden at review time, and in #2 you could be explicitly focused on making a good, cohesive, diverse conference. Right now those two competing desires are mixed together.

I also had some suggestions about how to make the scoring more deterministic across reviewers by providing a more explicit rubric.

Personally I'd like to see a much stronger emphasis on practitioner talks: "I used X in prod and here are the challenges we had to overcome." But IMO the most important first step is to have reliable and consistent reviewer scores, for which double-blind reviews would really help.

-William
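
A minimal sketch of the two-phase split William describes above (blind scoring in isolation, then editorial curation), with the per-vendor cap floated earlier in the thread folded into the second phase. All names, fields, and numbers here are hypothetical, not the actual KubeCon/CNCF tooling:

from collections import defaultdict
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Submission:
    title: str
    vendor: str                                  # hidden from reviewers in phase 1
    scores: list = field(default_factory=list)   # per-reviewer rubric scores, e.g. 1-5

def phase1_blind_rank(subs):
    """Phase 1: reviewers rate each anonymized abstract in isolation; rank by mean score."""
    return sorted(subs, key=lambda s: mean(s.scores), reverse=True)

def phase2_curate(ranked, slots, max_per_vendor):
    """Phase 2: organizers assemble the program, applying editorial constraints."""
    accepted, per_vendor = [], defaultdict(int)
    for sub in ranked:
        if len(accepted) == slots:
            break
        if per_vendor[sub.vendor] >= max_per_vendor:
            continue  # vendor cap reached; skip to keep the program diverse
        accepted.append(sub)
        per_vendor[sub.vendor] += 1
    return accepted

pool = [
    Submission("Debugging a service mesh in prod", "VendorA", [5, 4, 5]),
    Submission("Scaling etcd", "VendorA", [4, 4, 4]),
    Submission("Our cloud native migration story", "EndUserCo", [4, 3, 4]),
]
program = phase2_curate(phase1_blind_rank(pool), slots=2, max_per_vendor=1)
print([s.title for s in program])
# -> ['Debugging a service mesh in prod', 'Our cloud native migration story']

The point is only that the blind, in-isolation scoring and the "good conference" judgement live in separate steps, so reviewers never need to see author or vendor identity.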




Quinton Hoole
 

Thanks for the insightful and thought-provoking blog post, Bryan.  I missed the call yesterday, but coincidentally had been noodling over similar thoughts recently, as, anecdotally, I’m also not convinced that we have the best submission review outcomes today.  I think that introducing a partially double-blind review process would be a great step forward, and may well obviate the need for complicated per-vendor limits.

I also think that it would be super-useful for submission rejection notices to be accompanied by a few brief reviewer notes (e.g. “too much marketing pitch”, “not open source”, “previously presented”, “duplicated submission”, “off topic”, etc.) to help submitters to improve their chances in the future (and perhaps also clarify any possible misperceptions by reviewers, as the submissions are by necessity brief). As just one illustrative data point, all 10 of my submissions to KubeCon China and US were rejected, and none of the rejections seem explainable by any of the “how to improve the odds” guidelines.  So I have no idea what to do differently in the future.

Q



Nick Chase
 

+1  




Rob Lalonde
 

+1




Doug Davis <dug@...>
 

"Quinton Hoole" <quinton.hoole@...> wrote:
> I also think that it would be super-useful for submission rejection
> notices to be accompanied by a few brief reviewer notes (e.g. “too
> much marketing pitch”, “not open source”, “previously presented”,
> “duplicated submission”, “off topic”, etc.) to help submitters to
> improve their chances in the future (and perhaps also clarify any
> possible misperceptions by reviewers, as the submissions are by
> necessity brief).

Big +1


-Doug


Nick Chase
 



On Wed, Oct 3, 2018 at 4:35 PM Quinton Hoole <quinton.hoole@...> wrote:
I also think that it would be super-useful for submission rejection notices to be accompanied by a few brief reviewer notes (e.g. “too much marketing pitch”, “not open source”, “previously presented”, “duplicated submission”, “off topic”, etc.) to help submitters to improve their chances in the future (and perhaps also clarify any possible misperceptions by reviewers, as the submissions are by necessity brief). As just one illustrative data point, all 10 of my submissions to KubeCon China and US were rejected, and none of the rejections seem explainable by any of the “how to improve the odds” guidelines.  So I have no idea what to do differently in the future.

I recognize that it's not always that cut-and-dried, BTW; I've been on the selection team for several conferences and sometimes it's just a matter of "there were 10 slots and you ranked #11".  But not always.   


alex@...
 

I think it's important that tactical measures (e.g., double-blind, vendor talk limits, etc.) should be in the service of a general goal. IMO the first responsibility of conference organizers is to the conference attendees. A primary goal might look like: make an engaging and useful conference that fosters community development of cloud native software.

There is always a push among talk proposers to have a completely "objective" admissions process, but (1) an "objective" admissions process may well make for a worse conference, and (2) it is impossible for the committee to not impose some editorial view in the talks they select. So you may as well embrace it.

To the issue at hand: IME the best reason to have double-blind reviews is that it increases diversity, both in submissions and in the balance of accepts. I reckon this works in much the same way that blind auditions helped diversity in orchestras.

What I very seriously doubt is that double-blind reviews will have the effect Bryan seems to think they will. In practice, most talks proposed about interesting technology (linkerd/Istio/Envoy, say) will be inextricably linked to the vendors that produce the technology, and the authoritative credibility they carry. Likewise most "customer success" talks will be linked to the customer itself. It will be harder, on balance, to judge talks on their merit without some idea of the "believability" of these authors. This is in contrast to scientific papers, where the submission process is geared towards papers that explore some idea in the abstract, independent of the people involved (the exception being talks from industry, which carry more weight because of their operational expertise, even though that makes them less anonymous).

There may be other reasons to do double-blind review, but personally I can't think of them.




Brian Grant
 

Please remember that "vendors" are also in many cases the primary contributors to CNCF projects. 

I talked to one of the co-chairs. There are vastly more talks submitted by project contributors than by end users. Perhaps that should be an ask to our end-user community -- submit more talks.




Camille Fournier
 

What percentage of end user talks were accepted?




David Baldwin <dbaldwin@...>
 

It could also help if there were an option for a session tied to a sponsorship. We were told no when we asked, though that was after we had missed the submission deadline. It wasn’t clear whether there are other options for us to get in.

 

I know other conferences sometimes enable sessions with sponsorships, depending on the level.  That doesn’t address your need for more user sessions unless we co-present with our customers, which is an option; there are some who are willing.

 

David

 

David Baldwin
Product Management
Splunk Inc.
Voice & Text: 510-301-4524

 



Dan Kohn <dan@...>
 

On Wed, Oct 3, 2018 at 8:14 PM Camille Fournier <skamille@...> wrote:
What percentage of end user talks were accepted?

27.8% of talks are from end users.

--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Alena Prokharchyk
 

I'm not sure going with double-blind for KubeCon talk submissions is a good idea. In academic conferences, the paper itself is a good enough justification, as it includes all the information needed to make a fair judgement. KubeCon submissions are short abstracts and can't be judged the same way. A speaker's presentation skills, the projects they are involved in, and the presentations they have given in the past should be taken into consideration. Unless we ask for slides and a transcript of the presentation as part of the submission, there is not enough basis to do double-blind voting.

A disclaimer: some of my talks were accepted to KubeCon and some were rejected. As a speaker (and I don't consider myself to be a particularly good one) I'd really like to know the reasons behind both decisions.




Camille Fournier
 

No, I mean: of the total number of submissions made by end users, what percentage were accepted, given that the overall rate was 13%?


On Wed, Oct 3, 2018, 8:29 PM Dan Kohn <dan@...> wrote:
27.8% of talks are from end users.
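
For what it's worth, the two statistics differ: the 27.8% above reads as a share of the accepted talks, while Camille is asking for a per-group acceptance rate (accepted divided by submitted, for end users only). A quick illustration, where only the 27.8% share and the ~13% overall rate come from this thread and every other number is made up:

total_submissions = 2000                      # made up
overall_rate = 0.13                           # ~13% overall acceptance, per this thread
accepted = round(total_submissions * overall_rate)   # 260 accepted talks
end_user_accepted = round(accepted * 0.278)          # 27.8% share -> ~72 talks

end_user_submissions = 300                    # made up; not stated in the thread
end_user_acceptance_rate = end_user_accepted / end_user_submissions
print(f"{end_user_acceptance_rate:.1%}")      # ~24% with these made-up inputs

With these hypothetical inputs the end-user acceptance rate would sit well above the 13% overall rate, but the real answer depends on the actual submission counts, which this thread does not give.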
