Re: Thoughts on KubeCon

William Morgan

I was a reviewer for both China and North America this year, and an academic double-blind reviewer earlier in my career. I sent some feedback to the program committee which mostly echoed Bryan's blog post:

Primarily, I think there should be two distinct decisions: 1) is this going to be a "good" presentation, in isolation; and 2) is this presentation going to be good in the context of everything else happening at the conference. Third-party reviewers such as myself should only try to evaluate #1, and in a second step the conference organizers should evaluate #2. Then #1 could even be a blind review process, where the presenter's name and company are hidden at review time, and in #2 the organizers could focus explicitly on assembling a good, cohesive, diverse conference. Right now those two competing goals are mixed together.

I also had some suggestions for making scoring more consistent across reviewers by providing a more explicit rubric.

Personally, I'd like to see a much stronger emphasis on practitioner talks: "I used X in prod and here are the challenges we had to overcome." But IMO the most important first step is to have reliable and consistent reviewer scores, for which double-blind reviews would really help.


On Wed, Oct 3, 2018 at 11:47 AM Bryan Cantrill <bryan@...> wrote:

On the call yesterday, Dan asked me to send out my thoughts on double-blind reviewing.  My e-mail quickly turned into a blog entry:

Something that I probably didn't highlight well enough in there is Kathryn McKinley's excellent piece on double-blind review:

There are certainly lots of ways to attack this problem, but I view double-blind as an essential piece -- but probably not sufficient on its own.

         - Bryan
