So, being a track chair is an emphatically thankless, time-consuming and stressful role. I’ve been volunteering as one for PyCon for the last 5 years, so 1) please know I know your pain, and 2) I’m coming with good will in my comments/suggestions below:
1. Limiting the number of submissions (and in fact participation) per submitter helps in many ways: folks tend to be more deliberate with their submissions (less carpet-bombing), and it lessens back-channel quid pro quo, collusion and hedging. It also helps tremendously to increase “speaker entropy” to not allow one person to participate in more than 2 sessions as submitter or co-speaker.
2. In terms of doing Google background checks for all talks, I highly recommend scoping them to only keynotes and *paid* workshops. It’s entirely too time-consuming to dig up YouTube talks and watch enough of them to get a good idea and a fair assessment of whether the speaker is knowledgeable and engaging. Better to spend scarce time and resources where (extra) money is on the line, or on keynotes, given their visibility.
3. Making the CFP more comprehensive, without being overly onerous, is an art and a science — and a necessity for making informed decisions. We tweak it slightly every year to account for past-year observations, down to the naming, labelling and presence of fields in the form. We take feedback, especially from URMs (underrepresented minorities), and make sure the form isn’t a barrier or otherwise intimidating.
I’m more than happy to help with this process for future events. Happy to discuss these issues at length as well — y’all know where to find me :)
On Thu, Oct 4, 2018 at 2:06 PM Liz Rice <liz@...> wrote:
Hi all from a current co-chair :-) Some great constructive ideas here, this is turning into a good discussion.
On the double-blinding, I was involved in discussions about this after Austin and again after Copenhagen. Both times we came to the conclusion that double-blind wouldn't work, mostly for the reasons Alena & Justin described. I don't recall hearing the two-phase suggestion before though, and I think this is really worth exploring further.
We'd have to reduce the number of submissions to make that in any way manageable. The idea of beefing up the CFP requirements could help (but is it possible we would put off some really knowledgeable folks from contributing if we make it more onerous?). I think we need a bigger pool of review committee participants (who actually do the work diligently), and perhaps we should solicit more widely for volunteers for that.
I'm inclined to say we shouldn't allow more than two submissions from any individual, but I don't think it's fair to impose submission limits per company - partly because it would be hard for them to manage, but more importantly this would likely end up with fewer new voices getting a chance, as the companies will no doubt push for their star performers to be on stage.
On feedback - you can get it if you ask! For Copenhagen I gave individual feedback to everyone who asked via the speakers email (I guess that was about 20 people). I hope I'm not going to regret saying this as obviously that's not a scalable process and I may have just opened the floodgates!
But based on that experience, it would be a LOT of work to give meaningful feedback for every submission. You might imagine you could just forward the reviewer comments, but in practice, for the vast majority, the comments don't by themselves explain why a talk didn't make the cut. For example, many talks get a perfectly decent score and positive comments, but still don't get picked. They might have simply been up against even better talks, or we had to choose between similar talks, or (believe it or not) we felt we couldn't have any more talks in a track from a given company, and so on. The reviewer comments wouldn't reflect any of that. One concern here is that a lot of people would see the positive comments and conclude they had been unfairly overlooked, because obviously there was nothing wrong with their submission. If it's not useful, actionable feedback, there's no point sending it.
Perhaps we should document more of the co-chair decisions as we go along? Definitely worth considering, though it adds up to more work for the co-chairs (not that it will affect me as my term comes to an end after Seattle).
On the CFP request for resources: for Copenhagen we didn't have this, and I ended up doing lots of googling about submitters. Based on that, I suggested asking for resources to offer submitters the chance to show off their best work, rather than have us trying to guess what to look at. Why do we need to see this? To establish whether folks are subject matter experts, or prone to vendor pitching, and to identify potential exciting keynote speakers...
As I said, I really do think we should explore the ideas in this thread, especially the two-phase process, but let's not let perfection be the enemy of the good here. In one of the "should we double-blind" discussions, one of my predecessors made the point that the outcome is more important than the process; if the end result is an agenda that the community finds engaging and exciting, that represents a diversity of viewpoints, and that drives forward the technology and adoption of cloud native, then we're in a pretty good place. It's my firm belief that the community wants to see a mix of tech talks and end-user stories; talks by subject matter experts as well as new voices; diversity in all dimensions including company, while recognising that there are an awful lot of talented, knowledgeable people working full-time on cloud native in a small number of companies. I'm sure we made mistakes, but we really did try our best to reflect all that in the agenda.
I'm really happy to see constructive engagement about all this. But time presses and I must, for now, fly!
On Thu, Oct 4, 2018 at 12:10 PM alexis richardson <alexis@...> wrote:
For the record, this list is probably the best meeting place of record for the community to air requests and commentary. So yes, lots of people are here and listening to everything that is being said. We all want KubeCon to get better and better, so please keep the flow of thoughts coming.
It is of course OK to issue rebuttals to Bryan.
On Thu, 4 Oct 2018, 16:53 Bryan Cantrill <bryan@...> wrote:
I think it's disconcerting (if somewhat comical) that the concern that the ideas shared here would get rebuttals -- a concern that I, and I think many members of the TOC, likely share -- itself got a rebuttal. I think the discussion here is terrific, but I am concerned that the tone from the CNCF seems to be more one of trying to explain how these concerns either aren't real concerns or are already being addressed. I hope that staff is hearing that there is broad consensus that change is needed -- and that this should be embraced as a positive and natural consequence of the popularity of both the technologies and the conference, rather than something to be resisted or explained away.
In particular: I very much share the concern about the length limit imposed by the CFP. 900 characters is absurdly short (the "3 tweet" characterization is apt); a 900 word limit would be much more reasonable. I also share the concern about the dividing up of the proposal between "abstract" and "benefit to the community" and so on; a good abstract should contain everything that is needed to evaluate it -- and that evaluation criteria should be clearly spelled out. By encouraging longer, more comprehensive abstracts, you will be encouraging better written ones -- which will give the PC a better basis for being double-blind in early rounds. As a concrete step, I might encourage drawing up a group of folks who have experience in KubeCon, in other practitioner conferences, and in academic conferences (a few of whom have already identified themselves on this thread!); I think that their broad perspective is invaluable.
On Thu, Oct 4, 2018 at 8:22 AM Dan Kohn <dan@...> wrote:
On Thu, Oct 4, 2018 at 11:18 AM Matthew Farina <matt@...> wrote:
Yes, we're putting together a Google Doc with comments and the conference co-chairs will be providing some responses.
This sort of feels like the ideas shared here are going to get rebuttals. Can we instead take all of this as input for how we refine and improve things in the future, where we can intentionally lead the effort to continuously improve and adapt as things change?
The conference and the processes associated with it have iterated significantly each time. I assure you we are all reading this feedback carefully and thinking through the implications of adopting it. I think there is less status quo bias than you might suspect.