
Re: Thoughts on KubeCon

Liz Rice
 

Only four of the submissions on Knative were from Google! Perhaps it goes to show that a lot of other people are also interested in this technology? Again, I go back to my point that a lot of people (and not just from one company) submitting on a topic suggests that, at this moment in time, it's of broad interest to the community.

I'm not going to dig out all the numbers on Istio, but it was the same kind of thing. We can't pick talks that aren't submitted!



--
Liz Rice
@lizrice | lizrice.com | +44 (0) 780 126 1145


Re: Thoughts on KubeCon

Ruben Orduz <ruben@...>
 

I'm aware this is a bit of a political minefield, but I'm concerned the committee(s) are unintentionally choosing winners here (same for KubeCon EU in København). What I mean is this: the "popularity" of a topic or tech can be driven/influenced by movers and shakers in the field. A tool that Google pushes will get much more traction than a competing tool from a small third party. A dramatic example of this phenomenon was having a whole track dedicated to Istio even though it was still a somewhat unproven technology in the field and far from production-ready for the enterprise customers who tend to wait until a tech is more stable before deploying it. Several other service-mesh techs felt shunned by this.

I'm getting the same feeling about Knative here. The overabundance of talk proposals about it was perceived as a good gauge of community interest, but again, a behemoth is behind it pushing, so that's no surprise.

I would posit we need to be more careful not to unintentionally pick favorites based on popularity, especially when there's a huge asymmetry in marketing power and community outreach among competitors in any given tech.

Best,
Ruben 



Re: Thoughts on KubeCon

Yaron Haviv <yaronh@...>
 

Liz,

It's not the number of submissions that should count. I guess if, "theoretically", a large company had a special interest in Knative, it could make a hundred submissions to promote it.

We saw the same phenomenon in the EU: some companies rule the agenda, making it hard for smaller members to demonstrate their innovation. In some of the sessions I attended, you could easily see that the driving decision wasn't how qualified the speaker was or how interesting/relevant the session was.

Matt, BTW, you left out the nuclio serverless platform.

Yaron







Re: Thoughts on KubeCon

Liz Rice
 

Eughhh, reading this through I fear it might come over as defensive, but since I've collected the information, I'm going ahead regardless. Let's look at the *actual* situation on this serverless track:
  • The "Jupyter" talk is titled "Running Serverless HPC Workloads on top of Kubernetes and Jupyter Notebooks". Which sounds pretty serverless to me. The serverless tech in question is Fn, per the abstract.
  • One of the Knative talks is an end-user story from T-Mobile
  • One of the non-Knative talks is an end-user story about a bank in Paraguay working with OpenFaaS. 
  • One of the Knative sessions is a BOF, and because it seems so hot, a BOF seems appropriate. (And before anyone asks, no, no-one suggested a broader BOF about Serverless.)
  • There were a few submissions on Fission, but the highest rated was by a speaker who had a much better submission on a different topic (which did get accepted), and the others really didn't score that well (given our high bar). 
  • Kubeless was mentioned in two abstracts, one of which is an accepted talk and the other really didn't get a great score. 
  • I can't find any mention of Brigade or Virtual Kubelet in any of these submissions. 
  • I can't meaningfully search for "serverless" the technology in this spreadsheet :-) 
Did we worry about whether this was too much Knative? Yes, we did consider it. Is it the perfect Serverless track? Probably not. Do I think the choices we made are reasonable, given the submissions and the reviewer feedback we had? Absolutely! Was there "malicious intent"? Fight me! :-)

I'm not going to go into this level of detail about anything else, I promise.


--
Liz Rice
@lizrice | lizrice.com | +44 (0) 780 126 1145


Re: Thoughts on KubeCon

Matt Farina
 

Liz, thanks for sharing those details. I know this is a tough job. Thanks for putting up with the extra work of the questioning and people poking at the ideas here. Anything I'm suggesting is more about clarifying for future conferences and trying to be explicit where we may not have been before.

I completely understand the desire to identify hot technologies. With 2/3 of the proposed talks on one technology, it speaks to a level of hotness.

But there are a couple of other ways to look at this situation…

First, there is the perspective of a track attendee. Four of six presentations on the same technology is not exciting and does not give me a diverse view. For someone not in the know, it gives the impression that the space is not very diverse and that the main piece of technology is "work in progress" (the label on Knative). Is this the impression we want conference attendees to have?

Second, there is the perspective of people proposing sessions.

For Kubernetes there are currently numerous serverless technologies, including but not limited to:

  • Knative
  • OpenFaaS
  • Kubeless
  • Fission
  • Brigade
  • Virtual Kubelet (works with serverless containers like ACI, Fargate, etc., but is not FaaS)

Jupyter bills itself as a web application and notebook. It's getting a lot of buzz, but I've not heard it billed as serverless.

There are also workflow tools, like the Serverless framework, that can work with numerous technologies, including Kubeless (on this list).

In addition, there are things the CNCF Serverless Working Group has been working on, like CloudEvents (which has an intro and a deep dive outside the Serverless track).

The Serverless track then has 4 of 6 sessions on Knative, 1 of 6 on something else (Jupyter), and sessions on other serverless technologies were rejected.

Can we all see how the decisions here could be read as having malicious intent, and how that could cast a negative light on the conference and its decision-making process? Whether or not it happened that way, people could come to malicious conclusions.

This all leads me to other questions...

Do we want end-user presentations in this space? Since Knative is hot but not ready for production, end users would be using some other technology: one that's useful today but not "hot". How do we encourage end users to present here? Is "hot" or useful-today more important?

Is diversity a goal? If so, filling the track with only what's hot doesn't provide for diversity.

If some of the intent and goal components could be ironed out, it would help future decision makers.



-- 
Matt Farina
mattfarina.com





Re: Thoughts on KubeCon

Liz Rice
 

Matt, thank you for your thoughtful response. I like your list and your focus on identifying solutions for things that need to be improved. 

Yaron, by my very quick reckoning in a rather complicated spreadsheet: of ~60 submissions under Serverless, ~40 of them mentioned Knative. If the number of submissions has some rough correlation to "what the community is currently interested in" (and I believe it does), then Knative is currently very hot, and we have tried to reflect this in the agenda. There's actually a seventh talk from the Serverless list that we accepted into the Observability track because we felt it straddled both topics.




--
Liz Rice
@lizrice | lizrice.com | +44 (0) 780 126 1145


Re: Thoughts on KubeCon

Yaron Haviv <yaronh@...>
 

Dan,

 

Looking at the schedule, the fact that 4 of the 6 sessions in the Serverless track are talks about Knative raises a serious question about the bias of this process.

How come the only other two sessions are on OpenFaaS and Jupyter (serverless? really), while other efforts in the space are left out in the cold?

 

Yaron

 



Re: Thoughts on KubeCon

Dan Kohn <dan@...>
 

On Tue, Oct 9, 2018 at 11:14 AM Matt Farina <matt@...> wrote:
 
if TLF is there for the benefit of its members, then shouldn't most feel they are on a level playing field in this 501(c)(6)?
 
If TLF is a 501(c)(6) for the benefit of its members, then shouldn't we look at a setup that benefits all of the members?

These were just two asides in a long and thoughtful comment, but I did want to note that employment with a CNCF member has never been a factor in which talks are accepted to KubeCon + CloudNativeCon.

The LF (and the CNCF as its project) is a 501(c)(6), but it's not correct that we exist just for the benefit of our members. The reality is that an organization like CNCF has many constituencies, including our members, the TOC, our project maintainers, our end users, developers considering using or contributing to our projects, and others. Those are also some of the constituencies we're aiming to satisfy with KubeCon + CloudNativeCon.
--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Re: Thoughts on KubeCon

Matt Farina
 

If we don't have problems we're trying to solve or things we're trying to improve, how can we review the proposed solutions against the issues? Wouldn't it be easy to start having color-of-the-bikeshed conversations?

With that in mind I wanted to break out some of the issues I saw in the comments so far. And then get into some solutions.

1) We have too low an acceptance rate for talks

There are so many proposed talks and only a small percentage are accepted. This makes the whole space around the talks highly competitive, especially since speaking helps with project uptake, kicking off new projects, and career advancement (because, let's be honest, people use talks for this).

Can we appreciate that this is a problem born from success? Kudos to the people who’ve organized the conferences to get us here.

With a low rate of acceptance we also end up with a high rate of rejection. This leads to hurt feelings and wondering what it takes to get in.

2) There is concern that one or a couple of vendors will have an outsized presence compared to their contemporaries.

If we're honest, many try to game systems to their advantage. Sometimes we're even more upset that a competitor did a better job at it than us. But if TLF is there for the benefit of its members, then shouldn't most feel they are on a level playing field in this 501(c)(6)?

3) We want more end user talks

The best form of advertising is word of mouth. One end user sharing with another end user. End users talking also helps vendors and project developers hear what works well and what needs improving. There are many reasons end user talks are good for the ecosystem.

4) We want to ensure a high quality bar on the KubeCon/CloudNativeCon talks

If the number of talks accepted is low, we want to make sure the quality is high.

What can we do to improve these? Here are some ideas from the conversations (and that I’ve pulled from other conferences)…

A) Have camps in addition to cons. In local cities, enable people to self-organize camps. This provides three benefits:

  1. It opens up content to people in the local areas, many of whom won’t attend a con. That expands the message radius.
  2. Camps provide more speaking opportunities in the cloud native space. People can still get their message out in talks.
  3. It provides a sort of “minor leagues”, to use a sports analogy, where speakers can work on their skills and test out ideas. More practice leads to better talks later which helps the quality bar.

WordCamps, by the WordPress community, provide a nice example of what these can look like. OpenStack, Drupal, and others have done this well, too.

B) Limit the number of general session spots per vendor (excluding project intros, deep dives, lightning talks, etc.).

Fairness is hard to judge. If TLF is a 501(c)(6) for the benefit of its members, then shouldn't we look at a setup that benefits all of the members? For example, if the number were limited to 20 general sessions per vendor, only one company at the current con would have been impacted; if limited to 15, only two companies would have been impacted.

If members are concerned with one or two companies having an outsized impact on general sessions, there are ways to handle that. This, combined with (A), provides a way to limit the general sessions at KubeCon/CloudNativeCon while still providing an opportunity for those vendor presentations to be heard.
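(To make the cap arithmetic concrete, here is a minimal Python sketch. The per-vendor counts are hypothetical, not real conference data; only the shape, where a cap of 20 affects one company and a cap of 15 affects two, mirrors the numbers above.)

```python
# Hypothetical sessions-per-vendor counts, chosen only so that a cap of 20
# affects one company and a cap of 15 affects two, per the example above.
sessions_per_vendor = {"vendor-a": 24, "vendor-b": 17, "vendor-c": 9, "vendor-d": 4}

def affected_by_cap(counts, cap):
    """Return the vendors whose general-session count exceeds the proposed cap."""
    return sorted(v for v, n in counts.items() if n > cap)

print(affected_by_cap(sessions_per_vendor, 20))  # ['vendor-a']
print(affected_by_cap(sessions_per_vendor, 15))  # ['vendor-a', 'vendor-b']
```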

C) When proposed sessions are submitted collect more information.

What should this information be? Should it be academic-level information? Someone who can write up a complex, long idea isn't necessarily a good speaker or presenter. Instead of trying to turn this into an academic conference, which I do enjoy, I would prefer keeping the focus on end-user enablement. Enabling end users helps our vendors and projects go further and be more successful.

Instead, I would collect:

  • A list of benefits people who attend will walk away with. This puts the focus on the audience and enabling them.
  • A list of previous places the speaker has presented, along with (optionally) links to video. This will help people evaluate someone's speaking capability.
  • A field for additional comments for the reviewers.

This additional information will help reviewers make decisions. For example, if two people want to give a very similar talk, their ability to speak to a crowd may come into play in judging the quality of the final presentation. This does favor already-proven speakers, which is why the camps and the next suggestion are important.

The 900-character field has a place. We need something to share in the schedule that goes just beyond skimming.

D) Teach people to present

Being a good speaker is hard. When I first started speaking it was rough. I might put only a few hours into prepping and then just wing it. When I heard of people putting in 40 hours of prep for a 1-hour presentation, I was shocked. But their presentations were always better than mine. I eventually read books, learned to speak, and learned that many things are not obvious.

So, what about offering a free online course on presenting, targeted at speaking at cons and camps? Other conferences have done this, so it's not a new idea. This can also help with the quality level by helping people with a good idea learn how to present it well.

E) Provide feedback on all submissions

The first time I was a reviewer at an academic-style conference, I was surprised that I had to rate and give feedback on all 30 of the sessions assigned to me, each of them 3 pages long. It was a fair amount of work. As someone who had talks not get accepted to the same conference, I found the feedback to be invaluable.

This will help people know what happened. It's more work for reviewers. But it gives reviewers a different angle from which to review a presentation, and it's kinder to the people who submitted.

F) Actively encourage more end-users to speak

When we see someone who has a good story to tell we should identify them and encourage them to speak about it. We could even organize around this type of thing and talk about it.




Sorry this got to be a little long. This is me restructuring our conversation and throwing a few things at the wall to see if anything sticks. Feel free to pick it apart… or not.

-- 
Matt Farina
mattfarina.com




Re: Thoughts on KubeCon

Michael Hausenblas <mhausenb@...>
 

Background: I was in academia/research for 12+ years; I submitted hundreds of papers, reviewed even more, served on many dozens of program committees, organized workshops, and served as general chair and program chair, yadayada … yawn …

The number one thing I liked about industry conferences, especially after I moved from research to industry (struggling to get my PhD and master students' papers accepted), was that: 1. industry conferences focus on sharing knowledge and lessons learned, while academia focuses on where you made a mistake (or: "I did that same research 20 years ago; where's the improvement?"), and 2. they lack structural and formal review processes.

Let me be very clear on this: blind, double-blind, triple-blind, feel free to do whatever you *think* makes sense. The only thing I'm rather certain would help is getting rid of the compartmentalization; that is, rather than each reviewer covering their little corner of KubeCon (serverless, machine learning, what have you), let *all* reviewers access *all* submissions. This model works very well for O'Reilly (where I've been reviewing for Strata and Velocity for years) and gives you far more objective results, since it cancels out the bias across the reviews and the reviewers.
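(To make the bias-cancellation intuition concrete, here is a minimal sketch with hypothetical scores, not any conference's real tooling: when reviewers score an overlapping pool, normalizing each reviewer's scores before averaging removes a generous or harsh reviewer's offset.)

```python
from statistics import mean, stdev

# Hypothetical raw scores on a 1-5 scale: reviewer -> {submission: score}.
# Alice is generous and Bob is harsh, but they rank the talks identically.
raw = {
    "alice": {"s1": 4, "s2": 5, "s3": 3},
    "bob":   {"s1": 2, "s2": 3, "s3": 1},
}

def normalize(scores):
    """Z-score one reviewer's marks, removing their personal offset and scale."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return {sub: (s - mu) / sigma for sub, s in scores.items()}

combined = {}
for scores in raw.values():
    for sub, z in normalize(scores).items():
        combined.setdefault(sub, []).append(z)

# Rank by mean normalized score; the reviewers' biases cancel out.
print(sorted(combined, key=lambda sub: -mean(combined[sub])))  # ['s2', 's1', 's3']
```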


Cheers,
Dr. Michael Hausenblas
(sorry, couldn’t resist ;)

--
Michael Hausenblas, Developer Advocate
OpenShift by Red Hat
Mobile: +353 86 0215164 | Twitter: @mhausenblas
http://openshift.com | http://mhausenblas.info





Re: Thoughts on KubeCon

Aparna Sinha
 

A double-blind review process, combined with a more complete paper submission (instead of a short abstract), would ensure standards similar to those in academia, which is a known system and, for many, the expected process for such reviews.

Having come from academia, I was surprised to see the process we have been following. I've served as a reviewer for KubeCon and found that the abstracts were insufficiently detailed to be evaluated. While we reject many such abstracts, I think having a higher bar on the submission would spare that effort and discourage submissions that are half-baked. [In some cases, I reviewed submissions where the author simply requested a talk without providing any content!] It is also unnecessary for the authors to be identifiable.

KubeCon has turned out well despite the process, largely as a result of the amazing work of the conference chairs. The current process relies heavily on having chairs who understand the needs of the community and are investing the time to put together a compelling program. This isn't efficient and has the potential to be unfair. Given the strong interest and volume of submissions, requiring a 1-2 page paper instead of an abstract seems reasonable and would raise the bar while reducing reviewer/chair burden.

- Aparna





--
Aparna Sinha
Group Product Manager
Kubernetes

650-283-6086 (m)


Re: Thoughts on KubeCon

Dan Kohn <dan@...>
 

Here is a summary of the discussion so far: https://docs.google.com/document/d/1sDXfk5MHAmHZVdIx1t4PREo_SSXKcloCOUYjZIo4jBs/


Re: Thoughts on KubeCon

Matt Farina
 

> So, being a track chair is an emphatically thankless, time-consuming and stressful role.

As someone who has been a conference track chair, I can say it sure is.

So, I want to thank Liz and Janet for the work they are putting in right now, the chairs of the past, and those who support them. Even though folks give feedback with an eye on improving things, I for one, want to say the work of the chairs is appreciated.


> End users submitted 14% of the talks to KubeCon but were accepted to present 28% of the sessions.
> So, end users had a 26% acceptance rate.

It’s great to see this metric. We should look to improve these numbers in the future, shouldn’t we? Without end users, and growth in end users, the projects and vendors won’t be doing all that well. The best marketing is word of mouth by people using things. So, let’s see if we can improve on these end-user numbers for 2019.

-- 
Matt Farina
mattfarina.com




Re: Thoughts on KubeCon

Ruben Orduz <ruben@...>
 

So, being a track chair is an emphatically thankless, time-consuming and stressful role. I've been volunteering as one for PyCon for the last 5 years, so 1) please know I know your pain, and 2) I'm coming with goodwill in my comments/suggestions below:

1. Limiting the number of submissions (and in fact participation) per submitter helps in many ways: folks tend to be more deliberate with their submissions (i.e., less carpet-bombing, less back-channel quid pro quo, collusion, and hedging). It also helps tremendously to increase "speaker entropy" by not allowing one person to participate in more than 2 sessions as submitter or co-speaker.

2. In terms of doing Google background checks for all talks, I highly recommend scoping this to only keynotes and *paid* workshops. It's entirely too time-consuming to dig up YouTube talks and watch enough of them to get a good idea and a fair assessment of whether the speaker is knowledgeable and engaging. Better to use scarce time and resources when (extra) money is on the line, or for keynotes, given their visibility.

3. Making the CFP more comprehensive without being overly onerous is an art and a science, and a necessity for making informed decisions. We tweak ours slightly every year to account for past-year observations, down to the naming, labelling, and presence of fields in the form. We take feedback, especially from URMs, and make sure the form isn't a barrier or otherwise intimidating.

I'm more than happy to participate/help with this process in future events. Happy to discuss these issues at length as well; y'all know where to find me :)

Best,
Ruben



Re: Thoughts on KubeCon

Dan Kohn <dan@...>
 

On Thu, Oct 4, 2018 at 2:44 PM Nick Chase <nchase@...> wrote:

How many people did we have this time?  Did they do the whole conference or individual tracks?
 
My experience has been with the OpenStack Summit, where each track had its own committee, so reviewers had only a few dozen to a couple hundred to review, not thousands.  That worked very well.

We had 75 people participate on the program committee for Seattle. From https://www.cncf.io/blog/2018/05/29/get-your-kubecon-talk-accepted/ :

The regular conference sessions have 9 tracks and are submitted through the CFP process. The conference co-chairs for Shanghai and Seattle are Liz Rice of Aqua Security and Janet Kuo of Google. They are in the process of selecting a program committee of around 60 experts, which includes project maintainers, active community members, and highly-rated presenters from past events. Program committee members register for the topic areas they’re comfortable covering, and CNCF staff randomly assign a subset of relevant talks to each member. We then collate all of the reviews and the conference co-chairs spend a very challenging week assembling a coherent set of topic tracks and keynotes from the highest-rated talks.
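(As a rough illustration of the assign-and-collate pipeline described above, here is a sketch. It is hypothetical: the data, field names, and the reviews-per-talk figure are assumptions for illustration, not CNCF's actual tooling.)

```python
import random
from collections import defaultdict
from statistics import mean

def assign_reviews(submissions, reviewers, reviews_per_talk=3):
    """Randomly assign each submission to reviewers registered for its topic."""
    assignments = defaultdict(list)  # reviewer -> [submission ids]
    for sub_id, topic in submissions.items():
        eligible = [r for r, topics in reviewers.items() if topic in topics]
        for r in random.sample(eligible, min(reviews_per_talk, len(eligible))):
            assignments[r].append(sub_id)
    return assignments

def collate(scores):
    """scores: iterable of (submission_id, score); return ids best-first by mean."""
    by_sub = defaultdict(list)
    for sub_id, score in scores:
        by_sub[sub_id].append(score)
    return sorted(by_sub, key=lambda sid: -mean(by_sub[sid]))

# Hypothetical data for illustration.
submissions = {"talk-1": "serverless", "talk-2": "observability"}
reviewers = {"r1": {"serverless"}, "r2": {"serverless", "observability"},
             "r3": {"observability"}}
print(dict(assign_reviews(submissions, reviewers, reviews_per_talk=2)))
print(collate([("talk-1", 4), ("talk-1", 5), ("talk-2", 3)]))  # ['talk-1', 'talk-2']
```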
--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Re: Thoughts on KubeCon

Nick Chase
 

On 10/4/2018 2:06 PM, Liz Rice wrote:
Hi all from a current co-chair :-)  Some great constructive ideas here, this is turning into a good discussion.

On the double-blinding, I was involved in discussions about this after Austin and again after Copenhagen. Both times we came to the conclusion that double-blind wouldn't work, mostly for the reasons Alena & Justin described. I don't recall hearing the two-phase suggestion before though, and I think this is really worth exploring further. 

We'd have to reduce the number of submissions to make that in any way manageable.

Why?  You're essentially doing this now when you narrow things down anyway.

The idea of beefing up the CFP requirements could help (but is it possible we will put off some really knowledgeable folks from contributing if we make it more onerous?)

I don't think it's practical to require a whole deck, but a bit more detail is not an imposition, IMHO.

I think we need a bigger pool of review committee participants (who actually do it diligently) and perhaps should solicit more widely for volunteers for that. 

How many people did we have this time?  Did they do the whole conference or individual tracks?

My experience has been with the OpenStack Summit, where each track had its own committee, so reviewers had only a few dozen to a couple hundred to review, not thousands.  That worked very well.

I'm inclined to say we shouldn't allow more than two submissions from any individual, but I don't think it's fair to impose submission limits per company - partly because it would be hard for them to manage, but more importantly this would likely end up with fewer new voices getting a chance, as the companies will no doubt push for their star performers to be on stage.

+1

Perhaps we should document more of the co-chair decisions as we go along? Definitely worth considering, though it adds up to more work for the co-chairs (not that it will affect me as my term comes to an end after Seattle).

Even a multiple-choice reason would be good, and wouldn't take much effort.

----  Nick


Re: Thoughts on KubeCon

Dan Kohn <dan@...>
 

On Wed, Oct 3, 2018 at 8:14 PM Camille Fournier <skamille@...> wrote:
What percentage of end user talks were accepted?
 
On Wed, Oct 3, 2018, 8:29 PM Dan Kohn <dan@...> wrote:
27.8% of talks are from end users.

On Wed, Oct 3, 2018 at 8:44 PM Camille Fournier <skamille@...> wrote:
No, I mean: of the total number of submissions made by end users, what percentage were accepted, given that the overall rate was 13%?

End users submitted 14% of the talks to KubeCon but were accepted to present 28% of the sessions. So, end users had a 26% acceptance rate.
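(For the record, the 26% follows from the shares quoted above; a one-line illustrative check, using only the figures given in this thread:)

```python
# P(accept | end user) = P(accept) * P(end user | accept) / P(end user)
overall_rate = 0.13      # acceptance rate across all submissions
share_submitted = 0.14   # fraction of submissions from end users
share_accepted = 0.28    # fraction of accepted talks from end users

print(round(overall_rate * share_accepted / share_submitted, 2))  # 0.26
```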
--
Dan Kohn <dan@...>
Executive Director, Cloud Native Computing Foundation https://www.cncf.io
+1-415-233-1000 https://www.dankohn.com


Re: Thoughts on KubeCon

Liz Rice
 

Hi all from a current co-chair :-)  Some great constructive ideas here, this is turning into a good discussion.

On the double-blinding, I was involved in discussions about this after Austin and again after Copenhagen. Both times we came to the conclusion that double-blind wouldn't work, mostly for the reasons Alena & Justin described. I don't recall hearing the two-phase suggestion before though, and I think this is really worth exploring further. 

We'd have to reduce the number of submissions to make that in any way manageable. The idea of beefing up the CFP requirements could help (but is it possible we will put off some really knowledgeable folks from contributing if we make it more onerous?). I think we need a bigger pool of review committee participants (who actually do it diligently), and perhaps we should solicit more widely for volunteers for that.

I'm inclined to say we shouldn't allow more than two submissions from any individual, but I don't think it's fair to impose submission limits per company - partly because it would be hard for them to manage, but more importantly this would likely end up with fewer new voices getting a chance, as the companies will no doubt push for their star performers to be on stage. 

On feedback - you can get it if you ask! For Copenhagen I gave individual feedback to everyone who asked via the speakers email (I guess that was about 20 people). I hope I'm not going to regret saying this as obviously that's not a scalable process and I may have just opened the floodgates! 

But based on that experience it would be a LOT of work to give meaningful feedback for every submission. You might imagine you could just forward the reviewer comments, but in practice, for the vast majority the comments don't by themselves explain why a talk didn't make the cut. For example, many talks get a perfectly decent score and positive comments, but still don't get picked. They might have simply been up against even better talks, or we had to choose between similar talks, or (believe it or not) we felt we couldn't have any more talks in a track from a given company, and so on. The reviewer comments wouldn't reflect any of that. One concern here is a lot of people seeing the positive comments and thinking they had been unfairly overlooked because obviously there was nothing wrong with their submission. If it's not useful, actionable feedback, there's no point sending it.

Perhaps we should document more of the co-chair decisions as we go along? Definitely worth considering, though it adds up to more work for the co-chairs (not that it will affect me as my term comes to an end after Seattle). 

On the CFP request for resources, for Copenhagen we didn't have this and I ended up doing lots of googling about submitters. Based on that I suggested asking for resources to offer submitters the chance to show off their best work rather than have us trying to guess what to look at. Why do we need to see this? Trying to establish whether folks are subject matter experts, or prone to vendor pitching, or trying to identify potential exciting keynote speakers... 

As I said, I really do think we should explore the ideas in this thread, especially the two-phase process, but let's not let perfection be the enemy of the good here. In one of the "should we double-blind" discussions one of my predecessors made the point that the outcome is more important than the process; if the end result is an agenda that the community finds engaging and exciting, that represents a diversity of viewpoints, and that drives forward the technology and adoption of cloud native, then we're in a pretty good place. It's my firm belief that the community want to see a mix of tech talks and end-user stories; talks by subject matter experts as well as new voices; diversity in all dimensions including company, but recognising that there are an awful lot of talented, knowledgeable people working full-time on cloud native in a small number of companies. I'm sure we made mistakes, but we really did try our best to reflect all that in the agenda.

I'm really happy to see constructive engagement about all this. But time presses and I must, for now, fly! 

Liz



On Thu, Oct 4, 2018 at 12:10 PM alexis richardson <alexis@...> wrote:
Hi

For the record, this list is probably the best meeting place for the community to air requests and commentary.  So, yes, lots of people are here and listening to everything that is being said.  We all want KubeCon to get better and better, so please keep the flow of thoughts coming.

It is of course OK to issue rebuttals to Bryan.

A


On Thu, 4 Oct 2018, 16:53 Bryan Cantrill, <bryan@...> wrote:

I think it's disconcerting (if somewhat comical) that the concern that the ideas shared here would get rebuttals -- a concern that I, and I think many members of the TOC, likely share -- itself got a rebuttal.  I think the discussion here is terrific, but I am concerned that the tone from the CNCF seems to be more one of trying to explain how these concerns either aren't real concerns or are already being addressed.  I hope that staff is hearing that there is broad consensus that change is needed -- and that this should be embraced as a positive and natural consequence of the popularity of both the technologies and the conference, rather than something to be resisted or explained away.

In particular: I very much share the concern about the length limit imposed by the CFP.  900 characters is absurdly short (the "3 tweet" characterization is apt); a 900 word limit would be much more reasonable.  I also share the concern about dividing the proposal up between "abstract" and "benefit to the community" and so on; a good abstract should contain everything that is needed to evaluate it -- and the evaluation criteria should be clearly spelled out.  By encouraging longer, more comprehensive abstracts, you will be encouraging better written ones -- which will give the PC a better basis for being double-blind in early rounds.  As a concrete step, I might encourage a group to be drawn up that consists of folks who have experience with KubeCon, with other practitioner conferences, and with academic conferences (a few of whom have already identified themselves on this thread!); I think that their broad perspective is invaluable.

         - Bryan


On Thu, Oct 4, 2018 at 8:22 AM Dan Kohn <dan@...> wrote:
On Thu, Oct 4, 2018 at 11:18 AM Matthew Farina <matt@...> wrote:
Yes, we're putting together a Google Doc with comments and the conference co-chairs will be providing some responses.
 
This sort of feels like the ideas shared here are going to get rebuttals. Can we instead take all of this as ideas for how we refine and improve things in the future, where we can intentionally lead the efforts to continuously improve and adapt as things change?

The conference and the processes associated with it have iterated significantly each time. I assure you we are all reading this feedback carefully and thinking through the implications of adopting it. I think there is less status quo bias than you might suspect. 

--
Liz Rice
@lizrice | lizrice.com | +44 (0) 780 126 1145

