+1 for "review", +1 for learning from ACM & academia
I still think we cannot pretend to be unbiased. Even algos are biased.
On Thu, Aug 1, 2019 at 2:27 PM Matt Farina <matt@...> wrote:
Thanks for kicking off this thread, Gareth.
I'm reminded of a couple things when it comes to attribution and bias:
- Academic and society papers (e.g., ACM) carry author attribution including institution. The ACM template goes so far as to include the department: https://www.acm.org/publications/proceedings-template
- In journalistic publications, associations and their context are documented, whether they are the reporter's (e.g., they hold shares in a company they are writing about) or those of the publication they write for (e.g., the publication's parent company owns the company being written about).
I would hope we have a review process for any documentation being produced. We review PRs, editors review books and papers, and we should have a documented process for reviewing documentation produced by the TOC/SIGs.
Would a template for these papers make sense, with a section on the authors that leads or requires them to disclose their organizations?
I know the apps space has A LOT of projects, products, and companies. I've seen numerous people share different ideas of what they think should be in it. Some are looking for any little thing that can be a competitive advantage to differentiate themselves. I would suggest a good process to look out for the best interest of the end users while attempting to limit bias or at least disclose it well.
- Matt Farina
On Thu, Aug 1, 2019, at 8:49 AM, Liz Rice wrote:
Agreed, this is an important point, and good to expose to sunlight.
I like Alexis’ authorship statements and the point about listing authors and their affiliations.
Sometimes people’s biases might not even be obvious to their co-collaborators, so I think it would be appropriate to have some explicit guidelines that individuals are expected to flag up when they have a COI.
For example if a SIG is doing an assessment on project X, contributors might explicitly say
“project X competes with project Y that I’m a maintainer of / I have contributed substantially to” or “project X is potentially competitive with a product from my company”.
And then
“as a result I don’t think it’s appropriate for me to take part in this assessment” or “as a result I am knowledgeable in the area, so I’d like to contribute, but please flag if you think my biases are showing”
Liz
On 1 Aug 2019, 11:44 +0100, Sarah Allen <sarah@...> wrote:
Thanks for raising this Gareth. This is an open issue for SIG Security where we have a growing number of individuals participating in assessments and an open issue to write up guidelines: https://github.com/cncf/sig-security/issues/156
Having guidance from the TOC would be very helpful to be able to reference, and I've written up a TOC issue here: https://github.com/cncf/toc/issues/270
Sarah Allen
SIG-Security co-chair
On Thu, Aug 1, 2019 at 4:58 AM alexis richardson <alexis@...> wrote:
Thanks for posting this Gareth.
IMO it is better to be open about bias than to pretend it away.
We could state that documents coming from the CNCF TOC & SIGs are marked as "Authored by members of the CNCF community", and list all contributors and their affiliations. This would be in contrast to documents commissioned by the CNCF organisation, which are published as official CNCF docs authored by CNCF staff.
On Thu, Aug 1, 2019 at 9:22 AM Gareth Rushgrove <gareth@...> wrote:
Hi All
On a couple of calls yesterday (SIG Security, and discussions about the proposed SIG App Delivery), the topic of bias or conflict of interest came up. In discussion we thought it worth bringing to the ToC, so here is an email.
One of the things being discussed as part of the SIG App Delivery mission is "develop informational resources like guides, tutorials and white papers". SIG Security produces recommendations for projects and the ToC and is also looking at guidance. I'm sure other SIGs have in mind to do something similar.
Part of the power of the CNCF is that it's a shared place for folks to genuinely work together. But I don't think we should deny or otherwise hide our biases, especially as we get into CNCF-branded and published material. I think most people want to do the right thing, but having some guidance and discussion would help. Consider a few of the following:
1. Conducting a private security review of a product associated with a competitor
2. Guidance on <CNCF project> and <Cloud provider> written by <Cloud provider>
3. Tutorial on <CNCF project> which mentions <non-CNCF project>
4. Comparisons of <CNCF projects> and <non-CNCF projects>
5. Guidance on <CNCF project> which competes with <other CNCF project>
6. Guidance on <CNCF project> which competes with <non-CNCF project> associated with <author's employer>
7. Organising a <CNCF branded event> which competes directly with a <CNCF member> event
None of these are simply good or bad; context always matters. A few things that could be discussed (not concrete suggestions, more to start a conversation):
1. All guidance carries authors and contributors and their affiliations
2. Contributors sign some impartiality document (social more than legal)
3. A clear review process which explicitly takes bias into account
4. No single-vendor content attributed to the CNCF
I think the ToC is probably _very_ aware of this sort of thing, but as CNCF SIGs expand, more folks will probably need to consider the same things, and doing that collectively would be good. I think CNCF affiliation is different from project affiliation. What processes do we need in place? Are they SIG-specific or more general? Is this something folks care about?
Thanks
Gareth
-- Gareth Rushgrove @garethr
devopsweekly.com morethanseven.net garethrushgrove.com