[cncf-sig-security] Vulnerability scanning for CNCF projects
Liz,
Love this. As part of the assessments SIG-Security performs, we've begun highlighting the importance of secure development practices. In the last few assessments we've been pushing more for this, as well as for responsible disclosure instructions and general security-mindedness for project sustainment. This aligns with those efforts. We currently have the assessment process undergoing some updates (on hold for KubeCon), which makes this a great time to potentially include it. I personally would like to see license dependencies and dependency trees to help push forward in the area of SBOMs.
I think we should be clear, however, on what our thresholds and terms are in this area. Offhand, I can think of the following potentials:
* Listing of vulns in deliverable artifacts
* Listing licensing dependencies
* SBOM
* Vulnerability threshold and prioritizing resolution prior to artifact delivery
* Vulnerability threshold and prioritizing resolution post artifact delivery
Definitely worth a conversation and follow-ups. Do you have anything in mind that are must-haves from the above, or anything I missed or misunderstood?
~Emily Fox
On Wed, Nov 18, 2020 at 11:41 AM Liz Rice <liz@...> wrote:
Hi TOC and SIG Security folks,

On Friday I got a nice preview from Shubhra Kar and his team at the LF about some tools they are building to provide insights and stats for LF (and therefore CNCF) projects. One that's of particular interest is an integration for scanning security issues.

We require graduated projects to have security reviews, and SIG Security are offering additional assessments, but we don't really have any standards around project artifacts shipping with vulnerabilities. Should we have something in place requiring projects to have a process to fix vulnerability issues (at least the serious ones)?

This tooling is off to a great start. The current numbers for a lot of our projects look really quite bad, but this may be to do with scanning all the repos related to a project's org. I'd imagine there are also some false positives from things like dependencies only used in test that don't affect the security of the executables that end users run - we may want to look at just reporting vulnerabilities from a project's deployable artifacts.

As well as vulnerability scanning, this is showing license dependencies, which could be very useful.

For discussion: how we want to use this kind of info, and whether we want to formalize requirements on projects (e.g. at graduation or incubation levels).

Copying Shubhra in case he would like to comment further.

Enjoy KubeCon!
Liz
" Should we have something in place for requiring projects to have a process to fix vulnerability issues (at least the serious ones)?"
We have a graduation requirement around CII badging which requires a security disclosure process so it's there but not codified formally, we could do that, I think the important thing is that projects also publish advisories in a standard way (like via the github security API)
We should treat the LF tool suite as another option for projects to take advantage of, already many projects are using Snyk, FOSSA, Whitesource etc that is listed here: https://github.com/cncf/servicedesk#tools
You can kind of get an SBOM (depending you define sbom ;p) for some of our projects already: https://app.fossa.com/attribution/c189c5b9-fe2c-45f2-ba40-c34c36bab868
I think offering projects more choice is always better as the landscape changes often in tooling.
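[Editor's note] Publishing advisories in a standard way, as suggested above, makes them machine-readable for downstream consumers. A minimal sketch of listing a repository's published advisories via the GitHub REST API (the endpoint path is taken from GitHub's current REST docs; verify it against the API version you target):

```python
import json
import urllib.request

API = "https://api.github.com"  # GitHub REST API root

def advisories_url(owner: str, repo: str) -> str:
    """Endpoint that lists a repository's published security advisories."""
    return f"{API}/repos/{owner}/{repo}/security-advisories"

def fetch_advisories(owner: str, repo: str) -> list:
    """Fetch and decode the advisory list (requires network access)."""
    req = urllib.request.Request(
        advisories_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (requires network):
#   for adv in fetch_advisories("kubernetes", "kubernetes"):
#       print(adv["ghsa_id"], adv["severity"], adv["summary"])
```

Any tooling built on top of this (dashboards, CNCF-wide reporting) then works the same way for every project that publishes through GitHub, regardless of which scanner the project itself uses.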
--
Chris Aniszczyk (@cra)
Gareth Rushgrove
On Wed, 18 Nov 2020 at 16:54, Emily Fox <themoxiefoxatwork@...> wrote:
> Definitely worth a conversation and follow-ups. Do you have anything in mind that are must-haves from the above, or anything I missed or misunderstood?

I'd be happy to join and help here.

HUGE DISCLAIMER: I work at Snyk, which is the service powering the scans. I'm also a maintainer of Conftest as part of the Open Policy Agent project, and I know a bunch of folks on here. I'm not trying to sell you anything, other nice vendors exist, etc. I just happen to have opinions and experience here.
> The current numbers for a lot of our projects look really quite bad

This is nearly always the case when projects or companies first look at vulnerabilities. It's indicative of the problem domain more than of projects doing the wrong thing. Fixing starts with visibility.

> reviewing such a massive amount of data for project owners might take way too much time

The main thing to do is break the problem down. Luckily there are a few things you can do here.
* As you note, starting with non-test dependencies is a good idea
* Then start with the most severe and those which can be fixed, and repeat. Standards like CVSS exist, as well as more involved vendor-specific mechanisms. CVSS is fairly simple to read on the surface (Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0)
* Each time you clear a new threshold, put checks in CI to help enforce things in the future
For instance:
* Start with Critical (CVSS 9+), non-test issues that have a fix available
* Add a CI check to break the build for CVSS 9+, non-test, fixable issues
* Do the same for 8+ non-test
* Do the same for 9+ test
...and so on.
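[Editor's note] The ratchet above can be sketched as a small CI gate. The record fields (`cvss`, `test_only`, `fix_available`) are hypothetical; real scanners such as Snyk export their own JSON schemas, so this only illustrates the tightening-threshold logic, not any particular tool's output:

```python
def severity(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative rating band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

def gate(issues, min_score=9.0, include_test=False, fixable_only=True):
    """Return the issues that should break the build at the current ratchet level."""
    return [
        i for i in issues
        if i["cvss"] >= min_score
        and (include_test or not i["test_only"])
        and (not fixable_only or i["fix_available"])
    ]

issues = [
    {"id": "CVE-A", "cvss": 9.8, "test_only": False, "fix_available": True},
    {"id": "CVE-B", "cvss": 9.1, "test_only": True,  "fix_available": True},
    {"id": "CVE-C", "cvss": 7.5, "test_only": False, "fix_available": False},
]

# First ratchet: CVSS 9+, non-test, fixable issues only.
blocking = gate(issues)
# Later ratchets widen the net, e.g. gate(issues, min_score=8.0)
# or gate(issues, include_test=True).
```

In a CI job you would feed the scanner's parsed output into `gate()` and exit nonzero when the list is non-empty; each time the build is clean at one level, tighten `min_score` or flip `include_test`.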
In this way what seems an impossibly large bit of work gets broken
down and you get value quickly. You can absolutely do this at your own
pace. I wouldn't advocate for CNCF to set deadlines, though guidelines
and reporting for graduated projects might be useful.
Separately, you likely want to have some level of triage for vulnerabilities that don't have fixes available yet. The above approach is somewhat mechanical; triage needs more context and security experience. I'd at least recommend having maintainers triage Critical severity issues in dependencies. Assuming that's rare, you can extend this as far as you like and have time for (to High, or Medium, or a specific CVSS threshold).
> false positives from things like dependencies only used in test

I wouldn't think of test vulnerabilities as false positives, just potentially a different type of vulnerability. As one example, compromised test dependencies have the potential to steal build credentials, and suddenly someone is shipping a compromised version of software to end users using your release toolchain.
I'm sure the above is obvious to some, but I thought it was worth
laying out. It should also be pretty tool agnostic.
As mentioned, happy to join conversations if folks are discussing.
Gareth
--
Gareth Rushgrove
@garethr
garethr.dev
devopsweekly.com
Gadi Naor
This is a great initiative that also sends a message that security is part of the core functionality.

A few suggestions:
- If we can ensure CNCF projects follow container image authoring best practices, such as building images from scratch or from distroless base images, it will eliminate a lot of the noise static scanners generate.
- For projects that are designed to run on k8s, the deployment assets (k8s manifests, Helm charts, kustomized resources) should be scanned for security best practices with tools such as Conftest, Kyverno, commercial tools, or a combination, to verify that components do not run as privileged, do not run in host namespaces, have network policies, etc.
- In cases where exceptions must be made, there should be a clear process and an audited policy/config for that - e.g. CVEs that cannot get fixed, components that need certain escalated privileges to function, etc.
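[Editor's note] The manifest checks suggested above are normally written as Rego policies for Conftest or as Kyverno policies; as a tool-agnostic toy illustration, here is the same idea over a parsed pod spec (the field names `hostNetwork`, `hostPID`, `hostIPC`, and `securityContext.privileged` are standard Kubernetes PodSpec fields, but this covers only a tiny subset of what those tools check):

```python
def violations(pod_spec: dict) -> list:
    """Flag containers that request privileged mode or host namespaces."""
    problems = []
    # Pod-level host namespace flags.
    for ns in ("hostNetwork", "hostPID", "hostIPC"):
        if pod_spec.get(ns):
            problems.append(f"pod uses {ns}")
    # Container-level privileged flag.
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            problems.append(f"container {c['name']} runs privileged")
    return problems

spec = {
    "hostNetwork": True,
    "containers": [
        {"name": "app", "securityContext": {"privileged": True}},
        {"name": "sidecar"},
    ],
}
```

Run in CI against every manifest, Helm-rendered template, and kustomize output, with any exceptions recorded in an audited allow-list rather than by weakening the check.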
Gadi
--
Gadi Naor ◆ CTO & Security Plumber
US: 2443 Fillmore St, San Francisco, CA 94115
IL: 5 Miconis St, Tel Aviv 6777214
M: +972-52-6618811
Web: www.alcide.io | GitHub: github.com/alcideio
Complete Kubernetes & Service Mesh Security.
Bridging Security & DevOps.
Alexis Richardson
+1