On Wed, 18 Nov 2020 at 16:54, Emily Fox <firstname.lastname@example.org> wrote:
I'd be happy to join and help here.
HUGE DISCLAIMER. I work at Snyk, which is the service powering the
scans. I'm also a maintainer of Conftest as part of the Open Policy
Agent project and know a bunch of folks on here. I'm not trying to
sell you anything, other nice vendors exist, etc. I just happen to
have opinions and experience here.
> The current numbers for a lot of our projects look really quite bad

This is nearly always the case when projects or companies first look
at vulnerabilities. It's indicative of the problem domain more so than
of projects doing the wrong thing. Fixing starts with visibility.
> reviewing such a massive amount of data for project owners might take way too much time

The main thing to do is break the problem down. Luckily there are a
few things you can do here.
* As you note, starting with non-test dependencies is a good idea
* Then start with the most severe issues and those which can be fixed,
and repeat. Standards like CVSS exist, as well as more involved
vendor-specific mechanisms. CVSS is at least simple to read on the
surface (Low 0.1 - 3.9, Medium 4.0 - 6.9, High 7.0 - 8.9, Critical 9.0
- 10.0)
* Each time you clear a new threshold, put in checks in CI to help
enforce things in the future
* Start with Critical (CVSS 9+), non-test issues that have a fix available
* Add a CI check to break the build for CVSS 9+, non-test, fixable issues
* Do the same for 8+ non-test
* Do the same for 9+ test
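As a tool-agnostic sketch of that ratcheting gate, here's roughly what the CI logic might look like. The record fields (`cvss`, `scope`, `fix_available`) are hypothetical; real scanners export similar data in their JSON output.

```python
# Hedged sketch: decide whether a CI run should fail, given a list of
# hypothetical scanner findings. Tighten the thresholds over time.

def should_fail_build(issues, min_cvss=9.0, include_test=False,
                      fixable_only=True):
    """Return True if any finding crosses the current CI threshold."""
    for issue in issues:
        if issue["cvss"] < min_cvss:
            continue  # below the severity bar for now
        if issue["scope"] == "test" and not include_test:
            continue  # test-only dependencies gated later
        if fixable_only and not issue["fix_available"]:
            continue  # unfixed issues go to triage, not the CI gate
        return True
    return False

issues = [
    {"cvss": 9.8, "scope": "runtime", "fix_available": True},
    {"cvss": 7.5, "scope": "test", "fix_available": False},
]

# Step 1: Critical (9+), non-test, fixable only.
print(should_fail_build(issues))                # → True
# Step 2: lower the bar to High and above.
print(should_fail_build(issues, min_cvss=8.0))  # → True
```

Each "Do the same for ..." step above is just another call with looser parameters once the stricter gate is green.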
In this way what seems an impossibly large bit of work gets broken
down and you get value quickly. You can absolutely do this at your own
pace. I wouldn't advocate for CNCF to set deadlines, though guidelines
and reporting for graduated projects might be useful.
Separately, you likely want to have some level of triage for
vulnerabilities that don't have fixes available yet. The above
approach is somewhat mechanical, triage needs more context and
security experience. I'd at least recommend having maintainers triage
Critical severity issues in dependencies. Assuming that's rare, you
can extend this as far as you like and have time to do (to High, or
Medium, or a specific CVSS threshold).
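To make that concrete, a hedged sketch of the triage queue: pull out the findings that have no fix yet, at or above whatever CVSS threshold maintainers have capacity for, and review them by hand. The field names are hypothetical; real scanners export similar JSON.

```python
# Hedged sketch: unfixed findings at or above a CVSS threshold,
# most severe first, for manual maintainer triage.

def triage_queue(issues, min_cvss=9.0):
    """Return unfixed issues at/above the threshold, highest CVSS first."""
    queue = [i for i in issues
             if i["cvss"] >= min_cvss and not i["fix_available"]]
    return sorted(queue, key=lambda i: i["cvss"], reverse=True)

issues = [
    {"id": "CVE-A", "cvss": 9.1, "fix_available": False},
    {"id": "CVE-B", "cvss": 9.8, "fix_available": True},   # fixable: CI gate handles it
    {"id": "CVE-C", "cvss": 6.5, "fix_available": False},  # below threshold for now
]

for issue in triage_queue(issues):
    print(issue["id"])  # → CVE-A
```

Dropping `min_cvss` to 7.0 or 4.0 later extends the same queue down to High or Medium as time allows.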
> false positives from things like dependencies only used in test

I wouldn't think of test vulnerabilities as false positives, just
potentially a different type of vulnerability. As one example, a
compromised test dependency has the potential to steal build
credentials, and suddenly someone is shipping a compromised version of
the software to end users via your release toolchain.
I'm sure the above is obvious to some, but I thought it was worth
laying out. It should also be pretty tool agnostic.
As mentioned, happy to join conversations if folks are discussing.