On 2023-08-14 14:51, Richard Sandiford wrote:
> I think it would help to clarify what the aim of the security policy is.
> Specifically:
>
> (1) What service do we want to provide to users by classifying one thing
>     as a security bug and another thing as not a security bug?
>
> (2) What service do we want to provide to the GNU community by the same
>     classification?
>
> I think it will be easier to agree on the classification if we first
> agree on that.

I had actually wanted to give a talk on this at the Cauldron this year and *then* propose this for the GCC community, but I guess we could do this early :)

So the core intent of a security policy for a project is to make the project's security stance clear: to specify, to the extent possible, what kinds of uses are considered safe and what kinds of bugs would be considered security issues in the context of those uses.

There are a few advantages of doing this:

1. It makes clear to users of the project the scope in which it can be used and what safety they can reasonably expect from it. In the context of GCC, for example, users cannot expect the compiler to do a safety check of untrusted sources; the compiler will consider #include "/etc/passwd" just as valid code as #include <stdio.h>, and as a result the onus is on the user environment to validate the input sources for safety, as the sketch below shows.
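
For illustration, here's a minimal sketch (the file name leak.c is mine and purely hypothetical) of how a service that compiles untrusted input can be made to disclose local files, with no compiler bug involved:

    /* leak.c: the preprocessor will read any file the invoking user can access */
    #include "/etc/passwd"

    $ gcc -E leak.c    # preprocessed output now contains /etc/passwd

The compiler behaves exactly as documented here; sandboxing and validating inputs are the responsibility of the environment that invokes it.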

2. It helps the security community (MITRE, other CNAs and security researchers) set correct expectations for the project so that they don't cry wolf over every segfault or internal compiler error (ICE) under the pretext that the code could presumably be run as a service somehow and hence result in a "DoS".

3. This in turn helps stave off spurious CVE submissions, which cause needless churn in downstream distributions. LLVM is already starting to see this[1] and it's only a matter of time before people start doing the same for GCC.

4. It helps make a distinction between important bugs and security bugs; the two are often conflated. Security bugs are special because they require different handling from bugs that have no security impact, regardless of their actual importance. Unfortunately, one of the reasons they're special is that there's a bunch of (pretty dumb) automation out there that rings alarm bells on every single CVE. Without a clear understanding of the context in which a project can be used, these alarm bells can be made unreasonably loud due to incorrect scoring (see the LLVM CVE[1] for instance: changing just one element in its CVSS vector moves the score from 0.0 to 5.5), causing needless churn not just in the code base but in downstream releases and end-user environments.
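
To make the scoring point concrete, here's a worked CVSS v3.1 illustration (these vectors show the effect; I'm not claiming they're the exact ones NVD used). Flipping only the Availability metric from None to High takes a local bug that needs user interaction from no score at all to "Medium":

    CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:N  ->  0.0 (no impact at all)
    CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H  ->  5.5 (Medium)

That single judgment call (whether a crash of a developer tool counts as an availability impact) is exactly the kind of thing a clear statement of usage context would settle.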

5. This exercise is also a great start toward developing an understanding of which parts of GCC are security-sensitive and in what sense. Runtime libraries, for example, have a direct impact on application security. The compiler's impact is a little less direct. Hardening features have yet another effect, one that is more mitigation-oriented than a direct safety guarantee. This also informs us about the impact of various project actions, such as bundling third-party libraries and developing and maintaining tooling within GCC, and will hopefully guide policies around those practices.
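
As a hedged sketch of what "mitigation-oriented" means (the file name and buffer size are mine, for illustration; this relies on glibc's fortification support):

    /* fortify.c: a classic overflow that fortification can catch */
    #include <string.h>

    void copy(const char *src)
    {
        char buf[8];
        strcpy(buf, src);  /* overflows buf if src is longer than 7 chars */
    }

    $ gcc -O2 -D_FORTIFY_SOURCE=2 -c fortify.c

With fortification enabled, glibc's headers redirect this strcpy call to __strcpy_chk, which aborts at run time on overflow instead of silently corrupting the stack: it limits the damage but does not make the code correct.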

I hope this is a sufficient start. We don't necessarily want to get into the business of acknowledging or rejecting security issues upstream at the moment (but see also the CNA discussion[2] about what we intend to do in that space for glibc), but having uniform upstream guidelines would help researchers as well as downstream consumers decide what constitutes a security issue.

Thanks,
Sid

[1] https://nvd.nist.gov/vuln/detail/CVE-2023-29932
[2] https://inbox.sourceware.org/libc-alpha/1a44f25a-5aa3-28b7-1ecb-b3991d44c...@gotplt.org/T/
