I think it would help to clarify what the aim of the security policy is.
Specifically:

(1) What service do we want to provide to users by classifying one thing
    as a security bug and another thing as not a security bug?

(2) What service do we want to provide to the GNU community by the same
    classification?

I think it will be easier to agree on the classification if we first
agree on that.

Siddhesh Poyarekar <siddh...@gotplt.org> writes:
> Hi,
>
> Here's the updated draft of the top part of the security policy with all 
> of the recommendations incorporated.
>
> Thanks,
> Sid
>
>
> What is a GCC security bug?
> ===========================
>
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
>
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
>
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the application
>      context.  Limitations that apply to the compiler driver apply here
>      too in terms of sanitizing inputs, so it is recommended that inputs
>      are either sanitized by an external program to allow only trusted,
>      safe execution in the context of the application, or that the JIT
>      execution context is appropriately sandboxed to contain the effects
>      of any bugs in the JIT or its generated code to the sandboxed
>      environment.
>
>      Support libraries such as libiberty, libcc1, libvtv and libcpp have
>      been developed separately to share code with other tools such as
>      binutils and gdb.  These libraries again have similar challenges to
>      compiler drivers.  While they are expected to be robust against
>      arbitrary input, they should only be used with trusted inputs.
>
>      Libraries such as zlib that are bundled into GCC to build it will
>      be treated the same as the compiler drivers and programs as far as
>      security coverage is concerned.  However, if you find an issue in
>      these libraries independent of their use in GCC, you should reach
>      out to their upstream projects to report it.
>
>      As a result, the only case for a potential security issue in all
>      of these cases is when the compiler generates vulnerable
>      application code for trusted input source code that conforms to
>      the relevant programming standard or to extensions documented as
>      supported by GCC, and the algorithm expressed in the source code
>      does not itself have the vulnerability.  The output application
>      code could be considered vulnerable if it produces an actual
>      vulnerability in the target application, specifically in the
>      following cases:
>
>      - The application dereferences an invalid memory location despite
>        the application sources being valid.
>      - The application reads from or writes to a valid but incorrect
>        memory location, resulting in an information integrity issue or an
>        information leak.
>      - The application ends up running in an infinite loop or with
>        severe degradation in performance despite the input sources having
>        no such issue, resulting in a Denial of Service.  Note that
>        correct but non-performant code is not a security issue
>        candidate; this only applies to incorrect code that may result
>        in performance degradation severe enough to amount to a denial
>        of service.
>      - The application crashes due to the generated incorrect code,
>        resulting in a Denial of Service.

One difficulty is that wrong-code bugs are rarely confined to
a particular source code structure.  Something that causes a
miscompilation of a bounds check could later be discovered to cause a
miscompilation of something that is less obviously security-sensitive.
Or the same thing could happen in reverse.  And it's common for the
same bug to be reported multiple times, against different testcases.
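
To make that concrete, the kind of source structure I have in mind is
something like this (purely an illustration with arbitrary names, not a
known wrong-code case):

    #include <stddef.h>

    int
    read_element (const int *table, size_t n, size_t i)
    {
      /* The bounds check that the generated code must preserve.  */
      if (i >= n)
        return -1;
      return table[i];
    }

If a codegen bug dropped or mis-evaluated the comparison, the generated
code would read table out of bounds even though the source is valid,
which is the first of the cases listed above.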

The proposal says that certain kinds of wrong code could be a security
bug.  But what will be the criteria for deciding whether a wrong code
bug that *could* be classified as a security bug is in fact a security
bug?  Does someone have to show that at least one security-sensitive
application is vulnerable?  Or would it be based on a reasonable worst
case (to borrow a concept from the CVSS scoring)?

If it's based on proof, then:

(1) Doesn't that put FOSS projects (and particular projects in Debian/
    Red Hat/SUSE distros) in a more elevated position relative to other
    users?  Someone would be prepared to tell the Debian security team
    about a security bug in Debian, but should they be required to tell
    the Debian security team about a security bug in proprietary code
    that's compiled with GCC?  (I'm just picking Debian as an example.)

(2) As mentioned above, proof of security sensitivity could be provided
    alongside the first report of a codegen bug, or later.  What will
    the practical difference be between these two cases?  How will the
    experience of the reporter differ?

If it's based on reasonable worst case, then most wrong code bugs
would be security bugs.

> [...]
> Security hardening implemented in GCC
> -------------------------------------
>
>      GCC implements a number of security features that reduce the impact
>      of security issues in applications, such as -fstack-protector,
>      -fstack-clash-protection, _FORTIFY_SOURCE and so on.  A failure in
>      these features functioning perfectly in all situations is not a
>      security issue in itself since they're dependent on heuristics and
>      may not always have full coverage for protection.

I don't follow the last sentence.  Many security hardening features are
precise (or at least relatively precise) about what they do.  What they
do might only offer incomplete protection.  But they can still be
evaluated on their own terms, against their documentation.  (And I would
argue they can also be evaluated against reasonable expectation.)
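
Take _FORTIFY_SOURCE (with glibc) as one example: it is documented to
check calls such as memcpy when the compiler knows the destination
size, so a program like the following, built with -O2
-D_FORTIFY_SOURCE=2, should abort at run time rather than silently
overflow buf (again just an illustration, not a bug report):

    #include <string.h>

    int
    main (void)
    {
      char buf[8];
      /* The destination size is known, so the call is expected to be
         diverted to __memcpy_chk, which detects the overflow and
         aborts.  */
      memcpy (buf, "0123456789abcdef", 16);
      return buf[0];
    }

Whether that documented check fires is something that can be verified
directly, independently of how complete the protection is overall.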

For example, -fzero-call-used-regs=used zeros all call-used registers
that are "set or referenced in the function".  It's easy to establish
whether the option is doing this by examining the assembly code.
Zeroing those registers doesn't prevent all data leakage through
registers, and so in that sense doesn't provide "full coverage for
protection".  But that isn't what the option promises.
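
To show how directly that promise can be checked, take a throwaway
function like:

    int
    sum3 (int a, int b, int c)
    {
      return a + b + c;
    }

Compiling it with -O2 -S -fzero-call-used-regs=used should leave the
zeroing visible in the .s output (on x86_64, xors of the incoming
argument registers just before the ret), so whether the option does
what it documents is a yes/no question about the output, not a matter
of heuristics.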

In a reasonable worst case scenario, the failure of a security protection
feature to provide the promised protection could allow an exploit that
wouldn't have been possible otherwise.  IMO that makes it a security bug.

Thanks,
Richard
