On 29.01.2015, at 09:50, Raoul Duke <rao...@gmail.com> wrote:
>
>> Easier to keep the generator simple and let the compiler do the 
>> optimisations it already understands.
>
> Based on things like the experiences of Tahoe LAFS with compiler
> optimizations vs. security - let alone plain bugs - I am scared if
> anybody uses any optimizations at all, especially for serious
> business. I have not researched the position seL4 takes on this. Are
> the proofs about the before-optimization code, or do they consider the
> after-optimization assembly output?

Our current iteration of the binary verification uses "gcc -O1", so after mild 
optimisations. For the highest security requirements, this is currently the 
build to use.

For "normal" production builds and benchmarking we do use gcc -O2, which we 
have so far found reliable for seL4. We use a fairly easy-to-translate subset 
of C, and -O2 is the most frequently used code path of gcc, so we are 
reasonably confident that gcc works correctly in this case, but there is no 
proof, only tests and (some) inspection of the binary.
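To give a feel for what "easy to translate" means, here is an illustrative 
sketch (not taken from the seL4 sources): both functions below copy n ints, 
but the second stays in a plainer style, with no side effects buried inside 
larger expressions and an explicit loop counter, which is the kind of C that 
is simpler to relate to a formal semantics and to its compiled binary.

```c
#include <stddef.h>

/* Illustrative only -- not seL4 code. */

/* Terse idiomatic C: three side effects packed into one statement,
 * so the formal semantics of each step is more involved. */
void copy_terse(int *dst, const int *src, size_t n)
{
    while (n--) {
        *dst++ = *src++;
    }
}

/* Plainer style: one assignment per statement, explicit index,
 * easier to match against the generated machine code. */
void copy_plain(int *dst, const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = src[i];
    }
}
```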

We think we're fairly close to changing this so that the binary verification 
applies to the gcc -O2 build; the highest-security scenarios will then have 
the same performance benefits, but we're not quite there yet.

Note that we don’t have crypto code or code with constant time execution 
requirements in the kernel. These are not the only things that can go wrong, of 
course, but they are the ones where compiler optimisations often will give you 
headaches. Timing analysis is done on the binary, after optimisation.
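As a generic illustration of the kind of headache meant here (again, not seL4 
code): a naive byte comparison with an early exit leaks timing information, 
and even a hand-written "constant-time" variant only stays constant-time if 
the compiler does not transform it away, which is one reason such properties 
have to be checked on the binary.

```c
#include <stddef.h>
#include <stdint.h>

/* Naive comparison: returns on the first mismatch, so its runtime
 * depends on where (possibly secret) data first differs. */
int naive_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i]) {
            return 0;   /* early exit leaks timing */
        }
    }
    return 1;
}

/* Constant-time intent: always touches every byte and accumulates
 * differences. Note that an optimising compiler is still free to
 * rewrite this unless further precautions are taken -- checking the
 * source alone is not enough. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= a[i] ^ b[i];
    }
    return diff == 0;
}
```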

Cheers,
Gerwin


_______________________________________________
Devel mailing list
Devel@sel4.systems
https://sel4.systems/lists/listinfo/devel
