* Kees Cook <keesc...@chromium.org> wrote:

> On Tue, Nov 15, 2016 at 11:16 AM, Peter Zijlstra <pet...@infradead.org> wrote:
> >
> > On 15 November 2016 19:06:28 CET, Kees Cook <keesc...@chromium.org> wrote:
> >
> >> I'll want to modify this in the future; I have a config already doing
> >> "Bug on data structure corruption" that makes the warn/bug choice.
> >> It'll need some massaging to fit into the new refcount_t checks, but
> >> it should be okay -- there needs to be a way to complete the
> >> saturation, etc, but still kill the offending process group.
> >
> > Ideally we'd create a new WARN like construct that continues in kernel
> > space and terminates the process on return to user. That way there
> > would be minimal kernel state corruption.
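
[ As a rough illustration of the "saturate instead of overflowing, then
  kill the offender on its way back to user space" behaviour discussed in
  the quote above, here is a minimal userspace model built on C11 atomics.
  It is only a sketch: the function name, the saturation value and the
  reporting are illustrative assumptions, not the in-tree refcount_t
  implementation, and the kill-on-return-to-user step is only indicated in
  a comment. ]

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

/* Once the counter reaches this value it is pinned there forever: */
#define REFCOUNT_SATURATED	UINT_MAX

static void refcount_inc_saturating(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	for (;;) {
		/* Already saturated: leak the object rather than overflow: */
		if (old == REFCOUNT_SATURATED)
			return;

		if (old == 0 || old == REFCOUNT_SATURATED - 1) {
			/*
			 * Inc-from-zero or imminent overflow detected: pin
			 * the counter so the object can never be freed,
			 * report the event, and leave it to the caller to
			 * kill the offending task on return to user space
			 * instead of BUG()ing in kernel context.
			 */
			atomic_store(r, REFCOUNT_SATURATED);
			fprintf(stderr, "refcount: corruption detected, saturating\n");
			return;
		}

		/* Normal case: race-free increment: */
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return;
	}
}

int main(void)
{
	atomic_uint refs = 1;

	refcount_inc_saturating(&refs);		/* normal inc: 1 -> 2	 */

	atomic_store(&refs, 0);
	refcount_inc_saturating(&refs);		/* inc-from-zero: pins	 */

	printf("final value: %u\n", (unsigned int)atomic_load(&refs));
	return 0;
}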
Yeah, so the problem is that sometimes you are p0wned the moment you return
to a corrupted stack, and some of these checks only detect the corruption
after the fact.

> Right, though I'd like to be conservative about the kernel execution
> continuing... I'll experiment with it.

So what I'd love to see is a kernel option that re-introduces some historic
root (and other) holes that can be exploited deterministically - obviously
default disabled.

I'd restrict this to reasonably 'deterministic' holes, and the exploits
themselves could live somewhere in tools/. (Obviously only where the
maintainers agree to host the code.) They wouldn't give a root shell,
they'd only test whether they reached uid0 (or some other elevated
privilege); a rough sketch of such a test case is appended below.

The advantages of such a suite would be:

 - Up-to-date tests on modern kernels: it would allow the (controlled)
   testing of live kernel exploits even on the latest kernel - and would
   allow the testing of various defensive measures.

 - It would also make sure that defensive measures _remain_ effective
   against similar categories of bugs. We've had defensive measure
   regressions in the past, which were only discovered when the next
   exploit came out ...

 - Testing of new defensive measures: it would help convert this whole
   probabilistic and emotion-driven "kernel protection" business into
   something somewhat more rational. For example, new protection
   mechanisms should have a demonstrated ability to turn an existing
   exploit test into something less dangerous.

 - Education: it would teach kernel developers the various patterns of
   holes, right in the code. Maybe being more directly exposed to what can
   get you p0wned is a stronger educational force, and it could also give
   people ideas about how to protect better.

 - I also think that collecting the various problems into a single place
   would give us new insights into patterns, bug counts and various
   exploit techniques.

The disadvantages would be:

 - Maintenance: do we want to add extra (compiled out by default) code to
   the kernel whose only purpose is to demonstrate certain types of bugs?

 - Exposing exploits: do we want to host a powerful collection of
   almost-exploits in tools/? I don't think we have a choice but to face
   the problem directly - but others might disagree.

I think most of the negatives could be kept small by starting small,
allowing maintainers to explicitly opt in, and observing the effects as we
go. But YMMV.

Thanks,

	Ingo
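
[ For illustration, here is a hypothetical shape one of the proposed
  tools/ test cases could take, with the exploit body stubbed out. The
  helper name and the PASS/FAIL convention are assumptions made for this
  sketch - this is not an existing kselftest - and the only thing it
  asserts is whether the process ended up with uid 0. ]

#include <stdio.h>
#include <unistd.h>

/*
 * Placeholder for the actual (opt-in, default-disabled) hole being
 * exercised; a real test would attempt to abuse the deliberately
 * re-introduced bug here.
 */
static void try_reintroduced_hole(void)
{
	/* ... exploit attempt goes here ... */
}

int main(void)
{
	if (getuid() == 0 || geteuid() == 0) {
		fprintf(stderr, "SKIP: must be started unprivileged\n");
		return 2;
	}

	try_reintroduced_hole();

	/* "Success" is defined purely as having reached uid0: */
	if (getuid() == 0 || geteuid() == 0) {
		printf("FAIL: exploit reached uid0\n");
		return 1;
	}

	printf("PASS: still running as uid %d\n", (int)getuid());
	return 0;
}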