On Tue, Feb 19, 2019 at 6:37 PM Jann Horn wrote:
>
> On Wed, Feb 20, 2019 at 1:55 AM Kees Cook wrote:
> > + if (WARN_ONCE(cr4_pin && (val & cr4_pin) == 0,
>
> Don't you mean `cr4_pin && (val & cr4_pin) != cr4_pin)`?
Whoops! Yes, thanks. :)
--
Kees Cook
Several recent exploits have used direct calls to the native_write_cr4()
function to disable SMEP and SMAP before then continuing their exploits
using userspace memory access. This pins bits of cr4 so that they cannot
be changed through a common function. This is not intended to be general
ROP protection.