On Sun, 2018-02-04 at 19:43 +0100, Thomas Gleixner wrote:
> Yet another possibility is to avoid the function entry and accounting magic
> and use the generic gcc return thunk:
>
> __x86_return_thunk:
>         call L2
> L1:
>         pause
>         lfence
>         jmp L1
> L2:
>         lea 8(%
On Sun, 2018-02-04 at 19:43 +0100, Thomas Gleixner wrote:
>
> __x86_return_thunk would look like this:
>
> __x86_return_thunk:
>         testl $0xf, PER_CPU_VAR(call_depth)
>         jnz 1f
>         stuff_rsb
> 1:
>         decl PER_CPU_VAR(call_depth)
>         ret
>
> The ca
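The quoted __x86_return_thunk refills the RSB only on every 16th net return. A minimal C sketch of just that decision logic, assuming the semantics shown in the quoted assembly (stuff when the low four bits of call_depth are zero, then decrement); the per-CPU counter is simplified to a global and the actual RSB stuffing is stubbed out with a counter:

```c
#include <assert.h>

/* Stand-in for PER_CPU_VAR(call_depth), simplified to one global. */
static int call_depth;

/* Hypothetical stand-in for the stuff_rsb macro: just records how
 * often the RSB would have been refilled. */
static int rsb_stuff_count;

/* Mirrors the quoted thunk: testl $0xf / jnz skips the stuff unless
 * the low four bits of call_depth are zero, then decl decrements. */
static void return_thunk(void)
{
    if ((call_depth & 0xf) == 0)
        rsb_stuff_count++;      /* stuff_rsb would go here */
    call_depth--;
}
```

Starting from a counter of zero, 32 consecutive returns trigger the refill twice (at counter values 0 and -16), i.e. once per 16 returns, which is the amortization the thread is weighing against per-call overhead.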
On Tue, 23 Jan 2018, Ingo Molnar wrote:
> * David Woodhouse wrote:
>
> > > On SkyLake this would add an overhead of maybe 2-3 cycles per function
> > > call and
> > > obviously all this code and data would be very cache hot. Given that the
> > > average
> > > number of function calls per syst
[ Dropping large CC list ]
On 25/01/2018 18:16, Greg Kroah-Hartman wrote:
> On Thu, Jan 25, 2018 at 05:19:04PM +0100, Mason wrote:
>
>> On 23/01/2018 10:30, David Woodhouse wrote:
>>
>>> Skylake takes predictions from the generic branch target buffer when
>>> the RSB underflows.
>>
>> Adding LAKML.
On 01/27/2018 05:42 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 26, 2018 at 07:11:47PM +, Hansen, Dave wrote:
>> The need for RSB stuffing in all the various scenarios and what the heck it
>> actually mitigates is freakishly complicated. I've tried to write it all
>> down in one place: ht
On Fri, Jan 26, 2018 at 07:11:47PM +, Hansen, Dave wrote:
> The need for RSB stuffing in all the various scenarios and what the heck it
> actually mitigates is freakishly complicated. I've tried to write it all
> down in one place: https://goo.gl/pXbvBE
Thank you for sharing that.
One ques
On Fri, Jan 26, 2018 at 09:19:09AM -0800, Linus Torvalds wrote:
> But did we do that "disable stuffing with SMEP"? I'm not seeing it. In
> my tree, it's only conditional on X86_FEATURE_RETPOLINE.
Or rather, enable stuffing on !SMEP:
+ if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+!boo
On Fri, 2018-01-26 at 14:02 -0500, Konrad Rzeszutek Wilk wrote:
>
> -ECONFUSED, see ==>
>
> Is this incorrect then?
> I see:
>
> 241 * Skylake era CPUs have a separate issue with *underflow* of the
>
> 242 * RSB, when they will predict 'ret' targets from the generic
>
The need for RSB stuffing in all the various scenarios and what the heck it
actually mitigates is freakishly complicated. I've tried to write it all down
in one place: https://goo.gl/pXbvBE
On Fri, Jan 26, 2018 at 09:59:01AM -0800, Andi Kleen wrote:
> On Fri, Jan 26, 2018 at 09:19:09AM -0800, Linus Torvalds wrote:
> > On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse
> > wrote:
> > >
> > > Do we need to look again at the fact that we've disabled the RSB-
> > > stuffing for SMEP?
> >
On Fri, 2018-01-26 at 18:44 +, Van De Ven, Arjan wrote:
> your question was specific to RSB not BTB. But please show the empirical
> evidence for RSB ?
We were hypothesising, which should have been clear from:
On Fri, 2018-01-26 at 09:11 +, David Woodhouse wrote:
> Likewise if the RSB on
> > you asked before and even before you sent the email I confirmed to
> > you that the document is correct
> >
> > I'm not sure what the point is to then question that again 15 minutes
> > later other than creating more noise.
>
> Apologies, I hadn't seen the comment on IRC.
>
> Sometimes the do
On Fri, 2018-01-26 at 18:28 +, Van De Ven, Arjan wrote:
> > As you know well, I mean "we think Intel's document is not
> > correct".
>
> you asked before and even before you sent the email I confirmed to
> you that the document is correct
>
> I'm not sure what the point is to then question tha
> On Fri, 2018-01-26 at 10:12 -0800, Arjan van de Ven wrote:
> > On 1/26/2018 10:11 AM, David Woodhouse wrote:
> > >
> > > I am *actively* ignoring Skylake right now. This is about pre-SKL
> > > userspace even with SMEP, because we think Intel's document lies to us.
> >
> > if you think we lie to y
On Fri, 2018-01-26 at 10:12 -0800, Arjan van de Ven wrote:
> On 1/26/2018 10:11 AM, David Woodhouse wrote:
> >
> > I am *actively* ignoring Skylake right now. This is about pre-SKL
> > userspace even with SMEP, because we think Intel's document lies to us.
>
> if you think we lie to you then I th
On Fri, 2018-01-26 at 09:59 -0800, Andi Kleen wrote:
> On Fri, Jan 26, 2018 at 09:19:09AM -0800, Linus Torvalds wrote:
> >
> > On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse wrote:
> > >
> > >
> > > Do we need to look again at the fact that we've disabled the RSB-
> > > stuffing for SMEP?
> >
On 1/26/2018 10:11 AM, David Woodhouse wrote:
I am *actively* ignoring Skylake right now. This is about pre-SKL
userspace even with SMEP, because we think Intel's document lies to us.
if you think we lie to you then I think we're done with the conversation?
Please tell us then what you deploy
On Fri, Jan 26, 2018 at 09:19:09AM -0800, Linus Torvalds wrote:
> On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse wrote:
> >
> > Do we need to look again at the fact that we've disabled the RSB-
> > stuffing for SMEP?
>
> Absolutely. SMEP helps make people a lot less worried about things,
> but
On Fri, 2018-01-26 at 17:29 +, David Woodhouse wrote:
> On Fri, 2018-01-26 at 09:19 -0800, Linus Torvalds wrote:
> > On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse
> > wrote:
> > > Do we need to look again at the fact that we've disabled the RSB-
> > > stuffing for SMEP?
> >
> > Absolutely.
On Fri, 2018-01-26 at 09:19 -0800, Linus Torvalds wrote:
> On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse wrote:
> >
> >
> > Do we need to look again at the fact that we've disabled the RSB-
> > stuffing for SMEP?
> Absolutely. SMEP helps make people a lot less worried about things,
> but it d
On Fri, Jan 26, 2018 at 1:11 AM, David Woodhouse wrote:
>
> Do we need to look again at the fact that we've disabled the RSB-
> stuffing for SMEP?
Absolutely. SMEP helps make people a lot less worried about things,
but it doesn't fix the "BTB only contains partial addresses" case.
But did we do
On Thu, 2018-01-25 at 18:23 -0800, Dave Hansen wrote:
> On 01/25/2018 06:11 PM, Liran Alon wrote:
> >
> > It is true that attacker cannot speculate to a kernel-address, but it
> > doesn't mean it cannot use the leaked kernel-address together with
> > another unrelated vulnerability to build a reli
On Thu, 2018-01-25 at 18:11 -0800, Liran Alon wrote:
>
> P.S:
> It seems to me that all these issues could be resolved completely at
> hardware in future CPUs if BTB/BHB/RSB entries were tagged with
> prediction-mode (or similar metadata). It will be nice if Intel/AMD
> could share if that is the
> Subject: Re: [RFC 09/10] x86/enter: Create macros to rest
- dave.han...@intel.com wrote:
> On 01/25/2018 06:11 PM, Liran Alon wrote:
> > It is true that attacker cannot speculate to a kernel-address, but
> it
> > doesn't mean it cannot use the leaked kernel-address together with
> > another unrelated vulnerability to build a reliable exploit.
>
> T
On 01/25/2018 06:11 PM, Liran Alon wrote:
> It is true that attacker cannot speculate to a kernel-address, but it
> doesn't mean it cannot use the leaked kernel-address together with
> another unrelated vulnerability to build a reliable exploit.
The address doesn't leak if you can't execute there.
- dave.han...@intel.com wrote:
> On 01/23/2018 03:13 AM, Liran Alon wrote:
> > Therefore, breaking KASLR. In order to handle this, every exit from
> > kernel-mode to user-mode should stuff RSB. In addition, this
> stuffing
> > of RSB may need to be done from a fixed address to avoid leaking
>
On 01/23/2018 03:13 AM, Liran Alon wrote:
> Therefore, breaking KASLR. In order to handle this, every exit from
> kernel-mode to user-mode should stuff RSB. In addition, this stuffing
> of RSB may need to be done from a fixed address to avoid leaking the
> address of the RSB stuffing itself.
With
On Thu, Jan 25, 2018 at 05:19:04PM +0100, Mason wrote:
> On 23/01/2018 10:30, David Woodhouse wrote:
>
> > Skylake takes predictions from the generic branch target buffer when
> > the RSB underflows.
>
> Adding LAKML.
>
> AFAIU, some ARM Cortex cores have the same optimization.
> (A9 maybe, A17
On 23/01/2018 10:30, David Woodhouse wrote:
> Skylake takes predictions from the generic branch target buffer when
> the RSB underflows.
Adding LAKML.
AFAIU, some ARM Cortex cores have the same optimization.
(A9 maybe, A17 probably, some recent 64-bit cores)
Are there software work-arounds for
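The Skylake concern above is that the RSB holds only 16 return predictions, so once more rets than calls have executed on the way back up a deep call chain, return prediction falls through to the generic BTB. A toy C model of that underflow accounting (illustrative only; a real RSB is a per-thread structure inside the core):

```c
#include <assert.h>

#define RSB_DEPTH 16

static int rsb_entries;   /* valid return predictions remaining */
static int underflows;    /* returns that would be predicted from the BTB */

/* A 'call' pushes a return prediction; beyond 16 deep, older
 * entries are simply overwritten, so the count saturates. */
static void model_call(void)
{
    if (rsb_entries < RSB_DEPTH)
        rsb_entries++;
}

/* A 'ret' consumes a prediction; with none left, Skylake-era
 * cores fall back to the generic branch target buffer. */
static void model_ret(void)
{
    if (rsb_entries > 0)
        rsb_entries--;
    else
        underflows++;
}
```

For example, eight calls followed by ten returns leaves the last two returns predicted from the BTB, which is exactly the window the mitigation discussion is about.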
> On Jan 23, 2018, at 5:59 PM, Van De Ven, Arjan
> wrote:
>
>
>>> It is a reasonable approach. Let a process who needs max security
>>> opt in with disabled dumpable. It can have a flush with IBPB clear before
>>> starting to run, and have STIBP set while running.
>>>
>>
>> Do we maybe wan
> > It is a reasonable approach. Let a process who needs max security
> > opt in with disabled dumpable. It can have a flush with IBPB clear before
> > starting to run, and have STIBP set while running.
> >
>
> Do we maybe want a separate opt in? I can easily imagine things like
> web browsers
On Tue, 2018-01-23 at 17:00 -0800, Andy Lutomirski wrote:
> On Tue, Jan 23, 2018 at 4:47 PM, Tim Chen wrote:
> >
> > On 01/23/2018 03:14 PM, Woodhouse, David wrote:
> > >
> > > On Tue, 2018-01-23 at 14:49 -0800, Andi Kleen wrote:
> > > >
> > > > >
> > > > > Not sure. Maybe to start, the answe
On Tue, Jan 23, 2018 at 4:47 PM, Tim Chen wrote:
> On 01/23/2018 03:14 PM, Woodhouse, David wrote:
>> On Tue, 2018-01-23 at 14:49 -0800, Andi Kleen wrote:
Not sure. Maybe to start, the answer might be to allow it to be set for
the ultra-paranoid, but in general don't enable it by defaul
On 01/23/2018 03:14 PM, Woodhouse, David wrote:
> On Tue, 2018-01-23 at 14:49 -0800, Andi Kleen wrote:
>>> Not sure. Maybe to start, the answer might be to allow it to be set for
>>> the ultra-paranoid, but in general don't enable it by default. Having it
>>> enabled would be an alternative to so
Ingo Molnar writes:
>
> Is there any reason why this wouldn't work?
To actually maintain the true call depth you would need to intercept the
return of the function too, because the counter has to be decremented
at the end of the function.
Plain ftrace cannot do that because it only intercepts th
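The point above is that an entry-only hook such as __fentry__ cannot maintain a true call-depth counter: every increment at function entry needs a matching decrement at return. A small C sketch of that pairing, with hypothetical on_entry/on_exit hooks standing in for the entry hook and return thunk:

```c
#include <assert.h>

static int call_depth;
static int max_depth;

/* What an __fentry__-style hook can do: run at function entry only. */
static void on_entry(void)
{
    if (++call_depth > max_depth)
        max_depth = call_depth;
}

/* The missing half ftrace alone does not provide: without a
 * return-side hook pairing every entry, call_depth only ever
 * grows and stops reflecting the real stack depth. */
static void on_exit(void)
{
    call_depth--;
}

/* Manually instrumented toy call tree: parent() calls leaf() twice. */
static void leaf(void)   { on_entry(); on_exit(); }
static void parent(void) { on_entry(); leaf(); leaf(); on_exit(); }
```

After parent() returns, call_depth is back to 0 only because each on_entry was paired with an on_exit; dropping the exit hook would leave the counter at 3 and the depth tracking meaningless.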
On Tue, Jan 23, 2018 at 11:14:36PM +, Woodhouse, David wrote:
> On Tue, 2018-01-23 at 14:49 -0800, Andi Kleen wrote:
> > > Not sure. Maybe to start, the answer might be to allow it to be set for
> > > the ultra-paranoid, but in general don't enable it by default. Having it
> > > enabled would
On Tue, 2018-01-23 at 14:49 -0800, Andi Kleen wrote:
> > Not sure. Maybe to start, the answer might be to allow it to be set for
> > the ultra-paranoid, but in general don't enable it by default. Having it
> > enabled would be an alternative to someone deciding to disable SMT, since
> > that woul
> Not sure. Maybe to start, the answer might be to allow it to be set for
> the ultra-paranoid, but in general don't enable it by default. Having it
> enabled would be an alternative to someone deciding to disable SMT, since
> that would have even more of a performance impact.
I agree. A reasona
On 1/23/2018 10:20 AM, Woodhouse, David wrote:
> On Tue, 2018-01-23 at 10:12 -0600, Tom Lendacky wrote:
>>
+.macro UNRESTRICT_IB_SPEC
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+ PUSH_MSR_REGS
+ WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $0
>>>
>> I think you should
On Sun 2018-01-21 20:28:17, David Woodhouse wrote:
> On Sun, 2018-01-21 at 11:34 -0800, Linus Torvalds wrote:
> > All of this is pure garbage.
> >
> > Is Intel really planning on making this shit architectural? Has
> > anybody talked to them and told them they are f*cking insane?
> >
> > Please,
On Tue, 2018-01-23 at 10:12 -0600, Tom Lendacky wrote:
>
> >> +.macro UNRESTRICT_IB_SPEC
> >> + ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
> >> + PUSH_MSR_REGS
> >> + WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $0
> >
> I think you should be writing 2, not 0, since I'm reasonably
> confide
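The "2, not 0" remark refers to the bit layout of the IA32_SPEC_CTRL MSR: bit 0 is IBRS and bit 1 is STIBP, so clearing IBRS while leaving STIBP in force means writing the value 2 rather than 0. A user-space C sketch of just that value computation (the WRMSR itself is omitted; bit definitions per Intel's speculation-control documentation, and the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Bit layout of IA32_SPEC_CTRL (MSR 0x48). */
#define SPEC_CTRL_IBRS  (1u << 0)  /* Indirect Branch Restricted Speculation */
#define SPEC_CTRL_STIBP (1u << 1)  /* Single Thread Indirect Branch Predictors */

/* To "unrestrict" indirect branch speculation, clear IBRS; if STIBP
 * should stay set while the task runs, the value written is 2, not 0. */
static uint32_t spec_ctrl_unrestrict(uint32_t cur, int keep_stibp)
{
    uint32_t val = cur & ~SPEC_CTRL_IBRS;
    if (keep_stibp)
        val |= SPEC_CTRL_STIBP;
    else
        val &= ~SPEC_CTRL_STIBP;
    return val;
}
```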
On 1/21/2018 1:14 PM, Andy Lutomirski wrote:
>
>
>> On Jan 20, 2018, at 11:23 AM, KarimAllah Ahmed wrote:
>>
>> From: Tim Chen
>>
>> Create macros to control Indirect Branch Speculation.
>>
>> Name them so they reflect what they are actually doing.
>> The macros are used to restrict and unrestr
On 01/23/2018 01:27 AM, Ingo Molnar wrote:
>
> - All asynchronous contexts (IRQs, NMIs, etc.) stuff the RSB before IRET.
> (The
>tracking could probably made IRQ and maybe even NMI safe, but the
> worst-case
>nesting scenarios make my head ache.)
This all sounds totally workable to m
- dw...@infradead.org wrote:
> On Sun, 2018-01-21 at 14:27 -0800, Linus Torvalds wrote:
> > On Sun, Jan 21, 2018 at 2:00 PM, David Woodhouse
> wrote:
> > >>
> > >> The patches do things like add the garbage MSR writes to the
> kernel
> > >> entry/exit points. That's insane. That says "we're
On Tue, 2018-01-23 at 11:44 +0100, Ingo Molnar wrote:
> * David Woodhouse wrote:
> > Hm? We still have GCC emitting 'call __fentry__' don't we? Would be nice to
> > get
> > to the point where we can patch *that* out into a NOP... or are you saying
> > we
> > already can?
> Yes, we already can
* David Woodhouse wrote:
> On Tue, 2018-01-23 at 11:15 +0100, Ingo Molnar wrote:
> >
> > BTW., the reason this is enabled on all distro kernels is because the
> > overhead
> > is a single patched-in NOP instruction in the function epilogue, when
> > tracing
> > is disabled. So it's not ev
On Tue, 2018-01-23 at 11:23 +0100, Ingo Molnar wrote:
> * David Woodhouse wrote:
>
> >
> > >
> > > On SkyLake this would add an overhead of maybe 2-3 cycles per function
> > > call and
> > > obviously all this code and data would be very cache hot. Given that the
> > > average
> > > number
On Tue, 2018-01-23 at 11:15 +0100, Ingo Molnar wrote:
>
> BTW., the reason this is enabled on all distro kernels is because the
> overhead is
> a single patched-in NOP instruction in the function epilogue, when tracing is
> disabled. So it's not even a CALL+RET - it's a patched in NOP.
Hm? We
* David Woodhouse wrote:
> > On SkyLake this would add an overhead of maybe 2-3 cycles per function call
> > and
> > obviously all this code and data would be very cache hot. Given that the
> > average
> > number of function calls per system call is around a dozen, this would be
> > _much_
* David Woodhouse wrote:
> On Tue, 2018-01-23 at 08:53 +0100, Ingo Molnar wrote:
> >
> > The patch below demonstrates the principle, it forcibly enables dynamic
> > ftrace
> > patching (CONFIG_DYNAMIC_FTRACE=y et al) and turns mcount/__fentry__ into a
> > RET:
> >
> > 81a01a40 <__
On Tue, 2018-01-23 at 10:27 +0100, Ingo Molnar wrote:
> * Ingo Molnar wrote:
>
> >
> > Is there a testcase for the SkyLake 16-deep-call-stack problem that I could
> > run?
> > Is there a description of the exact speculative execution vulnerability
> > that has
> > to be addressed to begin wi
On Tue, 2018-01-23 at 08:53 +0100, Ingo Molnar wrote:
>
> The patch below demonstrates the principle, it forcibly enables dynamic
> ftrace
> patching (CONFIG_DYNAMIC_FTRACE=y et al) and turns mcount/__fentry__ into a
> RET:
>
> 81a01a40 <__fentry__>:
> 81a01a40: c3
* Ingo Molnar wrote:
> Is there a testcase for the SkyLake 16-deep-call-stack problem that I could
> run?
> Is there a description of the exact speculative execution vulnerability that
> has
> to be addressed to begin with?
Ok, so for now I'm assuming that this is the 16 entries return-stac
* Ingo Molnar wrote:
> * David Woodhouse wrote:
>
> > But wait, why did I say "mostly"? Well, not everyone has a retpoline
> > compiler yet... but OK, screw them; they need to update.
> >
> > Then there's Skylake, and that generation of CPU cores. For complicated
> > reasons they actually end
* David Woodhouse wrote:
> But wait, why did I say "mostly"? Well, not everyone has a retpoline
> compiler yet... but OK, screw them; they need to update.
>
> Then there's Skylake, and that generation of CPU cores. For complicated
> reasons they actually end up being vulnerable not just on indir
[apologies for breaking the reply-thread]
David wrote:
> I think we've covered the technical part of this now, not that you like
> it - not that any of us *like* it. But since the peanut gallery is
> paying lots of attention it's probably worth explaining it a little
> more for their benefit.
i'
On Sun, 2018-01-21 at 14:27 -0800, Linus Torvalds wrote:
> On Sun, Jan 21, 2018 at 2:00 PM, David Woodhouse wrote:
> >>
> >> The patches do things like add the garbage MSR writes to the kernel
> >> entry/exit points. That's insane. That says "we're trying to protect
> >> the kernel". We already h
On Sun, Jan 21, 2018 at 2:00 PM, David Woodhouse wrote:
>>
>> The patches do things like add the garbage MSR writes to the kernel
>> entry/exit points. That's insane. That says "we're trying to protect
>> the kernel". We already have retpoline there, with less overhead.
>
> You're looking at IBRS
On Sun, 2018-01-21 at 13:35 -0800, Linus Torvalds wrote:
> On Sun, Jan 21, 2018 at 12:28 PM, David Woodhouse wrote:
> > As a hack for existing CPUs, it's just about tolerable — as long as it
> > can die entirely by the next generation.
>
> That's part of the big problem here. The speculation contr
On Sun, Jan 21, 2018 at 12:28 PM, David Woodhouse wrote:
> On Sun, 2018-01-21 at 11:34 -0800, Linus Torvalds wrote:
>> All of this is pure garbage.
>>
>> Is Intel really planning on making this shit architectural? Has
>> anybody talked to them and told them they are f*cking insane?
>>
>> Please, a
On Sun, 2018-01-21 at 11:34 -0800, Linus Torvalds wrote:
> All of this is pure garbage.
>
> Is Intel really planning on making this shit architectural? Has
> anybody talked to them and told them they are f*cking insane?
>
> Please, any Intel engineers here - talk to your managers.
If the altern
> On Jan 20, 2018, at 11:23 AM, KarimAllah Ahmed wrote:
>
> From: Tim Chen
>
> Create macros to control Indirect Branch Speculation.
>
> Name them so they reflect what they are actually doing.
> The macros are used to restrict and unrestrict the indirect branch
> speculation.
> They do not
From: Tim Chen
Create macros to control Indirect Branch Speculation.
Name them so they reflect what they are actually doing.
The macros are used to restrict and unrestrict the indirect branch speculation.
They do not *disable* (or *enable*) indirect branch speculation. A trip back to
user-space