Hi,

> On Feb 12, 2018, at 10:37 AM, Shawn Webb <shawn.w...@hardenedbsd.org> wrote:
> 
> On Mon, Feb 12, 2018 at 02:45:27PM +0000, Tycho Nightingale wrote:
>> Author: tychon
>> Date: Mon Feb 12 14:45:27 2018
>> New Revision: 329162
>> URL: https://svnweb.freebsd.org/changeset/base/329162
>> 
>> Log:
>>  Provide further mitigation against CVE-2017-5715 by flushing the
>>  return stack buffer (RSB) upon returning from the guest.
>> 
>>  This was inspired by this linux commit:
>>  
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kvm?id=117cc7a908c83697b0b737d15ae1eb5943afe35b
>> 
>>  Reviewed by:        grehan
>>  Sponsored by:       Dell EMC Isilon
>>  Differential Revision:      https://reviews.freebsd.org/D14272
>> 
>> Modified:
>>  head/sys/amd64/vmm/amd/svm_support.S
>>  head/sys/amd64/vmm/intel/vmcs.c
>>  head/sys/amd64/vmm/intel/vmx.h
>>  head/sys/amd64/vmm/intel/vmx_support.S
>> 
>> Modified: head/sys/amd64/vmm/amd/svm_support.S
>> ==============================================================================
>> --- head/sys/amd64/vmm/amd/svm_support.S	Mon Feb 12 14:44:21 2018	(r329161)
>> +++ head/sys/amd64/vmm/amd/svm_support.S	Mon Feb 12 14:45:27 2018	(r329162)
>> @@ -113,6 +113,23 @@ ENTRY(svm_launch)
>>      movq %rdi, SCTX_RDI(%rax)
>>      movq %rsi, SCTX_RSI(%rax)
>> 
>> +    /*
>> +     * To prevent malicious branch target predictions from
>> +     * affecting the host, overwrite all entries in the RSB upon
>> +     * exiting a guest.
>> +     */
>> +    mov $16, %ecx   /* 16 iterations, two calls per loop */
>> +    mov %rsp, %rax
>> +0:  call 2f         /* create an RSB entry. */
>> +1:  pause
>> +    call 1b         /* capture rogue speculation. */
>> +2:  call 2f         /* create an RSB entry. */
>> +1:  pause
>> +    call 1b         /* capture rogue speculation. */
>> +2:  sub $1, %ecx
>> +    jnz 0b
>> +    mov %rax, %rsp
>> +
>>      /* Restore host state */
>>      pop %r15
>>      pop %r14
>> 
> 
> For AMD systems, isn't use of lfence required for performance
> reasons [1]? Or am I conflating two things?
> 
> 1: https://reviews.llvm.org/D41723

For what AMD calls V2 (the window of speculative execution between the 
prediction of an indirect branch and the resolution of its correct target), 
there are a few mitigations cited in their white paper:

	https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf

depending on the specific code you are trying to “fix”.  In my interpretation, 
lfence is a component of several of the possible mitigations when one wants to 
“fix” a specific indirect branch, but it does not ensure that subsequent 
branches will not be speculated around.  In this case we are trying to guard 
against the more generic case of “entering more privileged code”, i.e. 
returning from the guest to the hypervisor (aka the host), and to protect all 
subsequent indirect branches without needing to apply an lfence to each of 
them individually.  To do that, I’ve implemented mitigation V2-3, where the 
return address predictor is filled with benign entries.
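
For contrast, a per-branch fix would look something like the sketch below.
This is only an illustration in the same GAS syntax as the commit; the symbol
and register names are made up, and it assumes lfence acts as a dispatch
serializing instruction on the processor in question:

	/* hypothetical indirect call, hardened individually */
	movq	handler_ptr(%rip), %rax	/* load the intended branch target */
	lfence				/* no younger instruction executes until the load resolves */
	callq	*%rax			/* branch only after the target is known */

The drawback is that every indirect branch on the privileged path needs that
treatment, whereas filling the return address predictor with benign entries
once, on guest exit, covers the subsequent returns in one place.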

Does that help?

Tycho
_______________________________________________
svn-src-all@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to "svn-src-all-unsubscr...@freebsd.org"
