>>> No, the test is bypassed if XSAVE is 0, not 1. XSAVE being 0 also
>>> implies that the AVX flag [as well as FMA and XOP] is 0, which is why
>>> it jumps to 'done' and not 'clear_avx'.
>>
>> This assertion is unfortunately not true on RHEL-6 guests on AVX-capable
>> CPUs in a XEN VM.
Could you spell it out for me? Which flags does the guest observe
exactly? XSAVE=0 and AVX=1? I.e. XEN cared to mask the XSAVE flag, but
not AVX? Is the bit masking configurable? If not, how come it clears
XSAVE, but not AVX (and FMA)? Wouldn't one consider that a bug? I'm not
trying to push it away, just trying to understand...

> In fact, I can't find in any spec the assertion that XSAVE=0 implies
> AVX=FMA=XOP=0 (I checked the "Intel 64 and IA-32 Architectures
> Software Developer's Manual" volumes 1/2/3 and Intel appnote 485,
> "Intel Processor Identification and the CPUID Instruction").
>
> The only assertion I found is that XSAVE=0 implies OSXSAVE=0 (and
> OSXSAVE=1 implies XSAVE=1).

But in order to be able to use AVX, you *have to* arrange OSXSAVE=1 (and
of course the corresponding bits in XCR0), and the prerequisite for this
is XSAVE being 1. I.e. there shouldn't be CPUs that have AVX, but not
XSAVE.

> Also, I believe 13.7 implies that it's wrong to clear SSE feature bits
> when XCR0.SSE=0:

That's why it's '&jnc (&label("clear_avx"));' now, not "clear_xmm".

As for XEN: if it in fact masks XSAVE, but not the AVX bits, then even
the check for the XSAVE bit should be '&jnc (&label("clear_avx"));'
instead of jumping to "done". As well, x86_64cpuid.pl should test for
XSAVE...

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           majord...@openssl.org