On Jan 16, 2018, at 2:56 AM, Vít Šesták <groups-no-private-mail--contact-me-at--contact.v6ak....@v6ak.com> wrote:

> * If an application does not mitigate Spectre and attacker finds useful entry point, attacker can read memory of the application (but nothing more).
> * If VM kernel does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole VM (but other VMs are not affected).
> * If Xen does not mitigate Spectre and attacker finds useful entry point, attacker can probably read memory of whole system.
Can you explain why you think that Spectre can't escape the container (VM)? That seems to be the main issue: Spectre escapes the container.

I read the whitepaper. What Spectre does is access memory it should not have access to, and then use a few simple tricks to extract that data. This happens at the processor level, so any bounds checks outside the CPU core will not prevent it. Given the nature of the attack, I do not think that hardware virtualization would stop it.

Reasoning: if hardware virtualization did privilege checks on memory accesses in speculatively executed code, it would severely reduce or completely remove the performance gains from speculative execution. I would be *very* happy to be wrong about that, so if you have information to the contrary, please let me know.

Here's how Spectre works (conceptually; the existing sample implementations are just that, examples):

- Trick the CPU into doing something it shouldn't do, like in our case accessing another VM's memory.
- This memory access happens during speculative execution, which is built for speed and doesn't take the time to check whether or not I actually have the right to access this memory.
- Speculative execution continues, and I load some of my own data into the cache, but *which* data depends on the value of the byte I read in the previous step.
- The CPU realizes I didn't have access, and reverts the register state.
- The CPU does not, however, remove my data from the cache.
- I can then use cache timing to figure out *which part* of my own data was cached.
- Once I know which part of my data was cached, I know the value of the byte that I read illegally.

If hardware virtualization were to protect against this attack, it would need to either do bounds checks inside the processor core, or flush the caches whenever a different VM runs, both of which would severely impact performance. So I don't think they do it.
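The steps above can be sketched as a toy simulation. To be clear, this is not an exploit and nothing here actually executes speculatively; the cache, the probe array, and the "timing" test are all simulated, and every name in it (SECRET, cache, and so on) is my own invention for illustration:

```python
# Toy model of the Spectre cache side channel described above.
# Everything is simulated: a set stands in for the cache, and a set
# membership test stands in for a cache-hit timing measurement.

SECRET = 42   # byte the attacker has no architectural right to read
cache = set() # models which probe-array lines are currently "hot"


def speculative_victim_access():
    """Models the first three steps: during speculation the CPU reads
    the secret byte and then touches attacker-controlled data indexed
    by its value. The access is later squashed (registers reverted),
    but the touched cache line stays cached."""
    leaked_index = SECRET    # out-of-bounds read, done speculatively
    cache.add(leaked_index)  # probe_array[leaked_index] is now cached


def recover_byte():
    """Models the last two steps: probe all 256 slots; the one that
    is "fast" (already cached) reveals the secret byte's value."""
    for i in range(256):
        if i in cache:       # stand-in for a cache-timing check
            return i
    return None


speculative_victim_access()
print("recovered byte:", recover_byte())  # prints: recovered byte: 42
```

The key point the model captures is that the architectural state (registers) is rolled back, but the microarchitectural state (the cache) is not, which is exactly the gap the timing step exploits.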
Reasoning: the entire point of hardware virtualization is very fast, seamless context switching, so that running 10 different VMs does not cost the processor performance. So you keep the caches warm, and you keep speculatively executing what you believe to be the correct branch of an if statement. Hardware virtualization seems to have been adopted over software virtualization mainly to improve performance, not to improve security/isolation. I found various snippets of information hinting at this as well, but again, I'd be happy to be wrong!

But if I am right, then Qubes isolation is compromised.

Sorry this got a bit long.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/qubes-users/A0271FCD-6100-4839-BEF1-1A46540EE9B5%40gmail.com.