On Fri, May 31, 2019 at 10:04 PM Nadav Amit <na...@vmware.com> wrote:
> > On May 31, 2019, at 12:20 PM, Jann Horn <ja...@google.com> wrote:
> > On Fri, May 31, 2019 at 8:29 PM Nadav Amit <na...@vmware.com> wrote:
> >> [ +Jann Horn ]
> >>
> >>> On May 31, 2019, at 3:57 AM, Peter Zijlstra <pet...@infradead.org> wrote:
> >>>
> >>> On Thu, May 30, 2019 at 11:36:44PM -0700, Nadav Amit wrote:
> >>>> When we flush userspace mappings, we can defer the TLB flushes, as long
> >>>> the following conditions are met:
> >>>>
> >>>> 1. No tables are freed, since otherwise speculative page walks might
> >>>> cause machine-checks.
> >>>>
> >>>> 2. No one would access userspace before flush takes place. Specifically,
> >>>> NMI handlers and kprobes would avoid accessing userspace.
> [...]
> >> A #MC might be caused. I tried to avoid it by not allowing freeing of
> >> page-tables in such way. Did I miss something else? Some interaction with
> >> MTRR changes? I’ll think about it some more, but I don’t see how.
>
> > I don't really know much about this topic, but here's a random comment
> > since you cc'ed me: If the physical memory range was freed and
> > reallocated, could you end up with speculatively executed cached
> > memory reads from I/O memory? (And if so, would that be bad?)
>
> Thanks. I thought that your experience with TLB page-freeing bugs may
> be valuable, and you frequently find my mistakes. ;-)
>
> Yes, speculatively executed cached reads from the I/O memory are a concern.
> IIRC they caused #MC on AMD. If page-tables are not changes, but only PTEs
> are changed, I don’t see how it can be a problem. I also looked at the MTRR
> setting code, but I don’t see a concrete problem.
Can the *physical memory range* not be freed and assigned to another device? Like, when you mess around with memory hotplug and PCI hotplug?
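
(A minimal, hypothetical C sketch of the deferral scheme described by the two
quoted conditions above, for illustration only. It is not the code from the
patch set; pending_user_flush, clear_user_pte_deferred() and
do_deferred_user_flush() are invented names, and real kernel code would use
per-CPU state and the existing TLB-flush interfaces.)

/*
 * Hypothetical sketch only -- not the code from the patch set.  All names
 * here are invented for illustration.
 */
#include <stdbool.h>

static bool pending_user_flush;		/* per-CPU in real kernel code */

/*
 * Clear a userspace PTE but defer the TLB flush.  This is only safe because
 * the page-table pages themselves are not freed here (condition 1): a stale
 * speculative page walk still finds valid table pages, so it cannot hit
 * freed memory and raise a machine check.
 */
static void clear_user_pte_deferred(void)
{
	/* the pte_clear() of the user PTE would happen here */
	pending_user_flush = true;
}

/*
 * Perform the deferred flush.  This must run before anything in the kernel
 * touches userspace again (condition 2), so NMI handlers, kprobes and
 * copy_{to,from}_user paths either avoid userspace or flush first.
 */
static void do_deferred_user_flush(void)
{
	if (pending_user_flush) {
		/* a local flush of the user TLB entries would happen here */
		pending_user_flush = false;
	}
}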