Documentation/cachetlb.txt says:

"In general, when Linux is changing an existing virtual-->physical
mapping to a new value, the sequence will be ...:

          flush_cache_page(vma, addr, pfn);
          set_pte(pte_pointer, new_pte_val);
          flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache."
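
For concreteness, here is a minimal sketch of that sequence as a
kernel-style C helper; change_one_mapping() is a hypothetical name for
illustration only, assuming the usual prototypes of flush_cache_page(),
set_pte() and flush_tlb_page():

          /* Hypothetical helper -- not a real kernel function. */
          static void change_one_mapping(struct vm_area_struct *vma,
                                         unsigned long addr, pte_t *ptep,
                                         pte_t new_pte, unsigned long pfn)
          {
                  /* 1. Flush the cache while the old virtual-->physical
                   *    translation is still valid (strict caches need
                   *    the translation to perform the flush). */
                  flush_cache_page(vma, addr, pfn);
                  /* 2. Install the new PTE value. */
                  set_pte(ptep, new_pte);
                  /* 3. Shoot down the stale translation in the TLBs. */
                  flush_tlb_page(vma, addr);
          }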

Let's assume we've got a multi-threaded application with two threads,
each of them bound to its own CPU.
Let's also assume the CPU caches are virtually tagged, and that the
virtual-->physical translation must remain valid while we flush the
cache.

CPU #1 is in a loop, accessing data on a given virtual page.

CPU #2 executes the sequence above, trying to tear down the mapping
of the same page. Once CPU #2 has completed flush_cache_page()
(which is not a no-op here), no valid data from the page remains in
any of the CPU caches. Due to some HW delay, CPU #2 goes on to
invalidate the PTE only somewhat later.

In the meantime, CPU #1 detects a cache miss. Since it still has the
valid translation in its TLB, it can re-fetch the cache line from the
page.

Only now does CPU #2 actually invalidate the PTE and flush the TLBs.

We end up with a cache line on CPU #1 that no longer has a valid
virtual-->physical translation.
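
To picture the interleaving, as I see it (time flows downward):

          CPU #1                           CPU #2
          ------                           ------
          reads the page via its cache
                                           flush_cache_page(vma, addr, pfn)
                                           (no data from the page is left
                                            in any cache now)
          cache miss; TLB entry still
          valid, so it re-fetches the
          line from the page
                                           set_pte(pte_pointer, new_pte_val)
                                           flush_tlb_page(vma, addr)
          cache line still present, but
          no valid virtual-->physical
          translation remains for it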

As far as I can understand, there is a contradiction:
- either we keep the translation valid while flushing, and
  another CPU can re-fetch the data,
- or we nuke the PTE first, and the flush itself won't work.
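
In C-comment form, here are the two orderings and the hole I see in
each of them (using the names from the documented sequence):

          /* Ordering A -- flush the cache first (as documented): */
          flush_cache_page(vma, addr, pfn);   /* translation still valid, */
                                              /* so another CPU can       */
                                              /* re-fetch the line here   */
          set_pte(pte_pointer, new_pte_val);
          flush_tlb_page(vma, addr);

          /* Ordering B -- nuke the PTE first: */
          set_pte(pte_pointer, new_pte_val);
          flush_tlb_page(vma, addr);
          flush_cache_page(vma, addr, pfn);   /* too late: the strict     */
                                              /* cache has no valid       */
                                              /* translation left to      */
                                              /* flush with               */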

Can someone please explain to me how this is supposed to work?

Thanks,

Zoltan Menyhart
