On Thu, Mar 29, 2018 at 15:55:13 +0100, Alex Bennée wrote:
>
> Emilio G. Cota <c...@braap.org> writes:
(snip)
> > +/* lock the page(s) of a TB in the correct acquisition order */
> > +static inline void page_lock_tb(const TranslationBlock *tb)
> > +{
> > +    if (likely(tb->page_addr[1] == -1)) {
> > +        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
> > +        return;
> > +    }
> > +    if (tb->page_addr[0] < tb->page_addr[1]) {
> > +        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
> > +        page_lock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
> > +    } else {
> > +        page_lock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
> > +        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
> > +    }
> > +}
(snip)
> > +    /*
> > +     * Add the TB to the page list.
> > +     * To avoid deadlock, acquire first the lock of the lower-addressed page.
> > +     */
> > +    p = page_find_alloc(phys_pc >> TARGET_PAGE_BITS, 1);
> > +    if (likely(phys_page2 == -1)) {
> >          tb->page_addr[1] = -1;
> > +        page_lock(p);
> > +        tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
> > +    } else {
> > +        p2 = page_find_alloc(phys_page2 >> TARGET_PAGE_BITS, 1);
> > +        if (phys_pc < phys_page2) {
> > +            page_lock(p);
> > +            page_lock(p2);
> > +        } else {
> > +            page_lock(p2);
> > +            page_lock(p);
> > +        }
>
> Give we repeat this check further up perhaps a:
>
>   page_lock_pair(PageDesc *p1, th_page_addr_t phys1, PageDesc *p2,
>                  tb_page_addr_t phys2)
After trying, I don't think it's worth the trouble. Note that
page_lock_tb expands to nothing in user-mode, whereas the latter
snippet is shared by user-mode and !user-mode. Dealing with that gets
ugly quickly; besides, we'd have to optionally return *p1 and *p2,
plus choose whether to use page_find or page_find_alloc. (A rough
sketch of what I mean is at the end of this mail.)

(snip)

> The diff is a little messy around tb_page_add but I think we need an
> assert_page_lock(p) which compiles to check mmap_lock in CONFIG_USER
> instead of the assert_memory_lock().
>
> Then we can be clear about tb, memory and page locks.