On Sun, Sep 15, 2013 at 02:24:07PM +0100, Peter Maydell wrote:
> On 15 September 2013 13:17, Michael S. Tsirkin <m...@redhat.com> wrote:
> > On Sun, Sep 15, 2013 at 12:23:41PM +0100, Peter Maydell wrote:
> >> ... If it's a "pure container" then it
> >> doesn't respond for areas that none of its subregions
> >> cover (it can't, it has no idea what it should do).
> >
> > Interesting. This really is completely undocumented
> > though.
> >
> > The documentation merely says:
> >
> >     specifies a priority that allows the core to decide which of two
> >     regions at the same address are visible (highest wins)
> >
> > which makes one think the only thing affecting
> > visibility is the priority.
>
> Yes, we could definitely stand to improve the documentation.
>
> > I wonder whether there's a way to describe this
> > in terms that don't expose the implementation of
> > render_memory_region, somehow.
> >
> > Maybe something like the following:
> > when multiple regions cover the same address, only one region is
> > going to "win" and get invoked for an access.
> > The winner can be determined as follows:
> > - "pure container" regions created with memory_region_init(...)
> >   are ignored
> > - if multiple non-container regions cover an address, the winner is
> >   determined using a priority vector, built from the priority field
> >   values on the path from the address space down to our region
> >   (i.e. region priority, followed by a subregion priority, followed
> >   by a sub-subregion priority, etc.)
> >
> > These priority vectors are compared as follows:
> >
> > - if the vectors are identical, which wins is undefined
> > - otherwise, if one vector is a prefix of the other,
> >   which wins is undefined
> > - otherwise, the vector that comes first in lexicographical
> >   order (highest priority at the first differing level) wins
>
> This just seems to me to be documenting a theoretical
> implementation. If we're lucky it has the same semantics
> as what we actually do; if we're unlucky it doesn't in
> some corner case and is just confusing.
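(A toy model of the priority-vector comparison proposed in the quoted text above, for illustration only. The function name and the tuple encoding are invented here; this does not claim to match what QEMU's render_memory_region() actually does, which is exactly the point PMM raises. It assumes "first in lexicographical order" means the vector with the higher priority at the first differing level, consistent with "highest wins".)

```python
def winner(vec_a, vec_b):
    """Compare two priority vectors (address space down to region).

    Per the proposed rules: identical vectors are undefined, one
    vector being a strict prefix of the other is undefined, and
    otherwise the first differing priority level decides, with the
    higher priority winning.
    """
    if vec_a == vec_b:
        return "undefined"
    n = min(len(vec_a), len(vec_b))
    if vec_a[:n] == vec_b[:n]:
        # One vector is a strict prefix of the other.
        return "undefined"
    # First differing priority decides; highest wins.
    return "a" if vec_a[:n] > vec_b[:n] else "b"

# A region nested under a priority-2 subregion beats one nested under
# a priority-1 sibling, regardless of priorities deeper in the tree.
print(winner((0, 2, 0), (0, 1, 7)))   # a
print(winner((0, 1), (0, 1, 7)))      # undefined (prefix)
```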
> And it would
> certainly be confusing if you look at the code and it does
> absolutely nothing like the documentation's algorithm.
>
> I think we could simply add an extra bullet point in the
> 'Visibility' section of docs/memory.txt saying:
>
> - if a search within a container or alias subregion does not find
>   a match, we continue with the next subregion in priority order
>
> plus a note at the bottom saying:
>
>   "Notice that these rules mean that if a container subregion
>   does not handle an address in its range (i.e. it has "holes"
>   where it has not mapped its own subregions), then lower
>   priority siblings of the container will be checked to see if they
>   should be used for accesses within those "holes"."
>
> We could also do with explicitly mentioning in the section
> about priority that priority is container-local, since this seems
> to be the number one confusion among people looking at
> memory region priority.
>
> >> Mostly this doesn't come up because you don't need
> >> to play games with overlapping memory regions and
> >> containers very often: the common case is "nothing
> >> overlaps at all". But the facilities are there if you need
> >> them.
> >
> > Dynamic regions like PCI BARs are actually very common;
> > IMO this means the overlapping case is actually very common.
>
> Yes, that's true, but even then it's usually just "overlaps
> of subregions within a single container" and there isn't
> a need to worry about within-container versus outside-container
> interactions.
>
> -- PMM
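(A toy model of the visibility rule PMM describes above: a lookup searches a container's subregions in descending priority order, and if a container subregion has a "hole" at the address, lower-priority siblings are still tried. All class and region names here are invented for illustration; this is a sketch of the documented search rule, not QEMU code.)

```python
class Region:
    """Minimal stand-in for a memory region in the search model."""

    def __init__(self, name, size, leaf=True):
        self.name = name
        self.size = size
        self.leaf = leaf        # leaf regions handle accesses themselves
        self.subregions = []    # list of (offset, priority, region)

    def add_subregion(self, offset, region, priority=0):
        self.subregions.append((offset, priority, region))

    def resolve(self, addr):
        """Return the leaf region handling addr, or None for a hole."""
        for off, _prio, sub in sorted(self.subregions,
                                      key=lambda s: -s[1]):
            if off <= addr < off + sub.size:
                hit = sub if sub.leaf else sub.resolve(addr - off)
                if hit is not None:
                    return hit
                # The container had a hole here: fall through to the
                # next subregion in priority order.
        return None

# A high-priority container covering 0..0x1000 maps only 0..0x100,
# leaving a hole; accesses in the hole fall through to the
# lower-priority RAM sibling.
root = Region("root", 0x10000, leaf=False)
cont = Region("cont", 0x1000, leaf=False)
cont.add_subregion(0x0, Region("dev", 0x100))
root.add_subregion(0x0, cont, priority=1)
root.add_subregion(0x0, Region("ram", 0x10000), priority=0)

print(root.resolve(0x50).name)    # dev
print(root.resolve(0x500).name)   # ram (hole in cont falls through)
```

Note also that priority only matters between siblings here (it is compared within one container's subregion list), which matches the point that priority is container-local.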
Well, actually, if you look at the 'subregion collisions' mail that I sent,
most of the collisions are exactly of this sort. For example, mcfg in q35
overlaps the PCI hole, so any PCI BAR can be made to overlap it.

I guess the documentation should just explain what happens when regions
overlap, and not try to say whether this case is common or not.

-- MST