On Thu, 25 Aug 2022 at 12:57, David Hildenbrand <da...@redhat.com> wrote:
>
> On 25.08.22 13:47, Peter Maydell wrote:
> > On Thu, 25 Aug 2022 at 08:27, David Hildenbrand <da...@redhat.com> wrote:
> >> On 24.08.22 21:55, Peter Maydell wrote:
> >>> Lumps of memory can be any size you like and anywhere in
> >>> memory you like. Sometimes we are modelling real hardware
> >>> that has done something like that. Sometimes it's just
> >>> a convenient way to model a device. Generic code in
> >>> QEMU does need to cope with this...
> >>
> >> But we are talking about system RAM here. And judging by the fact that
> >> this is the first time dump.c blows up like this, this doesn't seem to
> >> be very common, no?
> >
> > What's your definition of "system RAM", though? The biggest
>
> I'd say any RAM memory region that lives in address_space_memory /
> get_system_memory(). That's what softmmu/memory_mapping.c cares about
> and where we bail out here.
>
> > bit of RAM in the system? Anything over X bytes? Whatever
> > the machine set up as MachineState::ram? As currently
> > written, dump.c is operating on every RAM MemoryRegion
> > in the system, which includes a lot of things which aren't
> > "system RAM" (for instance, it includes framebuffers and
> > ROMs).
>
> Anything in address_space_memory / get_system_memory(), correct. And
> this seems to be the first time that we fail here, so it's either a case
> we should be handling in dump code (as you indicate) or some case we
> shouldn't have to worry about (as I questioned).
I suspect that most of the odd-alignment things are not going to be ones
you really care about having in a dump, but the difficulty is going to be
defining what counts as "a region we don't care about", because we don't
really have "purposes" attached to MemoryRegions.

So in practice the dump code is going to have to either (a) be able to
put odd-alignment regions into the dump, and put them all in, or (b) skip
all of them, regardless. Chances of anybody noticing a difference between
(a) and (b) in practice seem minimal :-)

-- PMM