On Tue, Aug 18, 2015 at 5:51 PM, Andrey Korolyov wrote:
> "Fixed" with cherry-pick of the
> 7a72f7a140bfd3a5dae73088947010bfdbcf6a40 and its predecessor
> 7103f60de8bed21a0ad5d15d2ad5b7a333dda201. Of course this is not a real
> fix as the only race precondition is shifted/disappeared by a clear
>
"Fixed" with cherry-pick of the
7a72f7a140bfd3a5dae73088947010bfdbcf6a40 and its predecessor
7103f60de8bed21a0ad5d15d2ad5b7a333dda201. Of course this is not a real
fix as the only race precondition is shifted/disappeared by a clear
assumption. Though there are not too many hotplug users around, I h
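For anyone who wants to try the same workaround, picking those two
commits onto a host kernel tree would look roughly like the sketch
below; the tree path and branch name are placeholders, not part of the
original report.

    # apply the predecessor first, then the later commit
    cd ~/src/linux-stable                      # path is an assumption
    git checkout -b dimm-hotplug-workaround    # arbitrary branch name
    git cherry-pick 7103f60de8bed21a0ad5d15d2ad5b7a333dda201
    git cherry-pick 7a72f7a140bfd3a5dae73088947010bfdbcf6a40
    # rebuild and reboot the host kernel (vhost is the suspect area)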
On Fri, Jun 19, 2015 at 7:57 PM, Andrey Korolyov wrote:
> I don't think that it could be ACPI-related in any way; instead, it
> looks like a race in vhost or a similar mm-touching mechanism. The
> repeated hits you mentioned should indeed be fixed as well, but they
> can hardly be the reason for this problem.

Please find a trace from a single DIMM plug.
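In case somebody wants to capture the same kind of trace, streaming
the guest kernel log while exactly one DIMM is plugged is usually
enough; a minimal sketch (file name and grep pattern are arbitrary):

    # guest side: stream the kernel log while the host plugs one DIMM
    dmesg --follow | tee dimm-plug.log
    # afterwards, pull out the hotplug-related lines
    grep -iE 'srat|pnp0c80|memory' dimm-plug.log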
> I've checked the logs; so far I don't see anything suspicious there
> except for the "acpi PNP0C80:00: Already enumerated" lines.
> Raising the log level would probably show more info.
> + upload full logs
> + enable ACPI debug info so that the DIMM device's _CRS would show up
> + QEMU's CLI that was used to ...
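A hedged note on the second item: ACPICA debug output in the guest is
normally enabled from the kernel command line or via module
parameters; the all-ones masks below are just the catch-all setting,
the individual bits are listed in Documentation/acpi/debug.txt.

    # guest kernel command line (append to the boot entry):
    #   acpi.debug_layer=0xffffffff acpi.debug_level=0xffffffff
    # or at runtime, through the acpi module parameters:
    echo 0xffffffff > /sys/module/acpi/parameters/debug_layer
    echo 0xffffffff > /sys/module/acpi/parameters/debug_level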
On Tue, 16 Jun 2015 17:41:03 +0300
Andrey Korolyov wrote:
> Answering back to myself - I made a wrong statement before: the
> physical mappings *are* different in the different cases, of course!
> Therefore, the issue looks much simpler, and I'd have a patch within a
> couple of days if nobody fixes this earlier.
>
... and another (possibly last) update. This is ...
>
> Please find the full CLI args and two guest logs for DIMM
> initialization attached. As you can see, the freshly populated DIMMs
> are probably misplaced in SRAT ('already populated' messages), despite
> the fact that the initialized ranges look correct at a glance.
> When the VM is migrated ...
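To make the "misplaced in SRAT" observation concrete, the placement of
every hot-plugged DIMM can be compared between the source and
destination sides roughly like this; monitor access and the exact grep
patterns are assumptions, not taken from the attached logs.

    # on both hosts, in the QEMU monitor: addresses/slots of pc-dimms
    #   (qemu) info memory-devices
    # inside the guest: the RAM layout the kernel actually uses
    grep 'System RAM' /proc/iomem
    # and the SRAT ranges reported around DIMM hot-add
    dmesg | grep -i srat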
On Thu, Jun 11, 2015 at 8:14 PM, Andrey Korolyov wrote:
Hello Igor,

the current hotplug code for DIMMs effectively prohibits a successful
migration of a VM if memory was added after startup:

- start a VM with a certain number of empty memory slots,
- add some DIMMs and online them in the guest (I am transitioning from
  2G to 16G with 512MB DIMMs),
- migrate the VM.
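For completeness, the reproduction above maps onto roughly the
following command sequence; IDs, sizes, the memory block number and
the destination URI are placeholders, not the exact CLI args from this
report.

    # start a VM with 2G of initial RAM and 16 empty memory slots
    qemu-system-x86_64 -enable-kvm -m 2G,slots=16,maxmem=16G [...]

    # hot-add one 512M DIMM from the monitor (repeat per slot)
    (qemu) object_add memory-backend-ram,id=mem1,size=512M
    (qemu) device_add pc-dimm,id=dimm1,memdev=mem1

    # inside the guest, online the newly appeared memory block
    echo online > /sys/devices/system/memory/memory32/state

    # migrate to a destination started with the same slots/maxmem
    (qemu) migrate -d tcp:destination-host:4444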