[PATCH] guest-agent: document allow-rpcs in config file section

2024-07-18 Thread Thomas Lamprecht
While the `allow-rpcs` option is documented in the CLI options
section, it was missing in the section about the configuration file
syntax.

And while it's mentioned that "the list of keys follows the command line
options", having `block-rpcs` there but not `allow-rpcs` seems like a
potential source of confusion; and as it's cheap to add, let's just do so.

Signed-off-by: Thomas Lamprecht 
---
 docs/interop/qemu-ga.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/interop/qemu-ga.rst b/docs/interop/qemu-ga.rst
index 72fb75a6f5..dd4245ece8 100644
--- a/docs/interop/qemu-ga.rst
+++ b/docs/interop/qemu-ga.rst
@@ -131,6 +131,7 @@ fsfreeze-hook  string
 statedir   string
 verboseboolean
 block-rpcs string list
+allow-rpcs string list
 =  ===
 
 See also
-- 
2.39.2
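
For context, in the agent's keyfile-style configuration the new key is then
used like the other options. A minimal sketch (the RPC names are just
examples, and the list separator follows the usual GLib keyfile convention):

# qemu-ga.conf sketch, illustrative values only
[general]
verbose = true
# string list: RPC names separated by ';'
allow-rpcs = guest-ping;guest-info;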





Re: [PATCH] vl: change PID file path resolve error to warning

2022-10-28 Thread Thomas Lamprecht
On 28/10/2022 09:11, Fiona Ebner wrote:
> Am 27.10.22 um 14:17 schrieb Daniel P. Berrangé:
>> On Thu, Oct 27, 2022 at 12:14:43PM +0200, Fiona Ebner wrote:
>>> +warn_report("not removing PID file on exit: cannot resolve PID file"
>>> +" path: %s: %s", pid_file, strerror(errno));
>>> +return;
>>>  }
>> I don't think using warn_report is desirable here.
>>
>> If the behaviour of passing a pre-unlinked pidfile is considered
>> valid, then we should allow it without printing a warning every
>> time an application does this.
>>
>> warnings are to highlight non-fatal mistakes by applications, and
>> this is not a mistake, it is intentionally supported behaviour.
>
> But what if the path resolution fails in a scenario where the caller did
> not pre-unlink the PID file? Should the warning only be printed when the
> errno is not ENOENT? Might still not be accurate in all cases though.

ENOENT would IMO be a good heuristic for silence, as I see no point in
warning that something won't be cleaned up if it's already gone from
QEMU's point of view.
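
Concretely, a minimal sketch of that heuristic (assuming the resolution
happens via realpath(3), as the patch context suggests; the helper name
here is made up):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include "qemu/error-report.h" /* warn_report() */

/* Hypothetical helper: resolve the PID file path at startup so the exit
 * notifier can unlink it later; only warn when resolution fails for a
 * reason other than the file already being gone (ENOENT). */
static char *resolve_pid_file(const char *pid_file)
{
    char *pid_file_realpath = realpath(pid_file, NULL);

    if (!pid_file_realpath && errno != ENOENT) {
        warn_report("not removing PID file on exit: cannot resolve PID file"
                    " path: %s: %s", pid_file, strerror(errno));
    }
    return pid_file_realpath; /* NULL if it could not be resolved */
}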

If we log at all in the ENOENT case, I'd personally do so only at a level
that shows up on debug/high-verbosity settings; that should cover the
slightly odd setups where, e.g., the PID file is there but not visible to
QEMU because it is located in another mount namespace (in which case the
management stack that put it there probably wants to handle that more
explicitly anyway). Or did you mean something else with "not accurate in
all cases"?

best regards,
Thomas




Re: [PATCH for 6.2 v2 5/5] bios-tables-test: Update golden binaries

2021-11-11 Thread Thomas Lamprecht
On 11.11.21 12:32, Igor Mammedov wrote:
> On Thu, 11 Nov 2021 03:34:37 -0500
> "Michael S. Tsirkin"  wrote:
> 
>> On Wed, Nov 10, 2021 at 04:11:40PM -0500, Igor Mammedov wrote:
>>> From: Julia Suvorova 
>>>
>>> The changes are the result of
>>> 'hw/i386/acpi-build: Deny control on PCIe Native Hot-Plug in _OSC'
>>> and listed here:
>>>
>>> Method (_OSC, 4, NotSerialized)  // _OSC: Operating System Capabilities
>>>  {
>>>  CreateDWordField (Arg3, Zero, CDW1)
>>>  If ((Arg0 == ToUUID 
>>> ("33db4d5b-1ff7-401c-9657-7441c03dd766") /* PCI Host Bridge Device */))
>>>  {
>>>  CreateDWordField (Arg3, 0x04, CDW2)
>>>  CreateDWordField (Arg3, 0x08, CDW3)
>>>  Local0 = CDW3 /* \_SB_.PCI0._OSC.CDW3 */
>>> -Local0 &= 0x1F
>>> +Local0 &= 0x1E
>>>
>>> Signed-off-by: Julia Suvorova 
>>> Signed-off-by: Igor Mammedov 
>>> ---
>>>  tests/qtest/bios-tables-test-allowed-diff.h |  16 
>>>  tests/data/acpi/q35/DSDT| Bin 8289 -> 8289 bytes
>>>  tests/data/acpi/q35/DSDT.acpihmat   | Bin 9614 -> 9614 bytes
>>>  tests/data/acpi/q35/DSDT.bridge | Bin 11003 -> 11003 bytes
>>>  tests/data/acpi/q35/DSDT.cphp   | Bin 8753 -> 8753 bytes
>>>  tests/data/acpi/q35/DSDT.dimmpxm| Bin 9943 -> 9943 bytes
>>>  tests/data/acpi/q35/DSDT.dmar   | Bin 0 -> 8289 bytes
>>>  tests/data/acpi/q35/DSDT.ipmibt | Bin 8364 -> 8364 bytes
>>>  tests/data/acpi/q35/DSDT.ivrs   | Bin 8306 -> 8306 bytes
>>>  tests/data/acpi/q35/DSDT.memhp  | Bin 9648 -> 9648 bytes
>>>  tests/data/acpi/q35/DSDT.mmio64 | Bin 9419 -> 9419 bytes
>>>  tests/data/acpi/q35/DSDT.multi-bridge   | Bin 8583 -> 8583 bytes
>>>  tests/data/acpi/q35/DSDT.nohpet | Bin 8147 -> 8147 bytes
>>>  tests/data/acpi/q35/DSDT.nosmm  | Bin 0 -> 8289 bytes
>>>  tests/data/acpi/q35/DSDT.numamem| Bin 8295 -> 8295 bytes
>>>  tests/data/acpi/q35/DSDT.smm-compat | Bin 0 -> 8289 bytes
>>>  tests/data/acpi/q35/DSDT.smm-compat-nosmm   | Bin 0 -> 8289 bytes
>>>  tests/data/acpi/q35/DSDT.tis.tpm12  | Bin 8894 -> 8894 bytes
>>>  tests/data/acpi/q35/DSDT.tis.tpm2   | Bin 8894 -> 8894 bytes
>>>  tests/data/acpi/q35/DSDT.xapic  | Bin 35652 -> 35652 bytes  
>> Why do we have all the new files?  What is going on here?
> I think the new files are not necessary.
> 
> I can update the patch if we decide to keep ACPI hotplug enabled by default.
> 
> So the question is:
>   do we revert to native pcie or stay with acpi hotplug for 6.2?
> 

FWIW, we had to add some compat handling in Proxmox VE for the original change,
as we do not pin Linux VM machines between cold-starts (they normally do not
care much about some HW/CPU bits being added/dropped/moved), and the change
here messed a bit with the guest OS network configuration, as systemd's
predictable interface naming changed the name from, e.g., enp18 to ens6p18.

I mean, we wondered a bit about the original change here and contemplated
reverting it in our downstream build; while we read the reasons for it, we
never got a report of any of those problems from our upper-six-digit count of
systems reporting to our repos. Ultimately we did not go with the revert, to
avoid problems if this was QEMU's way forward (the wrong choice, it seems),
and it now additionally looks like ACPI hotplug produces boot-loops in some
guests with SeaBIOS and serial or no display.

Anyhow (sorry for the whole back-story/rambling), if QEMU reverts this for 6.2
I think we'd draw the line now and revert it in our 6.1 build, which we plan
to fully roll out soon, to avoid this whole mess for most of our user base in
the first place.


- Thomas




Re: [PATCH v2] monitor/qmp: fix race on CHR_EVENT_CLOSED without OOB

2021-04-08 Thread Thomas Lamprecht
On 08.04.21 14:49, Markus Armbruster wrote:
> Kevin Wolf  writes:
>> Am 08.04.2021 um 11:21 hat Markus Armbruster geschrieben:
>>> Should this go into 6.0?
>>
>> This is something that the responsible maintainer needs to decide.
> 
> Yes, and that's me.  I'm soliciting opinions.
> 
>> If it helps you with the decision, and if I understand correctly, it is
>> a regression from 5.1, but was already broken in 5.2.
> 
> It helps.
> 
> Even more helpful would be a risk assessment: what's the risk of
> applying this patch now vs. delaying it?

Stefan is on vacation this week, but I can share some information, maybe it
helps.

> 
> If I understand Stefan correctly, Proxmox observed VM hangs.  How
> frequent are these hangs?  Did they result in data corruption?


They were not highly frequent, but frequent enough to get roughly a bit over
a dozen reports in our forum, which normally means something is off but it's
limited to certain HW, storage tech, or load patterns.

We initially had a hard time reproducing this, but a user finally could send
us a backtrace of a hanging VM, and with that information we could pin it
down enough that Stefan came up with a good reproducer (see v1 of this patch).

We didn't get any report of actual data corruption due to this, but the VM
hangs completely, so a user killing it may produce that in theory; but only
for those programs running in the guest that were not made power-loss safe
anyway...

> 
> How confident do we feel about the fix?
> 

Cannot comment from a technical POV, but I can share the feedback we got with it.

Some context about reach:
We have rolled the fix out to all repository stages which already had a build
of 5.2, which has a reach of about 100k to 300k installations. We only have
some rough stats about the sites that access the repository daily and cannot
really tell who actually updated to the new versions, but there are some
quite update-happy people in the community, so with that in mind and my
experience of the feedback loop of rolling out updates, I'd figure a lower
bound one can assume without going out on a limb is ~25k.

Positive feedback from users:
We got some positive feedback about the issue being fixed from people who had
run into this at least once per week. In total almost a dozen users reported
improvements, a good chunk of them among those who reported the problem in
the first place.

Mixed feedback:
We had one user who reported still getting QMP timeouts, but whose VMs did
not hang anymore (could be high load or the like). Only one user reported
that it did not help; still investigating there, they have quite high CPU
pressure stats and it may actually be another issue, cannot tell for sure
yet though.

Negative feedback:
We had no users reporting new or worse problems in that direction, at least
from what I'm aware of.

Note, we do not use OOB currently, so the above does not speak for the OOB
case at all.




Re: [PATCH] i386/acpi: restore device paths for pre-5.1 vms

2021-03-23 Thread Thomas Lamprecht
On 23.03.21 15:55, Vitaly Cheptsov wrote:
>> On 23 March 2021, at 17:48, Michael S. Tsirkin wrote:
>>
>> The issue is with people who installed a VM using 5.1 qemu,
>> migrated to 5.2, booted there and set a config on a device
>> e.g. IP on a NIC.
>> They now have a 5.1 machine type but changing uid back
>> like we do will break these VMs.
>>
>> Unlikely to be common, but let's at least create a way for these people
>> to use these VMs.
>>
> They can simply set the 5.2 VM version in such a case. I do not want this
> legacy hack to be enabled in any modern QEMU VM version, as it violates the
> ACPI specification and makes life more difficult for various other software
> like bootloaders and operating systems.

Yeah, here I agree with Vitaly: if they already used 5.2 and made some
configurations for those "new" devices, they can just keep using 5.2?

If some of the devices got configured on 5.1 and some on 5.2, there's nothing
we can do anyway from a QEMU POV; the user then always needs to choose one
machine version and fix up the devices configured while on the other one.




Re: [PATCH v5] sphinx: adopt kernel readthedoc theme

2021-03-23 Thread Thomas Lamprecht
On 23.03.21 12:53, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 

Just saw this patch by accident, and as we also use the alabaster theme
for the Proxmox Backup project I wanted to share some insights from our
usage, as I checked that theme out closely a few months ago and did some
adaptations for partially overlapping shortcomings we found.


> The default "alabaster" sphinx theme has a couple shortcomings:
> - the navbar moves along the page

That can be fixed with the following conf.py 'html_theme_options' setting:

'fixed_sidebar': True,

https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=docs/conf.py;h=cfa4158d6b284172929785991f710d6237e9992c;hb=2ab2ca9c241f8315f51f9c74a50d7223c875a04b#l161

> - the search bar is not always at the same place

Can also be addressed by setting 'html_sidebars' to a fixed order, e.g.:

html_sidebars = {
'**': [
'searchbox.html',
'navigation.html',
'relations.html',
]
} 

Can also be customized for different pages, e.g., we do so for landing pages
(a boiled-down example follows after the link):

https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=docs/conf.py;h=cfa4158d6b284172929785991f710d6237e9992c;hb=2ab2ca9c241f8315f51f9c74a50d7223c875a04b#l188
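
A boiled-down example of that pattern (the page name and the stripped-down
sidebar set are hypothetical, only for illustration):

# conf.py: default sidebar order, with a different set for one page
html_sidebars = {
    '**': [
        'searchbox.html',
        'navigation.html',
        'relations.html',
    ],
    # hypothetical landing page with navigation only
    'index': [
        'navigation.html',
    ],
}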

I also added a short JS snippet to scroll the heading of the current chapter
in the sidebar TOC into view (adapted from the Rust book):
https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=docs/custom.js;h=7964b2cb0ea9433596845618f1679f1672ce38b8;hb=2ab2ca9c241f8315f51f9c74a50d7223c875a04b
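
The gist of it is roughly the following (a sketch only; the actual code
linked above differs, and the selector for the current TOC entry is an
assumption about alabaster's sidebar markup):

// custom.js sketch: scroll the current chapter's TOC entry into view
document.addEventListener('DOMContentLoaded', function () {
    var current = document.querySelector('.sphinxsidebar li.current > a');
    if (current) {
        current.scrollIntoView({ block: 'center' });
    }
});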

If you want, you could check out the result at our hosted docs site:
https://pbs.proxmox.com/docs/managing-remotes.html

> - it lacks some contrast and colours

That is true, and IMO the rtd theme really uses a better colour palette,
especially for things like "Topic" blocks.
In fact we pondered switching over to rtd ourselves, so please don't see my
mail as advertising that all issues can be fixed in alabaster; I just wanted
to share what we did to overcome the first two shortcomings mentioned here.

cheers,
Thomas




Re: [PATCH] i386/acpi: restore device paths for pre-5.1 vms

2021-03-01 Thread Thomas Lamprecht
On 01.03.21 20:59, Vitaly Cheptsov wrote:
> After fixing the _UID value for the primary PCI root bridge in
> af1b80ae it was discovered that this change updates Windows
> configuration in an incompatible way causing network configuration
> failure unless DHCP is used. More details provided on the list:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg08484.html
> 
> This change reverts the _UID update from 1 to 0 for q35 and i440fx
> VMs before version 5.2 to maintain the original behaviour when
> upgrading.
> 
> Cc: qemu-sta...@nongnu.org
> Cc: qemu-devel@nongnu.org
> Reported-by: Thomas Lamprecht 
> Suggested-by: Michael S. Tsirkin 
> Signed-off-by: Vitaly Cheptsov 

Thanks for sending this! Works as advertised and can be cleanly cherry-picked
on top of the v5.2.0 tag.

Tested-by: Thomas Lamprecht 





Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths

2021-03-01 Thread Thomas Lamprecht
On 01.03.21 15:20, Igor Mammedov wrote:
> On Mon, 1 Mar 2021 08:45:53 +0100
> Thomas Lamprecht  wrote:
>> On 01.03.21 08:20, Michael S. Tsirkin wrote:
>>> There are various testing efforts; the reason this went undetected is
>>> that it does not affect Linux guests, and even for Windows they kind of
>>> recover, there's just some boot slowdown around reconfiguration.
>>> Not easy to detect automatically given Windows has lots of random
>>> downtime during boot around updates etc etc.
>>>   
>>
>> No, Windows does not reconfigure, this is a permanent change; one is just
>> lucky if one has a DHCP server around in the network accessible for the
>> guest. Static addresses set up on that virtual NIC before are gone, with
>> no recovery whatsoever until manual intervention.
> Static IPs are a pain the guest admin picked up to deal with, so he might
> have to reconfigure the guest OS when it decides to rename NICs. In this
> case moving to a new QEMU is akin to updating a BIOS which fixed the PCI
> description.
> (On the QEMU side we try to avoid breaking changes, but sometimes it
> happens anyway, and it's up to the guest admin to fix OS quirks)
> 

heh, I agree, but users see it very differently: QEMU got updated, something
stopped working/changed/... -> QEMU is at fault.

>> I meant more of a "dump HW layout to a .txt file, commit it to git, and
>> ensure there's no diff without a machine version bump" (very boiled down),
>> e.g., like the ABI checks distros often do for kernel builds, albeit those
>> are easier as it's quite clear what and how the kernel ABI can be used.
> ACPI tables are not considered an ABI change in QEMU; technically the
> tables that QEMU generates are firmware and not versioned (same as we don't
> tie anything to specific firmware versions).
> 
> However we rarely do version ACPI changes (only when something breaks or we
> suspect it would break and can't accept that breakage); this time it took a
> lot of time to find that out. We try to minimize such cases, as every
> versioning knob adds up to maintenance.
> 
> For ACPI table changes, QEMU has bios-tables-test, but it only lets us
> catch unintended changes.
> Technically it's possible to keep master tables for old machine versions
> and test against them. But I'm not sure we should do that, because some
> (most) changes are harmless or useful and should apply to all machine
> versions.
> So we would end up in the same situation, where we decide whether a change
> should be versioned or not.
> 
> 

OK, fair enough. Many thanks for providing some rationale!




Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths

2021-02-28 Thread Thomas Lamprecht
On 01.03.21 08:20, Michael S. Tsirkin wrote:
> On Mon, Mar 01, 2021 at 08:12:35AM +0100, Thomas Lamprecht wrote:
>> On 28.02.21 21:43, Michael S. Tsirkin wrote:
>>> Sure. The way to do that is to tie old behaviour to old machine
>>> versions. We'll need it in stable too ...
>>
>> Yeah, using machine types is how it's meant to be for solving migration
>> breakage, sure.
>> But that means we have to permanently pin the VM, and any backup restored
>> from it, to that machine type *forever*. That'd be new for us, as we could
>> always allow a newer machine type for a fresh start (i.e., non-migration
>> or the like) here, and it would mean that lots of other improvements
>> guarded by a newer machine type would be out of reach for those VMs.
> 
> If you don't do that, that is a bug as any virtual hardware
> can change across machine types.

For us that is a feature: for fresh starts one gets the current virtual HW,
but for live migration or our live-snapshot code it stays compatible. That
has worked quite well here for many years, as we can simply test the HW
changes on existing VMs; that failed here due to the lack of static IPs in
the test bed. So yes, it has its problems, as it is not really what an OS
considers a HW change so big that it makes a new device out of it; mostly
Windows is a PITA here, as seen in this issue.

I mean, QEMU deprecates very old machines at some point anyway, so even then
it is impossible to keep the old machine forever, but OTOH redoing some
changes after a decade or two can be fine, I guess?

> 
>> And yeah, stable is wanted, but extrapolating from the current stable
>> release frequency, where normally there's at most one release 5-6 months
>> after the .0 release, means that this will probably still hit all those
>> distributions I mentioned; or is there something planned sooner?
>>
>> Also, is there any regression-testing infrastructure around to avoid such
>> changes in the future? This change went undetected for 7 months, which can
>> be pretty much the norm for QEMU releases, so some earlier safety net
>> would be good. Is there anything which dumps the various default machine
>> HW layouts and uses them for an ABI check of some sort?
> 
> There are various testing efforts; the reason this went undetected is that
> it does not affect Linux guests, and even for Windows they kind of recover,
> there's just some boot slowdown around reconfiguration.
> Not easy to detect automatically given Windows has lots of random downtime
> during boot around updates etc etc.
> 

No, Windows does not reconfigure, this is a permanent change; one is just
lucky if one has a DHCP server around in the network accessible for the
guest. Static addresses set up on that virtual NIC before are gone, with no
recovery whatsoever until manual intervention.

I meant more of a "dump HW layout to a .txt file, commit it to git, and
ensure there's no diff without a machine version bump" (very boiled down),
e.g., like the ABI checks distros often do for kernel builds, albeit those
are easier as it's quite clear what and how the kernel ABI can be used.
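
Boiled down even further, the idea is roughly along these lines (the rebuild
script exists in the QEMU tree around bios-tables-test; the diff step is just
the idea, not an existing CI job):

# regenerate the golden ACPI table blobs from a fresh build
./tests/data/acpi/rebuild-expected-aml.sh
# any diff without an accompanying machine-version bump is suspicious
git diff --stat tests/data/acpi/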




Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths

2021-02-28 Thread Thomas Lamprecht
On 28.02.21 21:43, Michael S. Tsirkin wrote:
> Sure. The way to do that is to tie old behaviour to old machine
> versions. We'll need it in stable too ...

Yeah, using machine types is how it's meant to be for solving migration
breakage, sure.
But that means we have to permanently pin the VM, and any backup restored
from it, to that machine type *forever*. That'd be new for us, as we could
always allow a newer machine type for a fresh start (i.e., non-migration or
the like) here, and it would mean that lots of other improvements guarded by
a newer machine type would be out of reach for those VMs.

Why not a switch plus the machine type? That solves migration and any
special cases of it, but also allows machine updates while keeping the old
behavior available.
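
To illustrate what I mean with "switch + machine type" on the QEMU side,
boiled down (a sketch only; the property name is made up, while the
GlobalProperty/compat-props mechanism and array naming are the existing ones):

/* Hypothetical knob: "x-acpi-uid-compat=on" restores the old _UID=1
 * behaviour.  Pre-5.2 machine types would default it to on via their
 * compat properties, new ones to off, and a management stack could
 * still flip it explicitly either way. */
static GlobalProperty pc_compat_5_1[] = {
    /* driver, property, value; the property below is made up */
    { "PIIX4_PM", "x-acpi-uid-compat", "on" },
};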

And yeah, stable is wanted, but extrapolating from the current stable
release frequency, where normally there's at most one release 5-6 months
after the .0 release, means that this will probably still hit all those
distributions I mentioned; or is there something planned sooner?

Also, is there any regression-testing infrastructure around to avoid such
changes in the future? This change went undetected for 7 months, which can be
pretty much the norm for QEMU releases, so some earlier safety net would be
good. Is there anything which dumps the various default machine HW layouts
and uses them for an ABI check of some sort?




Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths

2021-02-28 Thread Thomas Lamprecht
Hi Vitaly,

On 28.02.21 10:11, vit9696 wrote:
> For us this breaks the ability to control the boot options between the 
> operating system and the OVMF. It happens because the operating system builds 
> the DPs based on ACPI (in fact the only source available to it), while OVMF 
> uses another source. The previous behaviour also violates the specification, 
> so I do not believe there is room for reverting it. I believe it is also not 
> possible to update QEMU to internally use the 1 UID, since it may conflict 
> with the case when there are multiple PCI buses.

I think you may have misunderstood me a little bit: I did not ask for this to
be reverted in upstream QEMU, it's quite clear to me that this should be the
new default behaviour and should have been from the start.

Albeit, I must ask what makes macOS so special that it cannot be allowed to
do things that Windows and Linux guests can do just fine?

I mainly asked for other drawbacks of such a revert, as it is currently the
most straightforward stopgap solution for us as a downstream. What we will
probably do is keep the new standard behavior as the default and add a switch
to revert to the old one; our QEMU integration library in Proxmox VE can then
set this for old VMs and use the new standard for new ones on VM start. That
way we stay backward compatible, and as only Windows VMs seem to be affected
we can even do this only for those (we have an OS type config property from
which we can derive this).

>
> In my opinion, the most logical workaround is to provide in-guest steps to 
> update VM configuration to account for this.

Often the hypervisor admin and the guest admin are not the same, so this is
only a small band-aid, and for most it helps only after the fact.

We also have quite easy-to-set-up clustering, so such affected VMs will
seemingly break on migration to an updated node for lots of users; for us an
unacceptable situation to expose our users to. And honestly, I have a hard
time seeing myself and colleagues wanting to spend our nerves directing
hundreds of reports to the documented solution (some will certainly find it
on their own, but whatever one does, lots won't) and dealing with the,
relatable, fit they'll throw, while holding back from telling them off to
just use Linux instead ;-)

And I think that other integrators will get some reports too, and FWICT
there's no outside way a user can use to revert to the old behavior.
Note that QEMU 5.2 is not yet released in some major distributions, e.g.,
Debian will ship it with Bullseye, whose release is still months away, the
latest Fedora (33) is shipping QEMU 5.1, so RHEL/CentOS are probably using
something even older, and Ubuntu will only add it in 21.04, also two months
away.

Currently, QEMU 5.2, which introduces this change, is only released in
faster-moving targets, where Windows VMs are more often used for non-server
workloads (educated guess), which again correlates with a higher probability
of using DHCP rather than static address assignment (again, an educated
guess); and that is the most obvious and noticeable thing we and our users
saw break.

Which brings me again to my other point: there may be lots of other things
breaking in a more subtle way. We do not know, but we can tell there's lots
of device reshuffling going on when checking out the Windows Device Manager;
I cannot imagine that the loss of network configuration is the only thing
that breaks.

So why all this fuss and wall of text? Because I think that this will affect
lots of users, most of them on distros which will only ship the problematic
QEMU version later this year. How many will be affected: no idea, but we got
quite some reports (compared to the usual small-stuff breakage) from rolling
this QEMU version out only *partially*, to only some parts of our user base.

That's why I personally think it may be worth thinking about adding a switch
directly to QEMU to keep the backwards-compatible, albeit standard-incompatible,
behavior, either as an opt-in or as an opt-out from the new standard-conforming
behavior. And while I thought opt-out was the way to go when starting this
message, I now rather think opt-in to the new behavior is, at least if
rustling the bells of users with Windows + static IPs is thought to be worth
avoiding. As said, if there's consensus against this, we can live fine with
keeping that switch as a downstream patch, but I'd like to avoid that and
certainly won't just rush forward shipping it; I'll wait until next week,
maybe there are some other opinions or better ideas.

cheers,
Thomas




Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths

2021-02-27 Thread Thomas Lamprecht
On 30.07.20 17:58, Michael S. Tsirkin wrote:
> macOS uses ACPI UIDs to build the DevicePath for NVRAM boot options,
> while OVMF firmware gets them via an internal channel through QEMU.
> Due to a bug in QEMU ACPI currently UEFI firmware and ACPI have
> different values, and this makes the underlying operating system
> unable to report its boot option.
> 
> The particular node in question is the primary PciRoot (PCI0 in ACPI),
> which for some reason gets assigned 1 in ACPI UID and 0 in the
> DevicePath. This is due to the _UID assigned to it by build_dsdt in
> hw/i386/acpi-build.c Which does not correspond to the primary PCI
> identifier given by pcibus_num in hw/pci/pci.c
> 
> Reference with the device paths, OVMF startup logs, and ACPI table
> dumps (SysReport):
> https://github.com/acidanthera/bugtracker/issues/1050
> 
> In UEFI v2.8, section "10.4.2 Rules with ACPI _HID and _UID" ends with
> the paragraph,
> 
> Root PCI bridges will use the plug and play ID of PNP0A03, This will
> be stored in the ACPI Device Path _HID field, or in the Expanded
> ACPI Device Path _CID field to match the ACPI name space. The _UID
> in the ACPI Device Path structure must match the _UID in the ACPI
> name space.
> 
> (See especially the last sentence.)
> 
> Considering *extra* root bridges / root buses (with bus number > 0),
> QEMU's ACPI generator actually does the right thing; since QEMU commit
> c96d9286a6d7 ("i386/acpi-build: more traditional _UID and _HID for PXB
> root buses", 2015-06-11).
> 
> However, the _UID values for root bridge zero (on both i440fx and q35)
> have always been "wrong" (from UEFI perspective), going back in QEMU to
> commit 74523b850189 ("i386: add ACPI table files from seabios",
> 2013-10-14).
> 
> Even in SeaBIOS, these _UID values have always been 1; see commit
> a4d357638c57 ("Port rombios32 code from bochs-bios.", 2008-03-08) for
> i440fx, and commit ecbe3fd61511 ("seabios: q35: add dsdt", 2012-12-01)
> for q35.
> 
> Suggested-by: Laszlo Ersek 
> Tested-by: vit9696 
> Signed-off-by: Michael S. Tsirkin 
> ---
>  hw/i386/acpi-build.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index b7bc2a..7a5a8b3521 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1497,7 +1497,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  dev = aml_device("PCI0");
>  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
>  aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> -aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(0)));
>  aml_append(sb_scope, dev);
>  aml_append(dsdt, sb_scope);
>  
> @@ -1512,7 +1512,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
>  aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
>  aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> -aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(0)));
>  aml_append(dev, build_q35_osc_method());
>  aml_append(sb_scope, dev);
>  aml_append(dsdt, sb_scope);
> 

This "breaks" Windows guests created/installed before this change in the sense
of Windows gets confused and declares that most of the devices changed and thus
it has new entries for them in the device manager where settings of the old one
do not apply anymore.

We were made aware of this by our users when making QEMU 5.2.0 available in
a more widely used repository of ours. Users complained that their static
network configuration got thrown away in Windows 2016 or 2019 server VMs,
and Windows tried to use DHCP (which was not available in their
environments), and thus their Windows VMs had no network connectivity at all
anymore.

It's currently not yet quite 100% clear to me which QEMU version the Windows
VM must have been installed with; from reading the patch I have to believe it
must be one from before the change, but we got mixed reports, and a colleague
could not replicate it on an upgrade from 4.0 to 5.2 (I did /not/ confirm
that one). Anyway, just writing this all down to avoid people seeing
different results and brushing this off.

So here's my personal reproducer. As said, I think one should be able to
just use QEMU 5.1 to install a Windows guest and start it with 5.2 afterwards
to see this issue, but YMMV.

Note: I always used the exact same QEMU command (see below) for installation,
reproducing, and bisecting.

1. Installed a Windows 2016 1616 VM using QEMU 3.0.1
   - VirtIO net/scsi driver from VirtIO win 190
2. Set up static network in the VM and shut it down
3. Started the VM with 5.2.0 -> network gone, a new "Ethernet #2" adapter
   shows up instead

Starting the "Device Manager" and enabling "View -> Show hidden devices"
showed me a greyed-out device duplicate for 
Re: [RFC PATCH 0/3] block: Synchronous bdrv_*() from coroutine in different AioContext

2020-05-14 Thread Thomas Lamprecht
On 5/12/20 4:43 PM, Kevin Wolf wrote:
> Stefan (Reiter), after looking a bit closer at this, I think there is no
> bug in QEMU, but the bug is in your coroutine code that calls block
> layer functions without moving into the right AioContext first. I've
> written this series anyway as it potentially makes the life of callers
> easier and would probably make your buggy code correct.

> However, it doesn't feel right to commit something like patch 2 without
> having a user for it. Is there a reason why you can't upstream your
> async snapshot code?

I mean, I understand what you mean, but it would make the interface IMO so
much easier to use; if one wants to explicitly schedule it beforehand, they
still can. But that would open the way for two styles of doing things, not
sure if that would be seen as bad. The assert from patch 3/3 alone would
already help a lot, though.

Regarding upstreaming, there was some historical attempt to upstream it from
Dietmar, but in a time frame of roughly 8 to 10 years ago.
I'm not quite sure why it didn't go through then; I'll see if I can get some
time to search the mailing list archive.

We'd naturally be open and glad to upstream it; what it effectively allows
us to do is to not block the VM too much while snapshotting it live.

I pushed a tree[0] with mostly just that specific code squashed together
(hope I did not break anything); most of the actual code is in commit [1].
It'd need to be cleaned up a bit and checked for coding-style issues, but it
works well here.

Anyway, thanks for your help and pointers!

[0]: https://github.com/ThomasLamprecht/qemu/tree/savevm-async
[1]: 
https://github.com/ThomasLamprecht/qemu/commit/ffb9531f370ef0073e4b6f6021f4c47ccd702121




Re: [RFC PATCH 2/3] block: Allow bdrv_run_co() from different AioContext

2020-05-12 Thread Thomas Lamprecht
On 5/12/20 4:43 PM, Kevin Wolf wrote:
> Coroutine functions that are entered through bdrv_run_co() are already
> safe to call from synchronous code in a different AioContext because
> bdrv_coroutine_enter() will schedule them in the context of the node.
> 
> However, the coroutine fastpath still requires that we're already in the
> right AioContext when called in coroutine context.
> 
> In order to make the behaviour more consistent and to make life a bit
> easier for callers, let's check the AioContext and automatically move
> the current coroutine around if we're not in the right context yet.
> 
> Signed-off-by: Kevin Wolf 
> ---
>  block/io.c | 15 ++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/block/io.c b/block/io.c
> index c1badaadc9..7808e8bdc0 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -895,8 +895,21 @@ static int bdrv_run_co(BlockDriverState *bs, 
> CoroutineEntry *entry,
> void *opaque, int *ret)
>  {
>  if (qemu_in_coroutine()) {
> -/* Fast-path if already in coroutine context */
> +Coroutine *self = qemu_coroutine_self();
> +AioContext *bs_ctx = bdrv_get_aio_context(bs);
> +AioContext *co_ctx = qemu_coroutine_get_aio_context(self);
> +
> +if (bs_ctx != co_ctx) {
> +/* Move to the iothread of the node */
> +aio_co_schedule(bs_ctx, self);
> +qemu_coroutine_yield();
> +}
>  entry(opaque);
> +if (bs_ctx != co_ctx) {
> +/* Move back to the original AioContext */
> +aio_co_schedule(bs_ctx, self);

shouldn't it use co_ctx here, as otherwise it's just scheduled again on the
AioContext from bs?

Looks OK to me besides that.

> +qemu_coroutine_yield();
> +}
>  } else {
>  Coroutine *co = qemu_coroutine_create(entry, opaque);
>  *ret = NOT_DONE;
> 





Re: bdrv_drained_begin deadlock with io-threads

2020-04-03 Thread Thomas Lamprecht
On 4/2/20 7:10 PM, Kevin Wolf wrote:
> Am 02.04.2020 um 18:47 hat Kevin Wolf geschrieben:
>> So I think this is the bug: Calling blk_wait_while_drained() from
>> anywhere between blk_inc_in_flight() and blk_dec_in_flight() is wrong
>> because it will deadlock the drain operation.
>>
>> blk_aio_read/write_entry() take care of this and drop their reference
>> around blk_wait_while_drained(). But if we hit the race condition that
>> drain hasn't yet started there, but it has when we get to
>> blk_co_preadv() or blk_co_pwritev_part(), then we're in a buggy code
>> path.
> 
> With the following patch, it seems to survive for now. I'll give it some
> more testing tomorrow (also qemu-iotests to check that I didn't
> accidentally break something else.)
> 

So I only followed the discussion loosely, but tried some simple
reproduction to ensure it was an issue independent of any artifacts of
Dietmar's setup.

Before that patch I always got a hang before reaching the fifth drive-backup
+ block-job-cancel cycle. With your patch applied I have had no hang so far,
currently at >885 cycles (and yes, I confirmed stress -d 5 was really
running).
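
For reference, one cycle of that reproducer boils down to QMP along these
lines (a sketch: the device node name and target path are made up, and the
cancel is issued while the backup is still running):

{ "execute": "drive-backup", "arguments": { "device": "drive-scsi0",
    "sync": "full", "target": "/tmp/backup-target.raw" } }
{ "execute": "block-job-cancel", "arguments": { "device": "drive-scsi0" } }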

So, FWIW, the patch definitively fixes the issue, or at least the symptoms,
here; I cannot comment on its correctness or the like at all, as I'm
currently missing too much background.

cheers,
Thomas




Re: [Qemu-devel] [RFC 1/1] qemu-ga: add missing libpcre to MSI build

2017-07-07 Thread Thomas Lamprecht

Hi,

On 06/02/2017 01:42 PM, Marc-André Lureau wrote:

Hi

On Thu, Jun 1, 2017 at 5:08 PM Thomas Lamprecht <t.lampre...@proxmox.com> wrote:


glib depends on libpcre which was not shipped with the MSI, thus
starting of the qemu-ga.exe failed with the respective error message.

Tell WIXL to ship this library with the MSI to avoid this problem.

Signed-off-by: Thomas Lamprecht <t.lampre...@proxmox.com>
CC: Stefan Weil <s...@weilnetz.de>
CC: Michael Roth <mdr...@linux.vnet.ibm.com>


It depends on your glib build, but since Fedora is one of the best
maintained cross-mingw distros, it makes sense to fix the build there.


But even if it isn't ideal to ship an unnecessary library, it shouldn't harm
either. I'd like to make it nicer, but after looking at the complexity of
the possibilities to do so I'd rather avoid that, especially with my almost
non-existent knowledge of Windows builds.


Other solutions would involve either using a wixl-specific require
preprocessor directive (which comes with a bunch of unused files, since
those are mostly generated from the mingw*- packages), or coming up with
some kind of dynamic dependency resolution (an approach similar to Richard
W.M. Jones' nsiswrapper). However this last approach is quite limited,
since it doesn't reach data files etc.
In the meantime:
  Reviewed-by: Marc-André Lureau <marcandre.lur...@redhat.com>



Thank you for the review! Does this still have a chance to get into qemu
2.10? Would be nice.

cheers,
Thomas


I haven't done much with the qga or WIXL, so I send this as a RFC.
I hope that I guessed the right people to get CC'ed from MAINTAINERS.

This fixes a current qemu-ga MSI build; I tested it successfully with
Windows 7 and Windows 10 as guest OS.

I cross built from a Fedora 25 LXC container.

The Guid for the libpcre was generated by https://www.guidgen.com/ as
suggested by:

http://wixtoolset.org/documentation/manual/v3/howtos/general/generate_guids.html

  qga/installer/qemu-ga.wxs | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index fa2260cafa..5af11627f8 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -125,6 +125,9 @@

  

+  
+
+  

  
@@ -173,6 +176,7 @@



+  
  

  
--
2.11.0



--

Marc-André Lureau








[Qemu-devel] [RFC 1/1] qemu-ga: add missing libpcre to MSI build

2017-06-01 Thread Thomas Lamprecht
glib depends on libpcre which was not shipped with the MSI, thus
starting of the qemu-ga.exe failed with the respective error message.

Tell WIXL to ship this library with the MSI to avoid this problem.

Signed-off-by: Thomas Lamprecht <t.lampre...@proxmox.com>
CC: Stefan Weil <s...@weilnetz.de>
CC: Michael Roth <mdr...@linux.vnet.ibm.com>
---

I haven't done much with the qga or WIXL, so I send this as a RFC.
I hope that I guessed the right people to get CC'ed from MAINTAINERS.

This fixes a current qemu-ga MSI build, I tested it successfully with Windows 7
and Windows 10 as guest OS.

I cross built from a Fedora 25 LXC container.

The Guid for the libpcre was generated by https://www.guidgen.com/ as suggested
by:
http://wixtoolset.org/documentation/manual/v3/howtos/general/generate_guids.html

 qga/installer/qemu-ga.wxs | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index fa2260cafa..5af11627f8 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -125,6 +125,9 @@
   
 
   
+  
+
+  
   
 
@@ -173,6 +176,7 @@
   
   
   
+  
 
 
 
-- 
2.11.0





[Qemu-devel] [Bug 1581936] Re: Frozen Windows 7 VMs with VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

2016-06-12 Thread Thomas Lamprecht
I can partly confirm this, see (and parents):
https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg04048.html

It sounds just a little strange to me, so I'll recheck to be double sure
every configure option is the same on my Arch Linux and Debian machine.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1581936

Title:
  Frozen Windows 7 VMs with VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

Status in QEMU:
  Confirmed

Bug description:
  Hi,

  As already posted on the QEMU devel list [1] I stumbled upon a problem
  with QEMU in version 2.5.1.1 and 2.6.0.

When installing a Windows 7 VM, the VM shows Windows loading files for the
installation, then the "Starting Windows" screen appears, where it hangs and
never continues.

  Changing the "-vga" option to cirrus solves this, the installation can
  proceed and finish. When changing back to std (or also qxl, vmware) the
  installed VM also hangs on the "Starting Windows" screen while qemu
  showing a little but no excessive load.

This phenomenon appears with QEMU 2.6.0 as well, but not with 2.6.0-rc4; a
git bisect shows fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7 (vga: make sure
vga register setup for vbe stays intact (CVE-2016-3712)) as the culprit for
this regression. As it's a fix for a DoS, it's not an option to just revert
it, I guess.

  The bisect log is:

  git bisect start
# bad: [bfc766d38e1fae5767d43845c15c79ac8fa6d6af] Update version for v2.6.0 release
git bisect bad bfc766d38e1fae5767d43845c15c79ac8fa6d6af
# good: [975eb6a547f809608ccb08c221552f11af25] Update version for v2.6.0-rc4 release
git bisect good 975eb6a547f809608ccb08c221552f11af25
# good: [2068192dcccd8a80dddfcc8df6164cf9c26e0fc4] vga: update vga register setup on vbe changes
git bisect good 2068192dcccd8a80dddfcc8df6164cf9c26e0fc4
# bad: [53db932604dfa7bb9241d132e0173894cf54261c] Merge remote-tracking branch 'remotes/kraxel/tags/pull-vga-20160509-1' into staging
git bisect bad 53db932604dfa7bb9241d132e0173894cf54261c
# bad: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure vga register setup for vbe stays intact (CVE-2016-3712).
git bisect bad fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7
# first bad commit: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure vga register setup for vbe stays intact (CVE-2016-3712).

  
I could reproduce this with QEMU 2.5.1 and QEMU 2.6 on a Debian derivative
(Proxmox VE) with a 4.4 kernel, and also with QEMU 2.6 on an Arch Linux
system with a 4.5 kernel, so it should not be host-distro dependent. Both
machines have Intel x86_64 processors.
The problem should be reproducible with said versions, or with a build from
git including the above-mentioned commit (fd3c136), by starting a VM with a
Windows 7 ISO, e.g.:

Freezing installation (as vga defaults to std I marked it as optional):
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 [-vga (std|qxl|vmware)]

Working installation:
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 -vga cirrus

If someone already has an installed Windows 7 VM, this behaviour should also
be observable when trying to start it with the new versions of QEMU.

Noteworthy may be that Windows 10 is working; I did not yet have time to get
other Windows versions and test them, I'll do that as soon as possible.
Various Linux systems also seem to work fine, at least I have not run into
an issue there yet.

I also tried testing with SeaBIOS and OVMF as firmware, as initially I had
no idea what broke; both lead to the same result: without the CVE-2016-3712
fix they both work, with it they do not.
Further, KVM enabled or disabled does not make any difference.

  
  [1] http://lists.nongnu.org/archive/html/qemu-devel/2016-05/msg02416.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1581936/+subscriptions



Re: [Qemu-devel] [PATCH] vga: add sr_vbe register set

2016-05-24 Thread Thomas Lamprecht
On 05/23/2016 11:39 PM, Thomas Lamprecht wrote:
> Hi,
>
> sorry for the delay.
>
> On 20.05.2016 12:06, Gerd Hoffmann wrote:
>>Hi,
>>
>>> ./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom
>>> W7SP1_PROFESSIONAL.iso -m 1024 -smp 2 -enable-kvm -cpu host -drive
>>> if=pflash,format=raw,unit=0,readonly,file=OVMF_CODE-pure-efi.fd -drive
>>> if=pflash,format=raw,unit=1,file=/tmp/OVMF_VARS.fd
>> Still not reproduced.  Installed win7, then updated with sp1, rebooted,
>> still working fine.
>>
>> Can you double-check you really tested with a fixed qemu version?
>
> I checked on an Arch Host and there I cannot reproduce this, it works
> fine, with and without OVMF.
> Sorry for causing trouble/noise, it seems that the Debian based host
> has here another problem (here it resets constantly with OVMF).
> For a "tested by" (if even wanted) I'd like to recheck on a plain
> Debian Jessie tomorrow, this didn't had any suspicious qemu-vga
> related packages installed or modified but maybe I'm overlooking
> something.
>

I can reproduce it on a pure Debian Jessie system. It works with the patch
and without OVMF, but not with OVMF: after the "Windows loading files" bit
finishes it just resets the VM.

So on Arch I have no problems, but on Debian I do. Kernel 3.16 vs 4.5, and
GCC 4.9 vs 6.1. OVMF was the same version on both, namely 26bd643 pure
(from your repo).
The Debian-based Proxmox VE distro has kernel 4.4 (an Ubuntu kernel), and
there it doesn't work either; GCC is the same there (just for info).

The used Kernel:
Linux debian-pure 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2
(2016-04-08) x86_64 GNU/Linux


GCC:
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.9/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 4.9.2-10'
--with-bugurl=file:///usr/share/doc/gcc-4.9/README.Bugs
--enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr
--program-suffix=-4.9 --enable-shared --enable-linker-build-id
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix
--with-gxx-include-dir=/usr/include/c++/4.9 --libdir=/usr/lib
--enable-nls --with-sysroot=/ --enable-clocale=gnu
--enable-libstdcxx-debug --enable-libstdcxx-time=yes
--enable-gnu-unique-object --disable-vtable-verify --enable-plugin
--with-system-zlib --disable-browser-plugin --enable-java-awt=gtk
--enable-gtk-cairo
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64/jre
--enable-java-home
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.9-amd64
--with-arch-directory=amd64
--with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc
--enable-multiarch --with-arch-32=i586 --with-abi=m64
--with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic
--enable-checking=release --build=x86_64-linux-gnu
--host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.9.2 (Debian 4.9.2-10)


My command:
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom
/mnt/iso/template/iso/W7SP1_PROFESSIONAL.iso -m 2048 -enable-kvm -drive
if=pflash,format=raw,unit=0,readonly,file=/root/OVMF_CODE-pure-efi.fd
-drive if=pflash,format=raw,unit=1,file=/root/OVMF_VARS-pure-efi.fd


My ./configure for this test:
Install prefix    /usr/local
BIOS directory    /usr/local/share/qemu
binary directory  /usr/local/bin
library directory /usr/local/lib
module directory  /usr/local/lib/qemu
libexec directory /usr/local/libexec
include directory /usr/local/include
config directory  /usr/local/etc
local state directory   /usr/local/var
Manual directory  /usr/local/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path   /root/qemu
C compiler        cc
Host C compiler   cc
C++ compiler  c++
Objective-C compiler cc
ARFLAGS   rv
CFLAGS-O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -pthread
-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include  -g
QEMU_CFLAGS   -I/usr/include/pixman-1   -Werror -fPIE -DPIE -m64
-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
-Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings
-Wmissing-prototypes -fno-strict-aliasing -fno-common  -Wendif-labels
-Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security
-Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration
-Wold-style-definition -Wtype-limits -fstack-protector-strong  
-I/usr/include/libpng12
LDFLAGS   -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
make  make
install   install
python            python -B
smbd  /usr/sbin/smbd
module support    no
host CPU  x86_64
host big endian   no
target list   x86_64-softmmu
tcg debug enabled no
gprof enabled no
sparse enabled    no
strip binaries    no
profiler  no
stati

Re: [Qemu-devel] [PATCH] vga: add sr_vbe register set

2016-05-23 Thread Thomas Lamprecht

Hi,

sorry for the delay.

On 20.05.2016 12:06, Gerd Hoffmann wrote:

   Hi,


./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom
W7SP1_PROFESSIONAL.iso -m 1024 -smp 2 -enable-kvm -cpu host -drive
if=pflash,format=raw,unit=0,readonly,file=OVMF_CODE-pure-efi.fd -drive
if=pflash,format=raw,unit=1,file=/tmp/OVMF_VARS.fd

Still not reproduced.  Installed win7, then updated with sp1, rebooted,
still working fine.

Can you double-check you really tested with a fixed qemu version?


I checked on an Arch host and there I cannot reproduce this; it works fine,
with and without OVMF.
Sorry for causing trouble/noise, it seems the Debian-based host has another
problem here (it resets constantly with OVMF).
For a "tested by" (if even wanted) I'd like to recheck on a plain Debian
Jessie tomorrow; this didn't have any suspicious qemu-vga related packages
installed or modified, but maybe I'm overlooking something.

Thanks a lot for the quick patch and the fast bug diagnosis!

cheers,
Thomas






Re: [Qemu-devel] [PATCH] vga: add sr_vbe register set

2016-05-17 Thread Thomas Lamprecht
Hi,

On 05/17/2016 12:50 PM, Gerd Hoffmann wrote:
>   Hi,
>
>>> This way we can allow guests to update sr[] registers as they want,
>>> without allowing them to disrupt vbe video modes that way.
>> Just documenting my test with the patch here:
>>
>> This fixes the issue with QEMU 2.5.1.1, but only if I'm using SeaBIOS.
>>
>> OVMF leads to an almost similar result as without the patch: after
>> "windows is loading files", "Starting Windows" appears shortly, then it
>> hangs and the screen remains blank, so the blank screen is new with OVMF.
> Doesn't reproduce here.
>
> Details please:  Which ovmf version?  With/without csm?  32/64 bit
> windows version?

I tested with:

edk2.git-ovmf-x64-0-20151117.b1317.g386cdfb.noarch.rpm
$ git show 386cdfb
commit 386cdfbecbbacb600ffc8e2ffa8c7af1b3855a61
Author: Mark Rutland 
Date:   Tue Nov 17 13:58:19 2015 +

which I have used for a little while now; and to see if it's just my "old"
OVMF, I also tested the newest version from your jenkins repo (many thanks
for those!!):

git show 05b2f9c
commit 05b2f9c94e0c0b663ff2d2fb55397d8215eeb3f5
Author: Dandan Bi 
Date:   Tue May 10 18:51:44 2016 +0800

So without csm.

The OS is 64bit windows 7 professional SP1

The QEMU command for my test:

./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom W7SP1_PROFESSIONAL.iso -m 
1024 -smp 2 -enable-kvm -cpu host -drive 
if=pflash,format=raw,unit=0,readonly,file=OVMF_CODE-pure-efi.fd -drive 
if=pflash,format=raw,unit=1,file=/tmp/OVMF_VARS.fd


Happy to provide more details/tests if needed.

best regards,
Thomas




Re: [Qemu-devel] [PATCH] vga: add sr_vbe register set

2016-05-17 Thread Thomas Lamprecht
Hi,

thanks for the patch.

On 05/17/2016 10:54 AM, Gerd Hoffmann wrote:
> Commit "fd3c136 vga: make sure vga register setup for vbe stays intact
> (CVE-2016-3712)." causes a regression.  The win7 installer is unhappy
> because it can't freely modify vga registers any more while in vbe mode.
>
> This patch introduces a new sr_vbe register set.  The vbe_update_vgaregs
> will fill sr_vbe[] instead of sr[].  Normal vga register reads and
> writes go to sr[].  Any sr register read access happens through a new
> sr() helper function which will read from sr_vbe[] with vbe active and
> from sr[] otherwise.
>
> This way we can allow guests update sr[] registers as they want, without
> allowing them disrupt vbe video modes that way.

Just documenting my test with the patch here:

This fixes the issue with QEMU 2.5.1.1, but only if I'm using SeaBIOS.

OVMF leads to an almost similar result as without the patch: after
"windows is loading files", "Starting Windows" appears shortly, then it
hangs and the screen remains blank, so the blank screen is new with OVMF.

The same goes for QEMU 2.6.0: with SeaBIOS it works with this patch, but
with OVMF it still does not.

best regards,
Thomas

>
> Reported-by: Thomas Lamprecht <tho...@lamprecht.org>
> Signed-off-by: Gerd Hoffmann <kra...@redhat.com>
> ---
>  hw/display/vga.c | 50 --
>  hw/display/vga_int.h |  1 +
>  2 files changed, 29 insertions(+), 22 deletions(-)
>
> diff --git a/hw/display/vga.c b/hw/display/vga.c
> index 4a55ec6..9ebc54f 100644
> --- a/hw/display/vga.c
> +++ b/hw/display/vga.c
> @@ -149,6 +149,11 @@ static inline bool vbe_enabled(VGACommonState *s)
>  return s->vbe_regs[VBE_DISPI_INDEX_ENABLE] & VBE_DISPI_ENABLED;
>  }
>  
> +static inline uint8_t sr(VGACommonState *s, int idx)
> +{
> +return vbe_enabled(s) ? s->sr_vbe[idx] : s->sr[idx];
> +}
> +
>  static void vga_update_memory_access(VGACommonState *s)
>  {
>  hwaddr base, offset, size;
> @@ -163,8 +168,8 @@ static void vga_update_memory_access(VGACommonState *s)
>  s->has_chain4_alias = false;
>  s->plane_updated = 0xf;
>  }
> -if ((s->sr[VGA_SEQ_PLANE_WRITE] & VGA_SR02_ALL_PLANES) ==
> -VGA_SR02_ALL_PLANES && s->sr[VGA_SEQ_MEMORY_MODE] & VGA_SR04_CHN_4M) {
> +if ((sr(s, VGA_SEQ_PLANE_WRITE) & VGA_SR02_ALL_PLANES) ==
> +VGA_SR02_ALL_PLANES && sr(s, VGA_SEQ_MEMORY_MODE) & VGA_SR04_CHN_4M) {
>  offset = 0;
>  switch ((s->gr[VGA_GFX_MISC] >> 2) & 3) {
>  case 0:
> @@ -234,7 +239,7 @@ static void vga_precise_update_retrace_info(VGACommonState *s)
>((s->cr[VGA_CRTC_OVERFLOW] >> 6) & 2)) << 8);
>  vretr_end_line = s->cr[VGA_CRTC_V_SYNC_END] & 0xf;
>  
> -clocking_mode = (s->sr[VGA_SEQ_CLOCK_MODE] >> 3) & 1;
> +clocking_mode = (sr(s, VGA_SEQ_CLOCK_MODE) >> 3) & 1;
>  clock_sel = (s->msr >> 2) & 3;
>  dots = (s->msr & 1) ? 8 : 9;
>  
> @@ -486,7 +491,6 @@ void vga_ioport_write(void *opaque, uint32_t addr, uint32_t val)
>  printf("vga: write SR%x = 0x%02x\n", s->sr_index, val);
>  #endif
>  s->sr[s->sr_index] = val & sr_mask[s->sr_index];
> -vbe_update_vgaregs(s);
>  if (s->sr_index == VGA_SEQ_CLOCK_MODE) {
>  s->update_retrace_info(s);
>  }
> @@ -680,13 +684,13 @@ static void vbe_update_vgaregs(VGACommonState *s)
>  
>  if (s->vbe_regs[VBE_DISPI_INDEX_BPP] == 4) {
>  shift_control = 0;
> -s->sr[VGA_SEQ_CLOCK_MODE] &= ~8; /* no double line */
> +s->sr_vbe[VGA_SEQ_CLOCK_MODE] &= ~8; /* no double line */
>  } else {
>  shift_control = 2;
>  /* set chain 4 mode */
> -s->sr[VGA_SEQ_MEMORY_MODE] |= VGA_SR04_CHN_4M;
> +s->sr_vbe[VGA_SEQ_MEMORY_MODE] |= VGA_SR04_CHN_4M;
>  /* activate all planes */
> -s->sr[VGA_SEQ_PLANE_WRITE] |= VGA_SR02_ALL_PLANES;
> +s->sr_vbe[VGA_SEQ_PLANE_WRITE] |= VGA_SR02_ALL_PLANES;
>  }
>  s->gr[VGA_GFX_MODE] = (s->gr[VGA_GFX_MODE] & ~0x60) |
>  (shift_control << 5);
> @@ -836,7 +840,7 @@ uint32_t vga_mem_readb(VGACommonState *s, hwaddr addr)
>  break;
>  }
>  
> -if (s->sr[VGA_SEQ_MEMORY_MODE] & VGA_SR04_CHN_4M) {
> +if (sr(s, VGA_SEQ_MEMORY_MODE) & VGA_SR04_CHN_4M) {
>  /* chain 4 mode : simplest access */
>  assert(addr < s->vram_size);
>  ret = s->vram_ptr[addr];

Re: [Qemu-devel] Regression with windows 7 VMs and VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

2016-05-15 Thread Thomas Lamprecht
On 15.05.2016 11:28, Stefan Weil wrote:
> Am 15.05.2016 um 01:13 schrieb Thomas Lamprecht:
>> Hi all,
>>
>> I recently ran into problems when trying to install some Windows VMs
>> after an update to QEMU 2.5.1.1: the VM shows Windows loading files for
>> the installation, then the "Starting Windows" screen appears, where it
>> hangs and never continues.
>>
>> Changing the "-vga" option to cirrus solves this, the installation can
>> proceed and finish. When changing back to std (or also qxl, vmware) the
>> installed VM also hangs on the "Starting Windows" screen while qemu
>> showing a little but no excessive load.
>>
>> This phenomenon appears with QEMU 2.6.0 as well, but not with 2.6.0-rc4;
>> a git bisect shows fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7 (vga: make
>> sure vga register setup for vbe stays intact (CVE-2016-3712)) as the
>> culprit for this regression. As it's a fix for a DoS, it's not an option
>> to just revert it, I guess.
>> The (short) bisect log is:
>>
>> git bisect start
>> # bad: [bfc766d38e1fae5767d43845c15c79ac8fa6d6af] Update version for v2.6.0 release
>> git bisect bad bfc766d38e1fae5767d43845c15c79ac8fa6d6af
>> # good: [975eb6a547f809608ccb08c221552f11af25] Update version for v2.6.0-rc4 release
>> git bisect good 975eb6a547f809608ccb08c221552f11af25
>> # good: [2068192dcccd8a80dddfcc8df6164cf9c26e0fc4] vga: update vga register setup on vbe changes
>> git bisect good 2068192dcccd8a80dddfcc8df6164cf9c26e0fc4
>> # bad: [53db932604dfa7bb9241d132e0173894cf54261c] Merge remote-tracking branch 'remotes/kraxel/tags/pull-vga-20160509-1' into staging
>> git bisect bad 53db932604dfa7bb9241d132e0173894cf54261c
>>
>> I could reproduce this with QEMU 2.5.1 and QEMU 2.6 on a Debian
>> derivative (Proxmox VE) with a 4.4 kernel, and also with QEMU 2.6 on an
>> Arch Linux system with a 4.5 kernel, so it should not be host-distro
>> dependent. Both machines have Intel x86_64 processors.
>> The problem should be reproducible with said versions, or a build from
>> git including the above-mentioned commit (fd3c136), by starting a VM
>> with a Windows 7 ISO, e.g.:
>>
>> Hanging installation:
>> ./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024
>>
>> Working installation:
>> ./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 -vga 
>> cirrus
>>
>> Noteworthy may be that Windows 10 works. I have not had time to get
>> other Windows versions to test yet; I'll do that as soon as possible.
>> Various Linux systems also seem to work fine; at least I have not run
>> into an issue there yet.
>>
>> I also tried testing with SeaBIOS and OVMF, as initially I had no idea
>> what broke; both lead to the same result - without the CVE-2016-3712 fix
>> they both work, with it they do not.
>> Further, KVM enabled or disabled does not make any difference.
>>
>> If I can take any further step, e.g. open a bug report at another place
>> or help with testing, I'd be glad to do so.
>>
>> best regards,
>> Thomas
> 
> Hi Thomas,
> 
> thanks for the bug report.
> 
> I added Gerd to the address list, so I'm sure your report will be noticed.
> 
> Bugs can be reported at Launchpad (see
> http://wiki.qemu.org/Contribute/ReportABug).
> Maybe your report could be posted there, too, so people looking for
> known problems will find it at the well-known location.
> 
> Cheers
> Stefan
> 

Hi Stefan,

thanks for the response and the directions; I opened bug #1581936
https://bugs.launchpad.net/bugs/1581936

Oh, and I noticed that I omitted some of the git bisect log in my previous
message; I corrected that in the bug report, and here is the full one:

git bisect start
# bad: [bfc766d38e1fae5767d43845c15c79ac8fa6d6af] Update version for v2.6.0 
release
git bisect bad bfc766d38e1fae5767d43845c15c79ac8fa6d6af
# good: [975eb6a547f809608ccb08c221552f11af25] Update version for 
v2.6.0-rc4 release
git bisect good 975eb6a547f809608ccb08c221552f11af25
# good: [2068192dcccd8a80dddfcc8df6164cf9c26e0fc4] vga: update vga register 
setup on vbe changes
git bisect good 2068192dcccd8a80dddfcc8df6164cf9c26e0fc4
# bad: [53db932604dfa7bb9241d132e0173894cf54261c] Merge remote-tracking branch 
'remotes/kraxel/tags/pull-vga-20160509-1' into staging
git bisect bad 53db932604dfa7bb9241d132e0173894cf54261c
# bad: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure vga register 
setup for vbe stays intact (CVE-2016-3712).
git bisect bad fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7
# first bad commit: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure 
vga register setup for vbe stays intact (CVE-2016-3712).

best regards,
Thomas




[Qemu-devel] [Bug 1581936] [NEW] Frozen Windows 7 VMs with VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

2016-05-15 Thread Thomas Lamprecht
Public bug reported:

Hi,

As already posted on the QEMU devel list [1], I stumbled upon a problem
with QEMU in versions 2.5.1.1 and 2.6.0.

The VM shows Windows loading
files for the installation, then the "Starting Windows" screen appears,
where it hangs and never continues.

Changing the "-vga" option to cirrus solves this; the installation can
proceed and finish. When changing back to std (or also qxl, vmware) the
installed VM also hangs on the "Starting Windows" screen while qemu
shows a little but no excessive load.

This phenomenon also appears with QEMU 2.6.0 but not with 2.6.0-rc4; a
git bisect shows fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7 (vga: make
sure vga register setup for vbe stays intact (CVE-2016-3712)) as the
culprit for this regression; as it's a fix for a DoS it's not an option to
just revert it, I guess.

The bisect log is:

git bisect start
# bad: [bfc766d38e1fae5767d43845c15c79ac8fa6d6af] Update version for v2.6.0 
release
git bisect bad bfc766d38e1fae5767d43845c15c79ac8fa6d6af
# good: [975eb6a547f809608ccb08c221552f11af25] Update version for 
v2.6.0-rc4 release
git bisect good 975eb6a547f809608ccb08c221552f11af25
# good: [2068192dcccd8a80dddfcc8df6164cf9c26e0fc4] vga: update vga register 
setup on vbe changes
git bisect good 2068192dcccd8a80dddfcc8df6164cf9c26e0fc4
# bad: [53db932604dfa7bb9241d132e0173894cf54261c] Merge remote-tracking branch 
'remotes/kraxel/tags/pull-vga-20160509-1' into staging
git bisect bad 53db932604dfa7bb9241d132e0173894cf54261c
# bad: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure vga register 
setup for vbe stays intact (CVE-2016-3712).
git bisect bad fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7
# first bad commit: [fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7] vga: make sure 
vga register setup for vbe stays intact (CVE-2016-3712).


I could reproduce that with QEMU 2.5.1 and QEMU 2.6 on a Debian derivative
(Proxmox VE) with a 4.4 kernel and also with QEMU 2.6 on an Arch Linux
system with a 4.5 kernel, so it should not be host distro dependent. Both
machines have Intel x86_64 processors.
The problem should be reproducible with said versions or a build from
git including the above-mentioned commit (fd3c136) by starting a VM with
a Windows 7 ISO, e.g.:

Freezing installation (as -vga defaults to std, I marked it as optional):
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 [-vga 
(std|qxl|vmware)]

Working installation:
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 -vga cirrus

If someone already has an installed Windows 7 VM, this behaviour should
also be observable when trying to start it with the new versions of QEMU.

Noteworthy may be that Windows 10 works. I have not had time to get
other Windows versions to test yet; I'll do that as soon as possible.
Various Linux systems also seem to work fine; at least I have not run
into an issue there yet.

I also tried testing with SeaBIOS and OVMF as firmware, as initially I
had no idea what broke; both lead to the same result - without the
CVE-2016-3712 fix they both work, with it they do not.
Further, KVM enabled or disabled does not make any difference.


[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-05/msg02416.html

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1581936

Title:
  Frozen Windows 7 VMs with VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

Status in QEMU:
  New


[Qemu-devel] Regression with windows 7 VMs and VGA CVE-2016-3712 fix (2.6.0 and 2.5.1.1)

2016-05-14 Thread Thomas Lamprecht
Hi all,

I recently ran into problems when trying to install some Windows VMs
after an update to QEMU 2.5.1.1: the VM shows Windows loading files
for the installation, then the "Starting Windows" screen appears,
where it hangs and never continues.

Changing the "-vga" option to cirrus solves this; the installation can
proceed and finish. When changing back to std (or also qxl, vmware) the
installed VM also hangs on the "Starting Windows" screen while qemu
shows a little but no excessive load.

This phenomenon also appears with QEMU 2.6.0 but not with 2.6.0-rc4; a
git bisect shows fd3c136b3e1482cd0ec7285d6bc2a3e6a62c38d7 (vga: make
sure vga register setup for vbe stays intact (CVE-2016-3712)) as the
culprit for this regression; as it's a fix for a DoS it's not an option to
just revert it, I guess.
The (short) bisect log is:

git bisect start
# bad: [bfc766d38e1fae5767d43845c15c79ac8fa6d6af] Update version for v2.6.0 
release
git bisect bad bfc766d38e1fae5767d43845c15c79ac8fa6d6af
# good: [975eb6a547f809608ccb08c221552f11af25] Update version for 
v2.6.0-rc4 release
git bisect good 975eb6a547f809608ccb08c221552f11af25
# good: [2068192dcccd8a80dddfcc8df6164cf9c26e0fc4] vga: update vga register 
setup on vbe changes
git bisect good 2068192dcccd8a80dddfcc8df6164cf9c26e0fc4
# bad: [53db932604dfa7bb9241d132e0173894cf54261c] Merge remote-tracking branch 
'remotes/kraxel/tags/pull-vga-20160509-1' into staging
git bisect bad 53db932604dfa7bb9241d132e0173894cf54261c

I could reproduce that with QEMU 2.5.1 and QEMU 2.6 on a Debian derivative
(Proxmox VE) with a 4.4 kernel and also with QEMU 2.6 on an Arch Linux
system with a 4.5 kernel, so it should not be host distro dependent. Both
machines have Intel x86_64 processors.
The problem should be reproducible with said versions or a build from
git including the above-mentioned commit (fd3c136) by starting a VM with
a Windows 7 ISO, e.g.:

Hanging installation:
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024

Working installation:
./x86_64-softmmu/qemu-system-x86_64 -boot d -cdrom win7.iso -m 1024 -vga cirrus

Noteworthy may be that Windows 10 works. I have not had time to get
other Windows versions to test yet; I'll do that as soon as possible.
Various Linux systems also seem to work fine; at least I have not run
into an issue there yet.

I also tried testing with SeaBIOS and OVMF, as initially I had no idea
what broke; both lead to the same result - without the CVE-2016-3712 fix
they both work, with it they do not.
Further, KVM enabled or disabled does not make any difference.

If I can take any further step, e.g. open a bug report at another place
or help with testing, I'd be glad to do so.

best regards,
Thomas




Re: [Qemu-devel] Qemu-kvm live migration modify

2016-04-13 Thread Thomas Lamprecht
Hi,

this message would probably be better suited to the qemu-discuss list,
not the devel one.

comments inline.

On 13.04.2016 09:43, Gilar Dwitresna wrote:
> hi, 
> I have set up a qemu-kvm installation for live migration on Ubuntu with
> a shared storage (NFS) configuration.
> Live migration with the default algorithm succeeds when the guest runs
> no service, but if the guest runs the service (a streaming server), the
> dirty page rate increases from 900 to 7000 pages, and live migration
> can't succeed.
> 
> I have also run live migration with an extended downtime of 30 seconds;
> live migration then succeeds, but consequently the downtime increases.
> My system uses fast ethernet (100 Mbit/s) for network bandwidth: 2 PCs with

That's really not fast but more like the lower limit; a guest which
dirties pages quickly can saturate the maximum 100 Mbit/s / 8 = 12.5
MB/s line really quickly. 1 Gbit/s (= 125 MB/s) would be a
better option.
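
As a back-of-the-envelope check (figures from this thread; the 4 KiB page
size is an assumption, as the thread does not state it), dirtying outpaces
such a link by more than 2x, so plain pre-copy can never catch up:

#include <stdio.h>

int main(void)
{
    /* 100 Mbit/s link vs. 7000 pages/s dirtied; 4 KiB pages assumed. */
    const double link_bytes_s  = 100e6 / 8;     /* 12.5 MB/s  */
    const double dirty_bytes_s = 7000 * 4096.0; /* ~28.7 MB/s */

    printf("link: %.1f MB/s, dirtied: %.1f MB/s\n",
           link_bytes_s / 1e6, dirty_bytes_s / 1e6);
    return 0;
}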

> 8 GB RAM for the hosts, 1 PC with 4 GB RAM and a 500 GB HDD for the NFS
> shared storage. The guest VM is configured with 2 GB RAM and a 100 GB
> HDD, migrating from host A to host B.
> 
> From some references, if the write rate to the memory pages in use by the
> VM (hereafter referred to as the dirty page rate) is high compared with
> the cost of transferring the pages between the two hosts involved in the
> process (as dictated, among other things, by the network bandwidth), then
> live migration may not be possible.
> 

With QEMU 2.5 you get better auto-converge for live migration, see:
http://wiki.qemu.org/Features/AutoconvergeLiveMigration

Also post-copy RAM (introduced as experimental in QEMU 2.5) would be an
option, see http://wiki.qemu.org/Features/PostCopyLiveMigration

cheers,
Thomas

> In the case with the high dirty page rate, I ran the live migration while
> the guest ran the streaming server with 100 Mbit/s network bandwidth; then
> the server can be migrated.
> 
> Can I modify the qemu-kvm live migration algorithm? And what should I do
> to modify this algorithm so that the VM guest can be migrated while it
> runs the service (streaming server), without extended downtime?
> 
> Thanks.
> Regards, Papandayan
> 




Re: [Qemu-devel] [PATCH for-2.5] piix: Document coreboot-specific RAM size config register

2015-08-25 Thread Thomas Lamprecht



On 08/17/2015 08:58 PM, Eduardo Habkost wrote:

On Thu, Aug 13, 2015 at 11:30:57AM -0400, Richard Smith wrote:

On 08/09/2015 09:48 PM, Ed Swierk wrote:



References to coreboot commits: * Original commit adding code reading
register offsets 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, 0x56, 0x57 to
Intel 440bx code in coreboot:
cb8eab482ff09ec256456312ef2d6e7710123551

I have a vague recollection that I may have been responsible for this, but
it was so long ago.  I'm having trouble finding the commits in gitweb.
When I put those hashes into the commit search at
review.coreboot.org I get "not found".

Those are git commits from the repository at
http://review.coreboot.org/coreboot.git

(I couldn't check if they can be seen in a browser right now, because
the server is returning HTTP 502 errors)

The server doesn't work for me either, but here is the commit in the GitHub
repo:

https://github.com/coreboot/coreboot/commit/cb8eab482ff09ec256456312ef2d6e7710123551