25.07.2013 02:33, Christoph Anton Mitterer wrote:

> hmm really weird... I tried a Windows/ide image now.. that seems to work
> as well (at least it runs smoothly,.. Windows doesn't really print a lot
> of errors to the console ;) )...
> 
> I tried around a bit more with my linux image (which is just another
> sid)... different CPUs (kvm64,qemu64 vs. SandyBridge what I had now... I
> disabled the CPU features (all))... that leads to no more:
> [  242.483704] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c4 data 0
> [  242.988307] kvm [7790]: vcpu0 unhandled rdmsr: 0xe8
> [  242.988312] kvm [7790]: vcpu0 unhandled rdmsr: 0xe7

These are harmless (you provided these in your original report) --
the guest tries to read (or write) some model-specific CPU registers
(MSRs) which are not emulated by qemu/kvm.  It is usually stuff like
CPU frequency scaling, power management and the like, which is all
handled by the host anyway.  It is definitely not related to the
disk I/O errors.

> But still I get gazillion of ATA errors and the VM breaks already during
> boot...

Ok.  Please provide the command line(s) used to start your guest.
And please start with a real bug report -- at least indicate your
environment, your host bitness, the versions of the packages qemu
uses, and so on.  All of this is collected automatically when you
run `reportbug qemu-system-x86' (and it really is qemu-system-x86,
not qemu, which is a meta-package that does not depend on other
packages).

This is what I referred to when I said I need a reproducer.

And a few more points.

 o When you collect the guest dmesg and similar, enable a serial
   console.  On the qemu side:

     qemu... -serial file:/some/where/pathname

   on the guest (Linux) side, in the bootloader:

     linux... console=ttyS0 console=tty1

   This lets you collect guest messages much more easily.
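
   As a concrete sketch (the image name, log path and memory size are
   placeholders, not taken from the report):

```shell
# Hypothetical example: boot a guest while logging its serial console
# to a file on the host.  disk.img and /var/tmp/guest-serial.log are
# placeholder names.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=disk.img \
    -serial file:/var/tmp/guest-serial.log

# Inside the guest, append to the kernel command line in the
# bootloader (e.g. in grub) so messages also go to the serial port:
#   linux ... console=ttyS0 console=tty1
```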

 o Please try a different storage controller (IDE vs. SATA vs. VIRTIO)
   to find out whether the problem is in the common I/O layer or in
   one particular controller emulation.
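
   Rough sketches of the same disk attached via different controllers
   (disk.img is a placeholder image name, the rest of the command line
   is elided):

```shell
# IDE -- the default for the pc machine type:
qemu-system-x86_64 -enable-kvm -drive file=disk.img,if=ide

# SATA -- attach the drive to an emulated AHCI controller:
qemu-system-x86_64 -enable-kvm \
    -drive file=disk.img,id=d0,if=none \
    -device ich9-ahci,id=ahci \
    -device ide-drive,bus=ahci.0,drive=d0

# VIRTIO -- paravirtualized block device (needs virtio_blk in the guest):
qemu-system-x86_64 -enable-kvm -drive file=disk.img,if=virtio
```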

 o Note

> I tried also a copy of my Linux VM image from a few days ago (where not
> all updates would have been in)... same issue... also, when looking at
> the last few days... there weren't any updates (in the guest) like
> kernel or so... which could have changed really a lot.

No changes except the kernel are relevant in the guest, because
only the kernel "talks" to qemu.  And all guest kernels should
Just Work (tm).

> Attaching some more images, which also shows what I meant with tty
> restarting... it's nothing special... just getty which breaks due to the
> disk problems.
> 
> 
> Note that I use libvirt and that seems to have some other storage
> problem as well (#715510)... but I can't believe this is related,... and
> even though I suffered from this... my existing VMs still booted all the
> last days.
> 
> 
> Perhaps an upstream issue.. with the two changes?
>   * new upstream 1.5.1 stable/bugfix release (as qemu-1.5.1.diff)
>     removed qemu_openpty_raw-helper.patch (included upstream)
>   * configure-explicitly-disable-virtfs-if-softmmu=no.patch -- do not
>     build virtfs-proxy-helper stuff if not building system emulator
>     (fix FTBFS on s390)

This actually is just one change -- pulling in a ton of upstream
bugfixes between 1.5.0 and 1.5.1.  This is the current version of
qemu, used by all other current qemu users.

Another possibility is a toolchain problem (e.g. a gcc bug which
results in wrong code generation).

> Is there some problem since the package has versions 1.5.0 but you
> removed some patch from 1.5.1?

I don't understand what you're asking here.

Thanks,

/mjt

