Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-29 Thread Antoine Martin
On 05/24/2010 12:47 AM, Stefan Hajnoczi wrote:
 On Sun, May 23, 2010 at 5:18 PM, Antoine Martin anto...@nagafix.co.uk wrote:
 Why does it work in a chroot for the other options (aio=native, if=ide, etc)
 but not for aio!=native??
 Looks like I am misunderstanding the semantics of chroot...
 
 It might not be the chroot() semantics but the environment inside that
 chroot, like the glibc.  Have you compared strace inside and outside
 the chroot?
Reverting to a static build also fixes the issue: aio=threads works.
Definitely something fishy going on with glibc library loading.
(I've checked that glibc and libaio are up to date in the chroot - nothing
blatant in the strace.)

Can someone explain the aio options?
All I can find is this:
# qemu-system-x86_64 -h | grep -i aio
   [,addr=A][,id=name][,aio=threads|native]
I assume aio=threads emulates the kernel's AIO with separate threads,
and is therefore likely to be slower, right?
Is there a reason why aio=native is not the default? Shouldn't
aio=threads be the fallback?

Cheers
Antoine



 
 Stefan
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-29 Thread Stefan Hajnoczi
On Sat, May 29, 2010 at 10:42 AM, Antoine Martin anto...@nagafix.co.uk wrote:
 Can someone explain the aio options?
 All I can find is this:
 # qemu-system-x86_64 -h | grep -i aio
       [,addr=A][,id=name][,aio=threads|native]
 I assume it means the aio=threads emulates the kernel's aio with
 separate threads? And is therefore likely to be slower, right?
 Is there a reason why aio=native is not the default? Shouldn't
 aio=threads be the fallback?

aio=threads uses posix-aio-compat.c, a POSIX AIO-like implementation
using a thread pool.  Each thread services queued I/O requests using
blocking syscalls (e.g. preadv()/pwritev()).
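Stefan's description of posix-aio-compat.c boils down to a thread pool servicing queued requests with blocking positional reads. A minimal, illustrative Python sketch of that model (qemu's real implementation is C; names here are mine):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def aio_threads_read(pool, fd, length, offset):
    # Queue the request; a worker thread services it with a blocking
    # pread(), which is the essence of the posix-aio-compat.c model.
    return pool.submit(os.pread, fd, length, offset)

# Demo: two reads serviced concurrently by the pool.
with tempfile.TemporaryFile() as f:
    f.write(b"A" * 512 + b"B" * 512)
    f.flush()
    with ThreadPoolExecutor(max_workers=4) as pool:
        futs = [aio_threads_read(pool, f.fileno(), 512, off)
                for off in (0, 512)]
        assert [fut.result()[:1] for fut in futs] == [b"A", b"B"]
```

The caller gets a future back immediately; the blocking syscall happens on a pool thread, which is why this works on any file or device but costs thread switches.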

aio=native uses Linux libaio, the native (non-POSIX) AIO interface.
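Under libaio sit the io_setup/io_submit/io_getevents syscalls. A rough ctypes sketch of that flow (not qemu's code; struct layout from <linux/aio_abi.h>, syscall numbers assumed for x86_64 and the generic aarch64 table):

```python
import ctypes
import platform
import tempfile

libc = ctypes.CDLL(None, use_errno=True)

# struct iocb / io_event from <linux/aio_abi.h> (little-endian 64-bit layout).
class Iocb(ctypes.Structure):
    _fields_ = [
        ("aio_data", ctypes.c_uint64),
        ("aio_key", ctypes.c_uint32),
        ("aio_rw_flags", ctypes.c_uint32),
        ("aio_lio_opcode", ctypes.c_uint16),   # IOCB_CMD_PREAD == 0
        ("aio_reqprio", ctypes.c_int16),
        ("aio_fildes", ctypes.c_uint32),
        ("aio_buf", ctypes.c_uint64),
        ("aio_nbytes", ctypes.c_uint64),
        ("aio_offset", ctypes.c_int64),
        ("aio_reserved2", ctypes.c_uint64),
        ("aio_flags", ctypes.c_uint32),
        ("aio_resfd", ctypes.c_uint32),
    ]

class IoEvent(ctypes.Structure):
    _fields_ = [("data", ctypes.c_uint64), ("obj", ctypes.c_uint64),
                ("res", ctypes.c_int64), ("res2", ctypes.c_int64)]

if platform.machine() == "x86_64":
    NR_SETUP, NR_DESTROY, NR_GETEVENTS, NR_SUBMIT = 206, 207, 208, 209
else:  # generic syscall table (e.g. aarch64)
    NR_SETUP, NR_DESTROY, NR_SUBMIT, NR_GETEVENTS = 0, 1, 2, 4

def native_aio_read(fd, nbytes, offset):
    ctx = ctypes.c_ulong(0)
    if libc.syscall(NR_SETUP, 1, ctypes.byref(ctx)) != 0:
        raise OSError(ctypes.get_errno(), "io_setup failed")
    try:
        buf = ctypes.create_string_buffer(nbytes)
        iocb = Iocb(aio_lio_opcode=0, aio_fildes=fd,
                    aio_buf=ctypes.addressof(buf), aio_nbytes=nbytes,
                    aio_offset=offset)
        iocb_ptr = ctypes.pointer(iocb)
        if libc.syscall(NR_SUBMIT, ctx, 1, ctypes.byref(iocb_ptr)) != 1:
            raise OSError(ctypes.get_errno(), "io_submit failed")
        ev = IoEvent()
        if libc.syscall(NR_GETEVENTS, ctx, 1, 1, ctypes.byref(ev), None) != 1:
            raise OSError(ctypes.get_errno(), "io_getevents failed")
        return buf.raw[:ev.res]
    finally:
        libc.syscall(NR_DESTROY, ctx)
```

The point of the interface is that io_submit returns immediately (when the constraints Christoph mentions below are met) and completion is reaped later, with no helper threads.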

I would expect that aio=native is faster but benchmarks show that this
isn't true for all workloads.

Stefan


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-29 Thread Christoph Hellwig
On Sat, May 29, 2010 at 04:42:59PM +0700, Antoine Martin wrote:
 Can someone explain the aio options?
 All I can find is this:
 # qemu-system-x86_64 -h | grep -i aio
[,addr=A][,id=name][,aio=threads|native]
 I assume it means the aio=threads emulates the kernel's aio with
 separate threads? And is therefore likely to be slower, right?
 Is there a reason why aio=native is not the default? Shouldn't
 aio=threads be the fallback?

The kernel AIO support is unfortunately not a very generic API.
It only supports O_DIRECT I/O (cache=none for qemu), and if used on
a filesystem it might still block if we need to perform block
allocations.  We could probably make it the default for block devices,
but I'm not a big fan of these kinds of conditional defaults.
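The O_DIRECT requirement Christoph mentions also brings alignment rules: buffer address, length and file offset must be aligned to the logical block size. A small illustrative sketch (my own helper, not qemu code) that uses a page-aligned mmap buffer and falls back to buffered I/O where the filesystem rejects O_DIRECT:

```python
import errno
import mmap
import os

def read_directio(path, nbytes, offset=0):
    """Read with O_DIRECT when the filesystem supports it.

    O_DIRECT requires buffer address, length and file offset to be
    aligned (typically to the logical block size, 512 bytes or more),
    so a page-aligned anonymous mmap is used as the buffer.
    """
    try:
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    except OSError as e:
        if e.errno != errno.EINVAL:
            raise
        # Filesystem (e.g. tmpfs) rejects O_DIRECT: fall back to buffered.
        fd = os.open(path, os.O_RDONLY)
    try:
        buf = mmap.mmap(-1, nbytes)
        got = os.preadv(fd, [buf], offset)
        return bytes(buf[:got])
    finally:
        os.close(fd)
```

With a misaligned buffer or offset the O_DIRECT read would fail with EINVAL, which is exactly why kernel AIO can't simply be switched on for arbitrary cache modes.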



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-29 Thread Christoph Hellwig
On Sat, May 29, 2010 at 10:55:18AM +0100, Stefan Hajnoczi wrote:
 I would expect that aio=native is faster but benchmarks show that this
 isn't true for all workloads.

In what benchmark do you see worse results for aio=native compared to
aio=threads?



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-29 Thread Stefan Hajnoczi
On Sat, May 29, 2010 at 11:34 AM, Christoph Hellwig h...@infradead.org wrote:
 In what benchmark do you see worse results for aio=native compared to
 aio=threads?

Sequential reads using 4 concurrent dd if=/dev/vdb iflag=direct
of=/dev/null bs=8k processes.  2 vcpu guest with 4 GB RAM, virtio
block devices, cache=none.  Host storage is a striped LVM volume.
Host kernel kvm.git and qemu-kvm.git userspace.

aio=native and aio=threads each run 3 times.

Result: aio=native has 15% lower throughput than aio=threads.

I haven't looked into this, so I don't know what causes these results.

Stefan


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity

On 05/23/2010 11:53 AM, Antoine Martin wrote:

I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 2048 at 0: Input/output error
pread is still broken on this big raw disk.



Can you summarize your environment and reproducer again?  The thread is 
so long it is difficult to understand exactly what you are testing.


--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin

On 05/23/2010 06:57 PM, Avi Kivity wrote:

On 05/23/2010 11:53 AM, Antoine Martin wrote:

I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 2048 at 0: Input/output error
pread is still broken on this big raw disk.



Can you summarize your environment and reproducer again?  The thread 
is so long it is difficult to understand exactly what you are testing.



Sure thing:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case); this fails with pread enabled and works with it disabled.


What has been tested:
Host has had 2.6.31.x and now 2.6.34 - with no improvement.
Guest has had numerous kernel versions and patches applied for testing 
(from 2.6.31.x to 2.6.34) - none made any difference. (although the 
recent patch stopped the large partition from getting corrupted by the 
merged requests bug.. yay!)
It was once thought that glibc needed to be rebuilt against newer kernel 
headers; I've done that too, and rebuilt qemu-kvm afterwards (headers 
were 2.6.28, then 2.6.31, and now 2.6.33).


What has been reported: straces, kernel error messages as above.
Here is the qemu command line (sanitized - removed the virtual disks 
which work fine and network config):
qemu -clock dynticks -m 256 -L ./ -kernel ./bzImage-2.6.34-aio -append 
root=/dev/vda -nographic -drive file=/dev/sda9,if=virtio,cache=none


Let me know if you need any other details. I can also arrange ssh access 
to the system if needed.


Cheers
Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity

On 05/23/2010 05:07 PM, Antoine Martin wrote:

On 05/23/2010 06:57 PM, Avi Kivity wrote:

On 05/23/2010 11:53 AM, Antoine Martin wrote:

I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 2048 at 0: Input/output error
pread is still broken on this big raw disk.



Can you summarize your environment and reproducer again?  The thread 
is so long it is difficult to understand exactly what you are testing.



Sure thing:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case), this fails with pread enabled, works with it disabled.


Did you mean: preadv?



What has been tested:
Host has had 2.6.31.x and now 2.6.34 - with no improvement.
Guest has had numerous kernel versions and patches applied for testing 
(from 2.6.31.x to 2.6.34) - none made any difference. (although the 
recent patch stopped the large partition from getting corrupted by the 
merged requests bug.. yay!)
It was once thought that glibc needed to be rebuilt against newer 
kernel headers, done that too, and rebuilt qemu-kvm afterwards. (was 
2.6.28 then 2.6.31 and now 2.6.33)


What has been reported: straces, kernel error messages as above.
Here is the qemu command line (sanitized - removed the virtual disks 
which work fine and network config):
qemu -clock dynticks -m 256 -L ./ -kernel ./bzImage-2.6.34-aio -append 
root=/dev/vda -nographic -drive file=/dev/sda9,if=virtio,cache=none


Let me know if you need any other details. I can also arrange ssh 
access to the system if needed.


And: 64-bit host kernel, 64-bit host userspace, yes?

Does aio=native help?  How about if=ide?

--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin

On 05/23/2010 09:18 PM, Avi Kivity wrote:

On 05/23/2010 05:07 PM, Antoine Martin wrote:

On 05/23/2010 06:57 PM, Avi Kivity wrote:

On 05/23/2010 11:53 AM, Antoine Martin wrote:

I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 2048 at 0: Input/output error
pread is still broken on this big raw disk.



Can you summarize your environment and reproducer again?  The thread 
is so long it is difficult to understand exactly what you are testing.



Sure thing:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case), this fails with pread enabled, works with it disabled.


Did you mean: preadv?
Yes, here's what makes it work ok (as suggested by Christoph earlier in 
the thread) in posix-aio-compat.c:

#undef CONFIG_PREADV
#ifdef CONFIG_PREADV
static int preadv_present = 1;
#else
static int preadv_present = 0;
#endif
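The snippet above hard-wires preadv_present = 0, so qemu takes its fallback path of sequential positional reads instead of one vectored call. The same detect-and-fall-back logic, sketched in Python for illustration (qemu's actual code is C in posix-aio-compat.c):

```python
import os

# qemu decides this at build time via CONFIG_PREADV; here we probe at runtime.
PREADV_PRESENT = hasattr(os, "preadv")

def readv_fallback(fd, sizes, offset):
    """Emulate preadv with sequential pread calls - the behaviour you
    get on the preadv_present = 0 path."""
    chunks = []
    for size in sizes:
        chunks.append(os.pread(fd, size, offset))
        offset += size
    return b"".join(chunks)

def readv(fd, sizes, offset):
    if PREADV_PRESENT:
        bufs = [bytearray(size) for size in sizes]
        got = os.preadv(fd, bufs, offset)
        return b"".join(bufs)[:got]
    return readv_fallback(fd, sizes, offset)
```

Both paths must return identical data; the vectored call just does it in one syscall, which is why disabling it is a workaround rather than a fix.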


What has been tested:
Host has had 2.6.31.x and now 2.6.34 - with no improvement.
Guest has had numerous kernel versions and patches applied for 
testing (from 2.6.31.x to 2.6.34) - none made any difference. 
(although the recent patch stopped the large partition from getting 
corrupted by the merged requests bug.. yay!)
It was once thought that glibc needed to be rebuilt against newer 
kernel headers, done that too, and rebuilt qemu-kvm afterwards. (was 
2.6.28 then 2.6.31 and now 2.6.33)


What has been reported: straces, kernel error messages as above.
Here is the qemu command line (sanitized - removed the virtual disks 
which work fine and network config):
qemu -clock dynticks -m 256 -L ./ -kernel ./bzImage-2.6.34-aio 
-append root=/dev/vda -nographic -drive 
file=/dev/sda9,if=virtio,cache=none


Let me know if you need any other details. I can also arrange ssh 
access to the system if needed.


And: 64-bit host kernel, 64-bit host userspace, yes?

Yes. (and 64-bit guest/userspace too)


Does aio=native help?

It does indeed!

  How about if=ide?
Will test with another kernel and report back (this one doesn't have any 
non-virtio drivers)


Thanks
Antoine


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin



  How about if=ide?
Will test with another kernel and report back (this one doesn't have 
any non-virtio drivers)
Can anyone tell me which kernel module I need for if=ide? Google was 
no help here.
(before I include dozens of unnecessary modules in my slimmed down and 
non modular kernel)


Thanks
Antoine


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity

On 05/23/2010 05:53 PM, Antoine Martin wrote:



  How about if=ide?
Will test with another kernel and report back (this one doesn't have 
any non-virtio drivers)
Can anyone tell me which kernel module I need for if=ide? Google was 
no help here.
(before I include dozens of unnecessary modules in my slimmed down and 
non modular kernel)


ATA_PIIX probably.
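For a non-modular guest kernel, the relevant options would look roughly like this (a sketch from memory of kernels of this era; verify the exact names against your .config):

```
CONFIG_ATA=y            # libata core
CONFIG_ATA_PIIX=y       # Intel PIIX IDE, what qemu's if=ide emulates
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y     # sd disk driver, used by libata disks
```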

--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin

On 05/23/2010 09:43 PM, Antoine Martin wrote:

On 05/23/2010 09:18 PM, Avi Kivity wrote:

On 05/23/2010 05:07 PM, Antoine Martin wrote:

On 05/23/2010 06:57 PM, Avi Kivity wrote:

On 05/23/2010 11:53 AM, Antoine Martin wrote:

I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 512 at 0: Input/output error
  /dev/vdc: read failed after 0 of 2048 at 0: Input/output error
pread is still broken on this big raw disk.



Can you summarize your environment and reproducer again?  The 
thread is so long it is difficult to understand exactly what you 
are testing.



Sure thing:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case), this fails with pread enabled, works with it disabled.


Did you mean: preadv?
Yes, here's what makes it work ok (as suggested by Christoph earlier 
in the thread) in posix-aio-compat.c:

#undef CONFIG_PREADV
#ifdef CONFIG_PREADV
static int preadv_present = 1;
#else
static int preadv_present = 0;
#endif


What has been tested:
Host has had 2.6.31.x and now 2.6.34 - with no improvement.
Guest has had numerous kernel versions and patches applied for 
testing (from 2.6.31.x to 2.6.34) - none made any difference. 
(although the recent patch stopped the large partition from getting 
corrupted by the merged requests bug.. yay!)
It was once thought that glibc needed to be rebuilt against newer 
kernel headers, done that too, and rebuilt qemu-kvm afterwards. (was 
2.6.28 then 2.6.31 and now 2.6.33)


What has been reported: straces, kernel error messages as above.
Here is the qemu command line (sanitized - removed the virtual disks 
which work fine and network config):
qemu -clock dynticks -m 256 -L ./ -kernel ./bzImage-2.6.34-aio 
-append root=/dev/vda -nographic -drive 
file=/dev/sda9,if=virtio,cache=none


Let me know if you need any other details. I can also arrange ssh 
access to the system if needed.


And: 64-bit host kernel, 64-bit host userspace, yes?

Yes. (and 64-bit guest/userspace too)


Does aio=native help?

It does indeed!

  How about if=ide?
Will test with another kernel and report back (this one doesn't have 
any non-virtio drivers)

That also works fine.

I guess this tells you that the problem is confined to 
raw-disk+virtio+aio-threaded


I'm switching all my VMs to aio=native, but feel free to send patches if 
you want me to re-test aio=threads


Thanks
Antoine


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity

On 05/23/2010 05:43 PM, Antoine Martin wrote:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case), this fails with pread enabled, works with it disabled.


Did you mean: preadv?


Yes, here's what makes it work ok (as suggested by Christoph earlier 
in the thread) in posix-aio-compat.c:

#undef CONFIG_PREADV
#ifdef CONFIG_PREADV
static int preadv_present = 1;
#else
static int preadv_present = 0;
#endif


When preadv_present=1, does strace -fF show preadv being called?

If so, the kernel's preadv() is broken.

If not, glibc's preadv() emulation is broken.
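Avi's diagnostic can be made concrete by issuing the preadv syscall directly, bypassing glibc: ENOSYS from the raw syscall while the glibc wrapper still "works" means you are on the emulation path. A ctypes sketch (syscall number 295 assumed for x86_64, 69 for the generic aarch64 table):

```python
import ctypes
import os
import platform

libc = ctypes.CDLL(None, use_errno=True)

class Iovec(ctypes.Structure):
    _fields_ = [("iov_base", ctypes.c_void_p),
                ("iov_len", ctypes.c_size_t)]

NR_PREADV = 295 if platform.machine() == "x86_64" else 69

def raw_preadv(fd, nbytes, offset):
    """preadv via syscall(2), bypassing any glibc emulation.
    Raises OSError(ENOSYS) if the kernel itself lacks preadv."""
    buf = ctypes.create_string_buffer(nbytes)
    iov = Iovec(ctypes.cast(buf, ctypes.c_void_p), nbytes)
    # The kernel takes the offset split into (pos_low, pos_high) halves.
    n = libc.syscall(NR_PREADV, fd, ctypes.byref(iov), 1,
                     ctypes.c_long(offset), ctypes.c_long(0))
    if n < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return buf.raw[:n]
```

Comparing this against the libc-level call (or watching which syscall strace reports) tells you which layer is responsible.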



Does aio=native help?

It does indeed!


That's recommended anyway on raw partitions with cache=off.

--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin

On 05/23/2010 10:12 PM, Avi Kivity wrote:

On 05/23/2010 05:43 PM, Antoine Martin wrote:

Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my 
case), this fails with pread enabled, works with it disabled.


Did you mean: preadv?


Yes, here's what makes it work ok (as suggested by Christoph earlier 
in the thread) in posix-aio-compat.c:

#undef CONFIG_PREADV
#ifdef CONFIG_PREADV
static int preadv_present = 1;
#else
static int preadv_present = 0;
#endif


When preadv_present=1, does strace -fF show preadv being called?

There were some pread() calls but no preadv() in the strace, BTW.

If so, the kernel's preadv() is broken.

If not, glibc's preadv() emulation is broken.

OK, I found what's causing it: chroot
I was testing all this in a chroot, I always do and I didn't think of 
mentioning it, sorry about that.

Running it non-chrooted works in all cases, including aio!=native

Why does it work in a chroot for the other options (aio=native, if=ide, 
etc) but not for aio!=native??

Looks like I am misunderstanding the semantics of chroot...



Does aio=native help?

It does indeed!


That's recommended anyway on raw partitions with cache=off.


What about non-raw partitions (and/or with cache=on)?

Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Stefan Hajnoczi
On Sun, May 23, 2010 at 5:18 PM, Antoine Martin anto...@nagafix.co.uk wrote:
 Why does it work in a chroot for the other options (aio=native, if=ide, etc)
 but not for aio!=native??
 Looks like I am misunderstanding the semantics of chroot...

It might not be the chroot() semantics but the environment inside that
chroot, like the glibc.  Have you compared strace inside and outside
the chroot?

Stefan


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-22 Thread Antoine Martin

Bump.

Now that qemu is less likely to eat my data, [Qemu-devel] [PATCH 4/8] 
block: fix sector comparism in

http://marc.info/?l=qemu-develm=127436114712437

I thought I would try using the raw 1.5TB partition again with KVM, 
still no go.

I am still having to use:

#undef CONFIG_PREADV

Host and guest kernel version is 2.6.34, headers 2.6.33, glibc 2.10.1-r1
qemu-kvm 0.12.4 + patch above.

Who do I need to bug? glibc? kvm?

Thanks
Antoine


On 04/09/2010 05:00 AM, Antoine Martin wrote:


Antoine Martin wrote:
   

On 03/08/2010 02:35 AM, Avi Kivity wrote:
 

On 03/07/2010 09:25 PM, Antoine Martin wrote:
   

On 03/08/2010 02:17 AM, Avi Kivity wrote:
 

On 03/07/2010 09:13 PM, Antoine Martin wrote:
   

What version of glibc do you have installed?
   

Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1

 

$ git show glibc-2.10~108 | head
commit e109c6124fe121618e42ba882e2a0af6e97b8efc
Author: Ulrich Drepper drep...@redhat.com
Date:   Fri Apr 3 19:57:16 2009 +

 * misc/Makefile (routines): Add preadv, preadv64, pwritev,
pwritev64.

 * misc/Versions: Export preadv, preadv64, pwritev, pwritev64
for
 GLIBC_2.10.
 * misc/sys/uio.h: Declare preadv, preadv64, pwritev, pwritev64.
 * sysdeps/unix/sysv/linux/kernel-features.h: Add entries for
preadv

You might get away with rebuilding glibc against the 2.6.33 headers.

   

The latest kernel headers available in gentoo (and they're masked
unstable):
sys-kernel/linux-headers-2.6.32

So I think I will just keep using Christoph's patch until .33 hits
portage.
Unless there's any reason not to? I would rather keep my system clean.
I can try it though, if that helps you clear things up?
 

preadv/pwritev was actually introduced in 2.6.30.  Perhaps you last
build glibc before that?  If so, a rebuild may be all that's necessary.

   

To be certain, I've rebuilt qemu-kvm against:
linux-headers-2.6.33 + glibc-2.10.1-r1 (both freshly built)
And still no go!
I'm still having to use the patch which disables preadv unconditionally...
 

Better late than never, here's the relevant part of the strace (for the
unpatched case where it fails):

stat(./fs, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 41), ...}) = 0
open(./fs, O_RDWR|O_DIRECT|O_CLOEXEC) = 12

lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_SET)  = 0
[pid 31266] read(12,
\240\246E\32\r\21\367c\212\316Xn\177e'\310}\234\1\273`\371\266\247\r\1nj\332\32\221\26...,
512) = 512
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12, iQ\35
\271O\203vj\ve[Ni}\355\263\272\4#yMo\266.\341\21\340Y5\204\20..., 4096,
1321851805696) = 4096
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] 

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-22 Thread Michael Tokarev

22.05.2010 14:44, Antoine Martin wrote:

Bump.

Now that qemu is less likely to eat my data,  *[Qemu-devel] [PATCH 4/8]
block: fix sector comparism in*
http://marc.info/?l=qemu-develm=127436114712437

I thought I would try using the raw 1.5TB partition again with KVM,
still no go.


Hm.  I don't have so much diskspace (my largest is 750Gb, whole disk),
but I created 1.5Tb sparse lvm volume.  It appears to work for me,
even 32bit version of qemu-kvm-0.12.4 (with the mentioned patch applied).


I am still having to use:

#undef CONFIG_PREADV

Host and guest kernel version is 2.6.34, headers 2.6.33, glibc 2.10.1-r1
qemu-kvm 0.12.4 + patch above.


eglibc-2.10.2-6, kernel #2.6.34.0-amd64,
kernel headers 2.6.32-11~bpo50+1
(debian)


Who do I need to bug? glibc? kvm?


are you running 32bit userspace and a 64bit kernel
by any chance?  If yes that's a kernel problem, see
http://thread.gmane.org/gmane.linux.kernel.aio.general/2891
(the fix will be in 2.6.35 hopefully, now it's in
Andrew Morton's tree).

If not, well, I don't know ;)

/mjt


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-22 Thread Antoine Martin

On 05/22/2010 06:17 PM, Michael Tokarev wrote:

22.05.2010 14:44, Antoine Martin wrote:

Bump.

Now that qemu is less likely to eat my data,  *[Qemu-devel] [PATCH 4/8]
block: fix sector comparism in*
http://marc.info/?l=qemu-develm=127436114712437

I thought I would try using the raw 1.5TB partition again with KVM,
still no go.


Hm.  I don't have so much diskspace (my largest is 750Gb, whole disk),
but I created 1.5Tb sparse lvm volume.  It appears to work for me,
even 32bit version of qemu-kvm-0.12.4 (with the mentioned patch applied).


I am still having to use:

#undef CONFIG_PREADV

Host and guest kernel version is 2.6.34, headers 2.6.33, glibc 2.10.1-r1
qemu-kvm 0.12.4 + patch above.


eglibc-2.10.2-6, kernel #2.6.34.0-amd64,
kernel headers 2.6.32-11~bpo50+1
(debian)


Who do I need to bug? glibc? kvm?


are you running 32bit userspace and 64bit kernel
by a chance?  If yes that's a kernel prob, see
http://thread.gmane.org/gmane.linux.kernel.aio.general/2891
(the fix will be in 2.6.35 hopefully, now it's in
Andrew Morton's tree).

If not, well, I don't know ;)

I'm not: 64-bit host and 64-bit guest.

Thanks anyway.
Antoine


/mjt


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-04-08 Thread Antoine Martin


Antoine Martin wrote:
 On 03/08/2010 02:35 AM, Avi Kivity wrote:
 On 03/07/2010 09:25 PM, Antoine Martin wrote:
 On 03/08/2010 02:17 AM, Avi Kivity wrote:
 On 03/07/2010 09:13 PM, Antoine Martin wrote:
 What version of glibc do you have installed?

 Latest stable:
 sys-devel/gcc-4.3.4
 sys-libs/glibc-2.10.1-r1


 $ git show glibc-2.10~108 | head
 commit e109c6124fe121618e42ba882e2a0af6e97b8efc
 Author: Ulrich Drepper drep...@redhat.com
 Date:   Fri Apr 3 19:57:16 2009 +

 * misc/Makefile (routines): Add preadv, preadv64, pwritev,
 pwritev64.

 * misc/Versions: Export preadv, preadv64, pwritev, pwritev64
 for
 GLIBC_2.10.
 * misc/sys/uio.h: Declare preadv, preadv64, pwritev, pwritev64.
 * sysdeps/unix/sysv/linux/kernel-features.h: Add entries for
 preadv

 You might get away with rebuilding glibc against the 2.6.33 headers.

 The latest kernel headers available in gentoo (and they're masked
 unstable):
 sys-kernel/linux-headers-2.6.32

 So I think I will just keep using Christoph's patch until .33 hits
 portage.
 Unless there's any reason not to? I would rather keep my system clean.
 I can try it though, if that helps you clear things up?

 preadv/pwritev was actually introduced in 2.6.30.  Perhaps you last
 build glibc before that?  If so, a rebuild may be all that's necessary.

 To be certain, I've rebuilt qemu-kvm against:
 linux-headers-2.6.33 + glibc-2.10.1-r1 (both freshly built)
 And still no go!
 I'm still having to use the patch which disables preadv unconditionally...

Better late than never, here's the relevant part of the strace (for the
unpatched case where it fails):

stat(./fs, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 41), ...}) = 0
open(./fs, O_RDWR|O_DIRECT|O_CLOEXEC) = 12

lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31266] lseek(12, 0, SEEK_SET)  = 0
[pid 31266] read(12,
\240\246E\32\r\21\367c\212\316Xn\177e'\310}\234\1\273`\371\266\247\r\1nj\332\32\221\26...,
512) = 512
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12, iQ\35
\271O\203vj\ve[Ni}\355\263\272\4#yMo\266.\341\21\340Y5\204\20..., 4096,
1321851805696) = 4096
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31273] pread(12,  unfinished ...
[pid 31267] lseek(12, 0, SEEK_END)  = 1321851815424
[pid 31271] pread(12,  unfinished ...
[pid 31267] lseek(12, 

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-13 Thread Antoine Martin

On 03/08/2010 02:35 AM, Avi Kivity wrote:

On 03/07/2010 09:25 PM, Antoine Martin wrote:

On 03/08/2010 02:17 AM, Avi Kivity wrote:

On 03/07/2010 09:13 PM, Antoine Martin wrote:

What version of glibc do you have installed?


Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1



$ git show glibc-2.10~108 | head
commit e109c6124fe121618e42ba882e2a0af6e97b8efc
Author: Ulrich Drepper drep...@redhat.com
Date:   Fri Apr 3 19:57:16 2009 +

* misc/Makefile (routines): Add preadv, preadv64, pwritev, pwritev64.
* misc/Versions: Export preadv, preadv64, pwritev, pwritev64 for GLIBC_2.10.
* misc/sys/uio.h: Declare preadv, preadv64, pwritev, pwritev64.
* sysdeps/unix/sysv/linux/kernel-features.h: Add entries for preadv


You might get away with rebuilding glibc against the 2.6.33 headers.

The latest kernel headers available in gentoo (and they're masked 
unstable):

sys-kernel/linux-headers-2.6.32

So I think I will just keep using Christoph's patch until .33 hits 
portage.

Unless there's any reason not to? I would rather keep my system clean.
I can try it though, if that helps you clear things up?


preadv/pwritev was actually introduced in 2.6.30.  Perhaps you last 
build glibc before that?  If so, a rebuild may be all that's necessary.



To be certain, I've rebuilt qemu-kvm against:
linux-headers-2.6.33 + glibc-2.10.1-r1 (both freshly built)
And still no go!
I'm still having to use the patch which disables preadv unconditionally...

Antoine
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-13 Thread Avi Kivity

On 03/13/2010 11:51 AM, Antoine Martin wrote:
preadv/pwritev was actually introduced in 2.6.30.  Perhaps you last 
build glibc before that?  If so, a rebuild may be all that's necessary.




To be certain, I've rebuilt qemu-kvm against:
linux-headers-2.6.33 + glibc-2.10.1-r1 (both freshly built)
And still no go!
I'm still having to use the patch which disables preadv 
unconditionally...


What does strace show?  Is the kernel's preadv called?

Maybe you have a glibc that has broken emulated preadv and no kernel 
preadv support.


--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-08 Thread Anthony Liguori

On 03/07/2010 10:21 AM, Avi Kivity wrote:

On 03/07/2010 12:00 PM, Christoph Hellwig wrote:



I can only guess that the info collected so far is not sufficient to
understand what's going on: apart from "I/O error writing block NNN"
we do not have anything at all.  So it's impossible to know where
the problem is.

Actually it is, and the bug has been fixed long ago in:

commit e2a305fb13ff0f5cf6ff80aaa90a5ed5954c
Author: Christoph Hellwig h...@lst.de
Date:   Tue Jan 26 14:49:08 2010 +0100

 block: avoid creating too large iovecs in multiwrite_merge


I've asked for it to be added to the -stable series but that hasn't
happened so far.


Anthony, this looks critical.



It's in stable now.  Sounds like a good time to do a 0.12.4.

Regards,

Anthony Liguori



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Michael Tokarev
Antoine Martin wrote:
[]
 https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599


 The initial report is almost 8 weeks old!
 Is data-corruption and data loss somehow less important than the
 hundreds of patches that have been submitted since?? Or is there a fix
 somewhere I've missed?

I can only guess that the info collected so far is not sufficient to
understand what's going on: apart from "I/O error writing block NNN"
we do not have anything at all.  So it's impossible to know where
the problem is.

Please don't blame people (me included: I suffer from this same
issue just like you).  We need something constructive.

And by the way, as stated in my comment attached to that bug (I'm
user mjtsf at sourceforge), errors=remount-ro avoids data
corruption (this is not unique to the issue in question but applies to
the general case of I/O errors and filesystem operations).

/mjt


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Gleb Natapov
On Sun, Mar 07, 2010 at 03:48:23AM +0700, Antoine Martin wrote:
 Hi,
 
 With qemu-kvm-0.12.3:
 ./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
 [1.882843]  vdc:
 [2.365154] udev: starting version 146
 [2.693768] end_request: I/O error, dev vdc, sector 126
 [2.693772] Buffer I/O error on device vdc, logical block 126
 [2.693775] Buffer I/O error on device vdc, logical block 127
 [2.693777] Buffer I/O error on device vdc, logical block 128
 [2.693779] Buffer I/O error on device vdc, logical block 129
 [2.693781] Buffer I/O error on device vdc, logical block 130
 [2.693783] Buffer I/O error on device vdc, logical block 131
 [2.693785] Buffer I/O error on device vdc, logical block 132
 [2.693787] Buffer I/O error on device vdc, logical block 133
 [2.693788] Buffer I/O error on device vdc, logical block 134
 [2.693814] end_request: I/O error, dev vdc, sector 0
 [3.144870] end_request: I/O error, dev vdc, sector 0
 [3.499377] end_request: I/O error, dev vdc, sector 0
 [3.523247] end_request: I/O error, dev vdc, sector 0
 [3.547130] end_request: I/O error, dev vdc, sector 0
 [3.550076] end_request: I/O error, dev vdc, sector 0
 
 Works fine with kvm-88:
 cp  /usr/src/KVM/kvm-88/pc-bios/*bin ./
 cp /usr/src/KVM/kvm-88/x86_64-softmmu/qemu-system-x86_64 ./
 ./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
 
 [1.650274]  vdc: unknown partition table
 [  112.704164] EXT4-fs (vdc): mounted filesystem with ordered data mode
 
 I've tried running as root, using the block device directly (as
 shown above) rather than using a softlink, etc..
 
 Something broke.
 Host and guest are both running 2.6.33 and latest KVM.
 
Are you sure you have write access to the block device?

--
Gleb.


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Christoph Hellwig
On Sun, Mar 07, 2010 at 12:32:38PM +0300, Michael Tokarev wrote:
 Antoine Martin wrote:
 []
  https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599
 
 
  The initial report is almost 8 weeks old!
  Is data-corruption and data loss somehow less important than the
  hundreds of patches that have been submitted since?? Or is there a fix
  somewhere I've missed?
 
 I can only guess that the info collected so far is not sufficient to
 understand what's going on: apart from "I/O error writing block NNN"
 we do not have anything at all.  So it's impossible to know where
 the problem is.

Actually it is, and the bug has been fixed long ago in:

commit e2a305fb13ff0f5cf6ff80aaa90a5ed5954c
Author: Christoph Hellwig h...@lst.de
Date:   Tue Jan 26 14:49:08 2010 +0100

block: avoid creating too large iovecs in multiwrite_merge


I've asked for it to be added to the -stable series but that hasn't
happened so far.




Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

On 03/07/2010 05:00 PM, Christoph Hellwig wrote:

On Sun, Mar 07, 2010 at 12:32:38PM +0300, Michael Tokarev wrote:
   

Antoine Martin wrote:
[]
 

https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599


 

The initial report is almost 8 weeks old!
Is data-corruption and data loss somehow less important than the
hundreds of patches that have been submitted since?? Or is there a fix
somewhere I've missed?
   

I can only guess that the info collected so far is not sufficient to
understand what's going on: apart from "I/O error writing block NNN"
we do not have anything at all.  So it's impossible to know where
the problem is.
 

Actually it is, and the bug has been fixed long ago in:

commit e2a305fb13ff0f5cf6ff80aaa90a5ed5954c
Author: Christoph Hellwig h...@lst.de
Date:   Tue Jan 26 14:49:08 2010 +0100

 block: avoid creating too large iovecs in multiwrite_merge
   

Hmmm.
Like most of you, I had assumed that the data corruption bug (which I 
have also encountered on this raw partition as it is well over 1TB) 
would be the same as the cannot open drive bug I have described above, 
but unfortunately I still cannot use 0.12.3 with raw disks with this 
patch applied.

This is the patch, right?:
http://www.mail-archive.com/qemu-de...@nongnu.org/msg24129.html

So there is something else at play. And just for the record:
1) kvm-88 works fine *with the exact same setup*
2) I've tried running as root
3) The raw disk mounts fine from the host.
So I *know* the problem is with kvm. I wouldn't post to the list without 
triple checking that.


I have also just tested with another raw partition which is much smaller 
(1GB) and the same thing still occurs: kvm-88 works and qemu-kvm-0.12.3 
does not.
So I think that it is fair to assume that this new problem is unrelated 
to the partition size.

I've asked for it to be added to the -stable series but that hasn't
happened so far.
   

If eating-your-data doesn't make it to -stable, what does??
Even if it did, I would have thought this kind of bug should warrant a 
big warning sign somewhere.


Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

[snip]


So there is something else at play. And just for the record:
1) kvm-88 works fine *with the exact same setup*
2) I've tried running as root
3) The raw disk mounts fine from the host.
So I *know* the problem is with kvm. I wouldn't post to the list 
without triple checking that.


I have also just tested with another raw partition which is much 
smaller (1GB) and the same thing still occurs: kvm-88 works and 
qemu-kvm-0.12.3 does not.
So I think that it is fair to assume that this new problem is 
unrelated to the partition size.

I have narrowed it down to the io-thread option:
* rebuilding older versions of qemu without --enable-io-thread causes 
the bug (guest cannot open raw partition)

* qemu-kvm-0.12.3 cannot be built with --enable-io-thread over here:
  LINK  x86_64-softmmu/qemu-system-x86_64
kvm-all.o: In function `qemu_mutex_lock_iothread':
/usr/src/KVM/qemu-kvm-0.12.3/qemu-kvm.c:2532: multiple definition of 
`qemu_mutex_lock_iothread'

vl.o:/usr/src/KVM/qemu-kvm-0.12.3/vl.c:3772: first defined here
[..]
Which I have reported as part of another unsolved issue here:
http://www.mail-archive.com/kvm@vger.kernel.org/msg27663.html

Why not using the io-thread would prevent qemu from opening the raw 
partition is beyond me.


Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Michael Tokarev
Antoine Martin wrote:
 [snip]

 So there is something else at play. And just for the record:
 1) kvm-88 works fine *with the exact same setup*
 2) I've tried running as root
 3) The raw disk mounts fine from the host.
 So I *know* the problem is with kvm. I wouldn't post to the list
 without triple checking that.

 I have also just tested with another raw partition which is much
 smaller (1GB) and the same thing still occurs: kvm-88 works and
 qemu-kvm-0.12.3 does not.
 So I think that it is fair to assume that this new problem is
 unrelated to the partition size.
 I have narrowed it down to the io-thread option:
 * rebuilding older versions of qemu without --enable-io-thread causes
 the bug (guest cannot open raw partition)
 * qemu-kvm-0.12.3 cannot be built with --enable-io-thread over here:
   LINK  x86_64-softmmu/qemu-system-x86_64
 kvm-all.o: In function `qemu_mutex_lock_iothread':
 /usr/src/KVM/qemu-kvm-0.12.3/qemu-kvm.c:2532: multiple definition of
 `qemu_mutex_lock_iothread'
 vl.o:/usr/src/KVM/qemu-kvm-0.12.3/vl.c:3772: first defined here
 [..]
 Which I have reported as part of another unsolved issue here:
 http://www.mail-archive.com/kvm@vger.kernel.org/msg27663.html
 
 Why not using the io-thread would prevent qemu from opening the raw
 partition is beyond me.

Ok, this is in fact a different problem, not the one I referred you to
initially (which was in fact good too, because apparently Christoph
solved that bug for me and for other Debian users, thank you!).

In your case, recalling your initial email:

 With qemu-kvm-0.12.3:
 ./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
 [1.882843]  vdc:
 [2.365154] udev: starting version 146
 [2.693768] end_request: I/O error, dev vdc, sector 126
 [2.693772] Buffer I/O error on device vdc, logical block 126
 [2.693775] Buffer I/O error on device vdc, logical block 127
 [2.693777] Buffer I/O error on device vdc, logical block 128
...

the problem happens right at startup, it can't read _anything_
at all from the disk.  In my case, the problem is intermittent
and happens under high load only, hence the big difference.

But anyway, this is something which should be easy to find
out.  Run kvm under `strace -f' and see how it opens the
device, or find out with lsof what filedescriptor corresponds
to the file in question (in running kvm instance) and see
flags in /proc/$kvm_pid/fdinfo/$fdnum.

I guess it can't open the image in read-write mode somehow.

By the way, iothread doesn't really work in kvm, as far
as I can see.

Thanks.

/mjt


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 12:00 PM, Christoph Hellwig wrote:



I can only guess that the info collected so far is not sufficient to
understand what's going on: apart from "I/O error writing block NNN"
we do not have anything at all.  So it's impossible to know where
the problem is.
 

Actually it is, and the bug has been fixed long ago in:

commit e2a305fb13ff0f5cf6ff80aaa90a5ed5954c
Author: Christoph Hellwig h...@lst.de
Date:   Tue Jan 26 14:49:08 2010 +0100

 block: avoid creating too large iovecs in multiwrite_merge


I've asked for it to be added to the -stable series but that hasn't
happened so far.
   


Anthony, this looks critical.

--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 07:11 PM, Antoine Martin wrote:


the problem happens right at startup, it can't read _anything_
at all from the disk.  In my case, the problem is intermittent
and happens under high load only, hence the big difference.

But anyway, this is something which should be easy to find
out.  Run kvm under `strace -f' and see how it opens the
   device, or find out with lsof what filedescriptor corresponds
to the file in question (in running kvm instance) and see
flags in /proc/$kvm_pid/fdinfo/$fdnum.

...
[...]
stat("./vm/var_fs", {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 41), ...}) = 0

open("./vm/var_fs", O_RDWR|O_DIRECT|O_CLOEXEC) = 12
lseek(12, 0, SEEK_END)  = 1321851815424
[..]
So it opens it the device without problems.

The only things that stands out is this before the read failed message:
[pid  9098] lseek(12, 0, SEEK_END)  = 1321851815424
[pid  9121] pread(12, 0x7fa50a0e47d0, 2048, 0) = -1 EINVAL (Invalid 
argument)




The buffer is unaligned here, yet the file was opened with O_DIRECT 
(cache=none).  This is strange, since alignment is not related to disk size.


--
error compiling committee.c: too many arguments to function



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Christoph Hellwig
On Sun, Mar 07, 2010 at 07:30:06PM +0200, Avi Kivity wrote:
 It may also be that glibc is emulating preadv, incorrectly.

I've done a quick audit of all paths leading to pread and all seem
to align correctly.  So either a broken glibc emulation or something
else outside the block layer seems likely.

 Antoine, can you check this?  ltrace may help, or run 'strings libc.so |  
 grep pread'.

Or just add an

#undef CONFIG_PREADV

just before the first

#ifdef CONFIG_PREADV

in posix-aio-compat.c and see if that works.



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

On 03/08/2010 12:30 AM, Avi Kivity wrote:

On 03/07/2010 07:21 PM, Christoph Hellwig wrote:

On Sun, Mar 07, 2010 at 07:18:40PM +0200, Avi Kivity wrote:
The only things that stands out is this before the read failed 
message:

[pid  9098] lseek(12, 0, SEEK_END)  = 1321851815424
[pid  9121] pread(12, 0x7fa50a0e47d0, 2048, 0) = -1 EINVAL (Invalid
argument)


The buffer is unaligned here, yet the file was opened with O_DIRECT
(cache=none).  This is strange, since alignment is not related to disk
size.

Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none

Side question: this is the right thing to do for raw partitions, right?

The other interesting thing is that it's using pread - which means
the kernel is too old to use preadv, and thus qemu takes a not very
well tested codepath.

Too old? I am confused: both host and guest kernels are 2.6.33!
I built KVM against the 2.6.30 headers though.


It may also be that glibc is emulating preadv, incorrectly.

Not sure how to do that.


Antoine, can you check this?  ltrace may help,

This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)   = 0x2a38d60
[pid 26883] memset(0x2a38d60, '\000', 48)= 0x2a38d60
[pid 26883] open64("./vm/media_fs", 540674, 00)  = 12
[pid 26883] posix_memalign(0x7fff92e40560, 512, 16384, -1, 48) = 0
[pid 26883] lseek64(12, 0, 2, 0x7f404f2f3e60, 4) = 0x133c4820600


or run 'strings libc.so | grep pread'.


strings /lib/libc.so.6 | grep pread
preadv
preadv64
pread
__pread64_chk
__pread64
__pread_chk

Antoine


Re: raw disks no longer work in latest kvm (kvm-88 was fine) [SOLVED]

2010-03-07 Thread Antoine Martin

On 03/08/2010 12:34 AM, Christoph Hellwig wrote:

On Sun, Mar 07, 2010 at 07:30:06PM +0200, Avi Kivity wrote:


It may also be that glibc is emulating preadv, incorrectly.


I've done a quick audit of all paths leading to pread and all seem
to align correctly.  So either a broken glibc emulation or something
else outside the block layer seems likely.



Antoine, can you check this?  ltrace may help, or run 'strings libc.so |
grep pread'.


Or just add an

#undef CONFIG_PREADV

just before the first

#ifdef CONFIG_PREADV

in posix-aio-compat.c and see if that works.


It does indeed! qemu-kvm-0.12.3 is now seeing my partition again. woohoo!
So, PREAD makes it break... how, where? What does this mean?

Thanks a lot!
Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 08:01 PM, Antoine Martin wrote:

On 03/08/2010 12:30 AM, Avi Kivity wrote:

On 03/07/2010 07:21 PM, Christoph Hellwig wrote:

On Sun, Mar 07, 2010 at 07:18:40PM +0200, Avi Kivity wrote:
The only things that stands out is this before the read failed 
message:

[pid  9098] lseek(12, 0, SEEK_END)  = 1321851815424
[pid  9121] pread(12, 0x7fa50a0e47d0, 2048, 0) = -1 EINVAL (Invalid
argument)


The buffer is unaligned here, yet the file was opened with O_DIRECT
(cache=none).  This is strange, since alignment is not related to disk
size.

Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none

Side question: this is the right thing to do for raw partitions, right?


The rightest.


The other interesting thing is that it's using pread - which means
the kernel is too old to use preadv, and thus qemu takes a not very
well tested codepath.

Too old? I am confused: both host and guest kernels are 2.6.33!
I built KVM against the 2.6.30 headers though.


You need to build qemu against the 2.6.33 headers ('make 
headers-install').  But after we fix this, please.




It may also be that glibc is emulating preadv, incorrectly.

Not sure how to do that.


Antoine, can you check this?  ltrace may help,

This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)   = 0x2a38d60
[pid 26883] memset(0x2a38d60, '\000', 48)= 0x2a38d60
[pid 26883] open64("./vm/media_fs", 540674, 00)  = 12
[pid 26883] posix_memalign(0x7fff92e40560, 512, 16384, -1, 48) = 0
[pid 26883] lseek64(12, 0, 2, 0x7f404f2f3e60, 4) = 0x133c4820600



Where's pread/preadv?  Did you use -f?




or run 'strings libc.so | grep pread'.


strings /lib/libc.so.6 | grep pread
preadv
preadv64
pread
__pread64_chk
__pread64
__pread_chk



So it does seem glibc emulates preadv.

Perhaps https://bugzilla.redhat.com/show_bug.cgi?id=563103 ?

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: raw disks no longer work in latest kvm (kvm-88 was fine) [SOLVED]

2010-03-07 Thread Avi Kivity

On 03/07/2010 08:43 PM, Antoine Martin wrote:


Antoine, can you check this?  ltrace may help, or run 'strings 
libc.so |

grep pread'.


Or just add an

#undef CONFIG_PREADV

just before the first

#ifdef CONFIG_PREADV

in posix-aio-compat.c and see if that works.



It does indeed! qemu-kvm-0.12.3 is now seeing my partition again. woohoo!
So, PREAD makes it break... how, where? What does this mean?



No - preadv emulation in glibc breaks it.  Likely the bug I mentioned in 
the other email on this thread.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

[snip]

The other interesting thing is that it's using pread - which means
the kernel is too old to use preadv, and thus qemu takes a not very
well tested codepath.

Too old? I am confused: both host and guest kernels are 2.6.33!
I built KVM against the 2.6.30 headers though.


You need to build qemu against the 2.6.33 headers ('make 
headers-install').  But after we fix this, please.

OK.
I'm signing off for today, but I will test whatever patches/suggestions 
you send my way.




It may also be that glibc is emulating preadv, incorrectly.

Not sure how to do that.


Antoine, can you check this?  ltrace may help,

This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)   = 0x2a38d60
[pid 26883] memset(0x2a38d60, '\000', 48)= 0x2a38d60
[pid 26883] open64("./vm/media_fs", 540674, 00)  = 12
[pid 26883] posix_memalign(0x7fff92e40560, 512, 16384, -1, 48) = 0
[pid 26883] lseek64(12, 0, 2, 0x7f404f2f3e60, 4) = 0x133c4820600



Where's pread/preadv?  Did you use -f?

Yes I did.
ltrace -f bash ./vm/start.sh  trace.log

or run 'strings libc.so | grep pread'.


strings /lib/libc.so.6 | grep pread
preadv
preadv64
pread
__pread64_chk
__pread64
__pread_chk



So it does seem glibc emulates preadv.

Perhaps https://bugzilla.redhat.com/show_bug.cgi?id=563103 ?

This is a Gentoo box... but yes, this does look similar, doesn't it?

Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Asdo

Avi Kivity wrote:

On 03/07/2010 08:01 PM, Antoine Martin wrote:


Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none

Side question: this is the right thing to do for raw partitions, right?


The rightest.


Isn't cache=writeback now safe on virtio-blk since 2.6.32?
Doesn't it provide better performance?

Also that looks like a raw file to me, not a partition... do they follow 
the same optimization rule?


Thank you


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 09:07 PM, Antoine Martin wrote:

Antoine, can you check this?  ltrace may help,

This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)   = 0x2a38d60
[pid 26883] memset(0x2a38d60, '\000', 48)= 0x2a38d60
[pid 26883] open64("./vm/media_fs", 540674, 00)  = 12
[pid 26883] posix_memalign(0x7fff92e40560, 512, 16384, -1, 48) = 0
[pid 26883] lseek64(12, 0, 2, 0x7f404f2f3e60, 4) = 0x133c4820600



Where's pread/preadv?  Did you use -f?


Yes I did.
ltrace -f bash ./vm/start.sh  trace.log


Well, some variant of pread should have shown up.  Maybe it's the ltrace 
-f race.



or run 'strings libc.so | grep pread'.


strings /lib/libc.so.6 | grep pread
preadv
preadv64
pread
__pread64_chk
__pread64
__pread_chk



So it does seem glibc emulates preadv.

Perhaps https://bugzilla.redhat.com/show_bug.cgi?id=563103 ?

This is a Gentoo box... but yes, this does look similar, doesn't it?


What version of glibc do you have installed?

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

On 03/08/2010 02:09 AM, Asdo wrote:

Avi Kivity wrote:

On 03/07/2010 08:01 PM, Antoine Martin wrote:


Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none

Side question: this is the right thing to do for raw partitions, right?


The rightest.


Isn't cache=writeback now safe on virtio-blk since 2.6.32?
Doesn't it provide better performance?

I thought it was best to let the guest deal with it? (single cache)
Also that looks like a raw file to me, not a partition... do they 
follow the same optimization rule?

This is a device (/dev/sdc9) in the chroot, don't let the name fool you ;)

Antoine



Thank you


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 09:09 PM, Asdo wrote:

Avi Kivity wrote:

On 03/07/2010 08:01 PM, Antoine Martin wrote:


Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none

Side question: this is the right thing to do for raw partitions, right?


The rightest.


Isn't cache=writeback now safe on virtio-blk since 2.6.32?


Yes.


Doesn't it provide better performance?


No.  The guest will do its own caching, so unless the guest is really 
short of memory, you aren't gaining much; and the copying will hurt.




Also that looks like a raw file to me, not a partition... do they 
follow the same optimization rule?


More or less.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

On 03/08/2010 02:10 AM, Avi Kivity wrote:

On 03/07/2010 09:07 PM, Antoine Martin wrote:

Antoine, can you check this?  ltrace may help,

This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)   = 0x2a38d60
[pid 26883] memset(0x2a38d60, '\000', 48)= 0x2a38d60
[pid 26883] open64("./vm/media_fs", 540674, 00)  = 12
[pid 26883] posix_memalign(0x7fff92e40560, 512, 16384, -1, 48) = 0
[pid 26883] lseek64(12, 0, 2, 0x7f404f2f3e60, 4) = 0x133c4820600



Where's pread/preadv?  Did you use -f?


Yes I did.
ltrace -f bash ./vm/start.sh  trace.log


Well, some variant of pread should have shown up.  Maybe it's the 
ltrace -f race.

FYI: qemu booted fully under strace, but not under ltrace...


or run 'strings libc.so | grep pread'.


strings /lib/libc.so.6 | grep pread
preadv
preadv64
pread
__pread64_chk
__pread64
__pread_chk



So it does seem glibc emulates preadv.

Perhaps https://bugzilla.redhat.com/show_bug.cgi?id=563103 ?

This is a Gentoo box... but yes, this does look similar, doesn't it?


What version of glibc do you have installed?

Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Antoine Martin

On 03/08/2010 02:17 AM, Avi Kivity wrote:

On 03/07/2010 09:13 PM, Antoine Martin wrote:

What version of glibc do you have installed?


Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1



$ git show glibc-2.10~108 | head
commit e109c6124fe121618e42ba882e2a0af6e97b8efc
Author: Ulrich Drepper drep...@redhat.com
Date:   Fri Apr 3 19:57:16 2009 +

* misc/Makefile (routines): Add preadv, preadv64, pwritev, pwritev64.

* misc/Versions: Export preadv, preadv64, pwritev, pwritev64 for
GLIBC_2.10.
* misc/sys/uio.h: Declare preadv, preadv64, pwritev, pwritev64.
* sysdeps/unix/sysv/linux/kernel-features.h: Add entries for 
preadv


You might get away with rebuilding glibc against the 2.6.33 headers.


The latest kernel headers available in Gentoo (and they're masked unstable):
sys-kernel/linux-headers-2.6.32

So I think I will just keep using Christoph's patch until .33 hits portage.
Unless there's any reason not to? I would rather keep my system clean.
I can try it though, if that helps you clear things up?


Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-07 Thread Avi Kivity

On 03/07/2010 09:25 PM, Antoine Martin wrote:

On 03/08/2010 02:17 AM, Avi Kivity wrote:

On 03/07/2010 09:13 PM, Antoine Martin wrote:

What version of glibc do you have installed?


Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1



$ git show glibc-2.10~108 | head
commit e109c6124fe121618e42ba882e2a0af6e97b8efc
Author: Ulrich Drepper drep...@redhat.com
Date:   Fri Apr 3 19:57:16 2009 +

* misc/Makefile (routines): Add preadv, preadv64, pwritev, 
pwritev64.


* misc/Versions: Export preadv, preadv64, pwritev, pwritev64 for
GLIBC_2.10.
* misc/sys/uio.h: Declare preadv, preadv64, pwritev, pwritev64.
* sysdeps/unix/sysv/linux/kernel-features.h: Add entries for 
preadv


You might get away with rebuilding glibc against the 2.6.33 headers.

The latest kernel headers available in gentoo (and they're masked 
unstable):

sys-kernel/linux-headers-2.6.32

So I think I will just keep using Christoph's patch until .33 hits 
portage.

Unless there's any reason not to? I would rather keep my system clean.
I can try it though, if that helps you clear things up?


preadv/pwritev was actually introduced in 2.6.30.  Perhaps you last 
built glibc before that?  If so, a rebuild may be all that's necessary.




raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-06 Thread Antoine Martin

Hi,

With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843]  vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126
[2.693772] Buffer I/O error on device vdc, logical block 126
[2.693775] Buffer I/O error on device vdc, logical block 127
[2.693777] Buffer I/O error on device vdc, logical block 128
[2.693779] Buffer I/O error on device vdc, logical block 129
[2.693781] Buffer I/O error on device vdc, logical block 130
[2.693783] Buffer I/O error on device vdc, logical block 131
[2.693785] Buffer I/O error on device vdc, logical block 132
[2.693787] Buffer I/O error on device vdc, logical block 133
[2.693788] Buffer I/O error on device vdc, logical block 134
[2.693814] end_request: I/O error, dev vdc, sector 0
[3.144870] end_request: I/O error, dev vdc, sector 0
[3.499377] end_request: I/O error, dev vdc, sector 0
[3.523247] end_request: I/O error, dev vdc, sector 0
[3.547130] end_request: I/O error, dev vdc, sector 0
[3.550076] end_request: I/O error, dev vdc, sector 0

Works fine with kvm-88:
cp  /usr/src/KVM/kvm-88/pc-bios/*bin ./
cp /usr/src/KVM/kvm-88/x86_64-softmmu/qemu-system-x86_64 ./
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]

[1.650274]  vdc: unknown partition table
[  112.704164] EXT4-fs (vdc): mounted filesystem with ordered data mode

I've tried running as root, using the block device directly (as shown 
above) rather than using a softlink, etc.


Something broke.
Host and guest are both running 2.6.33 and latest KVM.

Cheers
Antoine



Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-06 Thread Michael Tokarev
Antoine Martin wrote:
 Hi,
 
 With qemu-kvm-0.12.3:
 ./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
 [1.882843]  vdc:
 [2.365154] udev: starting version 146
 [2.693768] end_request: I/O error, dev vdc, sector 126
 [2.693772] Buffer I/O error on device vdc, logical block 126
 [2.693775] Buffer I/O error on device vdc, logical block 127
 [2.693777] Buffer I/O error on device vdc, logical block 128
..
 [3.550076] end_request: I/O error, dev vdc, sector 0
 
 Works fine with kvm-88:
 cp  /usr/src/KVM/kvm-88/pc-bios/*bin ./
 cp /usr/src/KVM/kvm-88/x86_64-softmmu/qemu-system-x86_64 ./
 ./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
 
 [1.650274]  vdc: unknown partition table
 [  112.704164] EXT4-fs (vdc): mounted filesystem with ordered data mode
 
 I've tried running as root, using the block device directly (as shown
 above) rather than using a softlink, etc..
 
 Something broke.
 Host and guest are both running 2.6.33 and latest KVM.

https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599




Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-03-06 Thread Antoine Martin

On 03/07/2010 04:28 AM, Michael Tokarev wrote:

Antoine Martin wrote:
   

Hi,

With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843]  vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126
[2.693772] Buffer I/O error on device vdc, logical block 126
[2.693775] Buffer I/O error on device vdc, logical block 127
[2.693777] Buffer I/O error on device vdc, logical block 128
 

..
   

[3.550076] end_request: I/O error, dev vdc, sector 0

Works fine with kvm-88:
cp  /usr/src/KVM/kvm-88/pc-bios/*bin ./
cp /usr/src/KVM/kvm-88/x86_64-softmmu/qemu-system-x86_64 ./
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]

[1.650274]  vdc: unknown partition table
[  112.704164] EXT4-fs (vdc): mounted filesystem with ordered data mode

I've tried running as root, using the block device directly (as shown
above) rather than using a softlink, etc..

Something broke.
Host and guest are both running 2.6.33 and latest KVM.
 

https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599
   

The initial report is almost 8 weeks old!
Are data corruption and data loss somehow less important than the 
hundreds of patches that have been submitted since? Or is there a fix 
somewhere I've missed?


Cheers
Antoine