Poor LVM performance.

2012-12-27 Thread Andrew Holway
Hi,

I have been doing some testing with KVM and Virtuozzo (container-based 
virtualisation) on various storage devices and have some results I would like 
some help analyzing. I have a nice big ZFS box from Oracle (yes, evil, but 
Solaris NFS is amazing), with 10G and IB connecting it to my cluster. The 
cluster is four HP servers (E5-2670 & 144GB RAM), each with a RAID10 of 600k 
SAS drives. 

Please open these pictures side by side.

https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%202.50.33%20PM.png
https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%203.18.03%20PM.png

You will notice that KVM/LVM on the local RAID10 performs dramatically worse, 
whereas the container-based virtualisation is excellent and as fast as the NFS.

4, 8, 12, 16... VMs refers to the aggregate benchmark performance across that 
number of VMs: 4 = 1 VM on each node, 8 = 2 VMs on each node. "TPCC warehouses" 
is the number of TPC-C warehouses the benchmark used. One warehouse is about 
150MB, so 10 warehouses means roughly 1.5GB of data being held in the InnoDB 
buffer pool.
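The warehouse sizing works out as simple arithmetic (a back-of-envelope sketch; the 150MB-per-warehouse figure is the approximation stated above):

```shell
# Approximate InnoDB data volume for a given TPC-C warehouse count
# (150MB per warehouse, per the figure quoted in the text)
warehouses=10
echo "$((warehouses * 150)) MB"   # 1500 MB, i.e. roughly 1.5GB
```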

Why does LVM performance suffer so badly compared to a single-filesystem 
approach? What am I doing wrong? 
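One thing worth ruling out is the caching/AIO mode of the LV-backed drives: with cache=none and aio=threads on a raw LV every guest write goes straight to the RAID controller, while the NFS path can benefit from server-side caching. A diagnostic sketch for one host (the VM name and LV path below are placeholders, not from the original setup):

```shell
# What cache/aio settings is libvirt actually passing to qemu?
virsh dumpxml myvm | grep -A3 '<driver'        # "myvm" is a placeholder name

# Readahead on the backing LV and scheduler on the host RAID device
blockdev --getra /dev/mapper/vg0-myvm_root     # placeholder LV path
cat /sys/block/sda/queue/scheduler             # host RAID10 device

# Worth trying for raw LVs: aio=native with cache=none, e.g.
#   -drive file=/dev/vg0/myvm_root,if=none,id=drive0,format=raw,cache=none,aio=native
```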

Thanks,

Andrew
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


I/O scheduling on block devices

2012-12-18 Thread Andrew Holway
Hello,

I have an iSCSI storage array connected to 4 physical hosts. On these 4 hosts 
I have configured 40-odd logical volumes with CLVM; each logical volume is the 
root volume for a VM.

How should I set up I/O scheduling with this configuration? Performance is not 
great, and I have a feeling that the I/O schedulers in my VMs and the ones on 
my hosts are not playing nicely together.
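A commonly suggested starting point for stacked schedulers (a sketch only; the device names are examples, and any change should be benchmarked) is to keep the guests simple and let the host do the real ordering:

```shell
# Guest: noop (or deadline) - the guest elevator only reorders requests
# that the host's scheduler will reorder again anyway
echo noop > /sys/block/vda/queue/scheduler

# Host: deadline often behaves better than cfq with many concurrent VMs
echo deadline > /sys/block/sdb/queue/scheduler   # sdb = the iSCSI LUN (example)

# To make the guest setting persistent, add to the guest kernel command line:
#   elevator=noop
```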

Thanks,

Andrew




Re: Process blocked for more than 120 seconds.

2012-12-12 Thread Andrew Holway
Hi,

No the NFS is not hung and yes I can access the image on the host.

It just seems to happen occasionally with some VMs.

Thanks,

Andrew


On Dec 11, 2012, at 11:59 AM, Stefan Hajnoczi wrote:

> On Fri, Dec 7, 2012 at 1:09 PM, Andrew Holway  wrote:
>> -drive file=/rhev/data-center/3ecf6306-3fa6-11e2-b544-00215e253fcc/b2a3daf4-7315-4cd8-a076-4ab005db7410/images/8c3541ea-9b89-4837-b98a-ae97feae6765/c90ceaf1-9c3a-42e6-b1a2-939fc1403fcb,
>> if=none,id=drive-virtio-disk0,format=raw,serial=8c3541ea-9b89-4837-b98a-ae97feae6765,
>> cache=none,werror=stop,rerror=stop,aio=threads
>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0
> 
> Is NFS hung - can you access the image from the host?
> 
> Stefan




NFS mount options

2012-12-12 Thread Andrew Holway
Hello,

I have been using NFS with kvm for a little while and have been wondering about 
appropriate NFS mount options.

Could someone please explain what mount options should be used and why? Also, 
what are the differences between NFSv3 and NFSv4 with regard to KVM?
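For reference, a commonly suggested starting point for an NFS-backed image store looks like the fstab entry below (a sketch only: the hostname and paths are placeholders, and values like rsize/wsize should be checked against what the server actually negotiates):

```shell
# /etc/fstab - NFSv3 example for a KVM image store (placeholder names)
# hard: block rather than error out on server outages (safer for VM images)
# noatime: skip access-time updates on image files
nfsserver:/export/vmimages  /var/lib/libvirt/images  nfs \
    vers=3,tcp,hard,intr,rsize=65536,wsize=65536,noatime  0 0
```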

Thanks,

Andrew


Re: Process blocked for more than 120 seconds.

2012-12-07 Thread Andrew Holway
Sorry, I forgot to mention that my images are created on a Solaris-based 
NFS/ZFS server.

Thanks,

Andrew


On Dec 7, 2012, at 1:09 PM, Andrew Holway wrote:

> Hello,
> 
> I have been using RHEV 3.1 and created a few VMs. I have a provisioning 
> system that boots machines via PXE with CentOS 6.3 images.
> 
> It creates the following:
> 
> /dev/vda1 on / type ext3 (rw,noatime,nodiratime)
> /dev/vda6 on /local type ext3 (rw,noatime,nodiratime)
> /dev/vda3 on /tmp type ext3 (rw,nosuid,nodev,noatime,nodiratime)
> /dev/vda2 on /var type ext3 (rw,noatime,nodiratime)
> 
> With:  Kernel:  2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 
> x86_64 x86_64 x86_64 GNU/Linux
> 
> Thanks,
> 
> Andrew
> 
> 
> Dec  7 12:22:01 cheese04 kernel: imklog 4.6.2, log source = /proc/kmsg 
> started.
> Dec  7 12:22:01 cheese04 rsyslogd: [origin software="rsyslogd" 
> swVersion="4.6.2" x-pid="2790" x-info="http://www.rsyslog.com";] (re)start
> Dec  7 12:29:56 cheese04 kernel: INFO: task kjournald:2520 blocked for more 
> than 120 seconds.
> Dec  7 12:29:56 cheese04 kernel: "echo 0 > 
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec  7 12:29:56 cheese04 kernel: kjournald D  0  2520 
>  2 0x0080
> Dec  7 12:29:56 cheese04 kernel: 880101005d50 0046 
> 81a8d020 8160b400
> Dec  7 12:29:56 cheese04 kernel:  88011fc23080 
> 880101005da0 8105b483
> Dec  7 12:29:56 cheese04 kernel: 880117f99af8 880101005fd8 
> fb88 880117f99af8
> Dec  7 12:29:56 cheese04 kernel: Call Trace:
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> perf_event_task_sched_out+0x33/0x80
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> prepare_to_wait+0x4e/0x80
> Dec  7 12:29:56 cheese04 kernel: [] 
> journal_commit_transaction+0x161/0x1310 [jbd]
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> autoremove_wake_function+0x0/0x40
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> try_to_del_timer_sync+0x7b/0xe0
> Dec  7 12:29:56 cheese04 kernel: [] kjournald+0xe8/0x250 
> [jbd]
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> autoremove_wake_function+0x0/0x40
> Dec  7 12:29:56 cheese04 kernel: [] ? kjournald+0x0/0x250 
> [jbd]
> Dec  7 12:29:56 cheese04 kernel: [] kthread+0x96/0xa0
> Dec  7 12:29:56 cheese04 kernel: [] child_rip+0xa/0x20
> Dec  7 12:29:56 cheese04 kernel: [] ? kthread+0x0/0xa0
> Dec  7 12:29:56 cheese04 kernel: [] ? child_rip+0x0/0x20
> Dec  7 12:29:56 cheese04 kernel: INFO: task master:3023 blocked for more than 
> 120 seconds.
> Dec  7 12:29:56 cheese04 kernel: "echo 0 > 
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec  7 12:29:56 cheese04 kernel: masterD  0  3023 
>  1 0x0084
> Dec  7 12:29:56 cheese04 kernel: 8801013dd968 0082 
>  0792d63945c4
> Dec  7 12:29:56 cheese04 kernel: 8801013dd928 880117e21e50 
> 0007884e af3e39ec
> Dec  7 12:29:56 cheese04 kernel: 8800c4282638 8801013ddfd8 
> fb88 8800c4282638
> Dec  7 12:29:56 cheese04 kernel: Call Trace:
> Dec  7 12:29:56 cheese04 kernel: [] ? sync_buffer+0x0/0x50
> Dec  7 12:29:56 cheese04 kernel: [] io_schedule+0x73/0xc0
> Dec  7 12:29:56 cheese04 kernel: [] sync_buffer+0x40/0x50
> Dec  7 12:29:56 cheese04 kernel: [] 
> __wait_on_bit_lock+0x5a/0xc0
> Dec  7 12:29:56 cheese04 kernel: [] ? sync_buffer+0x0/0x50
> Dec  7 12:29:56 cheese04 kernel: [] 
> out_of_line_wait_on_bit_lock+0x78/0x90
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> wake_bit_function+0x0/0x50
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> pvclock_clocksource_read+0x58/0xd0
> Dec  7 12:29:56 cheese04 kernel: [] __lock_buffer+0x36/0x40
> Dec  7 12:29:56 cheese04 kernel: [] 
> do_get_write_access+0x483/0x500 [jbd]
> Dec  7 12:29:56 cheese04 kernel: [] ? __getblk+0x2c/0x2e0
> Dec  7 12:29:56 cheese04 kernel: [] 
> journal_get_write_access+0x31/0x50 [jbd]
> Dec  7 12:29:56 cheese04 kernel: [] 
> __ext3_journal_get_write_access+0x2d/0x60 [ext3]
> Dec  7 12:29:56 cheese04 kernel: [] 
> ext3_reserve_inode_write+0x7b/0xa0 [ext3]
> Dec  7 12:29:56 cheese04 kernel: [] 
> ext3_mark_inode_dirty+0x48/0xa0 [ext3]
> Dec  7 12:29:56 cheese04 kernel: [] 
> ext3_dirty_inode+0x61/0xa0 [ext3]
> Dec  7 12:29:56 cheese04 kernel: [] 
> __mark_inode_dirty+0x3b/0x160
> Dec  7 12:29:56 cheese04 kernel: [] 
> file_update_time+0xf2/0x170
> Dec  7 12:29:56 cheese04 kernel: [] pipe_write+0x2d2/0x650
> Dec  7 12:29:56 cheese04 kernel: [] do_sync_write+0xfa/0x140
> Dec  7 12:29:56 cheese04 kernel: [] ? 
> autoremove_wake_function+0x0/0x40

Process blocked for more than 120 seconds.

2012-12-07 Thread Andrew Holway
Hello,

I have been using RHEV 3.1 and created a few VMs. I have a provisioning system 
that boots machines via PXE with CentOS 6.3 images.

It creates the following:

/dev/vda1 on / type ext3 (rw,noatime,nodiratime)
/dev/vda6 on /local type ext3 (rw,noatime,nodiratime)
/dev/vda3 on /tmp type ext3 (rw,nosuid,nodev,noatime,nodiratime)
/dev/vda2 on /var type ext3 (rw,noatime,nodiratime)

With:  Kernel:  2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 
x86_64 x86_64 x86_64 GNU/Linux

Thanks,

Andrew


Dec  7 12:22:01 cheese04 kernel: imklog 4.6.2, log source = /proc/kmsg started.
Dec  7 12:22:01 cheese04 rsyslogd: [origin software="rsyslogd" 
swVersion="4.6.2" x-pid="2790" x-info="http://www.rsyslog.com";] (re)start
Dec  7 12:29:56 cheese04 kernel: INFO: task kjournald:2520 blocked for more 
than 120 seconds.
Dec  7 12:29:56 cheese04 kernel: "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  7 12:29:56 cheese04 kernel: kjournald D  0  2520   
   2 0x0080
Dec  7 12:29:56 cheese04 kernel: 880101005d50 0046 
81a8d020 8160b400
Dec  7 12:29:56 cheese04 kernel:  88011fc23080 
880101005da0 8105b483
Dec  7 12:29:56 cheese04 kernel: 880117f99af8 880101005fd8 
fb88 880117f99af8
Dec  7 12:29:56 cheese04 kernel: Call Trace:
Dec  7 12:29:56 cheese04 kernel: [] ? 
perf_event_task_sched_out+0x33/0x80
Dec  7 12:29:56 cheese04 kernel: [] ? 
prepare_to_wait+0x4e/0x80
Dec  7 12:29:56 cheese04 kernel: [] 
journal_commit_transaction+0x161/0x1310 [jbd]
Dec  7 12:29:56 cheese04 kernel: [] ? 
autoremove_wake_function+0x0/0x40
Dec  7 12:29:56 cheese04 kernel: [] ? 
try_to_del_timer_sync+0x7b/0xe0
Dec  7 12:29:56 cheese04 kernel: [] kjournald+0xe8/0x250 [jbd]
Dec  7 12:29:56 cheese04 kernel: [] ? 
autoremove_wake_function+0x0/0x40
Dec  7 12:29:56 cheese04 kernel: [] ? kjournald+0x0/0x250 
[jbd]
Dec  7 12:29:56 cheese04 kernel: [] kthread+0x96/0xa0
Dec  7 12:29:56 cheese04 kernel: [] child_rip+0xa/0x20
Dec  7 12:29:56 cheese04 kernel: [] ? kthread+0x0/0xa0
Dec  7 12:29:56 cheese04 kernel: [] ? child_rip+0x0/0x20
Dec  7 12:29:56 cheese04 kernel: INFO: task master:3023 blocked for more than 
120 seconds.
Dec  7 12:29:56 cheese04 kernel: "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  7 12:29:56 cheese04 kernel: masterD  0  3023   
   1 0x0084
Dec  7 12:29:56 cheese04 kernel: 8801013dd968 0082 
 0792d63945c4
Dec  7 12:29:56 cheese04 kernel: 8801013dd928 880117e21e50 
0007884e af3e39ec
Dec  7 12:29:56 cheese04 kernel: 8800c4282638 8801013ddfd8 
fb88 8800c4282638
Dec  7 12:29:56 cheese04 kernel: Call Trace:
Dec  7 12:29:56 cheese04 kernel: [] ? sync_buffer+0x0/0x50
Dec  7 12:29:56 cheese04 kernel: [] io_schedule+0x73/0xc0
Dec  7 12:29:56 cheese04 kernel: [] sync_buffer+0x40/0x50
Dec  7 12:29:56 cheese04 kernel: [] 
__wait_on_bit_lock+0x5a/0xc0
Dec  7 12:29:56 cheese04 kernel: [] ? sync_buffer+0x0/0x50
Dec  7 12:29:56 cheese04 kernel: [] 
out_of_line_wait_on_bit_lock+0x78/0x90
Dec  7 12:29:56 cheese04 kernel: [] ? 
wake_bit_function+0x0/0x50
Dec  7 12:29:56 cheese04 kernel: [] ? 
pvclock_clocksource_read+0x58/0xd0
Dec  7 12:29:56 cheese04 kernel: [] __lock_buffer+0x36/0x40
Dec  7 12:29:56 cheese04 kernel: [] 
do_get_write_access+0x483/0x500 [jbd]
Dec  7 12:29:56 cheese04 kernel: [] ? __getblk+0x2c/0x2e0
Dec  7 12:29:56 cheese04 kernel: [] 
journal_get_write_access+0x31/0x50 [jbd]
Dec  7 12:29:56 cheese04 kernel: [] 
__ext3_journal_get_write_access+0x2d/0x60 [ext3]
Dec  7 12:29:56 cheese04 kernel: [] 
ext3_reserve_inode_write+0x7b/0xa0 [ext3]
Dec  7 12:29:56 cheese04 kernel: [] 
ext3_mark_inode_dirty+0x48/0xa0 [ext3]
Dec  7 12:29:56 cheese04 kernel: [] 
ext3_dirty_inode+0x61/0xa0 [ext3]
Dec  7 12:29:56 cheese04 kernel: [] 
__mark_inode_dirty+0x3b/0x160
Dec  7 12:29:56 cheese04 kernel: [] 
file_update_time+0xf2/0x170
Dec  7 12:29:56 cheese04 kernel: [] pipe_write+0x2d2/0x650
Dec  7 12:29:56 cheese04 kernel: [] do_sync_write+0xfa/0x140
Dec  7 12:29:56 cheese04 kernel: [] ? 
autoremove_wake_function+0x0/0x40
Dec  7 12:29:56 cheese04 kernel: [] ? 
selinux_file_permission+0xfb/0x150
Dec  7 12:29:56 cheese04 kernel: [] ? 
security_file_permission+0x16/0x20
Dec  7 12:29:56 cheese04 kernel: [] vfs_write+0xb8/0x1a0
Dec  7 12:29:56 cheese04 kernel: [] sys_write+0x51/0x90
Dec  7 12:29:56 cheese04 kernel: [] 
system_call_fastpath+0x16/0x1b

[root@cheese04 ~]# dumpe2fs /dev/vda1 
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   
Last mounted on:  
Filesystem UUID:  79efe567-75c2-4fd1-800c-429d0530b945
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode dir_index filetype 
needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash 
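For what it's worth, when ext3 guests on NFS-backed images stall in jbd like this, one knob people commonly experiment with is host writeback tuning, on the assumption that large bursts of dirty pages are triggering the long flushes (a sketch; the values are illustrative starting points, not recommendations):

```shell
# /etc/sysctl.conf fragment - start writeback earlier, in smaller bursts
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

# The hung-task warning itself can be silenced (not fixed), as the log says:
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```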

Re: KVM on NFS

2012-10-17 Thread Andrew Holway


> O_DIRECT is good.  I/O schedulers don't affect NFS so no need to tune
> anything on the host.  You might experiment with switching to the
> deadline scheduler in the guest.

I'll give it a go. Any ideas on how I should be tuning my NFS?

> 
> 
> -- 
> error compiling committee.c: too many arguments to function




KVM on NFS

2012-10-17 Thread Andrew Holway
Hello,

I am testing KVM on an Oracle NFS box that I have.

Does the list have any advice on best practice? I remember reading that there 
are things you can do with I/O schedulers to make it more efficient.

My VMs will primarily be running MySQL databases. I am currently using O_DIRECT.
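Concretely, O_DIRECT on the host corresponds to cache=none on the qemu drive; a sketch of the relevant options (the image path is a placeholder, and aio=native is a suggestion to evaluate, not something from the original setup):

```shell
# cache=none opens the image with O_DIRECT, bypassing the host page cache;
# aio=native avoids the thread-pool AIO emulation for raw images
-drive file=/mnt/nfs/vm001.img,if=none,id=drive0,format=raw,cache=none,aio=native
-device virtio-blk-pci,drive=drive0,id=virtio-disk0

# Inside the guest, deadline is often suggested for database workloads:
#   echo deadline > /sys/block/vda/queue/scheduler
```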

Thanks,

Andrew





Re: NFS over RDMA small block DIRECT_IO bug

2012-09-18 Thread Andrew Holway
Hi Steve,

Do you think these patches will make their way into the Red Hat kernel sometime 
soon?

What is the state of NFS over RDMA support at Red Hat?

Thanks,

Andrew


On Sep 11, 2012, at 7:03 PM, Steve Dickson wrote:

> 
> 
> On 09/04/2012 05:31 AM, Andrew Holway wrote:
>> Hello.
>> 
>> # Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also 
>> seem relevant to libvirt. #
>> 
>> I have a Centos 6.2 server and Centos 6.2 client.
>> 
>> [root@store ~]# cat /etc/exports 
>> /dev/shm 10.149.0.0/16(rw,fsid=1,no_root_squash,insecure)
>> (I have tried with non-tmpfs targets also)
>> 
>> 
>> [root@node001 ~]# cat /etc/fstab 
>> store.ibnet:/dev/shm /mnt nfs  
>> rdma,port=2050,defaults 0 0
>> 
>> 
>> I wrote a little for loop one liner that dd'd the centos net install image 
>> to a file called 'hello' then checksummed that file. Each iteration uses a 
>> different block size.
>> 
>> Non DIRECT_IO seems to work fine. DIRECT_IO with 512byte, 1K and 2K block 
>> sizes get corrupted.
>> 
>> I want to run my KVM guests on top of NFS over RDMA. My guests cannot create 
>> filesystems.
>> 
>> Thanks,
>> 
>> Andrew.
>> 
>> bug report: https://bugzilla.linux-nfs.org/show_bug.cgi?id=228
> Well it appears the RHEL6 kernels are lacking a couple patches that might
> help with this
> 
> 5c635e09 RPCRDMA: Fix FRMR registration/invalidate handling.
> 9b78145c xprtrdma: Remove assumption that each segment is <= PAGE_SIZE
> 
> I can only imagine that CentOS 6.2 might be lacking these too... ;-)
> 
> steved.




Re: NFS over RDMA small block DIRECT_IO bug

2012-09-06 Thread Andrew Holway

On Sep 5, 2012, at 4:02 PM, Avi Kivity wrote:

> On 09/04/2012 03:04 PM, Myklebust, Trond wrote:
>> On Tue, 2012-09-04 at 11:31 +0200, Andrew Holway wrote:
>>> Hello.
>>> 
>>> # Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also 
>>> seem relevant to libvirt. #
>>> 
>>> I have a Centos 6.2 server and Centos 6.2 client.
>>> 
>>> [root@store ~]# cat /etc/exports 
>>> /dev/shm 10.149.0.0/16(rw,fsid=1,no_root_squash,insecure)
>>> (I have tried with non-tmpfs targets also)
>>> 
>>> 
>>> [root@node001 ~]# cat /etc/fstab 
>>> store.ibnet:/dev/shm /mnt nfs  
>>> rdma,port=2050,defaults 0 0
>>> 
>>> 
>>> I wrote a little for loop one liner that dd'd the centos net install image 
>>> to a file called 'hello' then checksummed that file. Each iteration uses a 
>>> different block size.
>>> 
>>> Non DIRECT_IO seems to work fine. DIRECT_IO with 512byte, 1K and 2K block 
>>> sizes get corrupted.
>> 
>> 
>> That is expected behaviour. DIRECT_IO over RDMA needs to be page aligned
>> so that it can use the more efficient RDMA READ and RDMA WRITE memory
>> semantics (instead of the SEND/RECEIVE channel semantics).
> 
> Shouldn't subpage requests fail then?  O_DIRECT block requests fail for
> subsector writes, instead of corrupting your data.

But silent data corruption is so much fun!!


Re: NFS over RDMA small block DIRECT_IO bug

2012-09-04 Thread Andrew Holway
> 
> That is expected behaviour. DIRECT_IO over RDMA needs to be page aligned
> so that it can use the more efficient RDMA READ and RDMA WRITE memory
> semantics (instead of the SEND/RECEIVE channel semantics).

Yes, I think I am understanding that now.
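The rule Trond describes comes down to alignment: only transfers that are a whole number of pages go over the efficient RDMA READ/WRITE path, so sub-page block sizes fall onto the path that (here) corrupts data. A tiny sketch of the check, assuming 4 KiB pages:

```shell
# A DIRECT_IO transfer is page-aligned iff its size is a multiple of the
# page size (4096 bytes assumed here, the typical x86-64 page size)
for bs in 512 1024 2048 4096 8192; do
    if [ $((bs % 4096)) -eq 0 ]; then
        echo "$bs: page-aligned"
    else
        echo "$bs: sub-page (the corrupting case above)"
    fi
done
```

This matches the dd results in the thread: 512, 1K and 2K block sizes produced differing checksums, while 4K and up were fine.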

I need to find a way of getting around the libvirt issue.

http://lists.gnu.org/archive/html/qemu-devel/2011-12/msg01570.html

Thanks,

Andrew


> 
>> I want to run my KVM guests on top of NFS over RDMA. My guests cannot create 
>> filesystems.
>> 
>> Thanks,
>> 
>> Andrew.
>> 
>> bug report: https://bugzilla.linux-nfs.org/show_bug.cgi?id=228
>> 
>> [root@node001 mnt]# for f in 512 1024 2048 4096 8192 16384 32768 65536 
>> 131072; do dd bs="$f" if=CentOS-6.3-x86_64-netinstall.iso of=hello 
>> iflag=direct oflag=direct && md5sum hello && rm -f hello; done
>> 
>> 409600+0 records in
>> 409600+0 records out
>> 209715200 bytes (210 MB) copied, 62.3649 s, 3.4 MB/s
>> aadd0ffe3c9dfa35d8354e99ecac9276  hello -- 512 byte block 
>> 
>> 204800+0 records in
>> 204800+0 records out
>> 209715200 bytes (210 MB) copied, 41.3876 s, 5.1 MB/s
>> 336f6da78f93dab591edc18da81f002e  hello -- 1K block
>> 
>> 102400+0 records in
>> 102400+0 records out
>> 209715200 bytes (210 MB) copied, 21.1712 s, 9.9 MB/s
>> f4cefe0a05c9b47ba68effdb17dc95d6  hello -- 2k block
>> 
>> 51200+0 records in
>> 51200+0 records out
>> 209715200 bytes (210 MB) copied, 10.9631 s, 19.1 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello -- 4k block
>> 
>> 25600+0 records in
>> 25600+0 records out
>> 209715200 bytes (210 MB) copied, 5.4136 s, 38.7 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello
>> 
>> 12800+0 records in
>> 12800+0 records out
>> 209715200 bytes (210 MB) copied, 3.1448 s, 66.7 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello
>> 
>> 6400+0 records in
>> 6400+0 records out
>> 209715200 bytes (210 MB) copied, 1.77304 s, 118 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello
>> 
>> 3200+0 records in
>> 3200+0 records out
>> 209715200 bytes (210 MB) copied, 1.4331 s, 146 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello
>> 
>> 1600+0 records in
>> 1600+0 records out
>> 209715200 bytes (210 MB) copied, 0.922167 s, 227 MB/s
>> 690138908de516b6e5d7d180d085c3f3  hello
>> 
>> 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer
> 
> NetApp
> trond.mykleb...@netapp.com
> www.netapp.com
> 
> 




NFS over RDMA small block DIRECT_IO bug

2012-09-04 Thread Andrew Holway
Hello.

# Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also seem 
relevant to libvirt. #

I have a Centos 6.2 server and Centos 6.2 client.

[root@store ~]# cat /etc/exports 
/dev/shm 10.149.0.0/16(rw,fsid=1,no_root_squash,insecure)
(I have tried with non-tmpfs targets also)


[root@node001 ~]# cat /etc/fstab 
store.ibnet:/dev/shm /mnt nfs  
rdma,port=2050,defaults 0 0


I wrote a little for loop one liner that dd'd the centos net install image to a 
file called 'hello' then checksummed that file. Each iteration uses a different 
block size.

Non DIRECT_IO seems to work fine. DIRECT_IO with 512byte, 1K and 2K block sizes 
get corrupted.

I want to run my KVM guests on top of NFS over RDMA. My guests cannot create 
filesystems.

Thanks,

Andrew.

bug report: https://bugzilla.linux-nfs.org/show_bug.cgi?id=228

[root@node001 mnt]# for f in 512 1024 2048 4096 8192 16384 32768 65536 131072; 
do dd bs="$f" if=CentOS-6.3-x86_64-netinstall.iso of=hello iflag=direct 
oflag=direct && md5sum hello && rm -f hello; done

409600+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 62.3649 s, 3.4 MB/s
aadd0ffe3c9dfa35d8354e99ecac9276  hello -- 512 byte block 

204800+0 records in
204800+0 records out
209715200 bytes (210 MB) copied, 41.3876 s, 5.1 MB/s
336f6da78f93dab591edc18da81f002e  hello -- 1K block

102400+0 records in
102400+0 records out
209715200 bytes (210 MB) copied, 21.1712 s, 9.9 MB/s
f4cefe0a05c9b47ba68effdb17dc95d6  hello -- 2k block

51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 10.9631 s, 19.1 MB/s
690138908de516b6e5d7d180d085c3f3  hello -- 4k block

25600+0 records in
25600+0 records out
209715200 bytes (210 MB) copied, 5.4136 s, 38.7 MB/s
690138908de516b6e5d7d180d085c3f3  hello

12800+0 records in
12800+0 records out
209715200 bytes (210 MB) copied, 3.1448 s, 66.7 MB/s
690138908de516b6e5d7d180d085c3f3  hello

6400+0 records in
6400+0 records out
209715200 bytes (210 MB) copied, 1.77304 s, 118 MB/s
690138908de516b6e5d7d180d085c3f3  hello

3200+0 records in
3200+0 records out
209715200 bytes (210 MB) copied, 1.4331 s, 146 MB/s
690138908de516b6e5d7d180d085c3f3  hello

1600+0 records in
1600+0 records out
209715200 bytes (210 MB) copied, 0.922167 s, 227 MB/s
690138908de516b6e5d7d180d085c3f3  hello




Re: NFSoRDMA not working with KVM when cache disabled

2012-09-03 Thread Andrew Holway
> 
> and report which (if any) of the output files (x1, x2, y1, y2) are
> corrupted, by comparing them against the original.  This will tell us
> whether O_DIRECT is broken, or 512 byte block size, or neither.
> 


Looks like you were directly on the money there. 512, 1K and 2K O_DIRECT looks 
broken.


[root@node001 mnt]# for f in 512 1024 2048 4096 8192 16384 32768 65536 131072; 
do dd bs="$f" if=CentOS-6.3-x86_64-netinstall.iso of=hello && md5sum hello && 
rm -f hello; done
409600+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 0.774718 s, 271 MB/s
690138908de516b6e5d7d180d085c3f3  hello
204800+0 records in
204800+0 records out
209715200 bytes (210 MB) copied, 0.609578 s, 344 MB/s
690138908de516b6e5d7d180d085c3f3  hello
102400+0 records in
102400+0 records out
209715200 bytes (210 MB) copied, 0.490421 s, 428 MB/s
690138908de516b6e5d7d180d085c3f3  hello
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 0.440156 s, 476 MB/s
690138908de516b6e5d7d180d085c3f3  hello
25600+0 records in
25600+0 records out
209715200 bytes (210 MB) copied, 0.4229 s, 496 MB/s
690138908de516b6e5d7d180d085c3f3  hello
12800+0 records in
12800+0 records out
209715200 bytes (210 MB) copied, 0.40914 s, 513 MB/s
690138908de516b6e5d7d180d085c3f3  hello
6400+0 records in
6400+0 records out
209715200 bytes (210 MB) copied, 0.427247 s, 491 MB/s
690138908de516b6e5d7d180d085c3f3  hello
3200+0 records in
3200+0 records out
209715200 bytes (210 MB) copied, 0.411776 s, 509 MB/s
690138908de516b6e5d7d180d085c3f3  hello
1600+0 records in
1600+0 records out
209715200 bytes (210 MB) copied, 0.417098 s, 503 MB/s
690138908de516b6e5d7d180d085c3f3  hello



[root@node001 mnt]# for f in 512 1024 2048 4096 8192 16384 32768 65536 131072; 
do dd bs="$f" if=CentOS-6.3-x86_64-netinstall.iso of=hello iflag=direct 
oflag=direct && md5sum hello && rm -f hello; done
409600+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 62.3649 s, 3.4 MB/s
aadd0ffe3c9dfa35d8354e99ecac9276  hello
204800+0 records in
204800+0 records out
209715200 bytes (210 MB) copied, 41.3876 s, 5.1 MB/s
336f6da78f93dab591edc18da81f002e  hello
102400+0 records in
102400+0 records out
209715200 bytes (210 MB) copied, 21.1712 s, 9.9 MB/s
f4cefe0a05c9b47ba68effdb17dc95d6  hello
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 10.9631 s, 19.1 MB/s
690138908de516b6e5d7d180d085c3f3  hello
25600+0 records in
25600+0 records out
209715200 bytes (210 MB) copied, 5.4136 s, 38.7 MB/s
690138908de516b6e5d7d180d085c3f3  hello
12800+0 records in
12800+0 records out
209715200 bytes (210 MB) copied, 3.1448 s, 66.7 MB/s
690138908de516b6e5d7d180d085c3f3  hello
6400+0 records in
6400+0 records out
209715200 bytes (210 MB) copied, 1.77304 s, 118 MB/s
690138908de516b6e5d7d180d085c3f3  hello
3200+0 records in
3200+0 records out
209715200 bytes (210 MB) copied, 1.4331 s, 146 MB/s
690138908de516b6e5d7d180d085c3f3  hello
1600+0 records in
1600+0 records out
209715200 bytes (210 MB) copied, 0.922167 s, 227 MB/s
690138908de516b6e5d7d180d085c3f3  hello

> 
> -- 
> error compiling committee.c: too many arguments to function




Re: [Bright Cluster Manager Support #2699] NFSoRDMA not working with KVM when cache disabled

2012-09-01 Thread Andrew Holway
Hi,

That is a FULL install (I think).

I create a new virtual machine each time I test it.

Ta,

Andrew


On Aug 31, 2012, at 11:20 PM, Martijn de Vries wrote:

> Hi Andrew,
> 
> That's pretty strange. I am not a KVM expert, so I don't know what happens
> under the hood when you disable cache. Have you tried doing a FULL install?
> 
> Best regards,
> 
> Martijn
> 
> On Fri, Aug 31, 2012 at 7:06 PM, Andrew Holway
> wrote:
> 
>> 
>> Fri Aug 31 19:06:00 2012: Request 2699 was acted upon.
>> Transaction: Ticket created by a.hol...@syseleven.de
>>   Queue: Bright
>> Subject: NFSoRDMA not working with KVM when cache disabled
>>   Owner: Nobody
>>  Requestors: a.hol...@syseleven.de
>>  Status: new
>> Ticket <URL: http://support.brightcomputing.com/rt/Ticket/Display.html?id=2699 >
>> 
>> 
>> Hi,
>> 
>> I am trying to host KVM machines on an NFSoRDMA mount.
>> 
>> This works:
>> 
>> -drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0
>> 
>> This Doesn't!
>> 
>> -drive
>> file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
>> 
>> Any ideas why this could be?
>> 
>> Thanks,
>> 
>> Andrew
>> 
>> 
>> 
>> dmesg:
>> 
>> vda:
>> vda: vda1
>> vda: vda1
>> vda: vda1 vda2
>> vda: vda1 vda2
>> vda: vda1 vda2 vda3
>> vda: vda1 vda2 vda3
>> vda: vda1 vda2 vda3 vda4 < >
>> vda: vda1 vda2 vda3 vda4 < >
>> vda: vda1 vda2 vda3 vda4 < vda5 >
>> vda: vda1 vda2 vda3 vda4 < vda5 >
>> vda: vda1 vda2 vda3 vda4 < vda5 vda6 >
>> vda: vda1 vda2 vda3 vda4 < vda5 vda6 >
>> Adding 15998968k swap on /dev/vda5.  Priority:-1 extents:1 across:15998968k
>> EXT3-fs (vda1): error: can't find ext3 filesystem on dev vda1.
>> 
>> /var/log/messages
>> 
>> Aug 31 18:53:46 (none) node-installer: Mounting disks.
>> Aug 31 18:53:46 (none) node-installer: Updating device status: mounting
>> disks
>> Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/sda': not
>> found
>> Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/hda': not
>> found
>> Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/vda': found
>> Aug 31 18:53:46 (none) node-installer: swapon /dev/vda5
>> Aug 31 18:53:46 (none) node-installer: mkdir -p /localdisk/
>> Aug 31 18:53:46 (none) kernel: Adding 15998968k swap on /dev/vda5.
>> Priority:-1 extents:1 across:15998968k
>> Aug 31 18:53:46 (none) node-installer: Mounting /dev/vda1 on /localdisk/
>> Aug 31 18:53:46 (none) node-installer: mount -t ext3 -o
>> defaults,noatime,nodiratime /dev/vda1 /localdisk/
>> Aug 31 18:53:46 (none) node-installer: mount: wrong fs type, bad option,
>> bad superblock on /dev/vda1,
>> Aug 31 18:53:46 (none) node-installer:missing codepage or helper
>> program, or other error
>> Aug 31 18:53:46 (none) node-installer:In some cases useful info is
>> found in syslog - try
>> Aug 31 18:53:46 (none) node-installer:dmesg | tail  or so
>> Aug 31 18:53:46 (none) node-installer:
>> Aug 31 18:53:46 (none) node-installer: Command failed.
>> Aug 31 18:53:46 (none) node-installer: Running: "mount -t ext3 -o
>> defaults,noatime,nodiratime /dev/vda1 /localdisk/" failed:
>> Aug 31 18:53:46 (none) node-installer: Non zero exit code: 32
>> Aug 31 18:53:46 (none) kernel: EXT3-fs (vda1): error: can't find ext3
>> filesystem on dev vda1.
>> Aug 31 18:53:46 (none) node-installer: Failed to mount disks. (Exit code
>> 12, signal 0)
>> Aug 31 18:53:46 (none) node-installer: There was a fatal problem. This
>> node can not be installed until the problem is corrected.
>> Aug 31 18:53:46 (none) node-installer: The error was: failed to mount disks
>> Aug 31 18:53:46 (none) node-installer: Updating device status: failed to
>> mount disks
>> 
>> Aug 31 18:53:28 (none) node-installer: Creating primary partition
>> /dev/vda3.
>> Aug 31 18:53:28 (none) node-installer: parted -s -- /dev/vda mkpart
>> primary ext3 22530 24578
>> Aug 31 18:53:28 (none) kernel: vda: vda1 vda2 vda3
>> Aug 31 18:53:28 (none) node-installer: dd if=/dev/zero of=/dev/vda3 bs=1k
>> count=4
>> Aug 31 18:53:28 (none) node-installer: 4

Re: NFSoRDMA not working with KVM when cache disabled

2012-08-31 Thread Andrew Holway
A bit more:

[ root@vm001 ~]# dumpe2fs /dev/vda1
dumpe2fs 1.41.12 (17-May-2010)
dumpe2fs: Bad magic number in super-block while trying to open /dev/vda1
Couldn't find valid filesystem superblock.
[ root@vm001 ~]# fdisk -ul

Disk /dev/vda: 48.3 GB, 48318382080 bytes
255 heads, 63 sectors/track, 5874 cylinders, total 94371840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00098bc1

   Device Boot  Start End  Blocks   Id  System
/dev/vda1   140002000   83  Linux
/dev/vda24000153644001279 1999872   83  Linux
/dev/vda34400332848003071 1999872   83  Linux
/dev/vda4480051209437183923183360f  W95 Ext'd (LBA)
/dev/vda5480071688000511915998976   82  Linux swap / Solaris
/dev/vda68000716894371839 7182336   83  Linux

On Aug 31, 2012, at 7:05 PM, Andrew Holway wrote:

> Hi,
> 
> I am trying to host KVM machines on an NFSoRDMA mount.
> 
> This works:
> 
> -drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0
> 
> This Doesn't!
> 
> -drive 
> file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw,cache=none 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
> 
> Any ideas why this could be?
> 
> Thanks,
> 
> Andrew
> 

NFSoRDMA not working with KVM when cache disabled

2012-08-31 Thread Andrew Holway
Hi,

I am trying to host KVM machines on an NFSoRDMA mount.

This works:

-drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0

This Doesn't!

-drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw,cache=none 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2

Any ideas why this could be?
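The difference between the two invocations is the cache mode: `cache=none` makes QEMU open the image with `O_DIRECT`, so the underlying filesystem (here the NFSoRDMA mount) must support direct I/O, which some NFS client/transport combinations of this era did not. A rough way to probe a mount for direct-I/O support (the mount point is an assumption; pass your own as the first argument):

```shell
# Probe whether a filesystem accepts O_DIRECT writes, which QEMU's
# cache=none requires.  DIR defaults to the current directory; pass the
# NFS mount point (e.g. /mnt) as the first argument.
DIR=${1:-.}
F="$DIR/directio-probe.$$"
if dd if=/dev/zero of="$F" bs=4096 count=1 oflag=direct 2>/dev/null; then
    echo "direct I/O OK on $DIR"
else
    echo "direct I/O unsupported on $DIR"
fi
rm -f "$F"
```

If the probe fails on the mount, the same `-drive ...,cache=none` line will fail there too.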

Thanks,

Andrew



dmesg:

 vda:
 vda: vda1
 vda: vda1
 vda: vda1 vda2
 vda: vda1 vda2
 vda: vda1 vda2 vda3
 vda: vda1 vda2 vda3
 vda: vda1 vda2 vda3 vda4 < >
 vda: vda1 vda2 vda3 vda4 < >
 vda: vda1 vda2 vda3 vda4 < vda5 >
 vda: vda1 vda2 vda3 vda4 < vda5 >
 vda: vda1 vda2 vda3 vda4 < vda5 vda6 >
 vda: vda1 vda2 vda3 vda4 < vda5 vda6 >
Adding 15998968k swap on /dev/vda5.  Priority:-1 extents:1 across:15998968k 
EXT3-fs (vda1): error: can't find ext3 filesystem on dev vda1.

/var/log/messages

Aug 31 18:53:46 (none) node-installer: Mounting disks.
Aug 31 18:53:46 (none) node-installer: Updating device status: mounting disks
Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/sda': not found
Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/hda': not found
Aug 31 18:53:46 (none) node-installer: Detecting device '/dev/vda': found
Aug 31 18:53:46 (none) node-installer: swapon /dev/vda5
Aug 31 18:53:46 (none) node-installer: mkdir -p /localdisk/
Aug 31 18:53:46 (none) kernel: Adding 15998968k swap on /dev/vda5.  Priority:-1 
extents:1 across:15998968k 
Aug 31 18:53:46 (none) node-installer: Mounting /dev/vda1 on /localdisk/
Aug 31 18:53:46 (none) node-installer: mount -t ext3 -o 
defaults,noatime,nodiratime /dev/vda1 /localdisk/
Aug 31 18:53:46 (none) node-installer: mount: wrong fs type, bad option, bad 
superblock on /dev/vda1,
Aug 31 18:53:46 (none) node-installer:    missing codepage or helper 
program, or other error
Aug 31 18:53:46 (none) node-installer:    In some cases useful info is 
found in syslog - try
Aug 31 18:53:46 (none) node-installer:    dmesg | tail  or so
Aug 31 18:53:46 (none) node-installer: 
Aug 31 18:53:46 (none) node-installer: Command failed.
Aug 31 18:53:46 (none) node-installer: Running: "mount -t ext3 -o 
defaults,noatime,nodiratime /dev/vda1 /localdisk/" failed:
Aug 31 18:53:46 (none) node-installer: Non zero exit code: 32
Aug 31 18:53:46 (none) kernel: EXT3-fs (vda1): error: can't find ext3 
filesystem on dev vda1.
Aug 31 18:53:46 (none) node-installer: Failed to mount disks. (Exit code 12, 
signal 0)
Aug 31 18:53:46 (none) node-installer: There was a fatal problem. This node can 
not be installed until the problem is corrected.
Aug 31 18:53:46 (none) node-installer: The error was: failed to mount disks
Aug 31 18:53:46 (none) node-installer: Updating device status: failed to mount 
disks

Aug 31 18:53:28 (none) node-installer: Creating primary partition /dev/vda3.
Aug 31 18:53:28 (none) node-installer: parted -s -- /dev/vda mkpart primary 
ext3 22530 24578
Aug 31 18:53:28 (none) kernel: vda: vda1 vda2 vda3
Aug 31 18:53:28 (none) node-installer: dd if=/dev/zero of=/dev/vda3 bs=1k 
count=4
Aug 31 18:53:28 (none) node-installer: 4+0 records in
Aug 31 18:53:28 (none) node-installer: 4+0 records out
Aug 31 18:53:28 (none) node-installer: 4096 bytes (4.1 kB) copied, 0.000842261 
s, 4.9 MB/s
Aug 31 18:53:28 (none) node-installer: Running partprobe on device: /dev/vda
Aug 31 18:53:28 (none) kernel: vda: vda1 vda2 vda3

Aug 31 18:53:45 (none) node-installer: Creating ext3 filesystem on /dev/vda3
Aug 31 18:53:45 (none) node-installer: mke2fs -j /dev/vda3
Aug 31 18:53:45 (none) node-installer: mke2fs 1.41.12 (17-May-2010)
Aug 31 18:53:45 (none) node-installer: Filesystem label=
Aug 31 18:53:45 (none) node-installer: OS type: Linux
Aug 31 18:53:45 (none) node-installer: Block size=4096 (log=2)
Aug 31 18:53:45 (none) node-installer: Fragment size=4096 (log=2)
Aug 31 18:53:45 (none) node-installer: Stride=0 blocks, Stripe width=0 blocks
Aug 31 18:53:45 (none) node-installer: 125184 inodes, 499968 blocks
Aug 31 18:53:45 (none) node-installer: 24998 blocks (5.00%) reserved for the 
super user
Aug 31 18:53:45 (none) node-installer: First data block=0
Aug 31 18:53:45 (none) node-installer: Maximum filesystem blocks=515899392
Aug 31 18:53:45 (none) node-installer: 16 block groups
Aug 31 18:53:45 (none) node-installer: 32768 blocks per group, 32768 fragments 
per group
Aug 31 18:53:45 (none) node-installer: 7824 inodes per group
Aug 31 18:53:45 (none) node-installer: Superblock backups stored on blocks: 
Aug 31 18:53:45 (none) node-installer:    32768, 98304, 163840, 229376, 294912
Aug 31 18:53:45 (none) node-installer: 
Aug 31 18:53:45 (none) node-installer: Writing inode tables:  
0/16 1/16 2/16 3/16 4/16 5/16 6/16 7/16 8/16 

Re: virtIO disks not usable

2012-08-31 Thread Andrew Holway
Stupid boy. I didn't have the virtIO modules loaded.
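For anyone hitting the same symptom (the virtio-blk device visible in `lspci` but no `/dev/vd*` nodes): check whether the guest kernel actually has the virtio drivers, either loaded as modules or built in. A minimal check, assuming a RHEL-6-era guest where they ship as modules:

```shell
# The virtio-blk PCI device shows up in lspci, but without the virtio_pci
# and virtio_blk drivers the guest never creates /dev/vda.  Check both
# loaded modules and built-in drivers:
if grep -q virtio /proc/modules "/lib/modules/$(uname -r)/modules.builtin" 2>/dev/null
then
    MSG="virtio drivers present"
else
    MSG="virtio drivers missing - try: modprobe virtio_pci virtio_blk"
fi
echo "$MSG"
```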

On Aug 31, 2012, at 3:19 PM, Andrew Holway wrote:

> Hi,
> 
> I am creating a VM with the following command:
> 
> virt-install --connect qemu:///system -n vm001 -r 2048 --vcpus=2 --disk 
> path=/local/vm001.img,device=disk,bus=virtio,size=45 --vnc --noautoconsole 
> --os-type linux --accelerate --network=bridge:br0,mac=00:00:00:00:00:0E 
> --network=bridge:br1,mac=00:00:01:00:00:0E,model=virtio --pxe --hvm
> 
> [root@node001 ~]# ps aux | grep kvm
> qemu 26893  1.4  0.2 2491820 342852 ?  Sl   14:51   0:19 
> /usr/libexec/qemu-kvm -S -M rhel6.3.0 -enable-kvm -m 2048 -smp 
> 2,sockets=2,cores=1,threads=1 -name vm001 -uuid 
> 1b379a13-c85f-aa62-7e1f-3cf0803ff095 -nodefconfig -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm001.monitor,server,nowait 
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-reboot 
> -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
> file=/local/vm001.img,if=none,id=drive-virtio-disk0,format=raw,cache=none 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
>  -netdev tap,fd=28,id=hostnet0 -device 
> rtl8139,netdev=hostnet0,id=net0,mac=00:00:00:00:00:0e,bus=pci.0,addr=0x3,bootindex=1
>  -netdev tap,fd=29,id=hostnet1,vhost=on,vhostfd=30 -device 
> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:01:00:00:0e,bus=pci.0,addr=0x4
>  -chardev pty,id=charserial0 -device 
> isa-serial,chardev=charserial0,id=serial0 -vnc 127.
> 0.0.1:0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> 
> but my os cannot see any usable disks:
> 
> root@vm001 ~]# ls -alF /dev/disk/by-path
> total 0
> drwxr-xr-x 2 root root 200 2012-08-23 23:41 ./
> drwxr-xr-x 5 root root 100 2012-07-25 18:48 ../
> lrwxrwxrwx 1 root root   9 2012-08-31 14:52 pci-:03:05.0-scsi-0:0:0:0 -> 
> ../../sda
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-0:0:0:0-part1 -> ../../sda1
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-0:0:0:0-part2 -> ../../sda2
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-0:0:0:0-part3 -> ../../sda3
> lrwxrwxrwx 1 root root   9 2012-08-31 14:52 pci-:03:05.0-scsi-1:0:0:0 -> 
> ../../sdb
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-1:0:0:0-part1 -> ../../sdb1
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-1:0:0:0-part2 -> ../../sdb2
> lrwxrwxrwx 1 root root  10 2012-08-31 14:52 
> pci-:03:05.0-scsi-1:0:0:0-part3 -> ../../sdb3
> 
> I have 4096 block devices from sda..sdx.
> 
> root@vm001 ~]# lspci -k
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>   Subsystem: Red Hat, Inc Qemu virtual machine
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>   Subsystem: Red Hat, Inc Qemu virtual machine
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>   Subsystem: Red Hat, Inc Qemu virtual machine
>   Kernel driver in use: ata_piix
> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton 
> II] (rev 01)
>   Subsystem: Red Hat, Inc Qemu virtual machine
>   Kernel driver in use: uhci_hcd
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>   Subsystem: Red Hat, Inc Qemu virtual machine
> 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
>   Subsystem: Red Hat, Inc Device 1100
> 00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
> RTL-8139/8139C/8139C+ (rev 20)
>   Subsystem: Red Hat, Inc Device 1100
>   Kernel driver in use: 8139cp
> 00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
>   Subsystem: Red Hat, Inc Device 0001
>   Kernel driver in use: virtio-pci
> 00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
>   Subsystem: Red Hat, Inc Device 0002
>   Kernel driver in use: virtio-pci
> 00:06.0 RAM memory: Red Hat, Inc Virtio memory balloon
>   Subsystem: Red Hat, Inc Device 0005
>   Kernel driver in use: virtio-pci
> 
> Any ideas what is going on?
> 
> Danke,
> 
> Andrew
> 
> 
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [libvirt-users] vm pxe fail

2012-08-31 Thread Andrew Holway
Hi,

In the end the problem was SR-IOV being enabled on the cards. I turned this off 
and everything worked OK.

I'm using HP 10G cards, which are rebranded Emulex.

0a:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 
01)
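Whether SR-IOV is active on such a card can also be inspected from Linux before touching the firmware setup. The PCI address below is taken from the lspci line above (adjust for your card), and the sysfs knob only exists on kernels newer than the EL6 one in this thread:

```shell
# Report how many SR-IOV virtual functions are configured on the NIC.
# PCI address taken from the lspci output above; adjust for your card.
DEV=/sys/bus/pci/devices/0000:0a:00.0
VFS=$(cat "$DEV/sriov_numvfs" 2>/dev/null || echo "0 (or knob not present)")
echo "configured virtual functions: $VFS"
# On kernels with the knob, VFs can be disabled at runtime (as root):
#   echo 0 > "$DEV/sriov_numvfs"
# Otherwise, as in this thread, disable SR-IOV in the card firmware/BIOS.
```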

Thanks,

Andrew

On Aug 17, 2012, at 4:34 AM, Alex Jia wrote:

> Hi Andrew,
> I can't confirm a root reason based on your information, perhaps you may
> try to find a reason by yourself via the following docs:
> 
> http://wiki.libvirt.org/page/PXE_boot_%28or_dhcp%29_on_guest_failed  
> (Troubleshooting)
> http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/sect-Virtualization-Virtualized_guest_installation_overview-Installing_guests_with_PXE.html
>   (User Guide)
> 
> If can't, please provide your version of kvm, libvirt, tftp, etc, and run 
> 'virsh net-dumpxml br0' to dump your
> network bridge XML configuration, and run 'cat pxelinux.cfg' to show your 
> pxelinux configuration, thanks.
> 
> -- 
> Regards, 
> Alex
> 
> 
> - Original Message -
> From: "Andrew Holway" 
> To: kvm@vger.kernel.org
> Sent: Thursday, August 16, 2012 8:25:35 PM
> Subject: [libvirt-users] vm pxe fail
> 
> Hallo
> 
> I have a kvm vm that I am attempting to boot from pxe. The dhcp works 
> perfectly and I can see the VM in the pxe server arp. but the tftp just times 
> out. I don't see any tftp traffic on either the physical host or on the pxe 
> server. I am using a bridged interface. I have tried using several virtual 
> nic drivers, several different mac addresses and several different ips.  on 
> the physical host I can get the pxelinux.0 file from the pxe server via tftp 
> and can clearly see that traffic with tcpdump.
> 
> I've tried using various virtual interfaces.
> 
> I can pxe boot my physical hosts with no problems.
> 
> I can tftp fine from the physical host and see the traffic with ethdump
> 
> Here is the terminal output from the VM: 
> https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-08-15%20at%206.41.12%20PM.png
> 
> Thanks,
> 
> Andrew
> 
> [root@node002 ~]# yum list | grep qemu
> gpxe-roms-qemu.noarch   0.9.7-6.9.el6   @base 
>   
> qemu-img.x86_64 2:0.12.1.2-2.295.el6_3.1@updates  
>   
> qemu-kvm.x86_64 2:0.12.1.2-2.295.el6_3.1@updates  
>   
> qemu-guest-agent.x86_64 2:0.12.1.2-2.295.el6_3.1updates   
>   
> qemu-kvm-tools.x86_64   2:0.12.1.2-2.295.el6_3.1updates 
> 
> [root@node002 ~]# ethtool eth0
> Settings for eth0:
>   Supported ports: [ TP ]
>   Supported link modes:   1baseT/Full 
>   Supports auto-negotiation: No
>   Advertised link modes:  1baseT/Full 
>   Advertised pause frame use: No
>   Advertised auto-negotiation: No
>   Speed: Unknown!
>   Duplex: Unknown! (255)
>   Port: Twisted Pair
>   PHYAD: 0
>   Transceiver: internal
>   Auto-negotiation: off
>   MDI-X: Unknown
>   Supports Wake-on: g
>   Wake-on: g
>   Current message level: 0x0014 (20)
>   Link detected: no
> 
> [root@node002 ~]# brctl show
> bridge name   bridge id   STP enabled interfaces
> br0   8000.009c02241ae0   no  eth1
>   vnet0
> virbr08000.525400a6d5aa   yes virbr0-nic
> 
> [root@node002 ~]# ethtool vnet0
> Settings for vnet0:
>   Supported ports: [ ]
>   Supported link modes:   
>   Supports auto-negotiation: No
>   Advertised link modes:  Not reported
>   Advertised pause frame use: No
>   Advertised auto-negotiation: No
>   Speed: 10Mb/s
>   Duplex: Full
>   Port: Twisted Pair
>   PHYAD: 0
>   Transceiver: internal
>   Auto-negotiation: off
>   MDI-X: Unknown
>   Current message level: 0xffa1 (-95)
>   Link detected: yes
> 
> 
> vm004
> 4f03b09b-e834-bbf3-a6c2-1689f3156ef2
> 2097152
> 2097152
> 2
> 
>   hvm
>   
> 
> 
>   
>   
>   
> 
> 
> destroy
> restart
> restart
> 
>   /usr/libexec/qemu-kvm
>   
> 
> 
> 
> 
>   
>   
> 
> 
> 
> 
>   
>   
>  function='0x2'/>
>   
>   
>  function='0x1'/>
>   
>   
> 
> 
> 
>  function='0x0'/>
>   
>   
> 
>   
>   
> 
>   
>   
>   
>   
> 
>  function='0x0'/>
>   
>   
>  function='0x0'/>
>   
> 
> 
> 
> 
> 
> ___
> libvirt-users mailing list
> libvirt-us...@redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


virtIO disks not usable

2012-08-31 Thread Andrew Holway
Hi,

I am creating a VM with the following command:

virt-install --connect qemu:///system -n vm001 -r 2048 --vcpus=2 --disk 
path=/local/vm001.img,device=disk,bus=virtio,size=45 --vnc --noautoconsole 
--os-type linux --accelerate --network=bridge:br0,mac=00:00:00:00:00:0E 
--network=bridge:br1,mac=00:00:01:00:00:0E,model=virtio --pxe --hvm

[root@node001 ~]# ps aux | grep kvm
qemu 26893  1.4  0.2 2491820 342852 ?  Sl   14:51   0:19 
/usr/libexec/qemu-kvm -S -M rhel6.3.0 -enable-kvm -m 2048 -smp 
2,sockets=2,cores=1,threads=1 -name vm001 -uuid 
1b379a13-c85f-aa62-7e1f-3cf0803ff095 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm001.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-reboot 
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/local/vm001.img,if=none,id=drive-virtio-disk0,format=raw,cache=none 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
 -netdev tap,fd=28,id=hostnet0 -device 
rtl8139,netdev=hostnet0,id=net0,mac=00:00:00:00:00:0e,bus=pci.0,addr=0x3,bootindex=1
 -netdev tap,fd=29,id=hostnet1,vhost=on,vhostfd=30 -device 
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:01:00:00:0e,bus=pci.0,addr=0x4 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-vnc 127.0.0.1:0 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

but my os cannot see any usable disks:

root@vm001 ~]# ls -alF /dev/disk/by-path
total 0
drwxr-xr-x 2 root root 200 2012-08-23 23:41 ./
drwxr-xr-x 5 root root 100 2012-07-25 18:48 ../
lrwxrwxrwx 1 root root   9 2012-08-31 14:52 pci-:03:05.0-scsi-0:0:0:0 -> 
../../sda
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-0:0:0:0-part1 
-> ../../sda1
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-0:0:0:0-part2 
-> ../../sda2
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-0:0:0:0-part3 
-> ../../sda3
lrwxrwxrwx 1 root root   9 2012-08-31 14:52 pci-:03:05.0-scsi-1:0:0:0 -> 
../../sdb
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-1:0:0:0-part1 
-> ../../sdb1
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-1:0:0:0-part2 
-> ../../sdb2
lrwxrwxrwx 1 root root  10 2012-08-31 14:52 pci-:03:05.0-scsi-1:0:0:0-part3 
-> ../../sdb3

I have 4096 block devices from sda..sdx.

root@vm001 ~]# lspci -k
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
Subsystem: Red Hat, Inc Qemu virtual machine
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
Subsystem: Red Hat, Inc Qemu virtual machine
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
Subsystem: Red Hat, Inc Qemu virtual machine
Kernel driver in use: ata_piix
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] 
(rev 01)
Subsystem: Red Hat, Inc Qemu virtual machine
Kernel driver in use: uhci_hcd
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
Subsystem: Red Hat, Inc Qemu virtual machine
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
Subsystem: Red Hat, Inc Device 1100
00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8139/8139C/8139C+ (rev 20)
Subsystem: Red Hat, Inc Device 1100
Kernel driver in use: 8139cp
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
Subsystem: Red Hat, Inc Device 0001
Kernel driver in use: virtio-pci
00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
Subsystem: Red Hat, Inc Device 0002
Kernel driver in use: virtio-pci
00:06.0 RAM memory: Red Hat, Inc Virtio memory balloon
Subsystem: Red Hat, Inc Device 0005
Kernel driver in use: virtio-pci

Any ideas what is going on?

Danke,

Andrew




--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Getting VLANS to the vm ping / UDP working but TCP not

2012-08-21 Thread Andrew Holway
Hi,

I am trying out a couple of methods to get VLANs to the VM. In both cases the 
VM can ping Google and the like without problems, and DNS works fine, but it 
does not want to do any TCP. I thought this might be a frame-size problem, but 
even telnet (which I understand sends tiny packets) fails to work.

Why would UDP / ping work fine when TCP fails?
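Small UDP and ICMP packets squeezing through while TCP stalls is the classic signature of either an MTU mismatch somewhere on the VLAN path or checksum/segmentation offload mangling tagged frames. A first step is to compare MTUs on every interface in the chain (note the mix of 1500 and 1522 in the `ip addr` output in this message):

```shell
# List the MTU of every network interface so mismatches along the
# guest -> vnet -> bridge -> trunk-NIC path stand out at a glance.
MTUS=$(for dev in /sys/class/net/*; do
    printf '%-12s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done)
echo "$MTUS"
# If the MTUs line up, try dropping offloads on the trunk NIC (eth2 here):
#   ethtool -K eth2 tx off rx off tso off gso off
```

Running this inside the guest and on the host side by side usually makes the odd one out obvious.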

I saw some kind of weird packet on the network when I was trying to connect to 
a web server running on the VM with telnet.

15:11:47.464656 01:00:00:0e:00:24 (oui Unknown) > 00:00:01:00:00:00 (oui 
Unknown), ethertype Unknown (0xdcd9), length 66: 
0x:  84a8 0800 4510 0030 cd23 4000 4006 527b  E..0.#@.@.R{
0x0010:  257b 6811 257b 6812 f487 0050 da75 1e54  %{h.%{hP.u.T
0x0020:    7002  7b65  0204 05b4  p...{e..
0x0030:  0402 

But it's hard to repeat them.

Any ideas?

Thanks,

Andrew


a) vm001 is on node002 and has the following xml:

[root@node002 ~]# virsh dumpxml vm001
...

  
  
  
  
  


  
  
  
  
  
  

…

[root@vm001 ~]# ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
qlen 1000
link/ether 00:00:00:00:00:0e brd ff:ff:ff:ff:ff:ff
inet 10.141.100.1/16 brd 10.141.255.255 scope global eth0
inet6 fe80::200:ff:fe00:e/64 scope link 
   valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
qlen 1000
link/ether 00:00:01:00:00:0e brd ff:ff:ff:ff:ff:ff
4: eth1.4@eth1:  mtu 1500 qdisc noqueue state 
UP 
link/ether 00:00:01:00:00:0e brd ff:ff:ff:ff:ff:ff
inet 37.123.104.18/29 brd 37.123.104.23 scope global eth1.4
inet6 fe80::200:1ff:fe00:e/64 scope link 
   valid_lft forever preferred_lft forever

[root@node002 ~]# ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc mq state UNKNOWN qlen 1000
link/ether 00:02:c9:34:67:31 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 scope global eth0
3: eth1:  mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:9c:02:24:1a:e0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::29c:2ff:fe24:1ae0/64 scope link 
   valid_lft forever preferred_lft forever
4: eth2:  mtu 1522 qdisc mq state UP 
qlen 1000
link/ether 00:9c:02:24:1a:e4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::29c:2ff:fe24:1ae4/64 scope link 
   valid_lft forever preferred_lft forever
5: br0:  mtu 1500 qdisc noqueue state UNKNOWN 
link/ether 00:9c:02:24:1a:e0 brd ff:ff:ff:ff:ff:ff
inet 10.141.0.2/16 brd 10.141.255.255 scope global br0
inet6 fe80::29c:2ff:fe24:1ae0/64 scope link 
   valid_lft forever preferred_lft forever
7: br1:  mtu 1522 qdisc noqueue state UNKNOWN 
link/ether 00:9c:02:24:1a:e4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::29c:2ff:fe24:1ae4/64 scope link 
   valid_lft forever preferred_lft forever
8: virbr0:  mtu 1500 qdisc noqueue state 
UNKNOWN 
link/ether 52:54:00:81:84:9f brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
9: virbr0-nic:  mtu 1500 qdisc noop state DOWN qlen 500
link/ether 52:54:00:81:84:9f brd ff:ff:ff:ff:ff:ff
33: vnet0:  mtu 1500 qdisc pfifo_fast state 
UNKNOWN qlen 500
link/ether fe:00:00:00:00:0e brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc00:ff:fe00:e/64 scope link 
   valid_lft forever preferred_lft forever
34: vnet1:  mtu 1522 qdisc pfifo_fast state 
UNKNOWN qlen 500
link/ether fe:00:01:00:00:0e brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc00:1ff:fe00:e/64 scope link 
   valid_lft forever preferred_lft forever



--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [libvirt-users] vm pxe fail

2012-08-20 Thread Andrew Holway
I had SR-IOV enabled on the interface card. This was breaking it.

The card is a:   Ethernet controller: Emulex Corporation OneConnect 10Gb NIC 
(be3) (rev 01)

On Aug 17, 2012, at 4:34 AM, Alex Jia wrote:

> Hi Andrew,
> I can't confirm a root reason based on your information, perhaps you may
> try to find a reason by yourself via the following docs:
> 
> http://wiki.libvirt.org/page/PXE_boot_%28or_dhcp%29_on_guest_failed  
> (Troubleshooting)
> http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/sect-Virtualization-Virtualized_guest_installation_overview-Installing_guests_with_PXE.html
>   (User Guide)
> 
> If can't, please provide your version of kvm, libvirt, tftp, etc, and run 
> 'virsh net-dumpxml br0' to dump your
> network bridge XML configuration, and run 'cat pxelinux.cfg' to show your 
> pxelinux configuration, thanks.
> 
> -- 
> Regards, 
> Alex
> 
> 


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [libvirt-users] vm pxe fail

2012-08-17 Thread Andrew Holway
75, length 45: 
177.5.1.16.236.129.57.19.1.1.23.1.1.21.1.1.24.1.1.35.1.1.34.1.1.25.1.1.33.1.1.16.1.2.18.1.1.17.1.1.235.3.0.9.7
DHCP-Message Option 53, length 1: Request
MSZ Option 57, length 2: 1472
ARCH Option 93, length 2: 0
NDI Option 94, length 3: 1.2.1
Vendor-Class Option 60, length 32: 
"PXEClient:Arch:0:UNDI:002001"
CLASS Option 77, length 4: "gPXE"
Parameter-Request Option 55, length 13: 
  Subnet-Mask, Default-Gateway, Domain-Name-Server, LOG
  Hostname, Domain-Name, RP, Vendor-Option
  Vendor-Class, TFTP, BF, Option 175
  Option 203
17:39:27.156373 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP 
(17), length 388)
master.cm.cluster.bootps > 255.255.255.255.bootpc: [udp sum ok] BOOTP/DHCP, 
Reply, length 360, xid 0xd, Flags [none] (0x)
  Your-IP vm001.cm.cluster
  Server-IP master.cm.cluster
  Client-Ethernet-Address 00:00:00:00:00:0d (oui Ethernet)
  file "cm-images/default-image/boot/pxelinux.0"
  Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: ACK
Server-ID Option 54, length 4: master.cm.cluster
Lease-Time Option 51, length 4: 86400
Subnet-Mask Option 1, length 4: 255.255.0.0
Default-Gateway Option 3, length 4: master.cm.cluster
Domain-Name-Server Option 6, length 4: master.cm.cluster
Hostname Option 12, length 17: "vm001.internalnet"
Domain-Name Option 15, length 65: "eth.cluster brightcomputing.com 
ib.cluster ilo.cluster cm.cluster"

> 
> -- 
> Regards, 
> Alex
> 
> 
> - Original Message -
> From: "Andrew Holway" 
> To: kvm@vger.kernel.org
> Sent: Thursday, August 16, 2012 8:25:35 PM
> Subject: [libvirt-users] vm pxe fail
> 
> Hallo
> 
> I have a kvm vm that I am attempting to boot from pxe. The dhcp works 
> perfectly and I can see the VM in the pxe server arp. but the tftp just times 
> out. I don't see any tftp traffic on either the physical host or on the pxe 
> server. I am using a bridged interface. I have tried using several virtual 
> nic drivers, several different mac addresses and several different ips.  on 
> the physical host I can get the pxelinux.0 file from the pxe server via tftp 
> and can clearly see that traffic with tcpdump.
> 
> I've tried using various virtual interfaces.
> 
> I can pxe boot my physical hosts with no problems.
> 
> I can tftp fine from the physical host and see the traffic with ethdump
> 
> Here is the terminal output from the VM: 
> https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-08-15%20at%206.41.12%20PM.png
> 
> Thanks,
> 
> Andrew
> 
> [root@node002 ~]# yum list | grep qemu
> gpxe-roms-qemu.noarch     0.9.7-6.9.el6               @base
> qemu-img.x86_64           2:0.12.1.2-2.295.el6_3.1    @updates
> qemu-kvm.x86_64           2:0.12.1.2-2.295.el6_3.1    @updates
> qemu-guest-agent.x86_64   2:0.12.1.2-2.295.el6_3.1    updates
> qemu-kvm-tools.x86_64     2:0.12.1.2-2.295.el6_3.1    updates
> 
> [root@node002 ~]# ethtool eth0
> Settings for eth0:
>   Supported ports: [ TP ]
>   Supported link modes:   1000baseT/Full 
>   Supports auto-negotiation: No
>   Advertised link modes:  1000baseT/Full 
>   Advertised pause frame use: No
>   Advertised auto-negotiation: No
>   Speed: Unknown!
>   Duplex: Unknown! (255)
>   Port: Twisted Pair
>   PHYAD: 0
>   Transceiver: internal
>   Auto-negotiation: off
>   MDI-X: Unknown
>   Supports Wake-on: g
>   Wake-on: g
>   Current message level: 0x0014 (20)
>   Link detected: no
> 
> [root@node002 ~]# brctl show
> bridge name   bridge id   STP enabled interfaces
> br0   8000.009c02241ae0   no  eth1
>   vnet0
> virbr08000.525400a6d5aa   yes virbr0-nic
> 
> [root@node002 ~]# ethtool vnet0
> Settings for vnet0:
>   Supported ports: [ ]
>   Supported link modes:   
>   Supports auto-negotiation: No
>   Advertised link modes:  Not reported
>   Advertised pause frame use: No
>   Advertised auto-negotiation: No
>   Speed: 10Mb/s
>   Duplex: Full
>   Port: Twisted Pair
>   PHYAD: 0
>   Transceiver: internal
>   Auto-negotiation: off
>   MDI-X: Unknown
>   Current message level: 0xffa1 (-95)
>   

Re: vm pxe fail

2012-08-16 Thread Andrew Holway

On Aug 16, 2012, at 3:54 PM, Stefan Hajnoczi wrote:

> On Thu, Aug 16, 2012 at 1:25 PM, Andrew Holway  wrote:
>> I have a kvm vm that I am attempting to boot from pxe. The dhcp works 
>> perfectly and I can see the VM in the pxe server arp. but the tftp just 
>> times out. I don't see any tftp traffic on either the physical host or on 
>> the pxe server. I am using a bridged interface. I have tried using several 
>> virtual nic drivers, several different mac addresses and several different 
>> ips.  on the physical host I can get the pxelinux.0 file from the pxe server 
>> via tftp and can clearly see that traffic with tcpdump.
>> 
>> I've tried using various virtual interfaces.
>> 
>> I can pxe boot my physical hosts with no problems.
>> 
>> I can tftp fine from the physical host and see the traffic with ethdump
> 
> Have you run tcpdump on the tap interface?  (This is different from
> running tcpdump on host eth0 because it is earlier in the network path
> and happens before the software bridge.)

Yes. I can just see DHCP traffic.

> 
> What do iptables -L -n and ebtables -L say?
> 

[root@node002 ~]# iptables -L -n 
Chain INPUT (policy ACCEPT)
target prot opt source   destination 
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   udp dpt:53 
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   tcp dpt:53 
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   udp dpt:67 
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   tcp dpt:67 

Chain FORWARD (policy ACCEPT)
target prot opt source   destination 
ACCEPT all  --  0.0.0.0/0192.168.122.0/24state 
RELATED,ESTABLISHED 
ACCEPT all  --  192.168.122.0/24 0.0.0.0/0   
ACCEPT all  --  0.0.0.0/00.0.0.0/0   
REJECT all  --  0.0.0.0/00.0.0.0/0   reject-with 
icmp-port-unreachable 
REJECT all  --  0.0.0.0/00.0.0.0/0   reject-with 
icmp-port-unreachable 

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination  


[root@node002 ~]# ebtables -L
Bridge table: filter

Bridge chain: INPUT, entries: 0, policy: ACCEPT

Bridge chain: FORWARD, entries: 0, policy: ACCEPT

Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
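A hedged aside (not raised in the thread): whether the REJECT rules in the FORWARD chain above can touch bridged traffic at all depends on the br_netfilter sysctls. A small sketch to check, assuming a Linux host with /proc mounted:

```python
from pathlib import Path

def bridge_nf_call_iptables():
    """Return '1'/'0' if the br_netfilter module is loaded, else None.

    Reads /proc/sys/net/bridge/bridge-nf-call-iptables, the sysctl that
    decides whether bridged IPv4 frames are passed through iptables."""
    p = Path("/proc/sys/net/bridge/bridge-nf-call-iptables")
    return p.read_text().strip() if p.exists() else None

state = bridge_nf_call_iptables()
if state == "1":
    print("bridged frames traverse iptables FORWARD -- REJECT rules can drop TFTP")
elif state == "0":
    print("br_netfilter loaded but not filtering bridged IPv4 traffic")
else:
    print("br_netfilter not loaded; iptables does not see bridged frames")
```

If it reads "1", the two catch-all REJECT rules would apply to traffic crossing br0, which could explain DHCP (broadcast, answered locally) surviving while unicast TFTP dies.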

[root@node002 ~]# tcpdump -i vnet0 udp
tcpdump: WARNING: vnet0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vnet0, link-type EN10MB (Ethernet), capture size 65535 bytes
17:08:08.849344 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 387
17:08:08.849413 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 387
17:08:08.849661 IP master.cm.cluster.bootps > 255.255.255.255.bootpc: 
BOOTP/DHCP, Reply, length 360
17:08:09.812645 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 387
17:08:09.812709 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 387
17:08:09.812903 IP master.cm.cluster.bootps > 255.255.255.255.bootpc: 
BOOTP/DHCP, Reply, length 360
17:08:11.789993 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 399
17:08:11.790107 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request 
from 00:00:00:00:00:0d (oui Ethernet), length 399
17:08:11.790294 IP master.cm.cluster.bootps > 255.255.255.255.bootpc: 
BOOTP/DHCP, Reply, length 360


And then…….silence!
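Since the capture goes quiet exactly where the TFTP read request should appear, one way to take the guest firmware out of the picture is to replay that request by hand from the host. A minimal sketch (my addition, not something from the thread; the server name and file path are taken from the DHCP reply above):

```python
import socket

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    """TFTP read request per RFC 1350: opcode 1, filename, NUL, mode, NUL."""
    return b"\x00\x01" + filename.encode() + b"\x00" + mode.encode() + b"\x00"

def probe_tftp(server: str, filename: str, timeout: float = 3.0) -> bool:
    """Send one RRQ to udp/69 and report whether anything comes back.

    A working server answers (DATA or ERROR) from an ephemeral port, so a
    stateless firewall that only allows udp/69 will still break transfers."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(build_rrq(filename), (server, 69))
        s.recvfrom(1024)
        return True
    except socket.timeout:
        return False
    finally:
        s.close()

# e.g. probe_tftp("master.cm.cluster",
#                 "cm-images/default-image/boot/pxelinux.0")
```

Running this while tcpdump watches vnet0 and the bridge would show whether the RRQ itself, or only the reply, goes missing.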



> Stefan




vm pxe fail

2012-08-16 Thread Andrew Holway
Hello

I have a kvm vm that I am attempting to boot from pxe. The dhcp works perfectly 
and I can see the VM in the pxe server's ARP table, but the tftp just times out. 
I don't see any tftp traffic on either the physical host or on the pxe server. 
I am using a bridged interface. I have tried using several virtual nic drivers, 
several different mac addresses and several different IPs. On the physical host 
I can get the pxelinux.0 file from the pxe server via tftp and can clearly 
see that traffic with tcpdump.

I've tried using various virtual interfaces.

I can pxe boot my physical hosts with no problems.

I can tftp fine from the physical host and see the traffic with ethdump

Here is the terminal output from the VM: 
https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-08-15%20at%206.41.12%20PM.png

Thanks,

Andrew

[root@node002 ~]# yum list | grep qemu
gpxe-roms-qemu.noarch     0.9.7-6.9.el6               @base
qemu-img.x86_64           2:0.12.1.2-2.295.el6_3.1    @updates
qemu-kvm.x86_64           2:0.12.1.2-2.295.el6_3.1    @updates
qemu-guest-agent.x86_64   2:0.12.1.2-2.295.el6_3.1    updates
qemu-kvm-tools.x86_64     2:0.12.1.2-2.295.el6_3.1    updates

[root@node002 ~]# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes:   1000baseT/Full 
Supports auto-negotiation: No
Advertised link modes:  1000baseT/Full 
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: Unknown!
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Current message level: 0x0014 (20)
Link detected: no

[root@node002 ~]# brctl show
bridge name bridge id   STP enabled interfaces
br0 8000.009c02241ae0   no  eth1
vnet0
virbr0  8000.525400a6d5aa   yes virbr0-nic
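An aside of mine (not from the thread): the same port membership that brctl prints can be read from sysfs, which is handy for scripting a check that vnet0 really landed on br0 rather than on libvirt's default virbr0:

```python
from pathlib import Path

def bridge_ports(bridge: str):
    """List interfaces enslaved to a Linux bridge via sysfs.

    Equivalent to the interfaces column of `brctl show`: each port of a
    bridge appears as an entry under /sys/class/net/<bridge>/brif/."""
    brif = Path("/sys/class/net") / bridge / "brif"
    return sorted(p.name for p in brif.iterdir()) if brif.is_dir() else []

# On the host above, bridge_ports("br0") should give ['eth1', 'vnet0'];
# a missing or empty bridge yields [].
```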

[root@node002 ~]# ethtool vnet0
Settings for vnet0:
Supported ports: [ ]
Supported link modes:   
Supports auto-negotiation: No
Advertised link modes:  Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Current message level: 0xffa1 (-95)
Link detected: yes


[libvirt domain XML for the guest, with all markup stripped by the list archive.
Recoverable details: name vm004, UUID 4f03b09b-e834-bbf3-a6c2-1689f3156ef2,
memory and currentMemory 2097152 KiB, 2 vCPUs, OS type hvm, lifecycle actions
destroy/restart/restart, emulator /usr/libexec/qemu-kvm.]

