Re: List of inaccessible x86 states

2009-10-24 Thread Alexander Graf


On 23.10.2009, at 21:34, Jan Kiszka wrote:


Jan Kiszka wrote:

Hi all,

as the list of yet user-inaccessible x86 states is a bit volatile at the
moment, this is an attempt to collect the precise requirements for
additional state fields. Once everyone feels the list is complete, we can
decide how to partition it into one or more substates for the new
KVM_GET/SET_VCPU_STATE interface.

What I read so far (or tried to patch already):

- nmi_masked
- nmi_pending
- nmi_injected
- kvm_queued_exception (whole struct content)
- KVM_REQ_TRIPLE_FAULT (from vcpu.requests)

Unclear points (for me) from the last discussion:

- sipi_vector
- MCE (covered via kvm_queued_exception, or does it require more?)

Please extend or correct the list as required.



Here is a wrap-up of what has been reported so far:

- NMI
   o nmi_masked
   o nmi_pending
   o nmi_injected
- queued exception
   o kvm_queued_exception
   o triple_fault
- SVM
   o gif
   (Are we sure that there is really nothing more here?)


Hm, thinking about this again, it might be useful to have a "currently in
nested guest" flag here. That way userspace can decide whether it needs to
get out of the nested state (for migration) or whether it just doesn't
care.



- sipi_vector

So the next question is how to map these on substates. I'm currently
leaning towards this organization:

- KVM_X86_VCPU_STATE_EVENTS
   o NMI states
   o pending exception
   o sipi_vector
   o pending interrupt?
     (would be redundant to kvm_sregs.interrupt_bitmap, but that struct
      may be obsoleted one day)
- KVM_X86_VCPU_STATE_SVM
   o gif


Can we make this an svm_flags u32 or so? And then we'd just set bits?
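
For illustration, a rough sketch of what a flags-based SVM substate and an
events substate could look like as C structures. All names and the layout
below are hypothetical, not an existing KVM ABI:

/* Hypothetical sketch only -- field names and layout are not the real KVM ABI. */
#include <stdint.h>

/* One bit per SVM flag, as suggested above. */
#define VCPU_SVM_FLAG_GIF        (1u << 0)
#define VCPU_SVM_FLAG_IN_NESTED  (1u << 1)  /* currently running a nested guest */

struct vcpu_state_svm {
        uint32_t svm_flags;      /* VCPU_SVM_FLAG_* bits */
        uint32_t padding;
};

/* Event-related state collected into one substate. */
struct vcpu_state_events {
        uint8_t  nmi_masked;
        uint8_t  nmi_pending;
        uint8_t  nmi_injected;
        uint8_t  exception_pending;
        uint8_t  exception_nr;
        uint8_t  exception_has_error_code;
        uint16_t padding;
        uint32_t exception_error_code;
        uint8_t  triple_fault_pending;
        uint8_t  sipi_vector;
        uint16_t padding2;
};

A single flags word would leave room for more SVM bits later (such as the
nested-guest flag above) without growing the substate.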

Alex



Using snmpd on host to monitor guest bandwidth usage

2009-10-24 Thread Neil Aggarwal
Hello:

I am using Cacti to monitor traffic usage on my network.

According to what I am reading, snmpd can report traffic
stats to Cacti.

Running netstat -in on the host, I see this output:

Kernel Interface table
Iface    MTU   Met  RX-OK   RX-ERR RX-DRP RX-OVR TX-OK  TX-ERR TX-DRP TX-OVR Flg
br0      1500  0    237609  0      0      0      13615  0      0      0      BMRU
eth0     1500  0    967594  0      0      0      354576 0      0      0      BMRU
lo       16436 0    63      0      0      0      63     0      0      0      LRU
virbr0   1500  0    0       0      0      0      32     0      0      0      BMRU
vnet0    1500  0    29802   0      0      0      306940 0      0      0      BMRU
vnet1    1500  0    311556  0      0      0      789331 0      0      0      BMRU

Each guest uses a bridged interface with a static IP address.
Looking at the firewall logs, vnet1 is connected to guestA and
vnet0 is connected to guestB.  Will that ever change if I reboot
the host or the guests?  If it does, that would be a problem.

Are there any pitfalls of using this approach?

I am looking for a solution where I do not need to run anything
on the guests.

Thanks,
Neil

--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Will your e-commerce site go offline if you have
a DB server failure, fiber cut, flood, fire, or other disaster?
If so, ask about our geographically redundant database system.
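
For context, the counters in the netstat output above are the same
per-interface counters that snmpd exposes via IF-MIB and that Cacti polls.
A minimal sketch of reading them directly from the host's /proc/net/dev for
the vnet interfaces - illustrative code only, not part of snmpd, Cacti, or
KVM:

/* Illustrative only: print RX/TX byte counters for vnet* interfaces from
 * /proc/net/dev -- the same counters snmpd reports through IF-MIB. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[512];
        FILE *f = fopen("/proc/net/dev", "r");

        if (!f) {
                perror("/proc/net/dev");
                return 1;
        }
        /* Skip the two header lines. */
        fgets(line, sizeof(line), f);
        fgets(line, sizeof(line), f);
        while (fgets(line, sizeof(line), f)) {
                char ifname[64];
                unsigned long long rx_bytes, tx_bytes, dummy;

                /* Line format: "iface: rx_bytes rx_packets ... (8 RX fields) tx_bytes ..." */
                if (sscanf(line,
                           " %63[^:]: %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                           ifname, &rx_bytes, &dummy, &dummy, &dummy, &dummy,
                           &dummy, &dummy, &dummy, &tx_bytes) == 10 &&
                    strncmp(ifname, "vnet", 4) == 0)
                        printf("%s RX %llu bytes TX %llu bytes\n",
                               ifname, rx_bytes, tx_bytes);
        }
        fclose(f);
        return 0;
}

Cacti's standard interface graphs do the rate calculation from such
counters, so nothing needs to run in the guests - the open question is only
whether the vnet names stay stable across reboots (see the replies below).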



Re: GDB Debugging

2009-10-24 Thread Yolkfull Chow
On Fri, Oct 23, 2009 at 09:19:40AM -0700, Saksena, Abhishek wrote:
 
 Hi Guys,
 
 Any help will be appreciated on following issue. I have been struggling on 
 this for quite some time...
 
 
 -Abhishek
 
 
 
 -Original Message-
 From: Saksena, Abhishek 
 Sent: Tuesday, October 20, 2009 11:49 AM
 To: 'Jan Kiszka'
 Cc: kvm@vger.kernel.org
 Subject: GDB + KVM Debug
 
 I have now tried using both
 
 
 Set arch i8086 and 
 Set arch i386:x86-64:intel 

Try 'set architecture i386:x86-64'.

 
 But still see the same issue. Do I need to apply any patch?
 
 
 Abhishek
 
 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Thursday, September 17, 2009 1:36 AM
 To: Saksena, Abhishek
 Cc: kvm@vger.kernel.org
 Subject: Re: GDB + KVM Debug
 
 Saksena, Abhishek wrote:
  I am using KVM-88. However, I still can't get gdb working. I started qemu 
  with the -s -S options, and when I try to connect gdb to it I get the 
  following error:
  
  (gdb) target remote lochost:1234
  lochost: unknown host
  lochost:1234: No such file or directory.
  (gdb) target remote locahost:1234
  locahost: unknown host
  locahost:1234: No such file or directory.
  (gdb) target remote localhost:1234
  Remote debugging using localhost:1234
  [New Thread 1]
  Remote 'g' packet reply is too long: 
  2306f0ff023002f07f030000
  (gdb)
  
 
 Try 'set arch <target-architecture>' before connecting. This is required
 if you didn't load the corresponding target image into gdb.
 
 Jan
 
 -- 
 Siemens AG, Corporate Technology, CT SE 2
 Corporate Competence Center Embedded Linux


Re: Using snmpd on host to monitor guest bandwidth usage

2009-10-24 Thread Nikolai K. Bochev
Hello,

As far as I know, you can fix the VM's interfaces on the host. I'm using
libvirt and you can do it there as described here:

http://libvirt.org/formatdomain.html#elementsNICSBridge (what you're looking
for is the <target dev='vnet0'/> directive).

If you don't do this, vnet interfaces will be assigned to VMs in the order
they start - the first one getting vnet0, the second one vnet1, etc.


- Original Message -
From: Neil Aggarwal n...@jammconsulting.com
To: kvm@vger.kernel.org
Sent: Saturday, October 24, 2009 4:52:59 PM
Subject: Using snmpd on host to monitor guest bandwidth usage




Re: KSM and HugePages

2009-10-24 Thread Dor Laor

On 10/23/2009 08:21 PM, David Martin wrote:

Does KSM support HugePages?  Reading the Fedora 12 feature list I notice this:
"Using huge pages for guest memory does have a downside, however - you
can no longer swap nor balloon guest memory."
However, it is unclear to me whether that includes KSM.


KSM pages are only standard 4k pages.



If I use 1GB HugePages and KSM (assuming this is possible), does that
mean the entire 1GB page has to match another for them to merge?  Are
there any other downsides to using them other than swapping and
ballooning?


The huge page memory needs to be available at VM creation time.
Also, the TLB has fewer entries for huge pages, although huge pages still
give better results than 4k pages.





RE: Using snmpd on host to monitor guest bandwidth usage

2009-10-24 Thread Neil Aggarwal
 As far as I know, you can fix the VM's interfaces on the 
 host. I'm using libvirt and you can do it there as described here:
 http://libvirt.org/formatdomain.html#elementsNICSBridge
 (what you're looking for is the <target dev='vnet0'/> directive).

Setting the target device name is not working.

Here is what I did:

I stopped both my guests.

Next, I opened the file /etc/libvirt/qemu/jamm12a.xml
for my first virtual host and added a target element
for the interface:
<interface type='bridge'>
  <mac address='54:52:00:4f:83:67'/>
  <source bridge='br0'/>
  <target dev='vnet1'/>
</interface>

I started the virtual host and when I do ifconfig,
I see vnet0.

Also, when I look in /var/log/libvirt/qemu/jamm12a.log,
I see this info:

LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin HOME=/ /usr/libexec/qemu-kvm -S
-M pc -m 1024 -smp 1 -name jamm12a -uuid
6452dcff-c20f-908b-d1ee-7dcf1406a3e0 -monitor pty -pidfile
/var/run/libvirt/qemu//jamm12a.pid -boot c -drive
file=/var/lib/libvirt/images/jamm12a.img,if=ide,index=0,boot=on -drive
file=,if=ide,media=cdrom,index=2 -net nic,macaddr=54:52:00:4f:83:67,vlan=0
-net tap,fd=12,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb
-vnc 127.0.0.1:0 -k en-us

The ifname has vnet0 for its value.

This seems to be a bug in KVM.  I searched the bug tracker and I
do not see anything related.

I am using KVM 83-105.el5 installed by yum in CentOS 5.4,
could this be fixed in a later version of KVM?

Thanks,
Neil

--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Will your e-commerce site go offline if you have
a DB server failure, fiber cut, flood, fire, or other disaster?
If so, ask about our geographically redundant database system. 



Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM

2009-10-24 Thread Avi Kivity

On 10/23/2009 05:40 PM, FUJITA Tomonori wrote:

On Fri, 23 Oct 2009 09:14:29 -0500
Javier Guerrajav...@guerrag.com  wrote:

   

I think that the major difference between sheepdog and cluster file
systems such as Google File system, pNFS, etc is the interface between
clients and a storage system.
   

note that GFS is Global File System (written by Sistina (the same
folks from LVM) and bought by RedHat).  Google Filesystem is a
different thing, and ironically the client/storage interface is a
little more like sheepdog and unlike a regular cluster filesystem.
 

Hmm, Avi referred to Global File System? I wasn't sure. 'GFS' is
ambiguous. Anyway, Global File System is a SAN file system. It's
a completely different architecture from Sheepdog.
   


I did, and yes, it is completely different since you don't require 
central storage.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: 64 bit guest much faster ?

2009-10-24 Thread Avi Kivity

On 10/23/2009 05:54 PM, Stefan wrote:

Hello,

I have a simple question (sorry I'm a kvm beginner):
Is it right that a 64-bit guest (8 CPUs, 16GB) is
much faster than a 32-bit guest (8 CPUs, 16GB PAE)?
(KVM guest and KVM host are Ubuntu 9.04, 2.6.28-15-server,
kvm 1:84+dfsg-0ubuntu12.3.) E.g. dpkg-reconfigure initramfs-tools
runs twice as fast on the 64-bit guest.

   


There shouldn't be that much of a difference.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: I/O performance of VirtIO

2009-10-24 Thread Avi Kivity

On 10/23/2009 12:06 AM, Alexander Graf wrote:


Am 22.10.2009 um 18:29 schrieb Avi Kivity a...@redhat.com:


On 10/13/2009 08:35 AM, Jan Kiszka wrote:

It can be particularly slow if you use in-kernel irqchips and the
default NIC emulation (up to 10 times slower), some effect I always
wanted to understand on a rainy day. So, when you actually want -net
user, try -no-kvm-irqchip.



This might be due to a missing SIGIO or SIGALRM; -no-kvm-irqchip 
generates a lot of extra signals and thus polling opportunities.


Isn't that what dedicated io threads are supposed to solve?



No.  Dedicated I/O threads provide parallelism.  All latency needs is to 
have SIGIO sent on all file descriptors (or rather, in qemu-kvm with 
irqchip, to have all file descriptors in the poll() call).


Jan, does slirp add new connections to the select set?
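
For reference, a minimal sketch of how SIGIO delivery on a file descriptor
is requested with standard POSIX/Linux fcntl calls - illustrative only, not
qemu's actual implementation:

/* Illustrative only: arrange for SIGIO when fd becomes ready, so a thread
 * blocked in a wait is interrupted and gets a chance to poll its devices. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void sigio_handler(int sig)
{
        (void)sig;      /* the point is just to interrupt a blocking wait */
}

static int enable_sigio(int fd)
{
        struct sigaction sa = { .sa_handler = sigio_handler };
        int flags;

        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGIO, &sa, NULL) < 0)
                return -1;
        /* Deliver SIGIO for this fd to the current process. */
        if (fcntl(fd, F_SETOWN, getpid()) < 0)
                return -1;
        /* Enable asynchronous notification alongside non-blocking I/O. */
        flags = fcntl(fd, F_GETFL);
        if (flags < 0 || fcntl(fd, F_SETFL, flags | O_ASYNC | O_NONBLOCK) < 0)
                return -1;
        return 0;
}

int main(void)
{
        if (enable_sigio(STDIN_FILENO) < 0) {
                perror("enable_sigio");
                return 1;
        }
        pause();        /* SIGIO now interrupts this wait, much like qemu's */
        return 0;
}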

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: Using snmpd on host to monitor guest bandwidth usage

2009-10-24 Thread Avi Kivity

On 10/25/2009 02:23 AM, Neil Aggarwal wrote:

As far as I know, you can fix the VM's interfaces on the
host. I'm using libvirt and you can do it there as described here:
http://libvirt.org/formatdomain.html#elementsNICSBridge
(what you're looking for is the <target dev='vnet0'/> directive).
 

Setting the target device name is not working.

Here is what I did:

I stopped both my guests.

Next, I opened the file /etc/libvirt/qemu/jamm12a.xml
for my first virtual host and added a target element
for the interface:
 <interface type='bridge'>
   <mac address='54:52:00:4f:83:67'/>
   <source bridge='br0'/>
   <target dev='vnet1'/>
 </interface>

   


Please take this to the libvirt mailing list, since that is all handled 
by libvirt, not qemu or kvm.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



[ kvm-Bugs-2883570 ] KVM needs easier network bridging support

2009-10-24 Thread SourceForge.net
Bugs item #2883570, was opened at 2009-10-22 00:16
Message generated for change (Settings changed) made by avik
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2883570&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Closed
Resolution: Invalid
Priority: 5
Private: No
Submitted By: dmitryb77 (dmitryb77)
Assigned to: Nobody/Anonymous (nobody)
Summary: KVM needs easier network bridging support

Initial Comment:
Setup:
Triple core Phenom II, 64-bit Ubuntu Karmic Koala, Ralink wireless NIC.

Repro:
1. Attempt to create a bridged connection using a wireless NIC.
2. Spend hours reading half-baked howtos and fail.
3. Throw in the towel and use VirtualBox where bridging just works after you 
select it through a combo box.

Result:
Wireless bridging is darn near impossible with KVM (yet VirtualBox bridges just 
fine on the same machine). Wired bridging requires manual (and hence error 
prone) editing of configuration files. This is compounded by the fact that if 
you screw up a network config on the remote host, you lose access to it and 
have to go there (or use an expensive enterprise remote console thingamabob) 
to correct your error.

Expected result:
From my (fairly limited) understanding, VirtualBox implements bridging in 
their code using a driver. KVM should consider doing the same thing. 
Currently, bridging is a major pain in the ass, and NAT is useless for most 
intents and purposes, since you can't connect to a NATted machine from the 
outside.

--

Comment By: Avi Kivity (avik)
Date: 2009-10-25 07:50

Message:
qemu/kvm is not a complete stack; it's just the virtual machine monitor. 
You need to use a complete stack to get simple and automatic
configuration.

Duplicating unrelated pieces of code is not an option.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2883570&group_id=180599