Re: I/O scheduling on block devices

2012-12-18 Thread Javier Guerra Giraldez
On Tue, Dec 18, 2012 at 4:41 PM, Andrew Holway  wrote:
> How should I set up io scheduling with this configuration. Performance is not 
> so great and I have a feeling that all of the io schedulers in my VMs and the 
> ones on my host are not having a nice party together.

most external storage units do their own io scheduling with sizeable
amounts of buffers and cache, so any optimization done by the hosts
would be nullified before reaching the media.

also, the scheduler in each host ignores what's happening on the other
hosts, so its worldview is incomplete at best.

therefore, the best you can aim for is to minimize latency; that is,
send each command out to the storage ASAP and let the unit sort it
out.  for that, the 'noop' scheduler is usually the best choice.
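a quick sketch of how to check and switch the scheduler via the
standard Linux sysfs interface (the device name below is an example;
substitute your own, and note the write needs root):

```shell
# show the schedulers available for a disk; the bracketed one is active
cat /sys/block/sda/queue/scheduler

# switch that disk to noop (as root)
echo noop > /sys/block/sda/queue/scheduler
```

do this for each disk the VMs sit on, or set `elevator=noop` on the
kernel command line to make it the default.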


--
Javier
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [User Question] How to create a backup of an LVM based maschine without wasting space

2012-10-14 Thread Javier Guerra Giraldez
On Sat, Oct 13, 2012 at 5:25 PM, Lukas Laukamp  wrote:
> I have backed up the data within the machine with partimage and fsarchiver.
> But it would be greate to have a better way than doing this over a live
> system.

make no mistake, the absolute best way is from within the VM.  It's
the most consistent, safe, and efficient method.

Doing it "from the outside" is attractive, but it's a hack, and in
some cases you have to jump through several hoops to make it safe.


-- 
Javier


Re: [User Question] How to create a backup of an LVM based maschine without wasting space

2012-10-12 Thread Javier Guerra Giraldez
On Fri, Oct 12, 2012 at 3:56 PM, Lukas Laukamp  wrote:
> I think that it must be possible to create an image with a size like the
> used space + a few hundret MB with metadata or something like that.

the 'best' way to do it is 'from within' the VM

the typical workaround is to mount/fsck a LVM snapshot (don't skip the
fsck, or at least a journal replay)

beyond that, there are a few utilities for specific filesystems:
PartImage [1], dump/restore [2], and i'm sure some others.  I don't
know how these would behave with unclean images, which is what you get
if you pull the image out from under the VM's feet.


[1] http://www.partimage.org/Main_Page
[2] http://dump.sourceforge.net/


-- 
Javier


Re: [User Question] How to create a backup of an LVM based maschine without wasting space

2012-10-12 Thread Javier Guerra Giraldez
On Fri, Oct 12, 2012 at 9:25 AM, Stefan Hajnoczi  wrote:
> I would leave them raw as long as they are sparse (zero regions do not
> take up space).  If you need to copy them you can either convert to
> qcow2 or use tools that preserve sparseness (BTW compression tools are
> good at this).

note that free blocks previously used by deleted files won't be
sparse, won't be zero and won't be much reduced by compression.

i'd say the usual advice stands:

A: if you run any non-trivial application on that VM, then use a real
network backup tool 'from the inside' of the VM
B: if real point-in-time application-cache-storage consistency is not
important, then you can:

- make a read-write LVM snapshot
- mount that and fsck.  (it will appear as not-cleanly unmounted)
- backup the files. (i like rsync, especially if you have an
uncompressed previous backup)
- umount and destroy the snapshot
- optionally compress the backup


but seriously consider option A first.  it's especially important if
you run any DB on that VM
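option B could be scripted roughly like this.  volume names, mount
points and the snapshot size are examples, not tested commands:

```shell
# take a snapshot of the guest's volume (VG 'vg0', LV 'guest1' are examples)
lvcreate --snapshot --size 2G --name guest1-snap /dev/vg0/guest1

# the snapshot looks not-cleanly unmounted; replay the journal before mounting
fsck -a /dev/vg0/guest1-snap
mount -o ro /dev/vg0/guest1-snap /mnt/snap

# back up the files; rsync against an uncompressed previous backup
rsync -a --delete /mnt/snap/ /backup/guest1/

# clean up
umount /mnt/snap
lvremove -f /dev/vg0/guest1-snap
```

size the snapshot so it can absorb all writes the VM makes while the
backup runs; if it fills up, the snapshot is invalidated.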

-- 
Javier


Re: [Question] Live migration -> normal process migration

2012-10-09 Thread Javier Guerra Giraldez
On Tue, Oct 9, 2012 at 9:36 AM, Grzegorz Dwornicki  wrote:
> I have a question to you guys. Is it possible to use code from live
> migration of KVM VMs to migrate other process?

As far as I can tell, no.

most of the virtualization facilities of KVM are implemented in the
kernel, but they're managed by a 'normal' process (sometimes called
qemu-kvm, sometimes just kvm).  It is this userspace process that
implements its own migration feature.

to do that, one running qemu-kvm instance (proc A) connects to a
just-started instance (proc B).  proc A first sends all the
configuration information, so B sets itself up identically.  then all
RAM data is transferred (iteratively, to 'catch up' with the
still-running VM).  finally, proc A suspends the running VM and sends
the last modified RAM data and all the CPU state.  now proc B is
identical to A and both are suspended; then proc B is resumed and
notifies proc A, which just exits.

if you want to do that for arbitrary 'normal' processes, you'd have to
implement that in the kernel; but KVM doesn't do that.

As Paolo mentions, there are other projects that do similar things.
OpenVZ is one; LXC does it too. I'd start with LXC, since it's a
standard part of the kernel (just like KVM), and supported by many
common utilities.

-- 
Javier


Re: QEMU- 1CPU for guest while more cores used on host?

2012-07-04 Thread Javier Guerra Giraldez
On Wed, Jul 4, 2012 at 2:44 PM,   wrote:
> Thank you very much for your explanation, it makes sense :-)
>
>>2: why do you think "course amd-v+KVM is impossible to be used" ??  it does 
>>work very well
> Not for me, it is some old RedHat version that fails on boot under KVM, but 
> works well (but slow) under Qemu. It is even unable to use more cores, so the 
> limit in this case is one-core per guest for me.

1: kvm does work, and very well on AMD chips.   "of course amd-v+KVM
is impossible to be used" is plain wrong

2: if an old OS doesn't work under kvm, but does under qemu, then you
can fiddle with kvm's cpu emulation flags.  try "kvm -cpu ?" to see
what's available.
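for example (the CPU model below is just an illustration; pick one
from the list your own build prints):

```shell
# list the CPU models this kvm build can emulate
kvm -cpu ?

# then try booting the old guest with a simpler/older model, e.g.:
kvm -cpu pentium3 -m 512 -hda oldguest.img
```

older guests sometimes choke on feature flags they've never seen, so
working down the list toward older models is a reasonable strategy.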

-- 
Javier


Re: QEMU- 1CPU for guest while more cores used on host?

2012-07-04 Thread Javier Guerra Giraldez
1:  no. what you're asking can't be done.   if it were possible, every
chipmaker would implement it in silicon to create über-fast
single-processors on top of multicore chips.

2: why do you think "course amd-v+KVM is impossible to be used" ??  it
does work very well


-- 
Javier


Re: Credit-based scheduling in KVM?

2012-06-15 Thread Javier Guerra Giraldez
On Fri, Jun 15, 2012 at 6:39 AM, sguazt  wrote:
> With "native" I meant that I'd like to have a credit-based scheduling
> mechanism specifically targeted to VMs, without affecting the other
> processes of the host machine.

just put the VMs on their own cgroup
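with the cgroup (v1) interface of the time, a minimal sketch might
look like this; the group name, share value and pid are examples, and
everything needs root:

```shell
# create a cpu cgroup for the VMs (assumes the v1 cpu controller
# is mounted at /sys/fs/cgroup/cpu)
mkdir /sys/fs/cgroup/cpu/vms

# give the group a relative weight; 1024 is the default weight of
# other tasks, so 512 means "half as much under contention"
echo 512 > /sys/fs/cgroup/cpu/vms/cpu.shares

# move a running qemu-kvm process (pid 4242 is an example) into the group
echo 4242 > /sys/fs/cgroup/cpu/vms/tasks
```

the other host processes are untouched; only the tasks you move into
the group compete under its weight.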

-- 
Javier


Re: Credit-based scheduling in KVM?

2012-06-15 Thread Javier Guerra Giraldez
On Fri, Jun 15, 2012 at 4:08 AM, sguazt  wrote:
> I've found some suggestions here:
>
> http://forums.meulie.net/viewtopic.php?f=43&t=6436
> http://serverfault.com/questions/324265/equivalent-of-xen-capping-in-kvm
>
> but none of those are native to KVM.

what do you mean 'native to KVM'?

kvm is part of the Linux kernel, VMs are normal processes, the Linux
scheduler is the 'native kvm' scheduler.

-- 
Javier


Re: Starting / stopping VMs

2012-06-06 Thread Javier Guerra Giraldez
On Wed, Jun 6, 2012 at 9:57 AM, Michael Johns  wrote:
> a small
> database from which small amounts of information about running
> machines which indicated the presence or not of virtual machine
> instances.

pgrep works beautifully, especially if each VM was started with kvm's
-name option, so the name shows up in the process's command line
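a sketch of the idea.  in real life the process would be started as
`kvm -name web01 ...`; here a `sleep` with a distinctive argument
stands in for the VM:

```shell
# stand-in for a VM process (in real life: kvm -name web01 ... &)
sleep 1234 &
VMPID=$!

# find it by matching the full command line, just as you would
# match "-name web01" for a real guest
FOUND=$(pgrep -f "sleep 1234" | head -n 1)
echo "found pid: $FOUND"

kill "$VMPID"
```

no database needed: the process table is the authoritative record of
what's running.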

-- 
Javier


Re: linux guests and ksm performance

2012-02-23 Thread Javier Guerra Giraldez
On Thu, Feb 23, 2012 at 11:42 AM, Stefan Hajnoczi  wrote:
> The other approach is a memory page "discard" mechanism - which
> obviously requires more code changes than zeroing freed pages.
>
> The advantage is that we don't take the brute-force and CPU intensive
> approach of zeroing pages.  It would be like a fine-grained ballooning
> feature.

(disclaimer: i don't know the code, i'm just guessing)

does KVM emulate the MMU? if so, is there any 'unmap page' primitive?

-- 
Javier


Re: Infiniband / RDMA

2011-08-05 Thread Javier Guerra Giraldez
On Fri, Aug 5, 2011 at 2:14 PM, Lance Couture  wrote:
> We are looking at implementing KVM-based virtual machines in our HPC cluster.
>
> Our storage runs over Infiniband using RDMA, but we have been unable to find 
> any real documentation regarding Infiniband, let alone using RDMA.

usually it's best to manage storage in the host system, not in the
VMs.  In that case, it's just a usual Linux setup.  Once you get the
storage running in your host, KVM will take it, no matter what
technology is used underneath.

If, on the other hand, you do use Infiniband for some applications
_inside_ the VMs, then you would need to pass the PCI devices to the
VMs, a totally different issue; and one I don't know about.

-- 
Javier


Re: KVM, iSCSI and High Availability

2011-03-28 Thread Javier Guerra Giraldez
On Mon, Mar 28, 2011 at 3:31 PM, Marcin M. Jessa  wrote:
> How is OCFS2 compared to CLVM?

different layers, can't compare.

CLVM (aka cLVM) is the cluster version of LVM, the volume manager.
the addition of a userspace lock manager lets you do all volume
management (create/delete volumes, resize them, add/remove physical
devices, etc.) online on any machine and all others will see the
change.  since locks are only needed while modifying the volume
layout, there's no overhead during normal operation.

OCFS2 is a filesystem. specifically, a Cluster filesystem.  that means
that the same storage can be mounted by several machines and all of
them will see the same data consistently.  distributed locks are
needed for any modification, and cache strategies have to be complex
and tied to such locks.  scalability is good, since there's no
central node, but it is ultimately limited by lock performance.

Usually you store cluster filesystems on cluster volumes on cluster storage.

-- 
Javier


Re: recommended timer frequency for host and guest

2011-03-01 Thread Javier Guerra Giraldez
On Tue, Mar 1, 2011 at 4:44 PM, Nikola Ciprich  wrote:
> I guess that on host, the higher frequency can be better, but for guest,
> 100HZ should be better because it causes lower overhead for host, right?

or NO_HZ?

-- 
Javier


Re: Disk activity issue...

2011-01-05 Thread Javier Guerra Giraldez
On Tue, Jan 4, 2011 at 11:19 AM, Erich Weiler  wrote:
> Thanks for replying!  I was able to figure it out - it was not the fault of 
> KVM.  One of the guests was running ganglia gmetad which was updating 30,000+ 
> rrd files every 15 seconds (thus generating load via disk I/O), I didn't spot 
> that until shutting down the VMs one by one until I found the offending one.  
> It was just a needle in a haystack.  ;)

i use iotop to check which VM is hitting the disk

-- 
Javier


Re: limit conectivity of a VM

2010-11-20 Thread Javier Guerra Giraldez
On Sat, Nov 20, 2010 at 3:40 AM, Thomas Mueller  wrote:
> maybe one of the virtual network cards is 10mbit? start kvm with "-net
> nic,model=?" to get a list.

wouldn't matter.   different models emulate the hardware registers
used to transmit, not the performance.

if you had infinitely fast processors, every virtual network would be
infinitely fast.

-- 
Javier


Re: limit conectivity of a VM

2010-11-19 Thread Javier Guerra Giraldez
On Fri, Nov 19, 2010 at 2:47 PM, hadi golestani
 wrote:
> Hello,
> I need to limit the port speed of a VM to 10 mbps ( or 5 mbps if it's 
> possible).
> What's the way of doing so?

tc

check http://lartc.org/howto/lartc.qdisc.html
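the rough shape of it, using a token bucket filter on the guest's tap
interface.  the interface name and parameters are examples; see the
LARTC page above for real tuning:

```shell
# cap egress on the VM's tap interface (tap0 is an example) to 10 mbit
tc qdisc add dev tap0 root tbf rate 10mbit burst 10kb latency 70ms

# inspect, and later remove, the limit
tc qdisc show dev tap0
tc qdisc del dev tap0 root
```

note tc shapes traffic leaving the interface, so to limit both
directions you apply it on the tap device (guest egress as seen by the
host is ingress on tap, and vice versa) or use an ingress policer.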


-- 
Javier


Re: ESXi, KVM or Xen?

2010-07-02 Thread Javier Guerra Giraldez
On Fri, Jul 2, 2010 at 10:55 PM, Emmanuel Noobadmin
 wrote:
> This is a concern since I plan to put storage on the network and the
> most heavy load the client has is basically the email server due to
> the volume plus inline antivirus and anti-spam scanning to be done on
> those emails. Admittedly, they won't be seeing as much emails as say a
> webhost but most of their emails come with relatively large
> attachments.

if by 'put storage on the network' you mean using a block-level
protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
means initiate on the host OS (Dom0 in Xen) and present to the VM as
if it were local storage.  it's far faster and more stable that way.
in that case, storage wouldn't add to the VM's network load, which
may or may not make those (old) scenarios irrelevant.


> 2. Security
> Some sites point out that KVM VM runs in userspace as threads. So a
> compromised guest OS would then give intruder access to the system as
> well as other VMs.

in any case, if your base OS (host on KVM, Dom0 on Xen) is
compromised, it's game over.  also, KVM VMs are not 'userspace
threads': they're processes as far as the scheduler is concerned, and
their RAM mapping is managed as a separate process address space.  no
more and no less separation than usual among processes on a server.
of course, the guest's processes are isolated inside that VM and
there's no way out of there (unless there's a security bug, and those
are few and far between given the hardware-assisted virtualization)


in any case, yes; Xen does have more maturity in big hosting
deployments.  but most third parties are betting on KVM for the
future.  the biggest examples are Red Hat, Canonical, libvirt (which
is sponsored by Red Hat), and Eucalyptus (which reimplements amazon's
EC2 with either Xen or KVM, focusing on the latter), so the gap is
closing.

regarding performance, KVM is still somewhat behind; but the design
is cleaner and more scalable (don't believe too much of the 'type 1
vs type 2' hype; most people invoking it don't really understand the
issues).  the evolving virtio backends have a lot of promise, and
periodically there are proof-of-concept tests that blow everything
else out of the water.

and finally, even if right now the 'best' deployment on Xen
definitely outperforms KVM by a measurable margin, when things are
not optimal Xen degrades a lot more quickly than KVM, in part because
the Xen core scheduler is far from the maturity of the Linux kernel's
scheduler.

-- 
Javier


Re: Graphical virtualisation management system

2010-06-24 Thread Javier Guerra Giraldez
On Thu, Jun 24, 2010 at 1:32 PM, Freddie Cash  wrote:
>  * virt-manager which requires X and seems to be more desktop-oriented;

don't know about the others, but virt-manager runs only on the admin
station.  on the VM hosts you run only libvirtd, which doesn't need X

in fact, that's what you get when you install Ubuntu server and choose
'VM host'.

about the storage servers, i don't know if they manage them; i simply
add PVs to the VG, and libvirt handles it from there.  it doesn't
care about the type of block devices.

-- 
Javier


Re: Networking - Static NATs

2010-04-26 Thread Javier Guerra Giraldez
On Mon, Apr 26, 2010 at 7:10 AM, Anthony Davis
 wrote:
> the problem I have is that kvm currently has dhcp running and setting up
> NATs etc...
>
> I need to stop this, but still allow my current virtual machines access out,
> how would be the best way to do this?

use bridged networking
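a minimal sketch with the bridge-utils tooling of the era.  interface
names are examples, the commands need root, and your distro's network
scripts are the place to make this persistent:

```shell
# create a bridge and enslave the physical NIC
brctl addbr br0
brctl addif br0 eth0

# move the host's IP from eth0 to br0 (dhclient is one option)
ifconfig eth0 0.0.0.0
dhclient br0

# then start guests with a tap device attached to br0, e.g.:
#   kvm -net nic -net tap,ifname=tap0 ...
# (qemu's ifup helper script usually adds the tap to the bridge)
```

the guests then sit directly on the LAN and get addresses from your
real DHCP server, with no NAT in the way.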

-- 
Javier
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] [GSoC 2010] Pass-through filesystem support.

2010-04-10 Thread Javier Guerra Giraldez
On Sat, Apr 10, 2010 at 7:42 AM, Mohammed Gamal  wrote:
> On Sat, Apr 10, 2010 at 2:12 PM, Jamie Lokier  wrote:
>> To throw a spanner in, the most widely supported filesystem across
>> operating systems is probably NFS, version 2 :-)
>
> Remember that Windows usage on a VM is not some rare use case, and
> it'd be a little bit of a pain from a user's perspective to have to
> install a third party NFS client for every VM they use. Having
> something supported on the VM out of the box is a better option IMO.

i don't think virtio-CIFS has any more support out of the box (on any
system) than virtio-9P.


-- 
Javier


Re: [Qemu-devel] [GSoC 2010] Pass-through filesystem support.

2010-04-09 Thread Javier Guerra Giraldez
On Fri, Apr 9, 2010 at 5:17 PM, Mohammed Gamal  wrote:
> That's all good and well. The question now is which direction would
> the community prefer to go. Would everyone be just happy with
> virtio-9p passthrough? Would it support multiple OSs (Windows comes to
> mind here)? Or would we eventually need to patch Samba for passthrough
> filesystems?

found this:

http://code.google.com/p/ninefs/

it's a BSD-licensed 9p client for windows.  i have no idea how stable
/ complete / trustworthy it is, but it might be a start.


-- 
Javier


Re: [RFC] Unify KVM kernel-space and user-space code into a single project

2010-03-23 Thread Javier Guerra Giraldez
On Tue, Mar 23, 2010 at 2:21 PM, Joerg Roedel  wrote:
> On Tue, Mar 23, 2010 at 06:39:58PM +0200, Avi Kivity wrote:
>> So, two users can't have a guest named MyGuest each?  What about
>> namespace support?  There's a lot of work in virtualizing all kernel
>> namespaces, you're adding to that.
>
> This enumeration is a very small and non-intrusive feature. Making it
> aware of namespaces is easy too.

an outsider's comment: this path leads to a filesystem... which could
be a very nice idea.  it could have a directory for each VM, with
pseudo-files exposing all the guest's status, and even the memory
it's using.  perf could simply watch those files.   in fact, such a
filesystem could be the main userlevel/kernel interface.

but i'm sure such a layout was considered (and rejected) very early
in the KVM design.  i don't think there's anything new to make it
more desirable than it was back then.


-- 
Javier


Re: how to tweak kernel to get the best out of kvm?

2010-03-10 Thread Javier Guerra Giraldez
On Wed, Mar 10, 2010 at 8:15 AM, Avi Kivity  wrote:
> 15 guests should fit comfortably, more with ksm running if the workloads are
> similar, or if you use ballooning.

is there any simple way to get some stats to see how is ksm doing?
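one place to look, on kernels that ship ksm, is its sysfs directory;
a quick dump of the counters:

```shell
# pages_sharing / pages_shared gives a rough idea of how much
# deduplication ksm is achieving; 'run' shows whether it's enabled
grep . /sys/kernel/mm/ksm/pages_* /sys/kernel/mm/ksm/run
```

a high pages_sharing-to-pages_shared ratio means many duplicate guest
pages are being collapsed onto few physical ones.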

-- 
Javier


Re: vnc mouse trouble; -usbdevice tablet no help

2010-02-08 Thread Javier Guerra
On Mon, Feb 8, 2010 at 1:19 AM, Ross Boylan  wrote:
> Previous advice (to me and others) was to use -usbdevice tablet.  I've
> tried that, and a variety of kvm/qemu versions, but no luck.

check your guest's X11 config.  if you didn't have -usbdevice tablet
when installing, the installer would set only the PS/2 mouse, which
isn't disabled when adding the tablet.

-- 
Javier


Re: CPU change causes hanging of .NET apps

2009-10-28 Thread Javier Guerra
On Wed, Oct 28, 2009 at 4:13 PM, Erik Rull  wrote:
> Any Ideas what happens here? I also started applications that were NOT
> started with the QEMU32 CPU to prevent a caching - same problem.

just a couple guesses:

- maybe there's some JIT'ed code cached somewhere in the filesystem

- maybe the exact code generator is chosen at startup time, or even at
.NET installation time.

in short, if restarting windows doesn't fix it, i would reinstall
.NET, or maybe the whole HAL

-- 
Javier


Re: [Qemu-devel] [ANNOUNCE] Sheepdog: Distributed Storage System for KVM

2009-10-23 Thread Javier Guerra
On Fri, Oct 23, 2009 at 2:39 PM, MORITA Kazutaka
 wrote:
> Thanks for many comments.
>
> Sheepdog git trees are created.

great!

is there any client (no matter how crude) besides the patched
KVM/Qemu?  it would make it far easier to hack around...

-- 
Javier


Re: [Qemu-devel] Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM

2009-10-23 Thread Javier Guerra
On Fri, Oct 23, 2009 at 9:58 AM, Chris Webb  wrote:
> If the chunks into which the virtual drives are split are quite small (say
> the 64MB used by Hadoop), LVM may be a less appropriate choice. It doesn't
> support very large numbers of very small logical volumes very well.

absolutely.  the 'nicest' way to do it would be to use a single block
device per sheep process, and do the splitting there.

it's an extra layer of code, and once you add non-naïve behavior for
deleting and fragmentation, you quickly approach filesystem-like
complexity.

unless you can do some very clever mapping that reuses the consistent
hash algorithms to find not only which server(s) you want, but also
which chunk to hit... the kind of thing i'd love to code, but have
never found a use for.

i'll definitely dig deeper in the code.

-- 
Javier


Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM

2009-10-23 Thread Javier Guerra
On Fri, Oct 23, 2009 at 5:41 AM, MORITA Kazutaka
 wrote:
> On Fri, Oct 23, 2009 at 12:30 AM, Avi Kivity  wrote:
>> If so, is it reasonable to compare this to a cluster file system setup (like
>> GFS) with images as files on this filesystem?  The difference would be that
>> clustering is implemented in userspace in sheepdog, but in the kernel for a
>> clustering filesystem.
>
> I think that the major difference between sheepdog and cluster file
> systems such as Google File system, pNFS, etc is the interface between
> clients and a storage system.

note that GFS is "Global File System" (written by Sistina (the same
folks from LVM) and bought by RedHat).  Google Filesystem is a
different thing, and ironically the client/storage interface is a
little more like sheepdog and unlike a regular cluster filesystem.

>> How is load balancing implemented?  Can you move an image transparently
>> while a guest is running?  Will an image be moved closer to its guest?
>
> Sheepdog uses consistent hashing to decide where objects store; I/O
> load is balanced across the nodes. When a new node is added or the
> existing node is removed, the hash table changes and the data
> automatically and transparently are moved over nodes.
>
> We plan to implement a mechanism to distribute the data not randomly
> but intelligently; we could use machine load, the locations of VMs, etc.

i don't have much hands-on experience with consistent hashing, but it
sounds reasonable to make each node's ring segment proportional to
its storage capacity.  dynamic load balancing seems a tougher nut to
crack, especially while keeping all clients' mappings consistent.

>> Do you support multiple guests accessing the same image?
>
> A VM image can be attached to any VMs but one VM at a time; multiple
> running VMs cannot access to the same VM image.

this is a must-have safety measure; but a 'manual override' is quite
useful for those who know how to manage a cluster-aware filesystem
inside a VM image, much like Xen's "w!" flag allows.  just be sure to
avoid distributed caching for a shared image!

in all, a great project, and with such a clean patch into KVM/Qemu it
has high hopes of making it into regular use.

i'd just like to add my '+1 votes' on both getting rid of the JVM
dependency and using block devices (usually LVM) instead of
ext3/btrfs

-- 
Javier


Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Javier Guerra
On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivity  wrote:
> Early implementations of virtio devices did not support barrier operations,
> but did commit the data to disk.  In such cases, drain the queue to emulate
> barrier operations.

would this help in the (i think common) situation of XFS on a
virtio-enabled VM using LVM-backed storage, where LVM just drops
barriers?

-- 
Javier


Re: sharing host infiniband between VMs

2009-10-09 Thread Javier Guerra
On Fri, Oct 9, 2009 at 4:11 PM, Cameron Macdonell  wrote:
> I'm trying to set up a number of VMs on a host that uses IP over infiniband
> to its NFS server.  I've been googling, but can't find any mention of
> sharing a single IB interface between multiple VMs.  Since its IPoIB, I was
> wondering if something there is some similar to bridged ethernet that could
> work with IB?  Any pointers about using IB with KVM would be appreciated.

i haven't done this, but

- bridging is a 'layer 2' operation; usually that means Ethernet, but
in your case it's IB.  i don't think you can add an IB device to br0

- your connection to the world is IPoIB, to the VMs is IP over
(virtual) Ethernet. so join them at IP level, that is, use routing or
NAT.

- there's also Ethernet over IB, where you get a (fake) Ethernet
device running on your IB wire (at near-IB speeds, hopefully); you
could add that to the bridge

IOW: i guess you have to choose between routing IP to your IPoIB, or
use EoIB, to have 'Ethernet everywhere', including your IB wires.
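the routing/NAT option would look roughly like this on the host; the
IPoIB interface name and the guests' subnet are examples:

```shell
# let the host forward packets between the guests' bridge and IPoIB
sysctl -w net.ipv4.ip_forward=1

# NAT the guests' private subnet (192.168.122.0/24 is an example)
# out through the IPoIB interface ib0
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o ib0 -j MASQUERADE
```

the guests keep ordinary virtual Ethernet NICs; only the host needs to
know anything about IB.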



-- 
Javier


Re: INFO: task kjournal:337 blocked for more than 120 seconds

2009-10-01 Thread Javier Guerra
On Thu, Oct 1, 2009 at 4:00 PM, Shirley Ma  wrote:
> I talked to Mingming, she suggested to use different IO scheduler. The
> default scheduler is cfg, after I switch to noop, the problem is gone.

deadline is the most recommended one for virtualization hosts.  some
distros set it as default if you select Xen or KVM at installation
time.  (and noop for the guests)


-- 
Javier


Re: Binary Windows guest drivers are released

2009-09-24 Thread Javier Guerra
On Thu, Sep 24, 2009 at 3:38 PM, Kenni Lund  wrote:
> I've done some benchmarking with the drivers on Windows XP SP3 32bit,
> but it seems like using the VirtIO drivers are slower than the IDE drivers in
> (almost) all cases. Perhaps I've missed something or does the driver still
> need optimization?

very interesting!

it seems that IDE wins on all the performance numbers, but VirtIO
always has lower CPU utilization.  i guess this is guest CPU %, right?
it would also be interesting to compare the CPU usage from the host
point of view, since a lower 'off-guest' CPU usage is very important
for scaling to many guests doing I/O.

-- 
Javier


Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-09-17 Thread Javier Guerra
On Wed, Sep 16, 2009 at 10:11 PM, Gregory Haskins
 wrote:
> It is certainly not a requirement to make said
> chip somehow work with existing drivers/facilities on bare metal, per
> se.  Why should virtual systems be different?

i'd guess it's an issue of support resources.  a hardware developer
creates a chip and immediately sells it, getting small but assured
revenue; with that they write (or pay to write) drivers for a couple
of releases, and stop manufacturing the chip as soon as it's not
profitable.

software has a much longer lifetime, especially at the platform-level
(and KVM is a platform for a lot of us). also, being GPL, it's cheaper
to produce but has (much!) more limited resources.  creating a new
support issue is a scary thought.


-- 
Javier


Re: kvm scaling question

2009-09-11 Thread Javier Guerra
On Fri, Sep 11, 2009 at 10:36 AM, Bruce Rogers  wrote:
> Also, when I did a simple experiment with vcpu overcommitment, I was 
> surprised how quickly performance suffered (just bringing a Linux vm up), 
> since I would have assumed the additional vcpus would have been halted the 
> vast majority of the time.  On a 2 proc box, overcommitment to 8 vcpus in a 
> guest (I know this isn't a good usage scenario, but does provide some 
> insights) caused the boot time to increase to almost exponential levels. At 
> 16 vcpus, it took hours to just reach the gui login prompt.

I'd guess (and hope!) that having many 1- or 2-cpu guests won't kill
performance as sharply as having a single guest with more vcpus than
the physical cpus available.  have you tested that?

-- 
Javier


Re: Notes on block I/O data integrity

2009-08-25 Thread Javier Guerra
On Tue, Aug 25, 2009 at 1:11 PM, Christoph Hellwig wrote:
>  - barrier requests and cache flushes are supported by all local
>   disk filesystem in popular use (btrfs, ext3, ext4, reiserfs, XFS).
>   However unlike the other filesystems ext3 does _NOT_ enable barriers
>   and cache flush requests by default.

what about LVM? i've read somewhere that it used to just eat barriers
used by XFS, making it less safe than simple partitions.

-- 
Javier


Re: AlacrityVM numbers updated for 31-rc4

2009-08-12 Thread Javier Guerra
On Wed, Aug 12, 2009 at 3:43 PM, Gregory
Haskins wrote:
> In any case, I have updated the graphs on the AlacrityVM wiki to reflect
> these latest numbers:

pseudo-3D charts are just wrong
(http://www.chuckchakrapani.com/articles/PDF/94070347.pdf)

-- 
Javier


Re: kvm-87 fails to compile under uClibc

2009-07-07 Thread Javier Guerra
On Tue, Jul 7, 2009 at 6:31 AM, Cristi
Magherusan wrote:
> The kernel will be 2.6.24 because it's smaller. I know this mismatch may
> not be good, but I have to get to a compromise. The kernel needs to be
> as small as possible (everything should fit in a 4MB BIOS flash)

i know it's a big change; but have you tried the new kernel
compressions available in 2.6.30? they do make a difference.

-- 
Javier


Re: kvm, drbd, elevator, rotational - quite an interesting co-operation

2009-07-06 Thread Javier Guerra
On Mon, Jul 6, 2009 at 8:28 AM, Michael Tokarev wrote:
> Javier Guerra wrote:
>> it also bothers me because when i have a couple of moderately
>> disk-heavy VMs, the load average numbers skyrocket.  that's because
>> each blocked thread counts as 1 on this figure, even if they're all
>> waiting on the same device.
>
> And how having large LA is bad?  I mean, LA by itself is not an
> indicator of bad or good performance, don't you think?


it's not a real problem, of course; but it's a nuisance because some
reporting tools (zabbix/nessus) use this figure to raise alarms,
meaning i have to adjust it.

also, even a single-threaded high-IO process on a guest fires a lot of
IO threads on the host, and other not-so-aggressive VMs suffer.

definitely using deadline scheduler on the host reduces the impact.
(down to zero? i don't think so, but it's certainly manageable)


>> on my own (quick) tests, changing the elevator on the guest has very
>> little effect on performance; but does affect the host CPU
>> utilization. using drbd on the guest while testing with bonnie++
>> increased host CPU by around 20% for each VM
>
> Increased compared what with what?  Also, which virtual disk format
> did you use?

sorry, i had a typo there, i meant: using cfq vs. noop on the guest
(running bonnie++, no drbd anywhere) produced around 20% more CPU load
on the host, with no measurable performance advantage.


-- 
Javier


Re: [DRBD-user] kvm, drbd, elevator, rotational - quite an interesting co-operation

2009-07-03 Thread Javier Guerra
On Fri, Jul 3, 2009 at 9:00 AM, Lars Ellenberg wrote:
> the elevator of the lower level block device (in this case,
> the kvm virtual block device, or the host real block device)

so, the original post (Michael) was running drbd on the KVM guests??

i thought the only sensible setup was using drbd on the hosts, to make
VMs live-migratable.

-- 
Javier


Re: [DRBD-user] kvm, drbd, elevator, rotational - quite an interesting co-operation

2009-07-03 Thread Javier Guerra
Lars Ellenberg wrote:
> On Thu, Jul 02, 2009 at 11:55:05PM +0400, Michael Tokarev wrote:
> > drbd: what's the difference in write pattern on secondary and
> >   primary nodes?  Why `rotational' flag makes very big difference
> >   on secondary and no difference whatsoever on primary?
> 
> not much.
> disk IO on Primary is usually submitted in the context of the
> submitter (vm subsystem, filesystem or the process itself)
> whereas on Secondary, IO is naturally submitted just by the
> DRBD receiver and worker threads.

just like with KVM itself, using several worker threads against a single IO 
device makes performance heavily dependent on a sensible elevator algorithm.  
ideally, there should be only one worker thread for each thread/process 
originating the initial write.  unfortunately DRBD, being a block-level 
protocol, might have a hard time unraveling which writes belong to which 
process.  maybe just merging adjacent (in block address space, not in time) 
write operations would keep most of the relationships.

-- 
Javier


Re: kvm, drbd, elevator, rotational - quite an interesting co-operation

2009-07-02 Thread Javier Guerra
On Thu, Jul 2, 2009 at 2:55 PM, Michael Tokarev wrote:
> kvm: i/o threads - should there be a way to control the amount of
>  threads?  With default workload generated by drbd on secondary
>  node having less thread makes more sense.

+1 on this.   it seems reasonable to have one thread per device, or am
i wrong?

it also bothers me because when i have a couple of moderately
disk-heavy VMs, the load average numbers skyrocket.  that's because
each blocked thread counts as 1 on this figure, even if they're all
waiting on the same device.

> kvm: it has been said that using noop elevator on guest makes sense
>  since host does its own elevator/reordering.  But this example
>  shows "nicely" that this isn't always the case.  I wonder how
>  "general" this example is.  Will try to measure further.

on my own (quick) tests, changing the elevator on the guest has very
little effect on performance; but does affect the host CPU
utilization. using drbd on the guest while testing with bonnie++
increased host CPU by around 20% for each VM


-- 
Javier


Re: Configuration vs. compat hints

2009-06-15 Thread Javier Guerra
On Mon, Jun 15, 2009 at 6:43 AM, Avi Kivity wrote:
> (I'd be quite happy constructing the entire machine config on the command
> line, but I realize it's just me)

as a user-only (well, i'm a developer, but don't meddle in kernel
affairs since 0.99pl9); I also like that kvm is totally CLI-managed.

but migration-wise, i think it could be nicer if the 'origin' process
could send the config to the 'target' one.  IOW: the -incoming flag
shouldn't need any other parameter, and the 'migrate' command should
send the whole hardware description before the CPU state, and fail
with a 'can't comply' message if the target complains.

of course, that's a simplification.  for example, the 'target' process
should be able to respect some parameters, mostly the 'external'
descriptions, like storage pathnames, or '-net tap' ones.

-- 
Javier


Re: Windows Server 2008 VM performance

2009-06-02 Thread Javier Guerra
On Tue, Jun 2, 2009 at 4:38 PM, Avi Kivity  wrote:
> Andrew Theurer wrote:
>> P.S. Here is the qemu cmd line for the windows VMs:
>> /usr/local/bin/qemu-system-x86_64 -name newcastle-xdbt01 -hda
>> /dev/disk/by-id/scsi-3600a0b8f1eb1074f4a02b08a
>
> Use: -drive /dev/very-long-name,cache=none instead of -hda to disable the
> host cache.  Won't make much of a differnence for 5 MB/s though.

would emulating SCSI instead of IDE help while we hope and wait for
virtio_block drivers? (and better virtio_net).

-- 
Javier


Re: how to manage KVM guests with libvirt ?

2009-06-01 Thread Javier Guerra
On Mon, Jun 1, 2009 at 11:02 AM, Riccardo Veraldi
 wrote:
> thank you very much.
>
> How do I know all the XML tag options ??
>
> how to convert from comand line quemu options into XML tags ?
>
> and where to put the XML file?

you'll have to play around a little with a test machine before you get
the hang of it.  the xml options are documented on the libvirt site.
put them in /etc/libvirt/qemu/blahblahblah.xml, and the libvirtd
daemon will pick them up at startup.
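a slightly safer variant of the same idea is to let virsh register the file, so libvirtd validates it and owns the copy; the path and domain name here are made up:

```shell
virsh define /root/guest.xml   # validates and registers the domain
virsh start guest              # boots it
virsh dumpxml guest            # prints back the normalized XML
```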

> about the console I used to start qemu-kvm under SCREEN program.
> is ther another better way to have serial console ?

for that i don't have any good answer.  it seems libvirt ties the qemu
monitor with a pipe to its own process, so you don't have manual
control anymore.

i'd much prefer if it used a unix socket, so you could open it with
socat or similar tools.
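for comparison, when launching qemu-kvm by hand the unix-socket monitor looks like this (socket path and image name are made up):

```shell
# expose the qemu monitor on a unix socket instead of a pipe/pty;
# 'server,nowait' lets qemu start without blocking for a client
qemu-kvm -m 512 -hda guest.img \
    -monitor unix:/tmp/guest-mon.sock,server,nowait

# later, attach and detach at will from any shell:
socat - UNIX-CONNECT:/tmp/guest-mon.sock
```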

-- 
Javier


Re: how to manage KVM guests with libvirt ?

2009-06-01 Thread Javier Guerra
On Mon, Jun 1, 2009 at 9:41 AM, Riccardo Veraldi
 wrote:
> Hello,
> I have always created my guests by hand with qemu-kvm syntax.
> Is there a way to control and manage KVM guests with libvirt without being
> forced to create the guest with virtmanager or with virtsh ?

i'm doing this, after seeing the management tools available with
libvirt.  the easiest way is to write an XML that describes what you
already know how to do with command line.  there are still a couple of
missing options (most notably cache=none, and getting to the command
console); but you should be able to get it to work.

virt-install is a nice hack for installing well-behaved linux distros;
but for windows (where you have to pick exactly which features to
expose at different steps of the install), it's easier to do on the
command line and write the needed XML after that.

one tip, if you choose to allow libvirt to manage an LVM storage pool,
it's better not to create/destroy LVs manually.  use virsh for that or
just don't tell libvirt about your LVM.  i had a couple of host
crashes when the libvirt view of the storage got inconsistent with
reality.


-- 
Javier


Re: qemu sles 11 guest failed to boot up after the guest install

2009-05-20 Thread Javier Guerra
On Wed, May 20, 2009 at 5:42 AM, Pradeep K Surisetty
 wrote:
> 3.start guest install
> qemu-kvm -cdrom SLES-11-DVD-x86_64-GM-DVD1.iso -drive
> file=sles11.raw,if=scsi,cache=off -m 512 -smp 2
>
> 4. After the guest install boot up from the image.
> qemu-kvm -boot c sles11.raw

any reason why you install on SCSI but then boot on IDE?


-- 
Javier


Re: Unicode Error

2009-05-14 Thread Javier Guerra
On Thu, May 14, 2009 at 9:16 AM, Gilberto Nunes
 wrote:
> Hi all
>
> I'm newbie on list.
> I have deploy a system here, with a Ubuntu Server running KVM.
> Well, when I run virt-clone command, I get this error:
>
> CMD: virt-clone -o vm01 -n VMUbuntu-2 -f /virt/ubuntu-2.img
>
> RESULT:
> Traceback (most recent call last):
>  File "/usr/lib/python2.6/logging/__init__.py", line 773, in emit
>    stream.write(fs % msg.encode("UTF-8"))
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 14:
> ordinal not in range(128)
>
> I don't know if this is a issue of Ubuntu, libvirt (!)...
>
> Someone can point a way to fix this issue...

it seems that at some point virt-clone (which, like the rest of the
virtinst tools, is written in Python) transcodes some info between
ASCII and UTF-8.  some of that info isn't valid 7-bit ASCII, probably
some name.  it's safer to use only valid ASCII strings, both in names
and paths.

of course, it should be reported as a bug to the libvirt people
(http://libvirt.org/bugs.html)
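the failure mode is easy to reproduce outside libvirt; for instance iconv rejects the same kind of byte sequence when told the input is plain ASCII (the VM name here is made up):

```shell
# 0xc3 (octal \303) opens a two-byte UTF-8 sequence; as 7-bit ASCII it is
# simply an illegal byte, so iconv aborts with a non-zero exit status
printf 'VMUbuntu-caf\303\251' | iconv -f ASCII -t UTF-8 \
    || echo "iconv rejected the name"
```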

-- 
Javier


Re: Paravirtualisation or not?

2009-05-05 Thread Javier Guerra
Pantelis Koukousoulas wrote:
> On Tue, May 5, 2009 at 11:37 AM, Stefan Hajnoczi  wrote:
> Sure, closed-source virtio-net drivers exist (in fact there is a newer
> version than the one
> you linked. I think it is 12/2008 distributed as an iso). The point
> (and the advantage
> of Xen in this area) is that Xen provides the source too under GPL.

XenSource's drivers are not only closed source, but tied to their distribution of 
Xen.  open-source Xen doesn't ship any windows drivers.

you must be talking about the GPLPV drivers, which are the work of an 
independent third party: James Harper.  I haven't tried them; reportedly their 
performance is quite good, but they're far from trouble-free.

> It is harder for a third party to do this job because you would have to make 
> the
> decision to either use the Windows DDK and samples (which means you can't
> release under GPL and thus you can't reuse or even look at the current virtio
> implementations) or use GPL and the current linux virtio code as a base but
> in this case you can forget DDK and the samples (at least that is my
> understanding).

that's what had to happen (very recently) for Xen, since neither XenSource nor 
Novell (which also has a set of closed source, distro-tied drivers) wanted to 
open theirs.

-- 
Javier


Re: Paravirtualisation or not?

2009-05-04 Thread Javier Guerra
On Mon, May 4, 2009 at 9:49 AM, howard chen  wrote:
> On Mon, May 4, 2009 at 10:44 PM, Pasi Kärkkäinen  wrote:
>> On Mon, May 04, 2009 at 10:40:00PM +0800, howard chen wrote:
>> Yes, paravirtualization is good. If running KVM, use paravirtualized network
>> and disk/block drivers for better performance.
>
> So does it mean generally Xen is more optimized than KVM for speed?


no

Xen started as paravirtualization-only, and later got full
virtualization capabilities, mainly to run windows guests.

KVM is full-virtualization-only.  if things stopped there, then yes,
Xen would be much faster than kvm.

but in almost all cases, the biggest bottleneck (by far) isn't the CPU,
it's I/O. adding paravirtualization drivers to a fully virtualized
guest brings it roughly to the same speed level as a PV guest.  that
makes kvm comparable to Xen in most workloads.

there some real advantages of kvm:

- fewer context switches needed to make a block or packet go from guest
to hardware and vice versa

- paravirtualized drivers widely available both for Linux and Windows
(Xen's drivers on windows can be hard and/or expensive to get)

- tight work with the qemu/kernel guys makes for big advances in throughput.
 i recall that virtio-net can go near 2Gbit with little tuning, almost
twice the best Xen numbers.

of course, there are also several hard, real advantages of Xen:

- the hypervisor's scheduler is more appropriate for dataserver
managers that sell VMs

- wider recognition from supporting companies (changing quickly)

several more for each side that i don't remember right now, i'm sure

-- 
Javier


Re: [Qemu-devel] Re: [PATCH] virtio-blk: add SGI_IO passthru support

2009-04-30 Thread Javier Guerra
On Thu, Apr 30, 2009 at 4:49 PM, Paul Brook  wrote:
> One of the nice things about scsi is it separates the command set from the
> transport layer. cf. USB mass-storage, SAS, SBP2(firewire), and probably
> several others I've forgotten.

ATAPI, SCSIoFC, SCSIoIB


-- 
Javier


Re: one question about virualization and kvm

2009-04-01 Thread Javier Guerra
On Wed, Apr 1, 2009 at 7:27 AM, Vasiliy Tolstov  wrote:
> Hello!
> I have two containers with os linux. All files in /usr and /bin are
> identical.
> Is that possible to mount/bind /usr and /bin to containers? (not copy
> all files to containers).. ?

the problem (and solution) is exactly the same as if they weren't
virtual machines, but real machines: use the network.

simply share the directories with NFS and mount them in your initrd
scripts (preferably read/only).
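a minimal sketch of that NFS route, with made-up hostnames, paths and networks:

```shell
# on the NFS server: export the shared trees read-only.
# append to /etc/exports (192.168.0.0/24 stands for the guest network):
#   /srv/shared/usr  192.168.0.0/24(ro,no_subtree_check)
#   /srv/shared/bin  192.168.0.0/24(ro,no_subtree_check)
exportfs -ra                      # re-read /etc/exports

# in each guest (e.g. from the initrd scripts):
mount -t nfs -o ro nfs-server:/srv/shared/usr /usr
mount -t nfs -o ro nfs-server:/srv/shared/bin /bin
```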

another way would be to set up a new image file with a copy of the
directories, and mount it on both virtual machines.  of course, now
you MUST mount it as read-only, and you can't change anything there
without unmounting it from both VMs.

usually it's not worth it, unless you have tens of identical VMs

-- 
Javier


Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery

2009-03-31 Thread Javier Guerra
On Tue, Mar 31, 2009 at 5:52 PM, Gerry Reno  wrote:
>
> Ok, I've been working with this for a couple hours but this command line
> errors on F10 like this:
>
> # /usr/bin/qemu-kvm -S -M pc -m 512 -smp 2 -name MX_3 -monitor pty -boot d
> -drive file=/var/vm/vm1/qemu/images/MX_3/MX_3.img,if=ide,index=0,boot=on
> -cdrom /dev/sr0 -net nic,macaddr=00:0c:29:e3:bc:ee,vlan=0 -net
> tap,fd=17,script=,vlan=0,ifname=vnet1 -serial none -parallel none -usb -vnc
> 127.0.0.1:1 -k en-us
> TUNGETIFF ioctl() failed: Bad file descriptor
> char device redirected to /dev/pts/2
>
> What does this error mean?

ah, i didn't notice the "fd=17" in the "-net" parameter.

to make it simple, first try with userspace network:

qemu-kvm -m 512 -hda /var/vm/vm1/qemu/images/MX_3/MX_3 -cdrom /dev/sr0
-boot d -vnc 127.0.0.1:0


-- 
Javier


Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery

2009-03-31 Thread Javier Guerra
On Tue, Mar 31, 2009 at 2:13 PM, Gerry Reno  wrote:
> Javier Guerra wrote:
>>
>> your underlying problem is that you can't get libvirt to generate the
>> appropriate command line.  you really should take it to the libvirt
>> list
>>
>>
>
> Ok, can you give me a command line that will work and then I'll take that
> over to libvirt.

try this:

/usr/bin/qemu-kvm -S -M pc -m 512 -smp 2 -name MX_3 -monitor pty -boot
d -drive file=/var/vm/vm1/qemu/images/MX_3/MX_3.img,if=ide,index=0,boot=on
-cdrom /dev/sr0 -net nic,macaddr=00:0c:29:e3:bc:ee,vlan=0 -net
tap,fd=17,script=,vlan=0,ifname=vnet1 -serial none -parallel none -usb
-vnc 127.0.0.1:1 -k en-us




-- 
Javier


Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery

2009-03-31 Thread Javier Guerra
On Tue, Mar 31, 2009 at 1:53 PM, Gerry Reno  wrote:
> Boot Failure Code:  0003
> Boot from CDROM failed:  cannot read the boot disk.
> FATAL: No bootable device.

your underlying problem is that you can't get libvirt to generate the
appropriate command line.  you really should take it to the libvirt
list

-- 
Javier


Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery

2009-03-31 Thread Javier Guerra
On Tue, Mar 31, 2009 at 12:01 PM, Gerry Reno  wrote:
> Charles Duffy wrote:
> I put the xml stanza in the file and undefine/define domain but it gives an
> error about cannot read image file.
> 
> And I check this path and I can read all the files from the command line on
> the DVD just fine.
> What could be the problem?

don't point it at a mounted directory; use either an ISO image or the cdrom device file


-- 
Javier


Re: Live memory allocation?

2009-03-30 Thread Javier Guerra
On Mon, Mar 30, 2009 at 10:15 AM, Tomasz Chmielewski  wrote:
> Still, if there is free memory on host, why not use it for cache?

because that memory is best used on the guest, which will cache it anyway.
by not caching already-cached data, the host is free to cache other, more
important things, or to keep more of the VMs' memory in RAM.



-- 
Javier


Re: which -cpu to use

2009-02-26 Thread Javier Guerra
On Thu, Feb 26, 2009 at 7:57 AM, Piavlo  wrote:
>  What is still unclear to me is that's the actual difference between PV
> drivers implementation in paravirtual
> linux guest and PV dirvers in HVM linux guest? AFAIK in xen guest the PV
> front-end drivers are quite simple, and in KVM guest
> to use PV drivers the guest linux needs to be compiled with paravirtual
> guest and viritio drivers support. So in both cases the OS is modified -
> but there is the actual difference?

first of all, using paravirtual drivers isn't considered a 'core' OS
modification.  usually they're compiled into the Linux kernel, but they can
be loaded as modules; and in the Windows case, they're always loaded as
modules.  a PV guest, OTOH, is a very (very) inner-core modification
(but a conceptually simple one; the requirements used to be documented
in a single HTML page for early Xen systems)

now, in KVM (and possibly HVM Xen) the virtio drivers are presented
via a special PCI device, while on fully PV guests there are some
function calls that can be used to communicate with the hypervisor.
so, there might be some very low-level differences, but above that,
they should be the same.  as for performance, i really don't think it
makes any perceptible difference.


-- 
Javier


Re: which -cpu to use

2009-02-26 Thread Javier Guerra Giraldez
Piavlo wrote:
> Alexander Graf wrote:
> > virtio drivers have nothing to do with CPU.
>
>  Yes I mistakenly used the term "viritio drivers" instead of
> "paravirtual guest support". So what I wanted to ask is if I build a
> guest kernel with paravitual support
> will it make the native hardware cpu features available inside the guest
> like in Xen kernel? Or  paravirtual support is for device drivers only
> and has no impact on CPU handling like in Xen?
> I thought that KVM (as Xen) is a bare metal hypervisor with regards to
> giving access native access to the  CPU which have  svm or vmx support,
> and not just CPU emulation.
> I'm confused here - can someone shed some light on my ignorance on the
> matter?

you're mixing several buzzwords:

- paravirtual/fully virtualized guest: this refers to how the guest runs in 
terms of the lowest level of hardware, mainly about accessing CPU's ring0, the 
most privileged mode of execution.  paravirtualized guests can't access ring0, 
so can't do some very low level CPU operations.  for this, the guest is 
modified to use some hypervisor calls instead of CPU operations.  since the 
guest OS has to be modified, usually the VM system doesn't bother fully 
emulating IO hardware either.  Xen works in either mode, KVM only works in 
'fully virtualized' mode, using HVM capabilities.

- paravirtualized drivers: usually paravirtualized guests (which KVM doesn't 
do) also have some special IO channels, which are faster than emulating every 
detail of an existing piece of hardware.  but even if you're fully emulating a 
CPU, there's nothing preventing you from creating special device drivers that 
know how to access the Hypervisor communication channels to do any IO needed.  
the advantage is that these drivers are easy to integrate into an
otherwise-unmodified OS.  that makes it possible to use PV drivers on windows guests, 
gaining most (if not all) performance advantages of PV guests without access 
to the guest OS's source code.

- virtio: is the IO interface presented to the guest for communicating with 
the emulator.  at least KVM uses it, but i think Xen's interface is similar.  
there are several virtio clients in current Linux kernels, so if you select 
virtio network, and block device when launching KVM, a Linux guest gets a big 
speedup.  also available if you install the PV drivers on windows guests (for 
network, block drivers aren't available yet).

- CPU type: this is only how the CPU identifies itself to the guest, and what 
capabilities it advertises.  AFAIK, it doesn't mean any software emulation (à 
la qemu), or maybe only a few non-performance-sensitive features.  it's useful mainly 
to facilitate guest migration between different hosts.  if the guest OS sees 
the same CPU as the host, it might see that CPU change after a migration; and 
since all modern OSs check the CPU type at bootup to activate different 
optimised code, changing it while running would make it crash.  advertising only the features common to 
all hosts lets it stay constant no matter how you move the guest around.  
originally Xen supported migration only between identical hosts, but there are 
some special features to allow this in some cases.  i don't know how complete 
they are currently.
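in qemu/kvm terms the knob for this is the -cpu flag; a hedged sketch, with the rest of the command line made up:

```shell
# a baseline model keeps the advertised feature set constant across
# mixed hosts, which is what makes live migration safe:
qemu-kvm -cpu qemu64 -m 512 -hda guest.img

# '-cpu host' exposes everything the host CPU has -- fastest, but it
# effectively restricts migration to identical hosts:
# qemu-kvm -cpu host -m 512 -hda guest.img
```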

hope that clears the waters a bit.

-- 
Javier


Re: making snapshots with raw devices? and some general snapshot thoughts

2009-02-25 Thread Javier Guerra
On Wed, Feb 25, 2009 at 9:31 AM, Tomasz Chmielewski  wrote:
>   "The VM state info is stored in the first qcow2 non removable and
>    writable block device. The disk image snapshots are stored in every
>    disk image."
>
> Or, am I making a mistake here?

ah, i misremembered that.


-- 
Javier


Re: making snapshots with raw devices? and some general snapshot thoughts

2009-02-25 Thread Javier Guerra
On Wed, Feb 25, 2009 at 8:20 AM, Tomasz Chmielewski  wrote:
> Is it possible to make snapshots when using raw devices (i.e. disk,
> partition, LVM volume) as guest's disk image?
>
> According to documentation[1] (and some tests I made) it is only possible
> with qcow2 images. Which makes it very inflexible:
>
> - one is forced to use a potentially slower file access
> - one can't use the benefits of i.e. iSCSI disk access, SAN etc.

what about using 'good' block devices, and adding one small, mostly empty
qcow2? could it be used to store the snapshot for all?  of course it
would degrade performance while it's active, but that should revert after
'committing' it to the 'real' block device(s)
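qemu-img can express exactly that overlay idea; a sketch with made-up names (note the base must stay untouched while the overlay is active):

```shell
# a small qcow2 overlay on top of the 'real' block device: all writes
# land in the overlay, the base stays pristine
qemu-img create -f qcow2 -b /dev/vg0/guest-disk overlay.qcow2

# run the guest against overlay.qcow2, then either delete it (revert)
# or fold the accumulated writes back into the base:
qemu-img commit overlay.qcow2
```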



-- 
Javier


Re: vga vmware option

2009-01-28 Thread Javier Guerra
On Wed, Jan 28, 2009 at 5:47 PM, Christian Roessner
 wrote:
> Hello,
>
> excuse me for this little question. I found the
>
> -vga vmware
>
> option. I have tried any tricks I new to get the WinXP driver installed,

hi,

i tried that too, but then found that somewhere it says that it's not
for this.  it seems that the vmware display specification isn't
documented, but they provide a driver for xorg, so it was possible to
reverse-engineer it.  this option is supposed to be useful for linux
and unix-like OSs running xorg.  the vmware driver for windows isn't
supported.


-- 
Javier


Re: how to assign partition as partition for KVM?

2009-01-15 Thread Javier Guerra
On Thu, Jan 15, 2009 at 3:05 AM, paolo pedaletti
 wrote:
> Ciao Толя,
>
>> Is there any way to assign single partition to KVM virtual machine (for 
>> example, i need to assign /dev/sda1 on my host as /dev/hda1 on VM)
>> In XEN this assignment looks like disk=[ 'phy:/dev/lvm_dg-vol1,xvda1,w',].
>
> with LVM, you can do that:
>
> kvm . -drive file=/dev/mapper/vm-root,if=scsi,index=0,boot=on ..
>
> I think it should work even without LVM, for example:
> kvm . -drive file=/dev/sda1,if=scsi,index=0,boot=on ..

the difference is that in KVM, the guest sees a whole disk, not a partition.

no surprise, since Xen's HVM guests have the same limitation; only
paravirtualized guests can be assigned individual partitions.
remember that KVM is a 'fully virtualized' system, similar to HVM in
Xen.


-- 
Javier


Re: How to do an automated backup?

2008-11-03 Thread Javier Guerra
On Mon, Nov 3, 2008 at 2:32 PM, Steve Lorimer <[EMAIL PROTECTED]> wrote:
> Ok, should be a simple question here:  How to backup a KVM host image.

this is far from a simple question.

it appears periodically on the Xen lists.  with few exceptions, the
best advice is to use traditional network backup software "from
inside" the guests.

-- 
Javier


Re: TAP MTU >= 4055 problem?

2008-10-30 Thread Javier Guerra
On Thu, Oct 30, 2008 at 4:41 AM, Matthew Faulkner
<[EMAIL PROTECTED]> wrote:
> I got no response. So i started with a lower packet size and figured out
> that below a size of 4054 packets were sent and received (without ip
> fragmentation); however, as soon as the packets were >= 4055 it
> stopped working.
>
> Is this a known problem? Have I set something up incorrectly?

are you using a bridge?  it has some problems with long packets.

if you can, try without a bridge.  if not, make sure you set the
MTU of both ethX and tapX to 9000 before adding anything to the
bridge.  i think the bridge itself has its own MTU, set according
to the interfaces added to it, but i don't know whether it adjusts
when an interface's MTU changes.
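a sketch of that ordering (the interface names eth0/tap0 and the bridge name br0 are examples; adjust for your host):

```shell
# raise the MTU on both ports *before* enslaving them; an old-style
# bridge adopts the smallest MTU among its ports when they are added
ip link set eth0 mtu 9000
ip link set tap0 mtu 9000

brctl addbr br0
brctl addif br0 eth0
brctl addif br0 tap0

ip link show br0    # the bridge should now report mtu 9000 too
```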


-- 
Javier


Re: Start KVM guest headless

2008-10-10 Thread Javier Guerra
On Fri, Oct 10, 2008 at 10:24 AM, Simon Gao <[EMAIL PROTECTED]> wrote:
> I am wondering to how to start a KVM guest OS headless, ie, no
> monitoring window? And I can attach to the guest's console if I want to.

-vnc 


-- 
Javier


Re: [PATCH 0/6 v3] PCI: Linux kernel SR-IOV support

2008-09-27 Thread Javier Guerra Giraldez
On Saturday 27 September 2008, Zhao, Yu wrote:
> Greetings,
>
> Following patches are intended to support SR-IOV capability in the Linux
> kernel. With these patches, people can turn a PCI device with the
> capability into multiple ones from software perspective, which can benefit
> KVM and achieve other purposes such as QoS, security, etc.

sounds great.  i think some Infiniband HBAs have this capability; it was
even suggested for Xen, for faster (no hypervisor intervention)
communication between DomUs on the same box (and transparently to hosts
outside the box, of course).

does it need an IOMMU (VT-d), or is the whole magic done by the PCI device?

-- 
Javier




Re: KVM for Sparc?

2008-09-22 Thread Javier Guerra
On Mon, Sep 22, 2008 at 3:18 PM, Hollis Blanchard <[EMAIL PROTECTED]> wrote:
> It would be even more interesting to implement host support on the Sparc
> processors with hardware virtualization support.

Do Sparc processors also have 'virtualization support' as an extra
feature? i thought that non-virtualizability was an
intel-specific limitation.  (come on... creating a privileged mode and
not trapping violations? who else would design it like that?)

-- 
Javier


Re: Avoiding I/O bottlenecks between VM's

2008-09-19 Thread Javier Guerra
On Fri, Sep 19, 2008 at 1:53 PM, Alberto Treviño <[EMAIL PROTECTED]> wrote:
> On Friday 19 September 2008 12:41:46 pm you wrote:
>> Are you using filesystem backed storage for the guest images or direct
>> block device storage? I assume there's heavy write activity on the
>> guests when these hangs happen?
>
> Yes, they happen when one VM is doing heavy writes.  I'm actually using a
> whole stack of things:
>
> OCFS2 on DRBD (Primary-Primary) on LVM Volume (continuous) on LUKS-encrypted
> partition.  Fun debugging that, heh?

a not-so-wild guess might be the inter-node locking needed by any
cluster FS.  you'd do much better using just CLVM or EVMS-HA.

if it's a single box, it would be interesting to compare with ext3.

> So, any ideas on how to solve the bottleneck?  Isn't the CFQ scheduler
> supposed to grant every processes the same amount of I/O?  Is there a way to
> change something in proc to avoid this situation?

i don't think CFQ can do much to alleviate the heavy lock dependency
of a cluster FS.

-- 
Javier

Re: Event channels in KVM?

2008-09-19 Thread Javier Guerra
On Fri, Sep 19, 2008 at 1:31 PM, Anthony Liguori <[EMAIL PROTECTED]> wrote:
> Matt Anger wrote:
>>
>> Does KVM have any interface similar to event-channels like Xen does?
>> Basically a way to send notifications between the host and guest.
>>
>
> virtio is the abstraction we use.
>
> But virtio is based on the standard hardware interfaces of the PC--PIO,
> MMIO, and interrupts.

this is rather low-level; it would be nice to have a multiplatform
interface to this abstraction.

just for kicks, i've found and printed Rusty's paper about it. hope
it's current :-)

-- 
Javier


Re: nfs, tap & vlan issues

2008-09-18 Thread Javier Guerra
On Thu, Sep 18, 2008 at 11:47 AM, Fabio Coatti <[EMAIL PROTECTED]> wrote:
> The network on guest machine is set up like this:
> 1: lo:  mtu 16436 qdisc noqueue
>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
>link/ether de:ad:be:ef:15:05 brd ff:ff:ff:ff:ff:ff
> 3: vlan3@eth0:  mtu 1500 qdisc noqueue
>link/ether de:ad:be:ef:15:05 brd ff:ff:ff:ff:ff:ff
>inet 192.168.0.5/24 brd 192.168.61.255 scope global vlan3
> 4: vlan4@eth0:  mtu 1500 qdisc noqueue
>link/ether de:ad:be:ef:15:05 brd ff:ff:ff:ff:ff:ff
>inet 10.0.0.33/24 brd 10.0.0.255 scope global vlan4


there's your problem:

your vlan interfaces (vlan3@eth0, vlan4@eth0) have an MTU of 1500.  to
encapsulate that in eth0, the kernel has to add 4 bytes of tagging, so
eth0 should have an MTU of 1504.  the bridge and eth1 on Dom0 must also
have MTUs of 1504.

i don't know if the bridge can support 1504; if not, you would have to
set eth0 at 1500 and the vlan interfaces at 1496
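the two alternatives, sketched (vlan interface names taken from the listing above):

```shell
# option 1: make room for the 4-byte 802.1q tag on the parent interface
ip link set eth0 mtu 1504
ip link set vlan3 mtu 1500
ip link set vlan4 mtu 1500

# option 2: if the bridge refuses 1504, shrink the vlan MTUs instead
ip link set eth0 mtu 1500
ip link set vlan3 mtu 1496
ip link set vlan4 mtu 1496
```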


-- 
Javier


Re: How to use current KVM with non-modular kernel

2008-09-03 Thread Javier Guerra
On Wed, Sep 3, 2008 at 4:27 PM, Thomas Lockney <[EMAIL PROTECTED]> wrote:
> On Wed, 2008-09-03 at 12:39 -0500, Charles Duffy wrote:
>> Would it not address your security concerns to build a modular kernel,
>> load the current kvm module, and then drop CAP_SYS_MODULE as part of
>> your boot scripts?
>
> Seems that this could be less than ideal if you're providing the VMs as
> hosts for clients (perhaps in a VPS-type situation).

the module loading and capability dropping would be done at host boot,
not in the guests



-- 
Javier


Re: [Qemu-devel] Re: [PATCH 2/5] husb: support for USB host device auto connect.

2008-08-18 Thread Javier Guerra
On Mon, Aug 18, 2008 at 1:21 PM, Max Krasnyansky <[EMAIL PROTECTED]> wrote:
> Javier Guerra wrote:
>>
>> what about doing it the other way around?  that is, setting udev
>> scripts that notify KVM of the hardware changes.
>
> That seems a bit odd. What if you have more than one QEMU instance and
> stuff.

there has to be some policy in place to redirect USB devices to each
QEMU instance.  maybe at startup each instance could reserve a node in
the USB device tree (a USB controller, or maybe a port in a hub).  when
invoked by udev, some script would consult those reservations and pick
the appropriate QEMU instance.

yep, it's not trivial, but it seems doable with scripts (and a small DB,
or maybe a clever filesystem arrangement).
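one way the udev side could be sketched — every name here (the rule, the dispatcher path, the reservation directory) is hypothetical, not an existing tool:

```shell
#!/bin/sh
# /usr/local/sbin/kvm-usb-attach -- hypothetical dispatcher, invoked by a
# udev rule such as:
#   ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
#     RUN+="/usr/local/sbin/kvm-usb-attach $attr{busnum} $attr{devnum}"
#
# the reservation "DB" is just a directory: /etc/kvm/usb-map/<busnum>
# holds the monitor-socket path of the QEMU instance that claimed that bus
BUS=$1 DEV=$2
MAP=/etc/kvm/usb-map/$BUS
[ -r "$MAP" ] || exit 0                    # nobody reserved this bus
SOCK=$(cat "$MAP")
echo "usb_add host:$BUS.$DEV" | socat - "UNIX-CONNECT:$SOCK"
```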

-- 
Javier


Re: [Qemu-devel] Re: [PATCH 2/5] husb: support for USB host device auto connect.

2008-08-15 Thread Javier Guerra
On Fri, Aug 15, 2008 at 1:24 PM, Max Krasnyansky <[EMAIL PROTECTED]> wrote:

> btw Interface to HAL might still be useful in general to monitor other device
> classes that we may want to automatically assign to the VMs. So I'll play
> around with that too (some day :)).

what about doing it the other way around?  that is, setting udev
scripts that notify KVM of the hardware changes.


--
Javier


Re: KVM backup and snapshots

2008-08-07 Thread Javier Guerra
On Thu, Aug 7, 2008 at 9:51 AM, Dietmar Maurer <[EMAIL PROTECTED]> wrote:
> I thought about using 1 lvm volume, but splitting that into slices
> somehow, which can then be used as kvm disks - maybe by implementing a
> very simple filesystem (block mapper). The problem with this approach is
> that adding/deleting a new disk would mean to grow/shrink an lvm
> partition, which is slow.

you could run LVM inside the VM.  but be careful about block-scanning
tools on the host: they could mistake the LVM structure inside an LV for
the 'outer' one.  (reiserfsck has this problem with image files)


-- 
Javier


Re: KVM backup and snapshots

2008-08-07 Thread Javier Guerra
On Thu, Aug 7, 2008 at 9:23 AM, Dietmar Maurer <[EMAIL PROTECTED]> wrote:
>
> What is the suggested way to backup a running kvm instance which uses
> several disk images? Currently I simply use a LVM2 snapshot if all disk
> images resides on one lvm volume. But what if it uses several lvm
> volumes?

i'd try: suspend KVM, take all the LVM snapshots, resume KVM.
hopefully that would mean only a few seconds of dead time.  can several
LVM snapshots be created in parallel? lots of testing ahead...
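that cycle could look like this (a sketch; the socket path, VG and LV names are examples, and it assumes the guest was started with a unix-domain monitor socket):

```shell
#!/bin/sh
# pause the guest, snapshot every volume it uses, then resume it
MON=/var/run/kvm/guest.mon

echo stop | socat - UNIX-CONNECT:$MON      # freeze guest I/O

lvcreate -s -L 1G -n guest-root-snap /dev/vg0/guest-root
lvcreate -s -L 1G -n guest-data-snap /dev/vg0/guest-data

echo cont | socat - UNIX-CONNECT:$MON      # dead time ends here

# back up the snapshots at leisure, then drop them:
#   dd if=/dev/vg0/guest-root-snap of=... ; lvremove -f /dev/vg0/guest-root-snap
```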


-- 
Javier


Re: [et-mgmt-tools] Re: [libvirt] RE: [Qemu-devel] [ANNOUNCE] virt-mem tools version 0.2.8 released

2008-08-07 Thread Javier Guerra
On Thu, Aug 7, 2008 at 8:06 AM, Richard W.M. Jones <[EMAIL PROTECTED]> wrote:
> I think the message here is, install libvirt & be happy :-)

nice as this tool sounds, i would need far more than this to make me
switch from a simple, easily scriptable command line to a generic,
'lowest common denominator' solution like libvirt.

of course, i hope it keeps getting better.  who knows? maybe in a year
or so it will be comparable to the CLI.


-- 
Javier


Re: How to connect USB enclosue to guest

2008-08-01 Thread Javier Guerra
On Fri, Aug 1, 2008 at 10:41 AM, Stephen Liu <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
>
> Ubuntu 8.04 server amd64 - host
> Ubuntu 6.06 server amd64 - guest
> KVM 1:62+dfsq
> UBS enclosure
>
>
> Please advise how to mount the USB enclosure to guest.  It can be
> mounted on host.  Pointer would be appreciated.  TIA

instead of messing with USB, manage it as any other block device.

don't mount it on the host; add the /dev/sdX device file as an extra
drive on the qemu command line.

the downside is that you wouldn't be able to (easily) unmount it from the guest.
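for example (the device name /dev/sdb is an assumption; check dmesg on the host for the real one):

```shell
# make sure the enclosure is NOT mounted on the host, then hand the
# whole device to the guest as an extra drive
kvm -hda ubuntu6.06.img -drive file=/dev/sdb,if=scsi,index=1 -m 512
```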

-- 
Javier


Re: How to run KVM on non-X environment

2008-07-29 Thread Javier Guerra Giraldez
On Tuesday 29 July 2008, Stephen Liu wrote:
> # kvm -hda ubuntu6.06.img -cdrom /dev/scd0 -m 512 -boot d -vnc :1
>
> It was hanging there.

it's not a hang.  it's running, but all feedback goes to the VNC port.  maybe 
you'd like it to run in the background, so you get the prompt back.  just add 
a '&' at the end:

kvm -hda ubuntu6.06.img -cdrom /dev/scd0 -m 512 -boot d -vnc :1 &

> On local desktop Xterm after ssh-connect the server;
>
> # netstat -antp
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address   Foreign Address
> State   PID/Program name
> tcp0  0 0.0.0.0:59010.0.0.0:*
> LISTEN  5682/kvm
> tcp0  0 192.168.122.1:530.0.0.0:*
> LISTEN  5100/dnsmasq
> tcp6   0  0 :::22   :::*
> LISTEN  5561/sshd
> tcp6   0  0 192.168.0.110:22192.168.0.36:45338
> ESTABLISHED 5566/0
>
>
> Is the VM running?

yes, and port 5901 is VNC display 1  (since you used -vnc :1)

you do need a VNC client; it acts as the 'screen and keyboard' of the
virtual machine.  you need at least that to install an OS.  after that, you
can set up any kind of network access you like directly to the VM (ssh, X,
RDP, etc)
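one way to reach that display from the desktop (hostname and display number taken from the netstat output above; vncviewer is just an example client):

```shell
# tunnel the VNC display through ssh, then connect to the local end
# (port 5901 = 5900 + display :1)
ssh -N -L 5901:localhost:5901 user@192.168.0.110 &
vncviewer localhost:1
```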


-- 
Javier




Re: Simple way of putting a VM on a LAN

2008-07-24 Thread Javier Guerra
On Wed, Jul 23, 2008 at 11:15 PM, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> Your easy way seems to mean using Debian, other distributions don't have
> some of the scripts, or they are in different places or do different things.
> Other thoughts below.

yep, on Gentoo and SuSE i didn't find the included scripts flexible
enough, so i did the same 'by hand'.  that was a few years ago, it
might be better now; but it's not hard to do anyway.


> Not being a trusting person I find that a bridge is an ineffective firewall,

a bridge isn't a firewall.  it's the software equivalent of plugging
both your host and guest into an ethernet switch.  in most ways, your
host 'steps out of the way'.

> but with a bit of trickery that could live on the VM, to the extent it's
> needed. Now the "sets up its own IP" is a mystery, since there's no place I
> have told it what the IP of the machine it replaces might be. I did take the

as said before, it's as if your VM were plugged directly into the LAN.
you just configure its network 'from inside'; the host doesn't care
what IP numbers it uses.  in fact, it could be using totally different
protocols, as long as they go over ethernet.

> hand does result in a working configuration, however, so other than the lack
> of control from using iptables to forward packets, it works well.

you can use iptables.  you may have to set up ebtables, but in the
end just put rules in the FORWARD chain.  google for 'transparent
firewall' or 'bridge iptables'

> of manual setup, it's faster than setting up iptables, and acceptably secure
> as long as the kvm host is at least as secure as the original.

just do with your VM as you would with a 'real' box.  after that, you can
use the fact that every packet to the VM has to pass through your eth0
device, even though it never appears in your INPUT chain.
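a sketch of that transparent-firewall idea (the guest address 172.18.13.80 is an example, and the sysctl name is for 2.6-era kernels with bridge-netfilter):

```shell
# with bridge-netfilter enabled, bridged frames traverse iptables' FORWARD
# chain even though they never hit INPUT
sysctl -w net.bridge.bridge-nf-call-iptables=1

# allow only ssh and http to the guest, drop everything else
iptables -A FORWARD -m physdev --physdev-is-bridged -d 172.18.13.80 \
         -p tcp -m multiport --dports 22,80 -j ACCEPT
iptables -A FORWARD -m physdev --physdev-is-bridged -d 172.18.13.80 -j DROP
```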

-- 
Javier


Re: Simple way of putting a VM on a LAN

2008-07-09 Thread Javier Guerra
On Wed, Jul 9, 2008 at 11:28 AM, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> A bit of the original problem seems to have been clipped before you read it,
> or I stated it poorly.

i think you're very confused.  maybe you got it working the hard way,
but it's really simple to do the easy way.

first, you have to do some small preparations on the host machine, but
nothing difficult.  this is what i do on my workstation (kubuntu) so
that i can fire up a test VM at any moment's whim:

- setup a bridge, and use it as main interface
- add a /etc/qemu-ifup script
- kvm kernel module
- make sure /dev/kvm and /dev/net/tun have the correct access permissions.

for the first one, on debian-like systems just use the following in
/etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br0
iface br0 inet static
address 172.18.13.66
netmask 255.255.0.0
network 172.18.0.0
broadcast 172.18.255.255
gateway 172.18.0.1
bridge_ports eth0

that makes br0 my main interface and adds eth0 to it.  when i'm not
running any VM it doesn't interfere in any way, except with
utilities that default to eth0... if that were a problem, i could
simply rename eth0=>peth0 and br0=>eth0 (i think the Xen scripts do
similar tricks)

when that's set, /etc/qemu-ifup just has to set up the tun/tap
interface and add it to the bridge:
#!/bin/sh
ifconfig $1 0.0.0.0 promisc up
brctl addif br0 $1

and that's it!  no need to meddle with iptables.  note that i don't
even set the IP; the VM is connected to the LAN and sets up its
own IP "from inside"
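with that in place, firing up a throwaway VM on the LAN is one line (the image name is an example; qemu-ifup is invoked automatically with the tap interface name as $1):

```shell
# -net tap with no ifname lets kvm pick tapN and call /etc/qemu-ifup
kvm -hda test.img -m 512 -net nic -net tap
```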

hope that helps

-- 
Javier


Re: comparisons with VMware and Xen

2008-07-07 Thread Javier Guerra
On Mon, Jul 7, 2008 at 3:09 PM, Sukanto Ghosh
<[EMAIL PROTECTED]> wrote:

> 3. Are there any means to do content-based page sharing between guests
> as VMware does ?

is it VMware or NetApp doing this? or do you mean RAM page
sharing? if so, it sounds like a big performance tradeoff for a little
extra (cheap) RAM


-- 
Javier


Re: [ kvm-Bugs-2009439 ] data corruption with virtio-blk

2008-07-07 Thread Javier Guerra
On Sat, Jul 5, 2008 at 5:29 AM, Mark McLoughlin <[EMAIL PROTECTED]> wrote:
> It's certainly possible the hangs you're seeing were caused by the
> IOAPIC interrupt injection bug I just sent out a fix for - you could try
> doing an install either running kvm-qemu -no-kvm-irqchip or booting the
> guest with "noapic" on the kernel command line. If the IOAPIC bug is
> your issue, you shouldn't see any freezes.

thanks! that works perfectly for virtio block devices, but still no
help for windows guests (with SCSI emulation).  with -no-kvm-irqchip
i don't get freezes anymore, just BSODs (and core dumps).  i also tried
adding -no-acpi, no difference.


-- 
Javier


Re: [ kvm-Bugs-2009439 ] data corruption with virtio-blk

2008-07-03 Thread Javier Guerra
On Thu, Jul 3, 2008 at 3:03 AM, Mark McLoughlin <[EMAIL PROTECTED]> wrote:
> I think the below fixes the data corrupter, but I'm still tracking down
> another issue where the guest is hanging waiting for I/O to complete
> with the latest virtio-blk backend.

is that a common case? i haven't been able to finish a system install
on either virtio or scsi block devices.  both linux (Ubuntu JeOS 8.04)
and windows (xp sp2) freeze at some point while installing packages.  IDE
installs work perfectly

-- 
Javier


Re: exporting a single partition?

2008-07-01 Thread Javier Guerra
On Tue, Jul 1, 2008 at 10:38 AM, Andy Smith <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is it possible at the moment to export a single partition to a kvm
> guest as a virtio block device, like you can with Xen?  e.g. with
> LVM on the host, /dev/somevg/somelv -> /dev/vda1 ?

in Xen this is only possible for PV guests, not HVM.

KVM always uses what Xen calls HVM or 'full virtualization';
therefore it can only export whole block devices, which (usually)
contain partition tables.


-- 
Javier


Re: Sharing disks between two kvm guests

2008-06-20 Thread Javier Guerra
On Fri, Jun 20, 2008 at 7:23 AM, carlopmart <[EMAIL PROTECTED]> wrote:
> Felix Leimbach wrote:
>>
>>>  This is my first post to this list. I have already installed kvm-70
>>> under rhel5.2. My intention is to share on disk image betwwen two rhel5.2
>>> kvm guests. Is it possible to accomplish this in kvm like xen or vmware
>>> does?? How can I do?? I didn't find any reference abou this on kvm
>>> documentation ...

i tried this looong ago and it didn't really work, because each QEMU
instance kept its own userspace cache.  but the -drive option has a
'cache=off' setting that should be enough.

in theory (i haven't tested it, but Avi 'blessed' it):
- create a new image with qemu-img
- add it to the command line using -drive file=xxx,cache=off on both
KVM instances
- use a cluster filesystem!
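those steps could look like this (image names and sizes are examples; cache=off per the option named above):

```shell
# both guests open the same raw image with host-side caching disabled;
# the guests themselves must run a cluster filesystem (e.g. OCFS2) on it --
# two ordinary ext3 mounts of one shared disk will corrupt it
qemu-img create -f raw shared.img 10G
kvm -hda guest1.img -drive file=shared.img,cache=off -m 512 &
kvm -hda guest2.img -drive file=shared.img,cache=off -m 512 &
```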

-- 
Javier


Re: Linux with kvm-intel locks up VMplayer guest is started

2008-06-17 Thread Javier Guerra
On Tue, Jun 17, 2008 at 8:44 AM, Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> Sorry, Anthony, I just realized I misparsed your response.  So you're
> saying that it's a known issue and that VMware is the problem.  Thanks
> a lot!  I'll take it up with VMware.

i think what he's saying is that VMware is a closed binary blob
executing in the kernel, so there's no way to certify anything with
it.

as soon as you put unknown (and unknowable, unverifiable,
untrustable) code in the kernel, you can't know what will work and
what won't.

-- 
Javier


Re: qemu-send.c (was Re: Since we're sharing, here's my kvmctl script)

2008-06-11 Thread Javier Guerra Giraldez
On Wednesday 11 June 2008, Chris Webb wrote:
> Hi. I have a small 'qemu-send' utility for talking to a running qemu/kvm
> process whose monitor console listens on a filesystem socket, which I think
> might be a useful building block when extending these kinds of script to do
> things like migratation, pausing, and so on. The source is attached.

there's a utility called socat that lets you send text to/from TCP sockets
and unix-domain sockets.  it can even (temporarily) attach the terminal, or
use GNU readline to regain interactive control of KVM/QEMU
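for example (a sketch; assumes the guest was started with a unix-domain monitor socket at the path shown):

```shell
# guest started with: -monitor unix:/var/run/kvm/guest.mon,server,nowait

# one-shot command to the monitor:
echo "info status" | socat - UNIX-CONNECT:/var/run/kvm/guest.mon

# interactive monitor session with line editing (leave with ctrl-c):
socat READLINE UNIX-CONNECT:/var/run/kvm/guest.mon
```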


-- 
Javier




Re: Since we're sharing, here's my kvmctl script

2008-06-11 Thread Javier Guerra Giraldez
On Wednesday 11 June 2008, Freddie Cash wrote:
> The script can be run as a normal user, as it will use sudo where
> needed.  However, this causes all the VMs to be run as root (this is
> developed on Debian where they've added that annoying "feature" of not
> being able to create/use tun/tap devices as non-root users).  If
> anyone knows how to unbreak Debian to allow non-root users to create
> tun/tap devices, I'm all ears.

change the group, owner, and/or permissions of /dev/net/tun, usually managed by
udev
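for instance, a udev rule along these lines (the rule filename and the 'kvm' group name are examples):

```shell
# /etc/udev/rules.d/90-kvm-tun.rules
# make /dev/net/tun read/writable by members of the "kvm" group
KERNEL=="tun", NAME="net/tun", GROUP="kvm", MODE="0660"
```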

-- 
Javier




Re: New version of my kvmctl script

2008-06-03 Thread Javier Guerra
as promised, here's my patch against your first version.

-- 
Javier
--- usr/local/sbin/kvmctl	2008-05-31 18:03:14.0 -0500
+++ /usr/local/sbin/kvmctl	2008-06-03 12:05:11.0 -0500
@@ -1,4 +1,4 @@
-#!/bin/sh
+#!/bin/bash
 # Script for controlling kvm (kernel-based virtual machine)
 # (c) [EMAIL PROTECTED]
 #
@@ -149,7 +149,15 @@
 
 }
 
-
+function addtun
+{
+ifconfig "$1" > /dev/null 2>&1
+if [ "$?" = "0" ]; then
+	tunctl -d "$1"
+fi
+USERID=$(whoami)
+sudo tunctl -b -u "$USERID" -t "$1"
+}
 
 function usage
 {
@@ -195,7 +203,11 @@
 fi
 
 
-HDPARM=" -drive file=$HDA,if=$DISKMODEL,boot=on"
+if [ "$DISKMODEL" == "ide" ]; then
+	HDPARM=" -hda $HDA"
+else
+	HDPARM=" -drive file=$HDA,if=$DISKMODEL,boot=on"
+fi
 }
 
 
@@ -216,7 +228,7 @@
 function getvnc
 {
 VNC=`grep ^"$1 " $globalconf | awk {'print $2'}`
-if [ -z $VNC ] || [ x$VNC == "x-" ];then
+if [ -z "$VNC" ] || [ "x$VNC" == "x-" ]; then
 	vf=$rundir/$1.vnc
 	if [ -f $vf ];then
 	VNC=`cat $rundir/$1.vnc`
@@ -370,8 +382,15 @@
 if ! isrunning $name;then
 	createhdparm $DISKMODEL 
 	shift 2
+	addtun "$IFNAME"
+	if [ "$1" == "sdl" ]; then
+	shift
+	SCREEN=""
+	else
+	SCREEN=" -vnc :$VNC"
+	fi
 
-	nice -n $NICE $IONICEBIN -n $IONICE $TASKSET -c $CPULIST /usr/local/bin/qemu-system-x86_64 -std-vga -cpu $CPU -k $KEYBOARD -localtime -name $name  -usbdevice tablet -smp $SMP -m $MEM -vnc :$VNC $HDPARM  -net nic,macaddr=$MACADDR,model=$NICMODEL  -net tap,ifname=$IFNAME  -daemonize -pidfile $pidfile -monitor unix:$monfile,server,nowait $EXTRAPARM $* > $logfile 2>&1
+	nice -n $NICE $IONICEBIN -n $IONICE $TASKSET -c $CPULIST /usr/local/bin/qemu-system-x86_64 -cpu $CPU -k $KEYBOARD -localtime -name $name  -usbdevice tablet -smp $SMP -m $MEM $SCREEN $HDPARM  -net nic,macaddr=$MACADDR,model=$NICMODEL  -net tap,ifname=$IFNAME  -daemonize -pidfile $pidfile -monitor unix:$monfile,server,nowait $EXTRAPARM $* > $logfile 2>&1
 	if [ $? -eq 0 ];then
 	pid=`cat $pidfile`
 	#renice $NICE -p $pid > /dev/null 2>&1


Re: New version of my kvmctl script

2008-06-02 Thread Javier Guerra
i got it running as non-root.

first, i already had a group 'kvm', and /dev/net/tun is writable by
that group; so i made the 'temp' directories (/var/run/kvmctl/, and a
couple of others) also writable by the same group.

then i added a couple of calls to "tunctl" to pre-create the tap device
before running kvm.

i also added an "sdl" option to the "start" command; it simply removes
the VNC parameters from the startup line, since i feel VNC is too clumsy
for a workstation.  but i think it should be an 'ephemeral' option, not
one for the config file.

i'll post my diffs tomorrow, i have to run now.

-- 
Javier


Re: Control script for kvm

2008-06-02 Thread Javier Guerra
On Sat, May 31, 2008 at 2:41 PM, FinnTux N/A <[EMAIL PROTECTED]> wrote:
> I think pretty much everyone has made their own so here is mine.

looks pretty nice; i'm reworking my setup to adapt to yours.
(currently only on dev workstations, but soon it will be on a couple of
servers, replacing VMware Server.)

one question: you set -std-vga; is there any advantage to this, or is
it mostly for higher/wider resolutions?  since my guests are either
servers or test machines, i really don't have a use for screens bigger
than 1024x768


> TODO:
> - "ONBOOT" flag to start certain virtual machines on boot.  Also
> gracefully shut running vms on reboot/shutdown of host

one idea: instead of a flag, have a 'dirstart' command that starts all
machines (or links) in a given directory.  then add "kvmctl
dirstart /etc/kvmctl/startup" to /etc/rc.local, or even simpler:

for vm in /etc/kvmctl/startup/*.cfg; do kvmctl start $vm; done

-- 
Javier


Re: Issues booting off virtual SCSI drives in kvm-69

2008-05-30 Thread Javier Guerra
On Fri, May 30, 2008 at 4:10 PM, Freddie Cash <[EMAIL PROTECTED]> wrote:
> Am I missing something simple, or can you only boot off virtual IDE
> drives?

add 'boot=on' to the drive specification:

/usr/bin/kvm -name webmail -daemonize -localtime -usb -usbdevice
tablet -smp 1 -m 2048 -vnc :05 -pidfile /var/run/kvm/webmail.pid -net
nic,macaddr=00:16:3e:00:00:05,model=rtl8139 -net tap,ifname=tap05
-boot c -drive index=1,media=disk,if=scsi,file=/dev/mapper/vol0-webmail--storage
-drive index=0,media=disk,if=scsi,file=/dev/mapper/vol0-webmail,boot=on


that got me for a long time too.

-- 
Javier