On Fri, Feb 19, 2016 at 11:15 PM, Michael R. Hines
<mhi...@digitalocean.com> wrote:
Is the QEMU process (after startup) actually running as the QEMU userid?
/*
* Michael R. Hines
* Platform Engineer, DigitalOcean.
*/
community on the
use of those system calls.
On 02/19/2016 04:37 PM, Roy Shterman wrote:
Yes,
I also tried running it as the root user, and it didn't work either.
Do you know where libvirt (or QEMU) gets the value for the process's
MEMLOCK limit? maybe
Is the QEMU process (after startup) actually running as the QEMU userid?
On 02/19/2016 02:43 PM, Roy Shterman wrote:
First of all, thank you for your answer.
I couldn't figure out how to start the virtual machine with an increased MEMLOCK
Hi Roy,
On 02/09/2016 03:57 AM, Roy Shterman wrote:
Hi,
I tried to understand the rdma-migration code in QEMU, and I have two
questions about it:
1. I'm working with qemu-kvm using libvirt, and I'm getting
   MEMLOCK  max locked-in-memory address space  65536  65536 bytes
in the qemu process
On 04/04/2014 11:47 PM, Eric Blake wrote:
On 04/04/2014 12:19 AM, Michael R. Hines wrote:
@@ -2561,6 +2570,10 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
 if (qemuCaps->version >= 1006000)
virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY);
+if (qemuCaps
On 04/05/2014 04:46 AM, Eric Blake wrote:
On 04/04/2014 12:29 AM, Michael R. Hines wrote:
Yes, it's present, but it still does not guarantee that QEMU supports
it if RDMA was compiled out - only the version number is a
(minimal) guarantee, and even then the hardware can still throw
an error
On 02/03/2014 11:19 PM, Jiri Denemark wrote:
USAGE: $ virsh migrate --live --migrateuri x-rdma:hostname domain
qemu+ssh://hostname/system
s/x-rdma/rdma/ and I believe we should use rdma://hostname as the URI
Acknowledged.
Signed-off-by: Michael R. Hines mrhi...@us.ibm.com
---
src/qemu
On 02/03/2014 11:44 PM, Eric Blake wrote:
On 02/03/2014 08:19 AM, Jiri Denemark wrote:
On Mon, Jan 13, 2014 at 14:28:11 +0800, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
The switch from x-rdma => rdma has not yet happened,
but at least we can review the patch
On 02/04/2014 10:56 PM, Jiri Denemark wrote:
On Mon, Jan 13, 2014 at 14:28:12 +0800, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
RDMA Live migration requires registering memory with the hardware,
Hmm, I forgot to ask when I was reviewing the previous patch
On 02/03/2014 08:32 PM, Jiri Denemark wrote:
On Mon, Jan 13, 2014 at 14:28:10 +0800, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
RDMA migration uses the 'setup' state in QEMU to optionally lock
all memory before the migration starts. The total time spent
On 02/03/2014 08:52 PM, Daniel P. Berrange wrote:
On Mon, Feb 03, 2014 at 01:32:59PM +0100, Jiri Denemark wrote:
On Mon, Jan 13, 2014 at 14:28:10 +0800, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
RDMA migration uses the 'setup' state in QEMU to optionally lock
Hi,
I'm using OpenStack grizzly on Ubuntu 12.04 to create LXC containers.
After successful container creation and a subsequent reboot of the
hypervisor,
libvirt gets stuck in an infinite loop, spinning at 100%
Here's the backtrace:
(gdb) thread 12
[Switching to thread 12 (Thread
On 08/02/2013 09:06 PM, Michael R. Hines wrote:
Hi,
I'm using OpenStack grizzly on Ubuntu 12.04 to create LXC containers.
After successful container creation and a subsequent reboot of the
hypervisor,
libvirt gets stuck in an infinite loop, spinning at 100%
Here's the backtrace:
(gdb
On 07/29/2013 06:18 AM, Daniel P. Berrange wrote:
On Fri, Jul 26, 2013 at 12:16:08PM -0600, Eric Blake wrote:
On 07/26/2013 11:47 AM, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
QEMU now has in-tree support, planned for 1.6, for RDMA-based live migration
On 07/26/2013 02:16 PM, Eric Blake wrote:
On 07/26/2013 11:47 AM, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
QEMU now has in-tree support, planned for 1.6, for RDMA-based live migration.
Changes to libvirt:
1. QEMU has a new 'setup' phase in their state machine
On 07/26/2013 02:17 PM, Jiri Denemark wrote:
On Fri, Jul 26, 2013 at 13:47:43 -0400, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
Previously, QEMU's 'setup' state was not a formal state in their
state machine, but it is now. This state is used by RDMA
On 07/26/2013 02:27 PM, Jiri Denemark wrote:
On Fri, Jul 26, 2013 at 13:47:44 -0400, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
QEMU now has in-tree support, as of version 1.6, for RDMA live migration.
Full documentation of the feature:
http://wiki.qemu.org
On 07/26/2013 02:32 PM, Eric Blake wrote:
On 07/26/2013 11:47 AM, mrhi...@linux.vnet.ibm.com wrote:
From: Michael R. Hines mrhi...@us.ibm.com
Previously, QEMU's 'setup' state was not a formal state in their
state machine, but it is now. This state is used by RDMA to optionally
perform memory
On 07/26/2013 02:56 PM, Eric Blake wrote:
On 07/26/2013 12:48 PM, Michael R. Hines wrote:
int ret = -1;
+    if (qemuCaps->version >= MIN_X_RDMA_VERSION) {
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_MIGRATE_QEMU_X_RDMA);
+    }
+
if (!(archstr = qemuMonitorGetTargetArch(mon
On 06/03/2013 06:03 AM, Daniel P. Berrange wrote:
On Wed, May 22, 2013 at 06:22:12PM -0400, Michael R. Hines wrote:
Hi,
We run nvidia devices inside libvirt-managed LXC containers.
It used to be that simply doing:
$ echo 'c 195:* rwm' > /sys/fs/cgroup/devices/libvirt/lxc
Then, after booting
applications.
But, according to:
$ cat src/lxc/lxc_container.c
The CAP_MKNOD capability is being dropped and only a specific
set of devices is being created before booting the container.
Is there any reason why this is not per-device configurable?
Thanks,
- Michael R. Hines
--
libvir-list mailing list