[libvirt] [PATCH v2] virsh-volume: Add missing check when calling virStreamNew
Check return value of virStreamNew when called by cmdVolUpload and
cmdVolDownload.
---
 tools/virsh-volume.c | 12 ++++++++++--
 1 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/virsh-volume.c b/tools/virsh-volume.c
index 7dab532..0a66a6c 100644
--- a/tools/virsh-volume.c
+++ b/tools/virsh-volume.c
@@ -665,7 +665,11 @@ cmdVolUpload(vshControl *ctl, const vshCmd *cmd)
         goto cleanup;
     }
 
-    st = virStreamNew(ctl->conn, 0);
+    if (!(st = virStreamNew(ctl->conn, 0))) {
+        vshError(ctl, _("cannot create a new stream"));
+        goto cleanup;
+    }
+
     if (virStorageVolUpload(vol, st, offset, length, 0) < 0) {
         vshError(ctl, _("cannot upload to volume %s"), name);
         goto cleanup;
@@ -775,7 +779,11 @@ cmdVolDownload(vshControl *ctl, const vshCmd *cmd)
         created = true;
     }
 
-    st = virStreamNew(ctl->conn, 0);
+    if (!(st = virStreamNew(ctl->conn, 0))) {
+        vshError(ctl, _("cannot create a new stream"));
+        goto cleanup;
+    }
+
     if (virStorageVolDownload(vol, st, offset, length, 0) < 0) {
         vshError(ctl, _("cannot download from volume %s"), name);
         goto cleanup;
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH] lxc: do cleanup when failed to bind fs as read-only
From: Chen Hanxiao chenhanx...@cn.fujitsu.com

We forgot to 'goto cleanup' when lxcContainerMountFSTmpfs failed to bind
fs as read-only.

Signed-off-by: Chen Hanxiao chenhanx...@cn.fujitsu.com
---
 src/lxc/lxc_container.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/lxc/lxc_container.c b/src/lxc/lxc_container.c
index c60f5d8..3fdf397 100644
--- a/src/lxc/lxc_container.c
+++ b/src/lxc/lxc_container.c
@@ -1451,6 +1451,7 @@ static int lxcContainerMountFSTmpfs(virDomainFSDefPtr fs,
             virReportSystemError(errno,
                                  _("Failed to make directory %s readonly"),
                                  fs->dst);
+            goto cleanup;
         }
     }
-- 
1.8.2.1
Re: [libvirt] [PATCHv3 1/2] VMware: Support more than 2 driver backends
On Fri, Sep 27, 2013 at 06:03:03PM +0800, Daniel Veillard wrote:
> On Tue, Sep 24, 2013 at 11:24:30AM -0500, Doug Goldstein wrote:
> > Currently the VMware version check code only supports two types of
> > VMware backends, Workstation and Player. But in the near future we
> > will have an additional one, so we need to support more. Additionally,
> > we discover and cache the path to the vmrun binary, so we should use
> > that path when using the corresponding binary from the VMware VIX SDK.
> > ---
> > change from v2:
> >  * No change
> > change from v1:
> >  * Added default case so we don't potentially pass NULL to virCommand
>
> Since it adds support for a new product with very little change, looks
> (IMHO) safe, and Matthias looks fine with it, I'm okay to have those two
> patches pushed for 1.1.3. So please make the few changes suggested by
> Matthias, and push, preferably today, as I would like to make rc2 over
> the weekend.

Okay, I pushed before rc2, taking Matthias' comments into account,

Daniel

-- 
Daniel Veillard      | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/
[libvirt] Build failed in Jenkins: libvirt-syntax-check #1531
See http://honk.sigxcpu.org:8001/job/libvirt-syntax-check/1531/

------------------------------------------
Started by upstream project libvirt-build build number 1700
Building on master in workspace http://honk.sigxcpu.org:8001/job/libvirt-syntax-check/ws/
[workspace] $ /bin/sh -xe /tmp/hudson4053703831105800586.sh
+ make syntax-check
  GEN      bracket-spacing-check
GFDL_version
0.48 GFDL_version
TAB_in_indentation
src/vmware/vmware_driver.c:163:            break;
src/vmware/vmware_driver.c:164:        }
maint.mk: indent with space, not TAB, in C, sh, html, py, syms and RNG schemas
make: *** [sc_TAB_in_indentation] Error 1
Build step 'Execute shell' marked build as failure
[libvirt] Availability of release candidate 2 of libvirt-1.1.3
As planned, I tagged rc2 in git and pushed the tarball and rpms to the
usual place at:

  ftp://libvirt.org/libvirt/

It seems to behave correctly in my limited testing; please give it a try
too. I will try to push the 1.1.3 final on Tuesday or Wednesday depending
on time available and feedback.

  thanks !

Daniel

-- 
Daniel Veillard      | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/
[libvirt] qemu, numa: non-contiguous cpusets
Btw, while I got your attention, on a not-really related topic: how do we
feel about adding support for specifying a non-contiguous set of cpus for
a numa node in qemu with the -numa option? I.e., like this, for example:

  x86_64-softmmu/qemu-system-x86_64 -smp 8 -numa node,nodeid=0,cpus=0\;2\;4-5 -numa node,nodeid=1,cpus=1\;3\;6-7

The ';' needs to be escaped from the shell but I'm open for better
suggestions. Here's a diff:

---
diff --git a/vl.c b/vl.c
index 4e709d5c1c20..82a6c8451fb0 100644
--- a/vl.c
+++ b/vl.c
@@ -1261,9 +1261,27 @@ char *get_boot_devices_list(size_t *size)
     return list;
 }
 
+static int __numa_set_cpus(unsigned long *map, int start, int end)
+{
+    if (end >= MAX_CPUMASK_BITS) {
+        end = MAX_CPUMASK_BITS - 1;
+        fprintf(stderr,
+                "qemu: NUMA: A max of %d VCPUs are supported\n",
+                MAX_CPUMASK_BITS);
+        return -EINVAL;
+    }
+
+    if (end < start) {
+        return -EINVAL;
+    }
+
+    bitmap_set(map, start, end-start+1);
+    return 0;
+}
+
 static void numa_node_parse_cpus(int nodenr, const char *cpus)
 {
-    char *endptr;
+    char *endptr, *ptr = (char *)cpus;
     unsigned long long value, endvalue;
 
     /* Empty CPU range strings will be considered valid, they will simply
@@ -1273,7 +1291,8 @@ static void numa_node_parse_cpus(int nodenr, const char *cpus)
         return;
     }
 
-    if (parse_uint(cpus, &value, &endptr, 10) < 0) {
+fromthetop:
+    if (parse_uint(ptr, &value, &endptr, 10) < 0) {
         goto error;
     }
     if (*endptr == '-') {
@@ -1282,22 +1301,22 @@ static void numa_node_parse_cpus(int nodenr, const char *cpus)
         }
     } else if (*endptr == '\0') {
         endvalue = value;
-    } else {
-        goto error;
-    }
+    } else if (*endptr == ';') {
+        if (__numa_set_cpus(node_cpumask[nodenr], value, value) < 0) {
+            goto error;
+        }
+        endptr++;
+        if (*endptr == '\0')
+            return;
 
-    if (endvalue >= MAX_CPUMASK_BITS) {
-        endvalue = MAX_CPUMASK_BITS - 1;
-        fprintf(stderr,
-                "qemu: NUMA: A max of %d VCPUs are supported\n",
-                MAX_CPUMASK_BITS);
-    }
+        ptr = endptr;
 
-    if (endvalue < value) {
+        goto fromthetop;
+    } else {
         goto error;
     }
-    bitmap_set(node_cpumask[nodenr], value, endvalue-value+1);
+    __numa_set_cpus(node_cpumask[nodenr], value, endvalue);
     return;
 
 error:
--

Thanks.

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
Re: [libvirt] Build failed in Jenkins: libvirt-syntax-check #1531
On Sun, Sep 29, 2013 at 01:14:34PM +0200, Jenkins CI wrote:
> See http://honk.sigxcpu.org:8001/job/libvirt-syntax-check/1531/
>
> src/vmware/vmware_driver.c:163:            break;
> src/vmware/vmware_driver.c:164:        }
> maint.mk: indent with space, not TAB, in C, sh, html, py, syms and RNG schemas
> make: *** [sc_TAB_in_indentation] Error 1
> Build step 'Execute shell' marked build as failure

<grin/> my fault, fixed,

Daniel

-- 
Daniel Veillard      | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/
[libvirt] Jenkins build is back to normal : libvirt-syntax-check #1532
See http://honk.sigxcpu.org:8001/job/libvirt-syntax-check/1532/
Re: [libvirt] libvirt: How to boot VM in esx?
Can anyone please reply? I am really very curious to know... and I am
trying everything to boot my node, but it is not getting booted. :(

B.R.
Varun

On Sat, Sep 28, 2013 at 3:10 PM, varun bhatnagar varun292...@gmail.com wrote:
> Hi,
> I'm not able to boot my VM on the esx hypervisor using libvirt. It says
> "NO Operating System found". Can anyone help?
> Regards,
> Varun
Re: [libvirt] [PATCH] storage: use btrfs file clone ioctl when possible
On Fri, Sep 27, 2013 at 03:19:06PM +0100, Daniel P. Berrange wrote:
> On Fri, Sep 27, 2013 at 05:02:53PM +0300, Oskari Saarenmaa wrote:
> > Btrfs provides a copy-on-write clone ioctl so let's try to use it
> > instead of copying files block by block. The ioctl is executed
> > unconditionally if it's available and we fall back to block copying
> > if it fails, similarly to cp --reflink=auto.
>
> Currently the virStorageVolCreateXMLFrom method does a full allocation
> of storage when cloning volumes. This means applications can rely on
> the image having enough space when the clone completes and won't get
> ENOSPC in the VM. AFAICT, this change to do copy-on-write changes the
> API to do thin provisioning of the storage during clone, so any future
> write on either the new or old volume may generate ENOSPC when btrfs
> finally copies the sector. I don't think this is a good thing. I think
> applications should have to explicitly request copy-on-write behaviour
> for the clone so they know the implications.

That's a good point. However, it looks like this change would only change
the behavior for the old volumes; new volumes are always created sparsely
and they may already get ENOSPC on write if they contained zero blocks.
This should probably be fixed by calling fallocate instead of lseek when
noticing empty blocks (safezero should probably be used instead, but it's
currently rather unsafe if posix_fallocate isn't available.)

I was wondering if we could reuse the allocation and capacity fields to
decide whether or not to try to do a cow-clone (or sparse allocation of
the cloned bits)? Currently a cloned volume's allocation is always set to
at least the original volume's capacity, and the original client-requested
allocation value is not passed on to the code doing the cloning, but we
could pass it on and allow copy-on-write clones if allocation is set to
zero (no space is guaranteed to be available for writing) and also change
sparse cloning to only happen if allocation is lower than capacity.

/ Oskari
Re: [libvirt] [PATCH] virsh-volume: Add missing check when calling virStreamNew
2013/9/28 Martin Kletzander mklet...@redhat.com:
> On Thu, Sep 26, 2013 at 10:59:04AM +0800, Hongwei Bi wrote:
> > Check return value of virStreamNew when called by cmdVolUpload and
> > cmdVolDownload.
> > ---
> >  tools/virsh-volume.c | 8 ++++++--
> >  1 files changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/virsh-volume.c b/tools/virsh-volume.c
> > index 7dab532..e8b0d9a 100644
> > --- a/tools/virsh-volume.c
> > +++ b/tools/virsh-volume.c
> > @@ -665,7 +665,9 @@ cmdVolUpload(vshControl *ctl, const vshCmd *cmd)
> >          goto cleanup;
> >      }
> >
> > -    st = virStreamNew(ctl->conn, 0);
> > +    if (!(st = virStreamNew(ctl->conn, 0)))
> > +        goto cleanup;
> > +
>
> Add a vshError() call before the 'goto'.
>
> >      if (virStorageVolUpload(vol, st, offset, length, 0) < 0) {
> >          vshError(ctl, _("cannot upload to volume %s"), name);
> >          goto cleanup;
> > @@ -775,7 +777,9 @@ cmdVolDownload(vshControl *ctl, const vshCmd *cmd)
> >          created = true;
> >      }
> >
> > -    st = virStreamNew(ctl->conn, 0);
> > +    if (!(st = virStreamNew(ctl->conn, 0)))
> > +        goto cleanup;
> > +
>
> ditto
>
> >      if (virStorageVolDownload(vol, st, offset, length, 0) < 0) {
> >          vshError(ctl, _("cannot download from volume %s"), name);
> >          goto cleanup;
>
> ACK with that fixed,
>
> Martin

The v2 of the patch has been posted.
[libvirt] [PATCH] qemu_migrate: Fix assign the same port when migrating concurrently
From 6c2de34432db674072231ad66c9e8a0a600ede8a Mon Sep 17 00:00:00 2001
From: WangYufei james.wangyu...@huawei.com
Date: Mon, 30 Sep 2013 11:48:43 +0800
Subject: [PATCH] qemu_migrate: Fix assign the same port when migrating concurrently

When we migrate vms concurrently, there's a chance that libvirtd on the
destination assigns the same port for different migrations, which will
lead to migration failure during the migration prepare phase on the
destination. So we use virPortAllocator here to solve the problem.

Signed-off-by: WangYufei james.wangyu...@huawei.com
---
 src/qemu/qemu_command.h   |  3 +++
 src/qemu/qemu_conf.h      |  6 +++---
 src/qemu/qemu_driver.c    |  6 ++++++
 src/qemu/qemu_migration.c | 17 ++++++++++-------
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h
index 2e2acfb..3277ba4 100644
--- a/src/qemu/qemu_command.h
+++ b/src/qemu/qemu_command.h
@@ -51,6 +51,9 @@
 # define QEMU_WEBSOCKET_PORT_MIN 5700
 # define QEMU_WEBSOCKET_PORT_MAX 65535
 
+# define QEMU_MIGRATION_PORT_MIN 49152
+# define QEMU_MIGRATION_PORT_MAX 49215
+
 typedef struct _qemuBuildCommandLineCallbacks qemuBuildCommandLineCallbacks;
 typedef qemuBuildCommandLineCallbacks *qemuBuildCommandLineCallbacksPtr;
 struct _qemuBuildCommandLineCallbacks {
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index da29a2a..3176085 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -221,6 +221,9 @@ struct _virQEMUDriver {
     /* Immutable pointer, self-locking APIs */
     virPortAllocatorPtr webSocketPorts;
 
+    /* Immutable pointer, self-locking APIs */
+    virPortAllocatorPtr migrationPorts;
+
     /* Immutable pointer, lockless APIs */
     virSysinfoDefPtr hostsysinfo;
 
@@ -242,9 +245,6 @@ struct _qemuDomainCmdlineDef {
     char **env_value;
 };
 
-/* Port numbers used for KVM migration. */
-# define QEMUD_MIGRATION_FIRST_PORT 49152
-# define QEMUD_MIGRATION_NUM_PORTS 64
 
 void qemuDomainCmdlineDefFree(qemuDomainCmdlineDefPtr def);
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e8bc04d..9437b5a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -688,6 +688,11 @@ qemuStateInitialize(bool privileged,
                                  cfg->webSocketPortMax)) == NULL)
         goto error;
 
+    if ((qemu_driver->migrationPorts =
+         virPortAllocatorNew(QEMU_MIGRATION_PORT_MIN,
+                             QEMU_MIGRATION_PORT_MAX)) == NULL)
+        goto error;
+
     if (qemuSecurityInit(qemu_driver) < 0)
         goto error;
 
@@ -994,6 +999,7 @@ qemuStateCleanup(void) {
     virObjectUnref(qemu_driver->domains);
     virObjectUnref(qemu_driver->remotePorts);
     virObjectUnref(qemu_driver->webSocketPorts);
+    virObjectUnref(qemu_driver->migrationPorts);
 
     virObjectUnref(qemu_driver->xmlopt);
 
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3a1aab7..82d90bf 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2493,7 +2493,6 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
                            const char *origname,
                            unsigned long flags)
 {
-    static int port = 0;
     int this_port;
     char *hostname = NULL;
     const char *p;
@@ -2521,8 +2520,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
      * to be a correct hostname which refers to the target machine).
      */
     if (uri_in == NULL) {
-        this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-        if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
+        if (virPortAllocatorAcquire(driver->migrationPorts,
+                                    (unsigned short *)&this_port) < 0)
+            goto cleanup;
 
         /* Get hostname */
         if ((hostname = virGetHostname()) == NULL)
@@ -2578,9 +2578,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
 
         if (uri->port == 0) {
             /* Generate a port */
-            this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-            if (port == QEMUD_MIGRATION_NUM_PORTS)
-                port = 0;
+            if (virPortAllocatorAcquire(driver->migrationPorts,
+                                        (unsigned short *)&this_port) < 0)
+                goto cleanup;
 
             /* Caller frees */
             if (virAsprintf(uri_out, "%s:%d", uri_in, this_port) < 0)
@@ -2600,8 +2600,11 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
 cleanup:
     virURIFree(uri);
     VIR_FREE(hostname);
-    if (ret != 0)
+    if (ret != 0) {
         VIR_FREE(*uri_out);
+        virPortAllocatorRelease(driver->migrationPorts,
+                                (unsigned short)this_port);
+    }
     return ret;
 }
-- 
1.7.3.1.msysgit.0

Best Regards,
-WangYufei
Re: [libvirt] [BUG] libvirtd on destination crash frequently while migrating vms concurrently
Hi guys,

Is there any problem with my analysis? Am I right? If my analysis is
right, do we have any plan to solve this kind of problem caused by the
deletion of the driver lock? Thanks for your time and kindness in
replying.

________________________________
From: Wangyufei (A)
Sent: Friday, September 27, 2013 3:56 PM
To: libvir-list@redhat.com
Cc: Wangrui (K); Wangyufei (A); Michal Privoznik; jdene...@redhat.com
Subject: [BUG] libvirtd on destination crash frequently while migrating vms concurrently

Hello,
I found a problem that libvirtd on the destination crashes frequently
while migrating vms concurrently. For example, if I migrate 10 vms
concurrently and ceaselessly, then after about 30 minutes libvirtd on the
destination will crash. So I analyzed it and found two bugs in the
migration process.

First, during the migration prepare phase on the destination, libvirtd
assigns ports to the qemu to be started on the destination. But the port
increase operation is not atomic, so there's a chance that multiple vms
get the same port, and only the first one can start successfully; the
others will fail to start. I've applied a patch to solve this bug, and I
tested it; it works well. If only this bug existed, libvirtd would not
crash. The second bug is fatal.

Second, I found that libvirtd crashes because of a segmentation fault
produced by accessing a vm that has already been released. Apparently it's
caused by a multi-thread operation: thread A accesses vm data which has
been released by thread B. At last I proved my thought right.

Step 1:
Because of bug one, the port is already occupied, so qemu on the
destination failed to start and sent a HANGUP signal to libvirtd. Then
libvirtd received this VIR_EVENT_HANDLE_HANGUP event, and thread A,
dealing with events, called qemuProcessHandleMonitorEOF as follows:

#0  qemuProcessHandleMonitorEOF (mon=0x7f4dcd9c3130, vm=0x7f4dcd9c9780) at qemu/qemu_process.c:399
#1  0x7f4dc18d9e87 in qemuMonitorIO (watch=68, fd=27, events=8, opaque=0x7f4dcd9c3130) at qemu/qemu_monitor.c:668
#2  0x7f4dccae6604 in virEventPollDispatchHandles (nfds=18, fds=0x7f4db4017e70) at util/vireventpoll.c:500
#3  0x7f4dccae7ff2 in virEventPollRunOnce () at util/vireventpoll.c:646
#4  0x7f4dccae60e4 in virEventRunDefaultImpl () at util/virevent.c:273
#5  0x7f4dccc40b25 in virNetServerRun (srv=0x7f4dcd8d26b0) at rpc/virnetserver.c:1106
#6  0x7f4dcd6164c9 in main (argc=3, argv=0x7fff8d8f9f88) at libvirtd.c:1518

static int virEventPollDispatchHandles(int nfds, struct pollfd *fds)
{
    ...
    /* The deleted flag is still false at this point, so we pass through
     * to qemuProcessHandleMonitorEOF */
    if (eventLoop.handles[i].deleted) {
        EVENT_DEBUG("Skip deleted n=%d w=%d f=%d", i,
                    eventLoop.handles[i].watch, eventLoop.handles[i].fd);
        continue;
    }

Step 2:
Thread B, dealing with migration on the destination, set the deleted flag
in virEventPollRemoveHandle as follows:

#0  virEventPollRemoveHandle (watch=74) at util/vireventpoll.c:176
#1  0x7f4dccae5e6f in virEventRemoveHandle (watch=74) at util/virevent.c:97
#2  0x7f4dc18d8ca8 in qemuMonitorClose (mon=0x7f4dbc030910) at qemu/qemu_monitor.c:831
#3  0x7f4dc18bec63 in qemuProcessStop (driver=0x7f4dcd9bd400, vm=0x7f4dbc00ed20, reason=VIR_DOMAIN_SHUTOFF_FAILED, flags=0) at qemu/qemu_process.c:4302
#4  0x7f4dc18c1a83 in qemuProcessStart (conn=0x7f4dbc031020, driver=0x7f4dcd9bd400, vm=0x7f4dbc00ed20, migrateFrom=0x7f4dbc01af90 "tcp:[::]:49152", stdin_fd=-1, stdin_path=0x0, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, flags=6) at qemu/qemu_process.c:4145
#5  0x7f4dc18cc688 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

Step 3:
Thread B cleaned up the vm in qemuMigrationPrepareAny after
qemuProcessStart failed:

#0  virDomainObjDispose (obj=0x7f4dcd9c9780) at conf/domain_conf.c:2009
#1  0x7f4dccb0ccd9 in virObjectUnref (anyobj=0x7f4dcd9c9780) at util/virobject.c:266
#2  0x7f4dccb42340 in virDomainObjListRemove (doms=0x7f4dcd9bd4f0, dom=0x7f4dcd9c9780) at conf/domain_conf.c:2342
#3  0x7f4dc189ac33 in qemuDomainRemoveInactive (driver=0x7f4dcd9bd400, vm=0x7f4dcd9c9780) at qemu/qemu_domain.c:1993
#4  0x7f4dc18ccad5 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

Step 4:
Thread A accessed priv, which had been released by thread B before; then
libvirtd crashed. Bomb!

static void
qemuProcessHandleMonitorEOF(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                            virDomainObjPtr vm)
{
    virQEMUDriverPtr driver = qemu_driver;
    virDomainEventPtr event = NULL;
    qemuDomainObjPrivatePtr priv;
    int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN;
    int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
    const char *auditReason = "shutdown";

    VIR_DEBUG("Received EOF on %p '%s'", vm, vm->def->name);

    virObjectLock(vm);

    priv = vm->privateData;

(gdb) p priv
$1 =
Re: [libvirt] networking restart after redefinition
On 2013-09-28 09:53, Mihamina Rakotomandimby wrote:
> Hi all,
> Running Fedora 18 and the bundled libvirt and virt-tools. Desktop use.
> As I like to access my guests with a hostname and not a numerical IP
> address, I fix the IP addressing in the network configuration:
> http://pastebin.com/rfMKn40j

I'd better disable DHCP in the network configuration [1] and then run a
special guest dedicated to network configuration.

[1] http://www.krisbuytaert.be/blog/disabling-dhcp-libvirt-setup

-- 
RMA.