Re: [pve-devel] Question to ZFSPlugin.pm
This is strange. The destroy_vm code is:

    # only remove disks owned by this VM
    foreach_drive($conf, sub {
	my ($ds, $drive) = @_;

	return if drive_is_cdrom($drive);

	my $volid = $drive->{file};
	return if !$volid || $volid =~ m|^/|;

	my ($path, $owner) = PVE::Storage::path($storecfg, $volid);
	return if !$path || !$owner || ($owner != $vmid);

	PVE::Storage::vdisk_free($storecfg, $volid);
    });

    if ($keep_empty_config) {
	PVE::Tools::file_set_contents($conffile, "memory: 128\n");
    } else {
	unlink $conffile;
    }

So, if PVE::Storage::vdisk_free dies (because the disk still has a clone), the config file should not be deleted.

The vdisk_free code is:

    $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
	my ($vtype, $name, $vmid, undef, undef, $isBase) =
	    $plugin->parse_volname($volname);

	if ($isBase) {
	    my $vollist = $plugin->list_images($storeid, $scfg);
	    foreach my $info (@$vollist) {
		my (undef, $tmpvolname) = parse_volume_id($info->{volid});
		my $basename = undef;
		my $basevmid = undef;

		eval {
		    (undef, undef, undef, $basename, $basevmid) =
			$plugin->parse_volname($tmpvolname);
		};

		if ($basename && defined($basevmid) &&
		    $basevmid == $vmid && $basename eq $name) {
		    die "base volume '$volname' is still in use " .
			"(used by '$tmpvolname')\n";
		}
	    }
	}

	my $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase);
    });

If the volume is a base volume (the case for templates, where the disk volid starts with "base-"), we check whether it has clones: we list all volumes and check whether any of them has the base image as its parent.

- Original message -
From: Wolfgang Link w.l...@proxmox.com
To: datanom.net m...@datanom.net, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 21 January 2015 11:05:30
Subject: [pve-devel] Question to ZFSPlugin.pm

I implemented the ZFSPlugin for local ZFS use. While testing my adaptation of this plugin, I noticed that it is possible to erase a template which has a linked clone: the ZFS volume is not destroyed, but the template's config is.
My problem is that I cannot validate this on the original plugin, because I have no iSCSI Nexenta/OmniOS setup. Does the ZFSPlugin show the same behavior over iSCSI?

Regards.

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
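To make the in-use check discussed above concrete, here is a minimal standalone sketch (in Python for illustration; the volume names and dict layout are made up, the real code lives in PVE::Storage and works on parsed volnames):

```python
def base_in_use(vmid, name, volumes):
    """Return the volname of the first clone that still references the
    base image (vmid, name), or None if the base is free to delete."""
    for vol in volumes:
        # a cloned volume records which base image it was created from
        if vol.get("basevmid") == vmid and vol.get("basename") == name:
            return vol["volname"]
    return None

volumes = [
    {"volname": "vm-101-disk-1", "basename": "base-100-disk-1", "basevmid": 100},
    {"volname": "vm-102-disk-1"},  # a plain disk, no base image
]

print(base_in_use(100, "base-100-disk-1", volumes))  # -> vm-101-disk-1
print(base_in_use(999, "base-999-disk-1", volumes))  # -> None
```

As in vdisk_free, a non-None result corresponds to the "base volume is still in use" die, which should propagate up and prevent the config file from being unlinked.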
Re: [pve-devel] Hyper-V enlightenments with KVM
By the way, in the future we can enable them without breaking live migration (I have tested both features). I think we just need to wait until kernel 3.10 (from RHEL 7.1) becomes the default kernel, maybe for Proxmox 4 (Debian Jessie, OpenVZ on kernel 3.10).

- Original message -
From: aderumier aderum...@odiso.com
To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 21 January 2015 10:58:15
Subject: Re: [pve-devel] Hyper-V enlightenments with KVM

> So hv_vapic is known to make problems.

hv_vapic in the current kernel 3.10 is buggy on some processors (Westmere):
https://bugzilla.redhat.com/show_bug.cgi?id=1091818
It should be fixed in the coming RHEL 7.1 kernel.

> Not sure if we can/should add hv_time?

kernel 3.10 works fine currently.

- Original message -
From: dietmar diet...@proxmox.com
To: pve-devel pve-devel@pve.proxmox.com, Lindsay Mathieson lindsay.mathie...@gmail.com
Sent: Wednesday, 21 January 2015 07:54:51
Subject: Re: [pve-devel] Hyper-V enlightenments with KVM

From looking at the kvm args in the process list I see we have:

    hv_relaxed,hv_spinlocks=0x

but the current recommendations are:

    hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time

This is a comment from the current code:

    if ($ost eq 'win7' || $ost eq 'win8' || $ost eq 'w2k8' || $ost eq 'wvista') {
	push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
	push @$cmd, '-no-hpet';
	#push @$cpuFlags , 'hv_vapic' if !$nokvm; #fixme, my win2008R2 hangs at boot with this
	push @$cpuFlags , 'hv_spinlocks=0x' if !$nokvm;
    }
    if ($ost eq 'win7' || $ost eq 'win8') {
	push @$cpuFlags , 'hv_relaxed' if !$nokvm;
    }

So hv_vapic is known to make problems. Not sure if we can/should add hv_time?
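The difference between the current flag set and the recommendation discussed in this thread can be sketched like this (Python for illustration; the flag values come from the messages above, the helper function itself is hypothetical):

```python
def cpu_flags(ostype, nokvm=False):
    """Assemble Hyper-V enlightenment flags roughly the way the quoted
    code comment does, plus the proposed hv_time addition. hv_vapic is
    deliberately left out: it is reported buggy before the RHEL 7.1 kernel."""
    flags = []
    if nokvm:
        return flags  # no KVM, no enlightenments
    if ostype in ("win7", "win8", "w2k8", "wvista"):
        flags.append("hv_spinlocks=0x1fff")  # recommended value
        flags.append("hv_time")              # proposed addition
    if ostype in ("win7", "win8"):
        flags.append("hv_relaxed")
    return flags

print(",".join(cpu_flags("win7")))
# -> hv_spinlocks=0x1fff,hv_time,hv_relaxed
```

The joined string is what would end up appended to the -cpu argument of the kvm command line.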
[pve-devel] Question to ZFSPlugin.pm
I implemented the ZFSPlugin for local ZFS use. While testing my adaptation of this plugin, I noticed that it is possible to erase a template which has a linked clone: the ZFS volume is not destroyed, but the template's config is. My problem is that I cannot validate this on the original plugin, because I have no iSCSI Nexenta/OmniOS setup. Does the ZFSPlugin show the same behavior over iSCSI?

Regards.
Re: [pve-devel] [PATCH] memory hotplug patch v6
Ok, I'll rebase my patch with the mapping.

- Original message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: Daniel Hunsaker danhunsa...@gmail.com, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, 21 January 2015 09:05:34
Subject: Re: [pve-devel] [PATCH] memory hotplug patch v6

> I just looked at the mapping; at the end:
>
>     dimm250 4194304 113244160 3.70
>     dimm251 4194304 117438464 3.57
>     dimm252 4194304 121632768 3.45
>     dimm253 4194304 125827072 3.33
>     dimm254 4194304 130021376 3.23
>     dimm255 4194304 134215680 3.13
>
> That gives us 4TB memory modules? For a max memory of 127TB?

Yes :-)

> Maybe it's a little bit too much? ;) I would like to have more granularity, maybe:

Yes, I guess we can do that, because we now have two different settings (dimm_memory and memory). So the user can always set 'memory' if he wants more than 127TB ;-)
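Assuming the mapping columns are dimm name, module size, cumulative total (apparently in MiB), and module size as a percentage of the running total, the quoted figures check out:

```python
MIB_PER_TIB = 1024 * 1024

dimm255_size = 4194304      # last module size, as printed in the mapping
dimm255_total = 134215680   # cumulative total after dimm255

# module size and grand total in TiB (integer part)
print(dimm255_size // MIB_PER_TIB)    # -> 4, i.e. the "4TB memory modules"
print(dimm255_total // MIB_PER_TIB)   # -> 127, the "max memory of 127TB"

# the trailing column looks like size/total as a percentage
print(round(100 * dimm255_size / dimm255_total, 2))  # -> 3.13, last column
```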
[pve-devel] enable hotplug by default
Hi all, my plan is to enable hotplug with the next release:

https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=7196b757e7e0c5b252a5ea88cb288bc550f3fb7b

Any objections?
Re: [pve-devel] Hyper-V enlightenments with KVM
> So hv_vapic is known to make problems.

hv_vapic in the current kernel 3.10 is buggy on some processors (Westmere):
https://bugzilla.redhat.com/show_bug.cgi?id=1091818
It should be fixed in the coming RHEL 7.1 kernel.

> Not sure if we can/should add hv_time?

kernel 3.10 works fine currently.

- Original message -
From: dietmar diet...@proxmox.com
To: pve-devel pve-devel@pve.proxmox.com, Lindsay Mathieson lindsay.mathie...@gmail.com
Sent: Wednesday, 21 January 2015 07:54:51
Subject: Re: [pve-devel] Hyper-V enlightenments with KVM

From looking at the kvm args in the process list I see we have:

    hv_relaxed,hv_spinlocks=0x

but the current recommendations are:

    hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time

This is a comment from the current code:

    if ($ost eq 'win7' || $ost eq 'win8' || $ost eq 'w2k8' || $ost eq 'wvista') {
	push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
	push @$cmd, '-no-hpet';
	#push @$cpuFlags , 'hv_vapic' if !$nokvm; #fixme, my win2008R2 hangs at boot with this
	push @$cpuFlags , 'hv_spinlocks=0x' if !$nokvm;
    }
    if ($ost eq 'win7' || $ost eq 'win8') {
	push @$cpuFlags , 'hv_relaxed' if !$nokvm;
    }

So hv_vapic is known to make problems. Not sure if we can/should add hv_time?