Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > The backup gui has snapshot mode as default so if the user misses to
> > change this setting backup will fail. The scheduled backup seems to be
> > able to detect what backup mode a VM's storage and type support and
> > choose backup mode accordingly.

BTW, we already have some code to support such behavior here:

https://git.proxmox.com/?p=pve-manager.git;a=blob;f=PVE/VZDump.pm;h=5f77b5486318a979d2d94bfc15ff97e46d8465c5;hb=HEAD#l868

The vzdump 'prepare' hook can simply return a "mode failure.*" exception. This causes vzdump to fall back to 'suspend' mode. We use that if a storage does not support snapshots. The code in pct is also already there:

https://git.proxmox.com/?p=pve-container.git;a=blob;f=src/PVE/VZDump/LXC.pm;h=7062bf3b6dec745c0ac5cfe9540c0564bca68a7b;hb=HEAD#l139

I guess you just need to add a special case for your storage type there. Hope that helps?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > BTW, we just uploaded the target-cli packages.
>
> Yes, I have noticed. Writing a LUN plugin for target-cli is on my todo
> list.

Great!
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Sun, 23 Oct 2016 12:03:15 +0200 (CEST), Dietmar Maurer wrote:
> BTW, we just uploaded the target-cli packages.

Yes, I have noticed. Writing a LUN plugin for target-cli is on my todo
list.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael rasmussen cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir datanom net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir miras org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > Simply make a backup using suspend mode (I guess I am missing
> > something)?
>
> The backup gui has snapshot mode as default so if the user misses to
> change this setting backup will fail. The scheduled backup seems to be
> able to detect what backup mode a VM's storage and type support and
> choose backup mode accordingly.

Right. On the other hand, all those hacks consume too much (development) time. I would prefer to have a working storage instead...

BTW, we just uploaded the target-cli packages.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Sun, 23 Oct 2016 09:59:16 +0200 (CEST), Dietmar Maurer wrote:
> > Since it is not possible to expose a zfs snapshot over iscsi, backup
> > in snapshot mode is not an option, so is there a way to have the
> > backup job skip snapshot mode and automatically use suspend mode to
> > avoid error messages like this?
>
> Simply make a backup using suspend mode (I guess I am missing
> something)?

The backup GUI has snapshot mode as default, so if the user misses to change this setting the backup will fail. The scheduled backup seems to be able to detect what backup mode a VM's storage and type support and choose the backup mode accordingly.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> Since it is not possible to expose a zfs snapshot over iscsi, backup in
> snapshot mode is not an option, so is there a way to have the backup
> job skip snapshot mode and automatically use suspend mode to avoid
> error messages like this?

Simply make a backup using suspend mode (I guess I am missing something)?
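For a one-off test, suspend mode can be forced from the command line. A minimal sketch — the VMID is hypothetical, and the command is only assembled and printed here, since actually running vzdump requires a PVE node:

```shell
# Force suspend mode so vzdump never attempts a storage snapshot.
# VMID 100 is hypothetical; adjust to your container.
vmid=100
backup_cmd="vzdump ${vmid} --mode suspend"
echo "${backup_cmd}"   # on a real node, run the command directly
```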
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Sun, 23 Oct 2016 05:46:13 +0200, Michael Rasmussen wrote:
> Missing:
> - Deactivating when the CT is stopped -> solution just needs to be
>   implemented.

- Test and prove that migration works, both online and offline.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 23:12:58 +0200, Michael Rasmussen wrote:
> It seems to work :-)
> I am now able to create an LXC container on a zvol.

Works great ;-)
- I am able to make backups in suspend mode.
- I am able to make snapshots, both online and offline.

Missing:
- Deactivating when the CT is stopped -> solution just needs to be
  implemented.

Since it is not possible to expose a ZFS snapshot over iSCSI, backup in snapshot mode is not an option. So is there a way to have the backup job skip snapshot mode and automatically use suspend mode, to avoid error messages like this?

ERROR: Backup of VM 100 failed - unable to activate snapshot from remote zfs storage at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 418.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> That and that you have to use hard locking to guarantee against data
>> loss. The hard locking is the part which can bring any server down if
>> the nfs server connection hangs in any way.
>> Add to that that it is very difficult to make a highly available NFS
>> server since client locks are stored in kernel mode.

Yes, sure. That's why NetApp shares NFS sessions across controllers. But indeed, I don't know if there are good open-source implementations of HA NFS.

When I had my Nexenta ZFS SAN, I was using iSCSI. The only problem was the number of LUNs exposed on the hosts, because I had all LUNs of all VMs on all hosts (so between 500-700 LUNs), and the multipath daemon was CPU crazy.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Wed, 12 Oct 2016 05:41:26 +0200 (CEST), Alexandre DERUMIER wrote:
> Well, this is very dependent on NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good
> performance.

NetApp is a purpose-built NFS storage appliance with their own customized NFS server, so NetApp is an exception to the rule.

> I think the biggest problem is the lack of fstrim/discard. (should be
> available in the future NFS 4.2)

That and that you have to use hard locking to guarantee against data loss. The hard locking is the part which can bring any server down if the NFS server connection hangs in any way. Add to that that it is very difficult to make a highly available NFS server, since client locks are stored in kernel mode.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> And what about CIFS?

I can't comment on CIFS, I never ran VMs on it. But what I know is that it really depends on the CIFS version implementation, both client && server.

The old CIFS versions (SMB 1.x, Windows 2003/XP) were a very bad protocol (very chatty). I know that the latest Windows 2012 has very good CIFS (SMB 3.x); I don't know the state of support in the Samba 4 server or other NAS.

http://ram.kossboss.com/correlating-versions-samba-smbcifs/

To be honest, I don't care anymore, as I'm currently migrating all my VMs to ceph/rbd ;)
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > I'm curious to see result of:
>
> It seems to work :-)

Ok, so it shouldn't be too difficult to add|remove a LUN in activate|deactivate volume :)

About NFS vs iSCSI, I think we should add both if we can, because not everybody loves both NFS && iSCSI. (I'm already seeing the long debate in the forum ;)
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> >> NFS does not provide good iops and has the ability to bring down a
> >> node. IMHO NFS is only useful as a filestorage server for backups and
> >> ISO images.
>
> Well, this is very dependent on NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good
> performance.
>
> I think the biggest problem is the lack of fstrim/discard. (should be
> available in the future NFS 4.2)

And what about CIFS?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> NFS does not provide good iops and has the ability to bring down a
>> node. IMHO NFS is only useful as a filestorage server for backups and
>> ISO images.

Well, this is very dependent on NFS server implementation quality. I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good performance.

I think the biggest problem is the lack of fstrim/discard. (should be available in the future NFS 4.2)
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST), Alexandre DERUMIER wrote:
> I'm afraid of the behaviour if a node is not joinable to do the delete
> or rescan. Seems to be difficult to manage with a big cluster.
> I'm curious to see the result of:

It seems to work :-)

1) After login to target:

Oct 11 22:56:41 pve-dev kernel: [70633.376458] scsi host6: iSCSI Initiator over TCP/IP
Oct 11 22:56:42 pve-dev kernel: [70633.885229] scsi 6:0:0:0: Direct-Access SUN COMSTAR 1.0 PQ: 0 ANSI: 5
Oct 11 22:56:42 pve-dev kernel: [70633.887715] sd 6:0:0:0: Attached scsi generic sg2 type 0
Oct 11 22:56:42 pve-dev kernel: [70633.889615] sd 6:0:0:0: [sdb] 4194304 512-byte logical blocks: (2.15 GB/2.00 GiB)
Oct 11 22:56:42 pve-dev kernel: [70633.891141] sd 6:0:0:0: [sdb] Write Protect is off
Oct 11 22:56:42 pve-dev kernel: [70633.891655] sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Oct 11 22:56:42 pve-dev kernel: [70633.902977] sd 6:0:0:0: [sdb] Attached SCSI disk

fdisk -l
Disk /dev/sdb: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

2) On target, remove view

3) Rescan target:

Oct 11 22:59:06 pve-dev kernel: [70778.489172] sdb: detected capacity change from 2147483648 to 0

fdisk -l
No disk /dev/sdb

lsscsi
[1:0:0:0]  cd/dvd  QEMU  QEMU DVD-ROM  2.5+  /dev/sr0
[2:0:0:0]  disk    SUN   COMSTAR       1.0   /dev/sda
[6:0:0:0]  disk    SUN   COMSTAR       1.0   /dev/sdb

echo 1 > /sys/bus/scsi/devices/target6\:0\:0/6\:0\:0\:0/delete

lsscsi
[1:0:0:0]  cd/dvd  QEMU  QEMU DVD-ROM  2.5+  /dev/sr0
[2:0:0:0]  disk    SUN   COMSTAR       1.0   /dev/sda

4) On target, add view again

5) Rescan target:

Oct 11 23:11:11 pve-dev kernel: [71503.637169] scsi 6:0:0:0: Direct-Access SUN COMSTAR 1.0 PQ: 0 ANSI: 5
Oct 11 23:11:11 pve-dev kernel: [71503.638560] sd 6:0:0:0: Attached scsi generic sg2 type 0
Oct 11 23:11:11 pve-dev kernel: [71503.641348] sd 6:0:0:0: [sdb] 4194304 512-byte logical blocks: (2.15 GB/2.00 GiB)
Oct 11 23:11:11 pve-dev kernel: [71503.645011] sd 6:0:0:0: [sdb] Write Protect is off
Oct 11 23:11:11 pve-dev kernel: [71503.645520] sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Oct 11 23:11:11 pve-dev kernel: [71503.655686] sd 6:0:0:0: [sdb] Attached SCSI disk

lsscsi
[1:0:0:0]  cd/dvd  QEMU  QEMU DVD-ROM  2.5+  /dev/sr0
[2:0:0:0]  disk    SUN   COMSTAR       1.0   /dev/sda
[6:0:0:0]  disk    SUN   COMSTAR       1.0   /dev/sdb
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 21:26:08 +0200 (CEST), Dietmar Maurer wrote:
> Why not NFS? This would make everything easier. IMHO iSCSI is really
> clumsy.

NFS does not provide good iops and has the ability to bring down a node. IMHO NFS is only useful as a filestorage server for backups and ISO images.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > VMs use the KVM live backup feature, which is not available for
> > containers.
>
> I see. A workaround could be to make a clone of a snapshot which is
> then exposed through iSCSI. Would that be an idea?

Why not NFS? This would make everything easier. IMHO iSCSI is really clumsy.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 19:54:05 +0200 (CEST), Dietmar Maurer wrote:
> VMs use the KVM live backup feature, which is not available for
> containers.

I see. A workaround could be to make a clone of a snapshot which is then exposed through iSCSI. Would that be an idea?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> > Besides, iSCSI has many other drawbacks, for example it is not
> > possible to access ZFS snapshots over iSCSI. If we use ZFS/NFS
> > instead, we can have all that functionality?
>
> To what purpose is it needed to be able to access a ZFS snapshot?

We need that for vzdump container backups ...
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 17:39:02 +0200 (CEST), Dietmar Maurer wrote:
> Besides, iSCSI has many other drawbacks, for example it is not possible
> to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can
> have all that functionality?

To what purpose is it needed to be able to access a ZFS snapshot?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> I think this is highly hypothetical since a LUN at any point in time
> can only be active on one node (proxmox, that is), so the whole
> operation is serializable, which means that every step will be
> controlled and can be rolled back. I.e. we are dealing with a
> deterministic state machine.

Besides, iSCSI has many other drawbacks, for example it is not possible to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can have all that functionality?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 08:33:02 +0200 (CEST), Alexandre DERUMIER wrote:
> I agree with Dietmar.
>
> Think of a missed SSH command to a remote node (a small network
> timeout, for example) during a volume resize.
>
> The LUN will still be there, but with the wrong size, and if you do a
> live migration it'll crash (and maybe corrupt data).

I think this is highly hypothetical since a LUN at any point in time can only be active on one node (proxmox, that is), so the whole operation is serializable, which means that every step will be controlled and can be rolled back. I.e. we are dealing with a deterministic state machine.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> Not reachable nodes pose another problem, which will be a problem for
> other things too and must be handled by fencing.

Not really that easy - we have a quorum system to handle most situations ... fencing is only required for some special cases.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> Offline nodes are not a problem because when they get online their
>> SCSI bus will not have references to disappeared LUNs. Not reachable
>> nodes pose another problem, which will be a problem for other things
>> too and must be handled by fencing.

I agree with Dietmar.

Think of a missed SSH command to a remote node (a small network timeout, for example) during a volume resize.

The LUN will still be there, but with the wrong size, and if you do a live migration it'll crash (and maybe corrupt data).
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 06:14:14 +0200 (CEST), Dietmar Maurer wrote:
> Such things cannot work, because nodes can be offline (or worse, online
> but not reachable).

Offline nodes are not a problem because when they get online their SCSI bus will not have references to disappeared LUNs. Not reachable nodes pose another problem, which will be a problem for other things too and must be handled by fencing.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> Start/activate:
>   New view
>   Foreach node:
>     iscsiadm --session --rescan

Such things cannot work, because nodes can be offline (or worse, online but not reachable).
Re: [pve-devel] LXC volumes on ZFS over iSCSI
10.10.2016 23:30, Alexandre DERUMIER wrote:
>> We could rework the iSCSI-manipulation code into another behavior. For
>> example, Dell PS-series SANs export each volume in a separate target,
>> LUN 0. So we can log in to this target in activate_volume() and log
>> out in deactivate_volume(). See my plugin for these storages:
>> https://github.com/mityarzn/pve-storage-custom-dellps
>> Also, there's a note about a bug in Debian's multipath-tools in the
>> plugin's README.
>>
>> That way we can have LUNs only on the hosts where they're needed,
>> except some cases where PVE does not call deactivate_volume() for some
>> reason (I think these are bugs?).
>
> I think this is a bad hack. A target for each LUN will mean 100's or
> even 1000's of targets. I am quite convinced that my idea can be
> implemented, and I think it is, IMHO, a much cleaner solution.
>
> My solution:
>
> Create:
>   New volume
>   New lun
>
> Start/activate:
>   New view
>   Foreach node:
>     iscsiadm --session --rescan
>
> Stop/deactivate:
>   Remove view
>   Foreach node:
>     echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete
>     (where H = host, B = bus, T = target, L = lun)
>
> Remove:
>   Delete lun
>   Delete volume

Yes, that's what I'm doing with a NetApp SAN (except the stop phase and the view manipulations). That works, but the "Foreach node:" step is ugly.

Maybe you could use per-initiator views: you map a volume only to the node which needs it, and you do not have to clean up the other nodes on volume deactivation. But you will have to remove views for crashed nodes somehow.

But why are 1000's of targets worse than 1000's of LUNs in a single target? Are there some limits on the Linux kernel side for either parameter?
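The per-node activate/deactivate steps quoted above can be sketched as a pair of shell helpers. This is only a sketch under assumptions: the H:B:T:L coordinates are hypothetical (on a real node they come from lsscsi), and commands are printed rather than executed so the sketch runs without an iSCSI session — drop the `run` wrapper to really execute them.

```shell
# Print-instead-of-execute wrapper so the sketch is safe to run anywhere.
run() { echo "$@"; }

# Activate: after the storage server adds the view, the node rescans its
# open-iscsi session so the new LUN appears.
activate_volume() {
  run iscsiadm --mode session --rescan
}

# Deactivate: after the view is removed, delete the stale SCSI device
# from the node's SCSI bus via sysfs. $1 is H:B:T:L, e.g. 6:0:0:0.
deactivate_volume() {
  local hbtl="$1"
  run sh -c "echo 1 > /sys/bus/scsi/devices/${hbtl}/delete"
}

activate_volume
deactivate_volume 6:0:0:0
```

The mail uses the equivalent target6:0:0/6:0:0:0 sysfs path; the plain H:B:T:L entry under /sys/bus/scsi/devices points at the same device.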
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, Oct 10, 2016 at 4:27 PM, Alexandre DERUMIER wrote:
>>> Having e.g. LXC working over NFS with ZFS on another server.
>
> Do you want to manage snapshot/clone on the zfs server ?

Yes, the purpose is to use a ZFS-based storage on multiple nodes without iSCSI. I don't know if Proxmox VE LXC is going to work, but "normal" ZFS filesystems via NFS work on "normal" LXC on Jessie. I'm running a few LXC containers on my Pi this way.

> if yes, I think it's not too difficult to add nfs support to the zfsplugin.
> But I'm not sure how to manage nfs exports on the target server. (maybe there are
> different implementations (Solaris, FreeBSD, ...).
> Don't know if we can define a global nfs share, then automount sub nfs
> directories by zfs volume. (I manage them like this with netapp)

ZFS can share filesystems automatically via NFS. You only need one 'dummy' entry in /etc/exports on Debian to get the NFS server started. Afterwards ZFS will automatically share any filesystem with the sharenfs option set. On Solaris you will not have a problem at all: just use sharenfs on ZFS and you're good to go.
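For illustration, a rough sketch of the Debian side of this. The dataset name, network and export options below are invented, and with the default DRY_RUN=1 the commands are only printed (a real run needs root, an existing pool named 'tank', and nfs-kernel-server installed):

```shell
# Sketch only: 'tank/pve-lxc', the subnet and the options are hypothetical.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*;"; if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# One dummy entry so Debian's NFS server starts at all; with an empty
# /etc/exports the service exports nothing and may not come up.
run sh -c "echo '/srv/dummy localhost(ro)' >> /etc/exports"
run systemctl restart nfs-kernel-server

# From here on, ZFS manages the export itself via the sharenfs property.
run zfs create tank/pve-lxc
run zfs set sharenfs=rw=@10.0.0.0/24,no_root_squash tank/pve-lxc
run zfs share -a   # (re)share everything that has sharenfs set
```

Set DRY_RUN=0 to actually execute the commands instead of printing them.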
Re: [pve-devel] LXC volumes on ZFS over iSCSI
Would a hardware iSCSI SAN operate in a significantly different fashion?

On Oct 10, 2016 3:55 PM, "Michael Rasmussen" wrote:
> On Mon, 10 Oct 2016 22:38:03 +0200 (CEST)
> Alexandre DERUMIER wrote:
>
> > Great ! (Sorry I can't help because I don't have an iscsi san anymore)
>
> My 'iscsi san' for testing is a virtual debian and/or solaris ;-)
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael rasmussen cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir datanom net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir miras org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Q: How was Thomas J. Watson buried?
> A: 9 edge down.
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:38:03 +0200 (CEST) Alexandre DERUMIER wrote:
>
> Great ! (Sorry I can't help because I don't have an iscsi san anymore)

My 'iscsi san' for testing is a virtual debian and/or solaris ;-)

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> I will do some preliminary tests this weekend starting with a node and
>> a storage appliance. When and if this is successful I will scale it to
>> more nodes.

Great ! (Sorry I can't help because I don't have an iscsi san anymore)

- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 22:33:58
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

On Mon, 10 Oct 2016 22:30:04 +0200 (CEST) Alexandre DERUMIER <aderum...@odiso.com> wrote:
>
> I'm afraid of the behaviour if a node is not joinable to do the delete or rescan.
> Seems difficult to manage with a big cluster. I'm curious to see the result of:
>
> Start/activate: (PVE::Storage::activate_volume)
>   echo "c t l" > /sys/class/scsi_host/hosth/scan
>
> stop/deactivate: (PVE::Storage::deactivate_volume)
>   echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete
>

I will do some preliminary tests this weekend starting with a node and a storage appliance. When and if this is successful I will scale it to more nodes.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST) Alexandre DERUMIER wrote:
>
> I'm afraid of the behaviour if a node is not joinable to do the delete or rescan.
> Seems difficult to manage with a big cluster. I'm curious to see the result of:
>
> Start/activate: (PVE::Storage::activate_volume)
>   echo "c t l" > /sys/class/scsi_host/hosth/scan
>
> stop/deactivate: (PVE::Storage::deactivate_volume)
>   echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete
>

I will do some preliminary tests this weekend starting with a node and a storage appliance. When and if this is successful I will scale it to more nodes.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> Start/activate:
>>   New view
>>   Foreach node:
>>     iscsiadm --session --rescan
>>
>> stop/deactivate:
>>   Remove view
>>   Foreach node:
>>     echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

I'm afraid of the behaviour if a node is not joinable to do the delete or rescan. Seems difficult to manage with a big cluster. I'm curious to see the result of:

Start/activate: (PVE::Storage::activate_volume)
  echo "c t l" > /sys/class/scsi_host/hosth/scan

stop/deactivate: (PVE::Storage::deactivate_volume)
  echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete

- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 21:31:51
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

On Mon, 10 Oct 2016 22:12:56 +0300 Dmitry Petuhov <mityapetu...@gmail.com> wrote:
> We could rework the iSCSI-manipulation code into another behavior. For example,
> Dell PS-series SANs export each volume in a separate target, LUN 0. So we can
> log into this target in activate_volume() and log out in deactivate_volume().
> See my plugin for these storages:
> https://github.com/mityarzn/pve-storage-custom-dellps
> Also, there's a note about a bug in Debian's multipath-tools in the plugin's README.
>
> That way we can have LUNs only on the hosts where they're needed. Except some
> cases where PVE does not call deactivate_volume() for some reason (I think
> these are bugs?).
>

I think this is a bad hack. A target for each LUN will mean 100's or even 1000's of targets. I am quite convinced that my idea can be implemented, and I think it is, IMHO, a much cleaner solution.

My solution:

Create:
  New volume
  New lun

Start/activate:
  New view
  Foreach node:
    iscsiadm --session --rescan

stop/deactivate:
  Remove view
  Foreach node:
    echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

Remove:
  Delete Lun
  Delete volume

--
Hilsen/Regards
Michael Rasmussen
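The four phases could be sketched roughly as below. Everything SAN-side here is hypothetical glue (create_lun, add_view, remove_view and delete_lun stand in for whatever the LUN plugin would actually call), the node list and the H:B:T:L address are invented, and with the default DRY_RUN=1 the commands are only printed. Note that iscsiadm wants the mode flag spelled out: 'iscsiadm -m session --rescan'.

```shell
# Dry-run sketch of the create / activate / deactivate / remove phases.
# create_lun, add_view, remove_view, delete_lun are placeholder names.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*;"; if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

NODES="node1 node2 node3"   # would come from the cluster configuration
VOL=vm-100-disk-1           # example volume name
HBTL=2:0:0:5                # example host:bus:target:lun on the nodes

create_volume() {
    run zfs create -V 32G "tank/$VOL"
    run create_lun "$VOL"
}
activate_volume() {
    run add_view "$VOL"     # make the LUN visible to the initiators
    for n in $NODES; do run ssh "$n" iscsiadm -m session --rescan; done
}
deactivate_volume() {
    run remove_view "$VOL"
    for n in $NODES; do run ssh "$n" "echo 1 > /sys/bus/scsi/devices/$HBTL/delete"; done
}
remove_volume() {
    run delete_lun "$VOL"
    run zfs destroy "tank/$VOL"
}

create_volume; activate_volume; deactivate_volume; remove_volume
```

The per-node loop is exactly the part the thread flags as fragile: any node that is down at deactivate time keeps a stale device.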
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:12:56 +0300 Dmitry Petuhov wrote:
> We could rework the iSCSI-manipulation code into another behavior. For example,
> Dell PS-series SANs export each volume in a separate target, LUN 0. So we can
> log into this target in activate_volume() and log out in deactivate_volume().
> See my plugin for these storages:
> https://github.com/mityarzn/pve-storage-custom-dellps
> Also, there's a note about a bug in Debian's multipath-tools in the plugin's README.
>
> That way we can have LUNs only on the hosts where they're needed. Except some
> cases where PVE does not call deactivate_volume() for some reason (I think
> these are bugs?).
>

I think this is a bad hack. A target for each LUN will mean 100's or even 1000's of targets. I am quite convinced that my idea can be implemented, and I think it is, IMHO, a much cleaner solution.

My solution:

Create:
  New volume
  New lun

Start/activate:
  New view
  Foreach node:
    iscsiadm --session --rescan

stop/deactivate:
  Remove view
  Foreach node:
    echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

Remove:
  Delete Lun
  Delete volume

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
10.10.2016 21:08, Alexandre DERUMIER wrote:
>>> This is because the Lun is persisted through the scsi bus so the
>>> following should do it:
>>> echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)
> yes, but you need to do it on all nodes to be clean.
> For example, you remove a lun on node1; if you don't remove it from the other
> nodes, you'll have timeouts or multipath errors on these nodes.

We could rework the iSCSI-manipulation code into another behavior. For example, Dell PS-series SANs export each volume in a separate target, LUN 0. So we can log into this target in activate_volume() and log out in deactivate_volume(). See my plugin for these storages: https://github.com/mityarzn/pve-storage-custom-dellps Also, there's a note about a bug in Debian's multipath-tools in the plugin's README.

That way we can have LUNs only on the hosts where they're needed. Except some cases where PVE does not call deactivate_volume() for some reason (I think these are bugs?).
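For illustration, a minimal sketch of this per-target scheme from the node side. The portal address and IQN below are invented (the real ones would come from the storage configuration), and with the default DRY_RUN=1 the commands are only printed:

```shell
# Dry-run sketch: one dedicated target per volume, so activation is a
# login and deactivation is a logout. PORTAL and IQN are hypothetical.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*;"; if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

PORTAL=192.168.0.10:3260                                # example portal
IQN=iqn.2001-05.com.equallogic:0-example-vm-100-disk-1  # example IQN

# activate_volume(): log into the volume's dedicated target (LUN 0)
activate_volume() {
    run iscsiadm -m node -T "$IQN" -p "$PORTAL" --login
}
# deactivate_volume(): log out again; this removes the device from
# this node only, no other node is involved.
deactivate_volume() {
    run iscsiadm -m node -T "$IQN" -p "$PORTAL" --logout
}

activate_volume
deactivate_volume
```

The appeal of this layout is visible here: no per-node cleanup loop is needed, at the cost of one target per volume.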
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> This is because the Lun is persisted through the scsi bus so the
>> following should do it:
>> echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

yes, but you need to do it on all nodes to be clean. For example, you remove a lun on node1; if you don't remove it from the other nodes, you'll have timeouts or multipath errors on these nodes.

In the past, when I had my nexenta zfs san with iscsi luns, I had a hacky cronjob with a rescan every 5 minutes to clean up removed luns. But I had problems with scanning everything: adding 500 luns to each node, and the multipath daemon going CPU-crazy with all these luns. That's why I suggest to only add a lun when needed (vm start) and remove it when the vm is stopped.

According to the Red Hat doc, it's possible to scan only one lun:

# echo "c t l" > /sys/class/scsi_host/hosth/scan

In the previous command, h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. I never used it; I don't know if it works fine.

- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 19:48:32
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

On Mon, 10 Oct 2016 19:32:28 +0200 (CEST) Alexandre DERUMIER <aderum...@odiso.com> wrote:
>
> But, if you delete a lun, rescan is not removing it.
> And if you remove then add a new lun on the same lunid, this is where the problems begin.
>

This is because the Lun is persisted through the scsi bus, so the following should do it:

echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

--
Hilsen/Regards
Michael Rasmussen
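The two sysfs writes discussed here, wrapped with a dry-run guard for illustration. The H/C/T/L numbers are invented, and writing to these files for real requires root and a live SCSI host:

```shell
# Dry-run sketch of single-LUN scan (activate) and device delete
# (deactivate). The address 2:0:0:5 is a made-up example.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*;"; if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else eval "$*"; fi; }

H=2 C=0 T=0 L=5   # example host, channel, target and lun numbers

# activate: scan exactly one LUN instead of rescanning the whole bus
run "echo '$C $T $L' > /sys/class/scsi_host/host$H/scan"

# deactivate: remove the single device again
run "echo 1 > /sys/bus/scsi/devices/$H:$C:$T:$L/delete"
```

Wildcards also work in the scan triple ('- - -' scans everything), which is exactly the full-bus scan the thread is trying to avoid.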
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 19:32:28 +0200 (CEST) Alexandre DERUMIER wrote:
>
> But, if you delete a lun, rescan is not removing it.
> And if you remove then add a new lun on the same lunid, this is where the problems begin.
>

This is because the Lun is persisted through the scsi bus, so the following should do it:

echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus, T = target, L = lun)

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> -R, --rescan
>> In session mode, if sid is also passed in, rescan that session. If no sid has
>> been passed in, rescan all running sessions.
>> In node mode, rescan a session running through the target, portal, iface tuple
>> passed in.

As far as I know, rescan only adds new luns. But, if you delete a lun, rescan does not remove it. And if you remove then add a new lun on the same lunid, this is where the problems begin.

- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 19:00:18
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

On Mon, 10 Oct 2016 18:46:50 +0200 Michael Rasmussen <m...@datanom.net> wrote:
> On Mon, 10 Oct 2016 13:57:36 +0200 (CEST)
> Alexandre DERUMIER <aderum...@odiso.com> wrote:
>
> > I think the more difficult part is to manage iscsi lun add|remove with
> > iscsiadm. (without doing a full scan)
> Should the rescan option of iscsiadm not do this?
>

-R, --rescan
In session mode, if sid is also passed in, rescan that session. If no sid has been passed in, rescan all running sessions. In node mode, rescan a session running through the target, portal, iface tuple passed in.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 18:46:50 +0200 Michael Rasmussen wrote:
> On Mon, 10 Oct 2016 13:57:36 +0200 (CEST)
> Alexandre DERUMIER wrote:
>
> > I think the more difficult part is to manage iscsi lun add|remove with
> > iscsiadm. (without doing a full scan)
> Should the rescan option of iscsiadm not do this?
>

-R, --rescan
In session mode, if sid is also passed in, rescan that session. If no sid has been passed in, rescan all running sessions. In node mode, rescan a session running through the target, portal, iface tuple passed in.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 13:57:36 +0200 (CEST) Alexandre DERUMIER wrote:
> I think the more difficult part is to manage iscsi lun add|remove with
> iscsiadm. (without doing a full scan)

Should the rescan option of iscsiadm not do this?

> (and with multipath it is also more complex)

The current zfs over iscsi does not support multipath (a libiscsi limitation), but if I get the kernel iscsi driver to work, multipath should be available for the current implementation as well.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> On October 10, 2016 at 6:07 PM Michael Rasmussen wrote:
>
> On Mon, 10 Oct 2016 16:58:12 +0200 (CEST)
> Dietmar Maurer wrote:
>
> > Sigh, I should read the code before ...
> >
> > The problem is that the ZFSPlugin uses the userspace iscsi library.
> >
> > You would need to replace that with kernel-level iSCSI, so that
> > we can 'mount' exported volumes.
>
> Yes, that was also my conclusion.
>
> I guess it will require adding code to handle containers in every
> function under the storage implementation?
>
> If I get this to work using the kernel iscsi driver, it might be worthwhile
> to convert the existing code to use it too, to eliminate libiscsi
> entirely and reduce code lines.

See the comment from Alexandre - I guess you will run into problems ;-/
Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 16:58:12 +0200 (CEST) Dietmar Maurer wrote:
> Sigh, I should read the code before ...
>
> The problem is that the ZFSPlugin uses the userspace iscsi library.
>
> You would need to replace that with kernel-level iSCSI, so that
> we can 'mount' exported volumes.
>

Yes, that was also my conclusion.

I guess it will require adding code to handle containers in every function under the storage implementation?

If I get this to work using the kernel iscsi driver, it might be worthwhile to convert the existing code to use it too, to eliminate libiscsi entirely and reduce code lines.

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> You would need to replace that with kernel-level iSCSI, so that
>> we can 'mount' exported volumes.

The more difficult part is to not auto-add all lun devices on all nodes at iscsi startup. I think it was not possible in the past, but maybe this has been fixed in the latest release.

For example: autoscan has mapped all luns on all nodes. On one node, you roll back to a zfs snapshot (so you remove the scsi device lun mapping, then you re-add it). But the other nodes still have the "old" lun mapped.

We really need to avoid auto lun mapping, and map/unmap luns only when needed.

- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "datanom.net" <m...@datanom.net>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 16:58:12
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

Sigh, I should read the code before ...

The problem is that the ZFSPlugin uses the userspace iscsi library. You would need to replace that with kernel-level iSCSI, so that we can 'mount' exported volumes. Not sure if it makes sense to export the volumes using NFS instead.

> On October 10, 2016 at 4:53 PM Dietmar Maurer <diet...@proxmox.com> wrote:
>
> > What exactly is required to enable support for LXC volumes on ZFS over
> > iSCSI?
>
> Sorry, but what is the problem exactly?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
> What exactly is required to enable support for LXC volumes on ZFS over
> iSCSI?

Sorry, but what is the problem exactly?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
Sigh, I should read the code before ...

The problem is that the ZFSPlugin uses the userspace iscsi library.

You would need to replace that with kernel-level iSCSI, so that we can 'mount' exported volumes.

Not sure if it makes sense to export the volumes using NFS instead.

> On October 10, 2016 at 4:53 PM Dietmar Maurer wrote:
>
> > What exactly is required to enable support for LXC volumes on ZFS over
> > iSCSI?
>
> Sorry, but what is the problem exactly?
Re: [pve-devel] LXC volumes on ZFS over iSCSI
>> Having e.g. LXC working over NFS with ZFS on another server.

Do you want to manage snapshot/clone on the zfs server ?

If yes, I think it's not too difficult to add nfs support to the zfsplugin. But I'm not sure how to manage nfs exports on the target server (maybe there are different implementations (Solaris, FreeBSD, ...)). Don't know if we can define a global nfs share, then automount sub nfs directories by zfs volume. (I manage them like this with netapp.)

- Original message -
From: "Andreas Steinel" <a.stei...@gmail.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 15:15:42
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI

This is a similar thing to what I wanted to discuss with my ZFS-over-NFS question a while ago: having e.g. LXC working over NFS with ZFS on another server.

On Mon, Oct 10, 2016 at 1:57 PM, Alexandre DERUMIER <aderum...@odiso.com> wrote:
> I think the more difficult part is to manage iscsi lun add|remove with
> iscsiadm. (without doing a full scan)
> (and with multipath it is also more complex)
>
> - Original message -
> From: "datanom.net" <m...@datanom.net>
> To: "pve-devel" <pve-devel@pve.proxmox.com>
> Sent: Monday, 10 October 2016 13:50:53
> Subject: [pve-devel] LXC volumes on ZFS over iSCSI
>
> Hi all,
>
> What exactly is required to enable support for LXC volumes on ZFS over
> iSCSI?
>
> --
> Hilsen/Regards
> Michael Rasmussen
Re: [pve-devel] LXC volumes on ZFS over iSCSI
I think the more difficult part is to manage iscsi lun add|remove with iscsiadm. (without doing a full scan) (and with multipath it is also more complex)

- Original message -
From: "datanom.net"
To: "pve-devel"
Sent: Monday, 10 October 2016 13:50:53
Subject: [pve-devel] LXC volumes on ZFS over iSCSI

Hi all,

What exactly is required to enable support for LXC volumes on ZFS over iSCSI?

--
Hilsen/Regards
Michael Rasmussen