[pve-devel] LizardFS support - wishlist

2018-01-27 Thread Gilberto Nunes
Hi, I would like to see support for this DFS in a future Proxmox version! What do you think about it? https://docs.lizardfs.com -- Gilberto Nunes Ferreira

Re: [pve-devel] LizardFS support - wishlist

2018-01-29 Thread Eneko Lacunza
Hi Gilberto, I took a look at their website, but didn't find any hint about why LizardFS would be better than currently supported storages like Ceph or GlusterFS. Did you find some use cases where this solution would be better than the currently supported ones? Cheers

Re: [pve-devel] LizardFS support - wishlist

2018-01-29 Thread Gilberto Nunes
Hi, well... It just seems that LizardFS (which is a MooseFS fork, I guess) is easier to implement and much faster than GlusterFS, keeps all its metadata in RAM, is easier to set up than Ceph, and there are clients for Linux, Windows and macOS... I am researching more benchmarks regarding performance.
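
Since LizardFS is mounted as a normal POSIX filesystem through the mfsmount client, a Proxmox storage plugin for it would presumably look much like the existing directory-backed plugins. Below is a rough, untested sketch of what such a plugin could look like; the package name, the 'lizardfs' type, the 'master' property and the mfsmount invocation are illustrative assumptions, not existing code.

package PVE::Storage::Custom::LizardfsPlugin;

# Illustrative sketch only -- modeled on the existing directory-backed
# storage plugins (e.g. GlusterfsPlugin); everything LizardFS-specific
# here is an assumption.

use strict;
use warnings;

use PVE::Tools qw(run_command);

use base qw(PVE::Storage::Plugin);

# storage type identifier as it would appear in /etc/pve/storage.cfg
sub type {
    return 'lizardfs';
}

sub plugindata {
    # a LizardFS mount behaves like a plain POSIX directory, so the usual
    # file-based content types and image formats apply
    return {
        content => [ { images => 1, iso => 1, vztmpl => 1, backup => 1 },
                     { images => 1 } ],
        format  => [ { raw => 1, qcow2 => 1, vmdk => 1 }, 'raw' ],
    };
}

sub properties {
    return {
        # hypothetical property: address of the LizardFS master server
        master => {
            description => "LizardFS master host.",
            type => 'string',
        },
    };
}

sub options {
    return {
        path     => { fixed => 1 },
        master   => { optional => 1 },
        content  => { optional => 1 },
        nodes    => { optional => 1 },
        disable  => { optional => 1 },
        maxfiles => { optional => 1 },
    };
}

# mount the filesystem on activation if it is not already mounted
sub activate_storage {
    my ($class, $storeid, $scfg, $cache) = @_;

    my $path = $scfg->{path};
    if (!-d "$path/images") {  # crude "already mounted?" check
        # mfsmount syntax assumed from the LizardFS client docs;
        # '-H' names the master server
        run_command(['mfsmount', $path, '-H', $scfg->{master}],
                    errmsg => "mount lizardfs '$scfg->{master}' failed");
    }

    $class->SUPER::activate_storage($storeid, $scfg, $cache);
}

1;

With something along those lines, a (hypothetical) entry in /etc/pve/storage.cfg might read:

lizardfs: lizard-store
        path /mnt/lizardfs
        master lizard-master.example.com
        content images,iso

Alternatively, since mfsmount just produces a directory, the mount could simply be added as a plain 'dir' storage without writing any new plugin at all.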

Re: [pve-devel] LizardFS support - wishlist

2018-01-29 Thread Gilberto Nunes
And just to add: I have already deployed GlusterFS over a WAN (fiber channel) and the latency was painful... -- Gilberto Nunes Ferreira

Re: [pve-devel] LizardFS support - wishlist

2018-01-29 Thread Dietmar Maurer
> Well... It just seems that LizardFS (which is a MooseFS fork, I guess) is easier to implement,

If I remember correctly, they run a single master with no automatic failover?

Re: [pve-devel] LizardFS support - wishlist

2018-01-29 Thread Gökalp Çakıcı
Yes, you remember correctly. Automatic failover is a paid option in their commercial version.