Hi,
Did you test the performance of LVM snapshots?
LVM performance is usually OK, but with snapshots it can drop by 10x
if your storage doesn't have much IOPS headroom (disk spindles or SSDs).
Even with good IOPS or SSDs it still drops 2-3x.
For reference http://www.nikhef.nl/~dennisvd/lvmcrap.html
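The write penalty comes from LVM's copy-on-write mechanism: every first write to an origin chunk must also copy the old data into the snapshot. A minimal sketch (volume group and LV names are hypothetical, adjust to your layout):

```shell
# Create a copy-on-write snapshot of an existing LV (names illustrative):
lvcreate --snapshot --size 5G --name vm01-snap /dev/vg_vms/vm01

# Every first write to a chunk of the origin now triggers an extra
# read+write to copy the old chunk into the snapshot -- this CoW
# traffic is where the 2-10x write penalty comes from.

# Watch how full the snapshot is getting (it is dropped when full):
lvs -o lv_name,origin,snap_percent vg_vms
```

Keeping snapshots short-lived, or sizing them generously and removing them as soon as the backup is done, limits how long the penalty applies.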
Regards
Hello,
I have already published the updated driver. Now all images not uploaded by
the administrator to the GFS2 volume are stored inside LVM volumes (inside
clvm):
https://github.com/jhrcz/opennebula-tm-gfs2clvm/tree/v0.0.20120303.0
Basic tests are already done, and everything seems to work fine ;o) it
l
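A rough sketch of what placing an image inside a dedicated clustered LV might look like (illustrative names only, not the driver's actual code):

```shell
# Hypothetical sketch: instead of keeping a VM image as a file on
# GFS2, give it its own LV in the clustered volume group and copy
# the raw image data into it. Names (vg_shared, lv_img_42, the
# image path) are assumptions, not taken from the gfs2clvm driver.
lvcreate -L 10G -n lv_img_42 vg_shared
dd if=/var/lib/one/images/img-42 of=/dev/vg_shared/lv_img_42 bs=4M
```

Running the VM from the raw LV avoids filesystem overhead on the data path, which is the performance motivation described in the thread.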
Hi Steve,
The complete write-up about setting everything up "is on my todo".
clvm is the minimal form on CentOS. Just shared storage (SAS
infrastructure in my case, but it could be anything else like DRBD
primary/primary, iSCSI, etc.). The driver currently does not use
exclusive locks, but I'm tempted
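For readers who want to try this, a minimal CLVM bring-up on CentOS might look as follows. Package, service, and device names are assumptions based on the stock cman/clvmd stack, not taken from Jan's setup:

```shell
# Sketch of a minimal CLVM setup on CentOS 6 (cman/clvmd stack).
yum install lvm2-cluster cman

# Switch LVM to cluster-wide DLM locking (sets locking_type = 3
# in /etc/lvm/lvm.conf):
lvmconf --enable-cluster

# Cluster membership first, then the clustered LVM daemon:
service cman start
service clvmd start

# Create a clustered VG on the shared LUN so every node sees the
# same metadata (device path is hypothetical):
vgcreate -cy vg_shared /dev/mapper/shared-lun
```

The `-cy` flag marks the volume group as clustered, so LV activation and metadata changes are coordinated across nodes via the DLM.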
This is very interesting work. Jan, do you have any write-up on how
you were able to get the GFS2 and clvm setup working with this driver?
It's a use case very similar to what we are considering for FermiCloud.
Two other questions that weren't immediately obvious from looking
at the code:
1
Hi Jan,
> I call this transfer manager driver **gfs2clvm** and made it (even in
> its current development state, but most of the functions already work)
> available on github:
> https://github.com/jhrcz/opennebula-tm-gfs2clvm
>
> if anyone is interested, wants to contribute and help, please contact
Hello OpenNebulians,
I'm currently working on a setup where VMs run from clvm volumes
(for maximum performance) and traditional file data like state files,
contextualisation files, etc. are stored in a GFS2 filesystem on top of the
same clvm.
OpenNebula management node is connected with work
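The layered layout described above can be sketched as follows. All names (cluster name, VG, mount point, journal count) are hypothetical placeholders, not Jan's actual configuration:

```shell
# One clustered VG carries both worlds: raw LVs for VM disks, plus
# one LV holding a GFS2 filesystem for state/context files.
lvcreate -L 20G -n lv_datastore vg_shared

# GFS2 needs the cluster name and one journal per mounting node
# (here: cluster "mycluster", 3 nodes -- both are assumptions):
mkfs.gfs2 -p lock_dlm -t mycluster:one-ds -j 3 /dev/vg_shared/lv_datastore

# Mount it where OpenNebula expects its datastore files:
mount -t gfs2 /dev/vg_shared/lv_datastore /var/lib/one/datastores
```

VM disks stay on raw clustered LVs for performance, while only the small, shared metadata files pay the GFS2 locking overhead.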