On 21/01/2021 21:31, n...@li.nux.ro wrote:
Well, there you go..

On 2021-01-21 18:50, Simon Weller wrote:
We used to use CLVM a while ago before we shifted to Ceph. Cluster
suite/corosync was a bit of a nightmare, and fencing events caused all
sorts of locking (DLM) problems.
I helped a CloudStack user out a couple of months ago, after they
upgraded and CLVM broke, so I know it's still out there in limited
places.
I wouldn't recommend using it today unless you're very brave and have
the capability of troubleshooting the code yourself.

In addition:

I assume you used CLVM with Corosync?

Yes


My concerns with LVM are:

- No thin provisioning (when used with CloudStack)

Indeed, and machine deployment meant a qemu-img convert from qcow2 to an LV .. so it was lengthy.
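For reference it was roughly this per deployment (the VG and LV names here are just examples):

    qemu-img convert -f qcow2 -O raw template.qcow2 /dev/vg_cloudstack/vm-1234-disk

so the full raw image gets written out every time, instead of a cheap copy-on-write clone like you get with qcow2 backing files or Ceph/RBD.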

Yes, that's a valid argument against using LVM, as you lose a lot of features.


- No snapshots (Right?)

Don't remember honestly. If there were, they must have been slow.
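Plain LVM does have copy-on-write snapshots, roughly like this (names and sizes are just examples):

    lvcreate --snapshot --name vm-1234-disk-snap --size 5G /dev/vg_cloudstack/vm-1234-disk

but every write to the origin then has to copy the old block into the snapshot area first, which is why they get slow. Whether CloudStack ever wired that up for CLVM I honestly can't remember.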

- Not widely used

Yep..


OCFS starts to become more appealing. :)

At the time - and probably now as well - OCFS was best supported by Oracle Unbreakable Linux (a RHEL rebuild). It might be worth looking at running that instead of Ubuntu or CentOS, hopefully for a smoother, bug-free experience.


I would need to test that. RHEL6 was a long time ago and we are now on CentOS 8.

In our case we use Ubuntu 20.04 for the hypervisors, which gives us a recent kernel and with it the OCFS2 development that has gone in over recent years.

There's also a reason that VMware uses VMFS with VMDK images on top, as that provides a lot of flexibility.

This time the use case is iSCSI, but thinking ahead we'll get new things like NVMe-oF, which provides even lower latency.

Probably best to set up a PoC and see how it works out.
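For the PoC on a shared iSCSI LUN it would look roughly like the following; the portal address, IQN, device and label are just placeholders, and the o2cb cluster configuration is left out:

    # On every hypervisor: attach the shared LUN with open-iscsi
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2021-01.example:cloudstack-lun0 -p 192.0.2.10 --login

    # Once, from one node: create the filesystem with enough node slots
    mkfs.ocfs2 -L cloudstack -N 8 /dev/sdX

    # On every hypervisor: describe the nodes in /etc/ocfs2/cluster.conf,
    # bring up o2cb and mount it
    mount -t ocfs2 /dev/sdX /mnt/primary

CloudStack should then be able to use the mountpoint as SharedMountPoint primary storage, if I remember the naming correctly.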

Wido



Lucian
