Hello! I think I've found a way to definitively solve issue 256, as explained in the last posts of https://code.google.com/p/ganeti/issues/detail?id=256
To sum up: we currently use a fixed size for the DRBD metadata volume, which implies a ~4TB limit for DRBD volumes and wastes space for small ones. Following the DRBD documentation (http://drbd.linbit.com/users-guide-8.4/ch-internals.html#s-meta-data-size), we could instead compute the metadata volume size as a function of the data volume size:

    DRBD_META_SIZE_Mb = 1 + DRBD_SIZE_Mb / 32768

(I've put a small Python sketch of this rule at the end of this mail.)

I've started to follow the developers' guidelines; the required modification sounds trivial, but the implementation is not. On top of that, this modification requires a disk migration on production environments, to correctly recreate the DRBD volumes with a dynamic metadata size instead of the fixed one.

1) Implementation

Currently, in the Python code, DRBD_META_SIZE is a constant:
- defined by drbdMetaSize in src/Ganeti/Constants.hs, which is used to generate the Python constant
- used in lib/masterd/instance.py and lib/cmdlib/instance_storage.py

My proposal is to remove the DRBD_META_SIZE constant and replace it with 1 + DISK_SIZE_Mb / 32768.

My question (it should be trivial, I suppose, but I'm still at the code-discovery stage): what exactly do constants.IDISK_SIZE and constants.IDISK_VG mean?

2) Data migration

If I manage to produce a patch that uses a dynamic DRBD metadata disk size instead of a fixed one, the next problem is how to include it in the current branches, since a disk data migration is needed. My personal scenario is:
- update Ganeti (patched with drbd-dynamic-metadata)
- on cluster verify, emit a warning for every DRBD instance that still needs the data migration, explaining to admins that, for each such instance, they should:
  - stop the instance
  - convert the instance disks to plain format
  - convert the instance back to drbd format
  - start the instance (which now uses a dynamic DRBD metadata size)

(The second sketch below illustrates this per-instance procedure.)

I know several users are waiting for this modification, but it's the kind of change that needs to be introduced carefully, since data storage is involved. Other scenarios would be possible, for example:
- use max(dynamic metadata size, current constant size) for the disk requirements, which avoids the data migration and applies the new metadata size to new instances only (third sketch below)
- write code to transform the existing DRBD volumes, forcing each DRBD node to recreate its metadata volume with a dynamic size; but I don't think this is possible, since we can't keep a fixed-size metadata volume synchronized with a dynamic-size one.
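Here is a rough Python sketch of the sizing rule, just to illustrate (sizes in MiB everywhere; the helper name is mine, not existing Ganeti code):

    def ComputeDrbdMetaSize(disk_size_mb):
      """Return the DRBD metadata volume size (in MiB) for a data volume.

      This is the approximation from the DRBD users' guide: roughly 32KiB
      of metadata per 1GiB of data, plus 1MiB to cover the rounding and
      the small fixed overhead.
      """
      return 1 + disk_size_mb // 32768

    # e.g. a 10GiB data volume needs a 1MiB metadata volume,
    # and a 4TiB one (4 * 1024 * 1024 MiB) needs 129MiB.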
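And a sketch of the per-instance migration procedure, wrapping the usual gnt-instance commands (untested, purely illustrative; secondary_node is whatever node you want for the new drbd pair):

    import subprocess

    def RecreateDrbdMetadata(instance, secondary_node):
      """Recreate an instance's DRBD disks, and thus its metadata volumes."""
      for cmd in [
          ["gnt-instance", "shutdown", instance],
          # dropping to plain removes the old fixed-size metadata volumes
          ["gnt-instance", "modify", "-t", "plain", instance],
          # converting back to drbd recreates them, now with the dynamic size
          ["gnt-instance", "modify", "-t", "drbd", "-n", secondary_node,
           instance],
          ["gnt-instance", "startup", instance],
      ]:
        subprocess.check_call(cmd)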
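Finally, the backward-compatible alternative would just clamp the dynamic size from below with the current constant (128 MiB, if I read Constants.hs correctly), so existing metadata volumes stay valid and no migration is needed; the price is that small volumes keep wasting space:

    def ComputeCompatDrbdMetaSize(disk_size_mb, fixed_meta_size_mb=128):
      return max(1 + disk_size_mb // 32768, fixed_meta_size_mb)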
What do you think?

Regards,
Jean-François