Hi Jean-François,

Thank you for working on the issue. Let me reply inline below:

On Sat, Apr 11, 2015 at 2:34 PM, Jean-Francois Maeyhieux <
[email protected]> wrote:

> Hello !
>
>   I've found an approach that I think will definitively solve issue 256,
> as explained in the last posts of
> https://code.google.com/p/ganeti/issues/detail?id=256
>
> To sum up, we currently use a fixed size for the DRBD metadata volume,
> which implies a ~4 TiB limit for DRBD volumes and wastes space for small
> DRBD volumes.
> Following the DRBD documentation (
> http://drbd.linbit.com/users-guide-8.4/ch-internals.html#s-meta-data-size),
> we could compute the DRBD metadata volume size as a function of the data
> volume size: DRBD_META_SIZE_Mb = 1 + DRBD_SIZE_Mb / 32768.
>
> I've started to follow the developers' guidelines; the required
> modification seems trivial, but the implementation is not.
> In addition, this modification requires a disk migration on production
> environments to correctly recreate the DRBD volumes with a dynamic
> metadata size instead of the fixed one.
>
> 1) Implementation
>
> Currently, in the Python code, DRBD_META_SIZE is a constant:
> - defined by drbdMetaSize in src/Ganeti/Constants.hs, which generates the
> Python constant
> - used in lib/masterd/instance.py and lib/cmdlib/instance_storage.py
>
> My proposal is to remove the constant DRBD_META_SIZE and replace it with
> 1 + DISK_SIZE_Mb / 32768.
>
>
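[Editor's note: the proposed formula can be sketched in Python as below. This is a hypothetical helper, not Ganeti's actual code; in particular, rounding the division up is an assumption, so the result never falls short of what the DRBD guide's formula requires.]

```python
def drbd_meta_size_mb(data_size_mb):
    """Return a DRBD metadata volume size in MiB for a data volume.

    Based on the DRBD 8.4 users' guide formula
    DRBD_META_SIZE_Mb = 1 + DRBD_SIZE_Mb / 32768, with the division
    rounded up (an assumption here) so the metadata always fits.
    """
    return 1 + (data_size_mb + 32768 - 1) // 32768

# A 4 TiB volume would need 1 + ceil(4194304 / 32768) = 129 MiB of metadata,
# while a small 1 GiB volume would need only 2 MiB.
print(drbd_meta_size_mb(4 * 1024 * 1024))  # 129
print(drbd_meta_size_mb(1024))             # 2
```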
I'm not that familiar with this DRBD issue, so I can't comment on the
implementation at the moment; I'll need to look into it more deeply.


> My question (probably trivial, I suppose, but I'm still discovering the
> code):
>     What is the exact meaning of constants.IDISK_SIZE and
> constants.IDISK_VG?
>

These constants are used as keys in a dictionary in some opcodes when
sending data about disks. The whole data structure is visible in the
Haskell documentation:
http://docs.ganeti.org/ganeti/master/api/hs/Ganeti-OpParams.html#v:IDiskParams
(just strip the 'idisk' prefix and lowercase the first letter, so
'idiskVg' becomes the key 'vg' in the dictionary, etc.).
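[Editor's note: the naming rule above could be expressed as follows. This is an illustrative helper, not part of Ganeti; the disk values are made up.]

```python
def haskell_field_to_key(field):
    """Map a Haskell IDiskParams field name to its Python dict key,
    by stripping the 'idisk' prefix and lowercasing the first letter."""
    assert field.startswith("idisk")
    rest = field[len("idisk"):]
    return rest[0].lower() + rest[1:]

# e.g. a disk specification as it might appear in an opcode
# (example values only):
disk_spec = {
    haskell_field_to_key("idiskSize"): 1024,   # key "size", in MiB
    haskell_field_to_key("idiskVg"): "xenvg",  # key "vg"
}
print(disk_spec)  # {'size': 1024, 'vg': 'xenvg'}
```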


>
>
> 2) Data migration
>
> If I can produce a patch to use a dynamic DRBD metadata disk size
> instead of a fixed one, the next problem is how to include it in the
> current branches, since disk data migration is needed. My personal
> scenario is:
> - update Ganeti (patched with drbd-dynamic-metadata)
> - on cluster verify, emit a warning for DRBD instances that data
> migration is required, telling admins to do the following:
>    For each DRBD instance:
>        - stop the instance
>        - convert the instance disk to the plain format
>        - convert the instance back to the drbd format
>        - start the instance (which now uses a dynamic DRBD metadata size)
>

This would have to be done only on an opt-in basis, as most users (as far
as I know) don't need this functionality, and such a process would not only
take a very long time, but also increase the risk of data loss (since each
instance would be converted to 'plain' for a period of time).


>
> I know this modification is awaited by several users, but it's the kind
> of modification that needs to be introduced carefully, since data
> storage is involved. Other scenarios would be possible, such as:
> - take the maximum of the dynamic metadata size and the current constant
> size for the disk requirements, to avoid the data migration, and use the
> new metadata size for new instances only.
>

That sounds like a good idea, we'd just have to make sure that everything
works correctly for both variants.
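[Editor's note: a minimal sketch of that backwards-compatible variant. The names are made up, not Ganeti's, and the 128 MiB value for the existing fixed constant is an assumption taken from the issue discussion.]

```python
# Assumed value of the existing fixed metadata size constant, in MiB.
DRBD_META_SIZE = 128

def meta_size_for_disk(data_size_mb):
    """Dynamic DRBD metadata size, never smaller than the legacy constant,
    so existing instances created with the fixed size still satisfy the
    computed space requirements and need no migration."""
    dynamic = 1 + (data_size_mb + 32768 - 1) // 32768
    return max(dynamic, DRBD_META_SIZE)

# Small disks keep the legacy requirement; only disks large enough to
# exceed the old ~4 TiB limit get a bigger metadata volume.
print(meta_size_for_disk(1024))             # 128
print(meta_size_for_disk(8 * 1024 * 1024))  # 257
```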


> - write code to transform the current DRBD volumes by forcing each DRBD
> node to recreate the metadata volume with a dynamic size, but I don't
> think that's possible, since we can't keep the metadata synchronized
> between a fixed-size metadata volume and a dynamic-size one.
>
> What do you think ?
>

Let me discuss it in more detail with my colleagues and get back to you
later this week.

  Thanks,
  Petr



>
> Regards,
>                 Jean-François
>
