[ceph-users] Re: RBD as a boot image [was: libceph: get_reply osd2 tid 1459933 data 3248128 > preallocated 131072, skipping]

2021-05-17 Thread Kees Meijs | Nefos
Hi, This is a chicken-and-egg problem I guess. The boot process (whether UEFI or BIOS; given x86) should be able to load boot loader code, a Linux kernel and an initial RAM disk (although in some cases a kernel alone could be enough). So yes: use PXE to load a Linux kernel and RAM disk. The RAM
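The approach described above (network-boot a kernel and initrd, which then bring up the RBD root) can be sketched as a minimal iPXE script. The boot server URL, image names, and kernel parameters below are placeholders, and the initramfs is assumed to contain the networking and RBD tooling needed to map the root image before switching root:

```
#!ipxe
# Hypothetical boot endpoint; replace with your own HTTP/TFTP server.
kernel http://boot.example.com/vmlinuz ip=dhcp root=/dev/rbd0 rw
initrd http://boot.example.com/initrd.img
boot
```

With this layout the chicken-and-egg problem is resolved by PXE: nothing on local disk is needed, and the initrd does the `rbd map` (or relies on kernel RBD root support) once the network is up.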

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Thanks for clearing that up, Jason. K. On 14-05-2020 20:11, Jason Dillaman wrote: > rbd-mirror can only remove images that (1) have mirroring enabled and > (2) are not split-brained with its peer. It's totally fine to only > mirror a subset of images within a pool and it's fine to only mirror >

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Hi Anthony, A one-way mirror suits my case fine (the old cluster will be dismantled in the meantime) so I guess a single rbd-mirror daemon should suffice. The pool consists of OpenStack Cinder volumes containing a UUID (i.e. volume-ca69183a-9601-11ea-8e82-63973ea94e82 and such). The change of
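A one-way setup of the kind described here can be sketched as follows. This is a hedged outline, not the thread's actual configuration: the pool name `cinder`, the cluster names `old`/`new`, and the client name are assumptions, and the single rbd-mirror daemon runs only on the target side, which pulls from the source:

```shell
# Enable per-image mirroring mode on the pool in both clusters
# (pool and cluster names are placeholders):
rbd --cluster old mirror pool enable cinder image
rbd --cluster new mirror pool enable cinder image

# Register the old cluster as a peer of the new cluster's pool;
# for one-way replication only the target needs a peer entry:
rbd --cluster new mirror pool peer add cinder client.rbd-mirror@old

# Run the rbd-mirror daemon on the target cluster only,
# e.g. via its systemd unit (instance name is an assumption):
systemctl enable --now ceph-rbd-mirror@rbd-mirror.target-host

# Observe replication progress:
rbd --cluster new mirror pool status cinder --verbose
```

Because the old cluster never pulls anything back, no daemon or peer entry is needed there, which matches the "dismantle the old cluster afterwards" migration scenario.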

[ceph-users] Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Hi list, Thanks again for pointing me towards rbd-mirror! I've read documentation, old mailing list posts, blog posts and some additional guides. Seems like the tool to help me through my data migration. Given one-way synchronisation and image-based (so, not pool based) configuration, it's
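With image-based (rather than pool-based) configuration as mentioned above, each image is opted in explicitly. A minimal sketch, assuming journal-based mirroring, a pool named `cinder`, and the volume UUID quoted elsewhere in the thread:

```shell
# Journal-based mirroring requires the journaling image feature:
rbd --cluster old feature enable \
    cinder/volume-ca69183a-9601-11ea-8e82-63973ea94e82 journaling

# Enable mirroring for this one image (pool is in "image" mode):
rbd --cluster old mirror image enable \
    cinder/volume-ca69183a-9601-11ea-8e82-63973ea94e82

# Check replication state of the image from the target side:
rbd --cluster new mirror image status \
    cinder/volume-ca69183a-9601-11ea-8e82-63973ea94e82
```

Enabling images one at a time like this makes it straightforward to mirror only a subset of a pool, as confirmed later in the thread.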