Ilya, thanks for the clarification.

On Thu, May 4, 2023 at 1:12 PM Ilya Dryomov <idryo...@gmail.com> wrote:

> On Thu, May 4, 2023 at 11:27 AM Kamil Madac <kamil.ma...@gmail.com> wrote:
> >
> > Thanks for the info.
> >
> > As a workaround we used rbd-nbd, which works fine without any issues.
> If we have time, we will also try disabling IPv4 on the cluster and
> attempt kernel RBD mapping again. Are there any disadvantages to using
> NBD instead of the kernel driver?
>
> Ceph doesn't really support dual-stack configurations.  It's not
> something that is tested: even if it happens to work for some use case
> today, it can very well break tomorrow.  The kernel client just makes
> that very explicit ;)
>
> rbd-nbd is less performant and historically also less stable (although
> that may have changed in recent kernels, as a bunch of work went into
> the NBD driver upstream).  It's also heavier on resource usage, but
> that won't be noticeable and can be disregarded if you are not mapping
> dozens of RBD images on a single node.
>
> Thanks,
>
>                 Ilya
>
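For reference, the two mapping paths discussed above can be sketched like this. The pool/image names are placeholders (not from this thread), and the rbd-nbd path assumes the rbd-nbd package is installed and a Ceph cluster is reachable:

```shell
# Hedged sketch -- "mypool/myimage" is a hypothetical pool/image pair.

# Kernel RBD mapping (krbd). This is the in-kernel client that is
# strict about monitor address families on dual-stack clusters:
sudo rbd device map mypool/myimage            # creates /dev/rbdN

# rbd-nbd mapping: a userspace RBD client exposed through the kernel
# NBD driver, which sidesteps the krbd address-family restriction:
sudo rbd device map -t nbd mypool/myimage     # creates /dev/nbdN

# List and unmap NBD-backed devices:
rbd device list -t nbd
sudo rbd device unmap -t nbd /dev/nbd0
```

Since these are cluster administration commands, they only make sense against a live cluster; the device names (`/dev/rbdN`, `/dev/nbdN`) are assigned by the kernel at map time.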


-- 
Kamil Madac <https://kmadac.github.io/>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
