[ceph-users] Re: Consequence of maintaining hundreds of clones of a single RBD image snapshot

2023-04-25 Thread Perspecti Vus
Hi again,

Is there a limit or best practice regarding the number of clones? I'd like to start 
development, but I want to make sure I won't run into scaling issues.

  Perspectivus
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Consequence of maintaining hundreds of clones of a single RBD image snapshot

2023-04-19 Thread Eugen Block

Hi,

the closest thing to your request that I see in a customer cluster is 186  
RBD children of a single image, and nobody has complained yet. The  
pools are all-flash with 60 SSD OSDs across 5 nodes and are used for  
OpenStack. Regarding consistency during flattening, I haven't done it  
very often and never under heavy load on the clones, so I can't answer  
that properly, but my impression is that flattening is consistent. I'll  
leave that question to someone else with more insight.
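
(For reference, counting the children of a snapshot can also be done from the
python rbd bindings -- a minimal sketch, with made-up pool/image/snapshot names,
not taken from the cluster described above:)

import rados
import rbd

# Count the clones (children) of one protected snapshot; names are hypothetical.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')
try:
    with rbd.Image(ioctx, 'golden-image', snapshot='base') as image:
        children = list(image.list_children())
        print(len(children), "children of golden-image@base")
finally:
    ioctx.close()
    cluster.shutdown()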


Regards,
Eugen

Quoting Eyal Barlev :


Hello,

My use-case involves creating hundreds of clones (~1,000) of a single RBD
image snapshot.

I assume watchers exist for each clone, due to the copy-on-write nature of
clones.
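
(As a rough sketch of that workflow using the python rados/rbd bindings -- the
pool, image and snapshot names below are made up for illustration:)

import rados
import rbd

# Snapshot a parent image, protect the snapshot, then create many
# copy-on-write clones of it. Names are hypothetical.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')
try:
    with rbd.Image(ioctx, 'golden-image') as image:
        image.create_snap('base')
        image.protect_snap('base')  # a snapshot must be protected before cloning
    rbd_inst = rbd.RBD()
    for i in range(1000):
        # Each clone starts as a thin overlay; reads of unwritten blocks
        # fall through to the parent snapshot.
        rbd_inst.clone(ioctx, 'golden-image', 'base', ioctx, f'clone-{i}')
finally:
    ioctx.close()
    cluster.shutdown()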

Should I expect a penalty for maintaining such a large number of clones, in
terms of CPU, memory, or performance?

If such a penalty does exist, we might opt to flatten some of the clones. Is
consistency guaranteed during the flattening process? In other words, can I
write to a clone while it is being flattened?
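
(For completeness, flattening a single clone with the python rbd bindings would
look roughly like this -- the clone name is hypothetical, and flatten() blocks
while the parent data is copied into the clone:)

import rados
import rbd

# Flatten one clone so it no longer depends on the parent snapshot.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')
try:
    with rbd.Image(ioctx, 'clone-7') as image:
        image.flatten()  # copies all parent data into the clone
finally:
    ioctx.close()
    cluster.shutdown()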

Perspectivus
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io