We are making hourly snapshots of ~400 RBD images in one (spinning-rust)
cluster. The snapshots are taken one by one.
The total size of the base images is around 80 TB; the entire process
takes a few minutes.
We do not experience any problems doing this.
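
For what it's worth, here is a minimal sketch of what such a one-by-one
loop could look like with the python-rados/python-rbd bindings. The pool
name 'rbd', the 'hourly-' prefix, and the timestamp scheme are
assumptions for illustration, not our exact script:

    #!/usr/bin/env python3
    # Minimal sketch: snapshot every RBD image in a pool, one by one.
    import datetime

    import rados
    import rbd

    POOL = 'rbd'             # assumed pool name
    SNAP_PREFIX = 'hourly-'  # hypothetical naming scheme

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            stamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M')
            # One image at a time, as described above.
            for name in rbd.RBD().list(ioctx):
                with rbd.Image(ioctx, name) as image:
                    image.create_snap(SNAP_PREFIX + stamp)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()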


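On the retention question quoted below: the same bindings can handle
pruning, so cron plus a small Python script is enough rather than bash
gymnastics. A sketch that keeps only the most recent snapshots per
image (the keep-count of 8 and the 'hourly-' prefix are again
assumptions matching the sketch above):

    import rbd

    SNAP_PREFIX = 'hourly-'  # same hypothetical naming scheme as above

    def prune_old_snaps(ioctx, keep=8):
        # Keep only the `keep` most recent SNAP_PREFIX snapshots per
        # image; the timestamped names sort lexicographically,
        # oldest first.
        for name in rbd.RBD().list(ioctx):
            with rbd.Image(ioctx, name) as image:
                snaps = sorted(s['name'] for s in image.list_snaps()
                               if s['name'].startswith(SNAP_PREFIX))
                for old in snaps[:-keep]:
                    image.remove_snap(old)

Note that remove_snap fails on protected snapshots (e.g. ones with
clones), so a production script would want error handling around it.
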
On Thu, Jan 30, 2020 at 15:30, Adam Boyhan <ad...@medent.com> wrote:

> We are looking to roll out an all-flash Ceph cluster as storage for our
> cloud solution. The OSDs will be on slightly slower Micron 5300 PROs,
> with WAL/DB on Micron 7300 MAX NVMe drives.
>
> My main concern with Ceph being able to fit the bill is its snapshot
> capabilities.
>
> For each RBD we would like the following snapshots:
>
> 8x 30-minute snapshots (latest 4 hours)
>
> With our current solution (HPE Nimble) we simply pause all write IO on the
> 10-minute mark for roughly 2 seconds and then take a snapshot of the
> entire Nimble volume. Each VM within the Nimble volume sits on a Linux
> logical volume, so it's easy for us to take one big snapshot and still
> get access to a specific client's data.
>
> Are there any options for automating the management and retention of
> snapshots within Ceph besides some bash scripts? Is there any way to take
> snapshots of all RBDs within a pool at a given time?
>
> Is there anyone successfully running with this many snapshots? If anyone
> is running a similar setup, we would love to hear how you're doing it.