I'm stumbling ever closer to a usable production cluster with Ceph, but I have
yet another stupid n00b question I'm hoping you all will tolerate.

I have 38 OSDs up and in across 4 hosts. I (maybe prematurely) removed my
test filesystem as well as the metadata and data pools it was using.
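
For what it's worth, the removal went roughly along these lines (the
filesystem and pool names here are just placeholders, not necessarily
exactly what I ran):

    # mark the filesystem down / fail its MDS ranks, then remove it
    ceph fs fail testfs
    ceph fs rm testfs --yes-i-really-mean-it

    # remove the pools it was using (pool name is given twice on purpose)
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it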

This leaves me with 38 OSDs with a bunch of data on them.

Is there a simple way to just whack all of the data on all of those OSDs
before I create new pools and a new filesystem?
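
(The only route I'm aware of is deleting the leftover pools one at a time,
something like the sketch below, after enabling pool deletion on the mons;
the pool name is a placeholder. I'm hoping there's something less manual,
or at least confirmation that this is the expected way.)

    # allow pool deletion on the monitors first
    ceph config set mon mon_allow_pool_delete true

    # list what's left, then remove each leftover pool by name
    ceph osd pool ls
    ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it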

Version:
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)

As you can see from the partial output of ceph -s, I left a bunch of crap
spread across the OSDs...

    pools:   8 pools, 32 pgs
    objects: 219 objects, 1.2 KiB
    usage:   45 TiB used, 109 TiB / 154 TiB avail
    pgs:     32 active+clean

Thanks in advance for a shove in the right direction.

-Dallas