[ceph-users] Ceph on FreeBSD

2022-07-14 Thread Olivier Nicole
Hi, I would like to try Ceph on FreeBSD (because I mostly use FreeBSD), but before I invest too much time in it: the current version of Ceph for FreeBSD seems quite old. Is it still being maintained or not? TIA Olivier
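
[For anyone checking before investing time, the package repository can answer how current the FreeBSD build is. A minimal sketch, assuming the pkg tool is available and that the port lives under net/ceph14 (the exact port name/category may differ):

  # List Ceph packages and their versions known to the FreeBSD repository
  pkg search ceph
  # If the ports tree is checked out, print the port's version directly
  make -C /usr/ports/net/ceph14 -V PKGVERSION
]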

[ceph-users] Re: rados df vs ls

2022-07-14 Thread stuart.anderson
> On Jul 14, 2022, at 11:11 AM, Janne Johansson wrote:
>
> On Wed, 13 Jul 2022 at 04:33, stuart.anderson wrote:
>>
>>> On Jul 6, 2022, at 10:30 AM, stuart.anderson wrote:
>>>
>>> I am wondering if it is safe to delete the following pool that rados ls
>>> reports is empty, but rados

[ceph-users] Re: rados df vs ls

2022-07-14 Thread Janne Johansson
On Wed, 13 Jul 2022 at 04:33, stuart.anderson wrote:
>
> > On Jul 6, 2022, at 10:30 AM, stuart.anderson wrote:
> >
> > I am wondering if it is safe to delete the following pool that rados ls
> > reports is empty, but rados df indicates has a few thousand objects?
>
> Please excuse reposti
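
[One common reason rados ls looks empty while rados df still counts objects is that a plain ls only shows the default namespace. A hedged sketch for cross-checking before deleting anything, where <pool> is a placeholder:

  # Per-pool object counts as the cluster accounts them
  rados df
  # Lists only the default namespace, so it can look empty...
  rados -p <pool> ls
  # ...while objects live in other namespaces; --all lists every namespace
  rados -p <pool> ls --all
]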

[ceph-users] Re: cephadm host maintenance

2022-07-14 Thread Kai Stian Olstad
On 14.07.2022 11:01, Steven Goodliff wrote:
> If I get anywhere with detecting whether the instance is the active
> manager and handling that in Ansible, I will reply back here.

I use this:

  - command: ceph mgr stat
    register: r
  - debug: msg={{ (r.stdout | from_json).active_name.split(".")[0] }}
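
[For reference outside Ansible, the same lookup works in plain shell. A minimal sketch, assuming jq is installed; it parses the same active_name field the from_json filter above relies on:

  # Print the bare hostname of the active mgr, e.g. "host1" from "host1.abcdef"
  ceph mgr stat | jq -r '.active_name' | cut -d. -f1
]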

[ceph-users] Re: rbd iostat requires pool specified

2022-07-14 Thread Ilya Dryomov
On Wed, Jul 13, 2022 at 10:50 PM Reed Dier wrote:
>
> Hoping this may be trivial to point me towards, but I typically keep a
> background screen running `rbd perf image iostat` that shows all of the
> rbd devices with io, and how busy that disk may be at any given moment.
>
> Recently after upgr
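
[For context, the two invocation forms being compared, as a sketch with <pool> as a placeholder:

  # Cluster-wide view across pools, the pre-upgrade behaviour described above
  rbd perf image iostat
  # Scoped to a single pool, the form apparently required after the upgrade
  rbd perf image iostat <pool>
]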

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-14 Thread Dan van der Ster
OK I recreated one OSD. It now has 4k min_alloc_size:

  2022-07-14T10:52:58.382+0200 7fe5ec0aa200  1 bluestore(/var/lib/ceph/osd/ceph-0/) _open_super_meta min_alloc_size 0x1000

and I tested all these bluestore_prefer_deferred_size_hdd values:

  4096: not deferred
  4097: "_do_alloc_write deferring 0x1
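
[These two data points suggest a strict less-than comparison: a write is deferred only when it is smaller than bluestore_prefer_deferred_size_hdd, so a 4096-byte write is not deferred when the option is exactly 4096. A hedged sketch for inspecting the relevant values on a live cluster:

  # The cluster-wide HDD deferral threshold
  ceph config get osd bluestore_prefer_deferred_size_hdd
  # The effective value on a single OSD (osd.0 here as an example)
  ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd
]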

[ceph-users] Re: cephadm host maintenance

2022-07-14 Thread Steven Goodliff
Thanks for the replies. It feels to me that cephadm should handle this case, as it offers the maintenance function. Right now I have a simple version of a playbook that just sets noout, patches the OS, reboots, and unsets noout (similar to https://github.com/ceph/ceph-ansible/blob/main/in
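
[For comparison, the manual sequence such a playbook wraps, plus the cephadm-native maintenance commands this thread is about, as a hedged sketch with <hostname> as a placeholder:

  # Keep CRUSH from marking OSDs out while the host is down
  ceph osd set noout
  # ... patch the OS and reboot the host here ...
  ceph osd unset noout

  # cephadm's built-in alternative
  ceph orch host maintenance enter <hostname>
  ceph orch host maintenance exit <hostname>
]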

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-14 Thread Konstantin Shalygin
Dan, have you tested redeploying one of your OSDs with the default Pacific bluestore_min_alloc_size_hdd (4096)? Would that also resolve this issue (i.e., one is simply not affected when all options are at their defaults)? Thanks, k

> On 14 Jul 2022, at 08:43, Dan van der Ster wrote:
>
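
[To verify which min_alloc_size an existing OSD was created with, the _open_super_meta line quoted earlier in the thread can be grepped from the OSD log. A sketch assuming the default log location:

  # 0x1000 = 4096 (Pacific default); pre-Pacific HDD OSDs typically show 0x10000
  grep '_open_super_meta min_alloc_size' /var/log/ceph/ceph-osd.0.log
]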