[ceph-users] Sharded File Copy for Cephfs

2021-08-02 Thread Daniel Williams
Does a stripe-aware file copier exist for CephFS, to parallelize the copying of one large file?
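For illustration only, a minimal sketch of a chunked parallel copy on a CephFS mount, assuming a 4 MiB stripe unit (check the real layout with getfattr -n ceph.file.layout <file>) and hypothetical paths:

  SRC=/mnt/cephfs/bigfile              # hypothetical source path
  DST=/mnt/cephfs/bigfile.copy         # hypothetical destination path
  CHUNK=$((4 * 1024 * 1024))           # chunk size in bytes, aligned to the stripe unit
  SIZE=$(stat -c %s "$SRC")
  truncate -s "$SIZE" "$DST"           # pre-size the destination once
  seq 0 $(( (SIZE + CHUNK - 1) / CHUNK - 1 )) | \
    xargs -P 8 -I{} dd if="$SRC" of="$DST" bs="$CHUNK" skip={} seek={} count=1 conv=notrunc status=none

Each dd writes one layout-aligned block at its own offset, so the parallel writers do not overlap; whether this actually speeds things up depends on the client and the OSDs.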

[ceph-users] Re: Adding a third zone with tier type archive

2021-08-02 Thread Yosh de Vos
I have deleted the archive zone and recreated it with sync_from_all=false and sync_from=fo-am, so that only that zone is used as a sync source. The full sync finally completed and did not fill up the cluster entirely, but it is still using more space than expected (28 TB instead of 13 TB). I have noticed that a lot of objects
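For reference, a hedged sketch of how such a restriction is applied with radosgw-admin (the zone name "archive" is an assumption here; fo-am is the source zone named above):

  radosgw-admin zone modify --rgw-zone=archive --sync-from-all=false --sync-from=fo-am
  radosgw-admin period update --commit
  # then restart the radosgw instances serving the archive zone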

[ceph-users] ceph-volume - AttributeError: module 'ceph_volume.api.lvm'

2021-08-02 Thread athreyavc
Hi, I am trying to re-add an OSD after replacing the disk. I am running: ceph-volume lvm create --bluestore --osd-id 41 --data ceph-dm-41/block-dm-41 and I get: --> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv' My Ceph version is "osd": { "ceph version 15.2.9
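One possible cause of this class of AttributeError is a mismatch between the ceph-volume CLI and the installed ceph packages on the host; a minimal, hedged sanity check (assuming a package-based install) is to compare what is installed with what the cluster runs:

  ceph --version            # version of the locally installed ceph packages
  ceph versions             # versions reported by the running daemons in the cluster
  dpkg -l | grep ceph       # or: rpm -qa | grep ceph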

[ceph-users] Re: Handling out-of-balance OSD?

2021-08-02 Thread Manuel Holtgrewe
Thanks. On Thu, Jul 29, 2021 at 8:51 AM Konstantin Shalygin wrote: > ceph pg ls-by-osd > > > k > Sent from my iPhone > > > On 28 Jul 2021, at 12:46, Manuel Holtgrewe wrote: > > > > How can I find out which pgs are actually on osd.0?
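For reference, a minimal example of that command against osd.0 (the JSON field names assume the Octopus/Pacific output format, and jq is only used for filtering):

  ceph pg ls-by-osd osd.0                                      # list the PGs currently mapped to osd.0
  ceph pg ls-by-osd osd.0 -f json | jq -r '.pg_stats[].pgid'   # PG ids only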

[ceph-users] Re: Handling out-of-balance OSD?

2021-08-02 Thread Manuel Holtgrewe
Hi, to follow up on this: the number of PGs on my osd.0 dropped to 74 but got stuck there. Maybe the daemon was restarted at some point? I tried to follow your guide and restart the daemon with increased log verbosity, but it did not properly show me the PGs that it knew about. Also, Konstantin

[ceph-users] Re: Dashboard Montitoring: really suppress messages

2021-08-02 Thread E Taka
Hi, sorry for being so vague in the initial question. We use Ceph 16.2.5 (with Docker and Ubuntu 20.04). Thanks for opening the issue! On Mon, 2 Aug 2021 at 09:57, Patrick Seidensal wrote: > > Hi Erich, > > I agree that there should be a way to disable an alert in the Ceph Dashboard > co

[ceph-users] RBD stale after ceph rolling upgrade

2021-08-02 Thread Jules
After passing the stage where the CVE patch (CVE-2021-20288: unauthorized global_id reuse in cephx) behind mon_warn_on_insecure_global_id_reclaim came into play, and after further rolling upgrades up to the latest version, we are facing a weird behavior when executing ceph.target on a single node all
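For context, a hedged sketch of the knobs involved in that CVE transition; disabling insecure reclaim is only safe once every client authenticates with a patched version:

  ceph config get mon auth_allow_insecure_global_id_reclaim                   # current setting
  ceph config set mon auth_allow_insecure_global_id_reclaim false             # reject insecure global_id reclaim
  ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false    # silence the related health warning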

[ceph-users] slow ops and osd_pool_default_read_lease_ratio

2021-08-02 Thread Manuel Lausch
Hi, some weeks ago I opened this bug ticket: https://tracker.ceph.com/issues/51463 With Ceph Octopus I have issues with slow requests while starting/stopping OSDs. Slow requests in this case are blocked for longer than 6 seconds. With Nautilus this wasn't an issue. Can someone have a look to th
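For readers following the ticket: the read lease is derived from the heartbeat grace, so with the defaults (osd_heartbeat_grace = 20 s, ratio = 0.8) a freshly stopped OSD can keep requests blocked for roughly 16 s. A hedged example of lowering the ratio cluster-wide:

  ceph config set osd osd_pool_default_read_lease_ratio 0.4   # roughly an 8 s lease with the default 20 s grace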