[ceph-users] Re: binary file cannot execute in cephfs directory

2022-08-23 Thread zxcs
Oh, yes, there is a “noexec” option in the mount command. Thanks a ton! Thanks, Xiong > On Aug 23, 2022, at 22:01, Daniel Gryniewicz wrote: > > Does the mount have the "noexec" option on it? > > Daniel > > On 8/22/22 21:02, zxcs wrote: >> In case someone is missing the picture. Just copy the text as below:
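
For anyone hitting the same symptom, a minimal sketch of the fix, assuming a kernel-client CephFS mount at a hypothetical mountpoint /mnt/cephfs:

  # Check whether the CephFS mount carries the "noexec" flag
  findmnt -o TARGET,FSTYPE,OPTIONS /mnt/cephfs
  # If it does, remount with exec enabled for the current session
  mount -o remount,exec /mnt/cephfs
  # To make it permanent, drop "noexec" from the corresponding /etc/fstab entry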

[ceph-users] rgw.meta pool df reporting 16EiB

2022-08-23 Thread Wyll Ingersoll
We have a large Pacific cluster (680 OSDs, ~9.6 PB); it is used primarily as an RGW object store. The default.rgw.meta pool is reporting strange numbers: default.rgw.meta 4 32 16EiB 64 11MiB 100 0 Why would the "Stored" value show 16EiB (which is the maximum possible for Ceph)? These
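
A common way to cross-check a suspicious per-pool figure like this is to compare the different reporting paths; a sketch (no pool names assumed beyond the one quoted above):

  # Per-pool STORED / OBJECTS / USED as reported by the mgr
  ceph df detail
  # Per-pool object and byte counts as seen through librados
  rados df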

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wyll Ingersoll
We did this, but oddly enough it is showing the movement of PGs away from the new, underutilized OSDs instead of TO them as we would expect. From: Wesley Dillingham Sent: Tuesday, August 23, 2022 2:13 PM To: Wyll Ingersoll Cc: ceph-users@ceph.io Subject: Re:

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wesley Dillingham
https://docs.ceph.com/en/pacific/rados/operations/upmap/ Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Tue, Aug 23, 2022 at 1:45 PM Wyll Ingersoll < wyllys.ingers...@keepertech.com> wrote: > Thank you - we have increased
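
For reference, "injecting upmaps" means adding explicit pg-upmap-items mappings as described on that page; a hedged sketch with hypothetical PG and OSD IDs (2.7, 121, 245):

  # Upmap requires all clients to be Luminous or newer
  ceph osd set-require-min-compat-client luminous
  # Remap PG 2.7 so the copy currently on full OSD 121 lands on new OSD 245 instead
  ceph osd pg-upmap-items 2.7 121 245
  # Review or remove the mapping later
  ceph osd dump | grep upmap
  ceph osd rm-pg-upmap-items 2.7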

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wyll Ingersoll
Thank you - we have increased backfill settings, but can you elaborate on "injecting upmaps" ? From: Wesley Dillingham Sent: Tuesday, August 23, 2022 1:44 PM To: Wyll Ingersoll Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Full cluster, new OSDS not being

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wesley Dillingham
In that case I would say your options are to inject upmaps to move data off the full OSDs or to increase the backfill throttle settings to make things move faster. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On
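
A sketch of the second option, raising the backfill throttles; the values are only illustrative and should be tuned to the cluster:

  # Allow more concurrent backfills per OSD (illustrative value)
  ceph config set osd osd_max_backfills 4
  # Allow more concurrent recovery ops per OSD (illustrative value)
  ceph config set osd osd_recovery_max_active 8
  # Watch the effect on recovery/backfill progress
  ceph -s
  ceph osd df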

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wyll Ingersoll
Unfortunately, I cannot. The system in question is in a secure location and I don't have direct access to it. The person on site runs the commands I send them, and the OSD tree is correct as far as we can tell. The new hosts and OSDs are in the right place in the tree and have proper weights.

[ceph-users] Re: Full cluster, new OSDS not being used

2022-08-23 Thread Wesley Dillingham
Can you please send the output of "ceph osd tree" Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Tue, Aug 23, 2022 at 10:53 AM Wyll Ingersoll < wyllys.ingers...@keepertech.com> wrote: > > We have a large cluster with a many osds

[ceph-users] Ceph User Survey 2022 - Comments on the Documentation

2022-08-23 Thread John Zachary Dover
The following comments about the Ceph documentation were provided by respondents to the 2022 Ceph User Survey: "missing a tuning guide" "Documentation is very lacking for those trying to start using Ceph." "correct documentation, not the mix of out-dated and correct descriptions and

[ceph-users] CephFS Snapshot Mirroring slow due to repeating attribute sync

2022-08-23 Thread Kuhring, Mathias
Dear Ceph developers and users, We are using ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable). We have been using cephadm since version 15 (Octopus). We mirror several CephFS directories from our main cluster out to a second mirror cluster. In particular with bigger
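
For context, directory mirroring of this kind is configured roughly as follows; the filesystem name and path below are placeholders, not the values from this cluster:

  # On the primary cluster: enable snapshot mirroring for the filesystem
  ceph fs snapshot mirror enable cephfs
  # Add a directory whose snapshots should be mirrored (placeholder path)
  ceph fs snapshot mirror add cephfs /volumes/group/dir
  # Check what the cephfs-mirror daemon is doing and how syncs progress
  ceph fs snapshot mirror daemon status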

[ceph-users] Full cluster, new OSDS not being used

2022-08-23 Thread Wyll Ingersoll
We have a large cluster with many OSDs that are at their nearfull or full ratio limit and are thus having problems rebalancing. We added 2 more storage nodes, each with 20 additional drives, to give the cluster room to rebalance. However, for the past few days, the new OSDs are NOT being
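
A few commands commonly used to verify that new OSDs are actually eligible targets (a general sketch, not specific to this cluster):

  # Confirm the new OSDs are up/in and carry non-zero CRUSH weight
  ceph osd tree
  # Per-OSD utilization and PG counts; the new OSDs should start accumulating PGs
  ceph osd df tree
  # Check whether the balancer is on and in which mode
  ceph balancer status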

[ceph-users] Re: binary file cannot execute in cephfs directory

2022-08-23 Thread Daniel Gryniewicz
Does the mount have the "noexec" option on it? Daniel On 8/22/22 21:02, zxcs wrote: In case someone is missing the picture. Just copy the text as below: ld@***ceph dir**$ ls -lrth total 13M -rwxr-xr-x 1 ld ld 13M Nov 29 2021 cmake-3.22 lrwxrwxrwx 1 ld ld 10 Jul 26 10:03 cmake -> cmake-3.22

[ceph-users] Re: Quincy: Corrupted devicehealth sqlite3 database from MGR crashing bug

2022-08-23 Thread Daniel Williams
Off-thread, I sent my mgr database / core dumps to the Ceph devs. For documentation purposes / to help other people: I could mitigate the issue by destroying my journal; here are the commands I used: # Create a backup copy of my old mgr pool ceph osd pool create mgr-backup-2022-08-19 rados cppool .mgr
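
The quoted commands are cut off above; a sketch of how such a backup copy of the .mgr pool is typically made (the backup pool name follows the post, the rest is an assumption):

  # Create a pool to hold the backup copy
  ceph osd pool create mgr-backup-2022-08-19
  # Copy the objects of the .mgr pool into the backup pool
  rados cppool .mgr mgr-backup-2022-08-19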

[ceph-users] Re: cephfs and samba

2022-08-23 Thread Robert Sander
On 23.08.22 at 08:56, Konstantin Shalygin wrote: On 19 Aug 2022, at 17:11, Robert Sander wrote: You could easily add nodes to the CTDB cluster to distribute load there. How to do that? Add more than one public_ip? How to tell Windows, then, about multiple IPs? You need to extend
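
Robert's reply is truncated here; as general background (not necessarily his exact answer), CTDB takes its floating IPs from the public_addresses file, one address per line, and clients can be pointed at all of them via DNS round-robin. A sketch with placeholder addresses and interface:

  # /etc/ctdb/public_addresses (placeholder values)
  # CTDB distributes these IPs across healthy nodes and moves them on failover
  192.0.2.11/24 eth0
  192.0.2.12/24 eth0
  # Then publish multiple A records for the SMB server name (DNS round-robin)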

[ceph-users] Re: cephfs and samba

2022-08-23 Thread Konstantin Shalygin
Hi Robert, > On 19 Aug 2022, at 17:11, Robert Sander wrote: > > You could easily add nodes to the CTDB cluster to distribute load there. How to do that? Add more than one public_ip? How to tell Windows, then, about multiple IPs? Thanks k