[ceph-users] Re: ceph-dashboard python warning with new pyo3 0.17 lib (debian12)

2023-07-03 Thread David Fojtík
Hello. Updating to the latest version of Ceph solves that. See https://docs.ceph.com/en/quincy/install/get-packages/ and https://download.ceph.com

[ceph-users] Delete or move files from lost+found in cephfs

2023-07-03 Thread Thomas Widhalm
Hi, I had some trouble in the past with my CephFS which I was able to resolve - mostly with your help. Now I have about 150GB of data in lost+found in my CephFS. No matter what I try and how I change permissions, every time I try to delete or move something from there I only get the rep

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-03 Thread Ilya Dryomov
On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote: > > > On 7/3/23 04:53, Matthew Booth wrote: > > On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote: > > This container runs: > > fio --rw=write --ioengine=sync --fdatasync=1 > > --directory=/var/lib/etcd --size=100m --bs=8000 --name=

[ceph-users] Ceph Quarterly (CQ) - Issue #1

2023-07-03 Thread Zac Dover
The first issue of "Ceph Quarterly" is attached to this email. Ceph Quarterly (or "CQ") is an overview of the past three months of upstream Ceph development. We provide CQ in three formats: A4, letter, and plain text wrapped at 80 columns. Zac Dover Upstream Documentation Ceph Foundation Ceph Qu

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-03 Thread Mark Nelson
On 7/3/23 04:53, Matthew Booth wrote: On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote: This container runs: fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=100m --bs=8000 --name=etcd_perf --output-format=json --runtime=60 --time_based=1 And extracts sync.lat

[ceph-users] Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-07-03 Thread Rafael Diaz Maurin
Hello, I've just upgraded a Pacific cluster to Quincy, and all my OSDs have the low value osd_mclock_max_capacity_iops_hdd : 315.00. The manual does not explain how to benchmark the OSD with fio or ceph bench with good options. Does someone have good ceph bench options or fio options
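For anyone landing on this thread: a minimal sketch of the manual approach, with osd.0, the bench parameters and the 350 IOPS figure used purely as placeholders, is to run the built-in OSD bench and then override the mClock capacity option with the measured IOPS:
  ceph tell osd.0 cache drop
  ceph tell osd.0 bench 12288000 4096 4194304 100
  ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350
  ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
The four bench arguments are total bytes written, block size, object size and number of objects.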

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-03 Thread Yin, Congmin
Hi Matthew, Due to the latency of rbd layers, the write latency of the pwl cache is more than ten times that of the Raw device. I replied directly below the 2 questions. Best regards. Congmin Yin -Original Message- From: Matthew Booth Sent: Thursday, June 29, 2023 7:23 PM To: Ilya Dr

[ceph-users] Re: db/wal pvmoved ok, but gui show old metadatas

2023-07-03 Thread Christophe BAILLON
Up. I tried, for example, ceph orch daemon reconfig osd.26, but the cephadm GUI continues to show me the old nvme as part of this osd: device_ids nvme1n1=SAMSUNG_MZVLW1T0HMLH-0_S2U3NX0JB00438,sdc=SEAGATE_ST18000NM004J_ZR52TT83C148JFSJ device_paths nvme1n1=/dev/disk/by-path/pci-:3b:00.0-nvm

[ceph-users] What is the best way to use disks with different sizes

2023-07-03 Thread wodel youchi
Hi, I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3 NVMe disks of 3.8 TB each and a fourth NVMe disk of 7.6 TB. Technically I need one pool. Is it good practice to use all disks to create the one pool I need, or is it better to create two pools, one on each group of disks? If
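Should the two-pool option win out, a common way to separate the disk groups is via CRUSH device classes; a rough sketch (the class, rule and pool names below are made up for illustration, and osd.3 stands in for one of the 7.6 TB disks):
  ceph osd crush rm-device-class osd.3
  ceph osd crush set-device-class nvme-big osd.3
  ceph osd crush rule create-replicated big-nvme-rule default host nvme-big
  ceph osd pool set bigpool crush_rule big-nvme-rule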

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread Casey Bodley
On Mon, Jul 3, 2023 at 6:52 AM mahnoosh shahidi wrote: > > I think this part of the doc shows that LocationConstraint can override the > placement and I can change the placement target with this field. > > When creating a bucket with the S3 protocol, a placement target can be > > provided as part

[ceph-users] Re: list of rgw instances in ceph status

2023-07-03 Thread Boris Behrens
Hi Mahnoosh, that helped. Thanks a lot! On Mon, 3 Jul 2023 at 13:46, mahnoosh shahidi < mahnooosh@gmail.com> wrote: > Hi Boris, > > You can list your rgw daemons with the following command > > ceph service dump -f json-pretty | jq '.services.rgw.daemons' > > > The following command ext

[ceph-users] Re: list of rgw instances in ceph status

2023-07-03 Thread mahnoosh shahidi
Hi Boris, You can list your rgw daemons with the following command ceph service dump -f json-pretty | jq '.services.rgw.daemons' The following command extracts all their ids ceph service dump -f json-pretty | jq '.services.rgw.daemons' | egrep -e 'gid' -e '\"id\"' Best Regards, Mahnoosh O

[ceph-users] list of rgw instances in ceph status

2023-07-03 Thread Boris Behrens
Hi, might be a dumb question, but is there a way to list the rgw instances that are running in a ceph cluster? Before pacific it showed up in `ceph status` but now it only tells me how many daemons are active, not which daemons are active. ceph orch ls tells me that I need to configure a backend
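On a cephadm-managed cluster the orchestrator can also list them; assuming the orch backend is configured, something along these lines shows the running rgw daemons and their hosts:
  ceph orch ps --daemon-type rgw
(or simply filter the full ceph orch ps output with grep rgw).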

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread mahnoosh shahidi
I think this part of the doc shows that LocationConstraint can override the placement and I can change the placement target with this field. When creating a bucket with the S3 protocol, a placement target can be > provided as part of the LocationConstraint to override the default > placement targe
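As an illustration of that doc passage (the bucket name and endpoint are placeholders, and the exact zonegroup:placement string format should be double-checked against the RGW placement docs), overriding the placement at bucket creation time could look like:
  aws s3api create-bucket --bucket mybucket --endpoint-url http://rgw.example.com --create-bucket-configuration LocationConstraint=default:special-placement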

[ceph-users] dashboard for rgw NoSuchKey

2023-07-03 Thread farhad kh
I deployed the rgw service and the default pool was created automatically, but I get an error in the dashboard: `` Error connecting to Object Gateway: RGW REST API request failed with default 404 status code","HostId":"736528-default-default"}') `` There is a dashboard user but I created the bucket ma
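A common first step when the dashboard cannot reach RGW is to (re)provision the dashboard's RGW credentials; a minimal sketch, assuming a release recent enough to have the automatic command:
  ceph dashboard set-rgw-credentials
On older releases the access and secret keys of a suitable RGW user can be supplied manually with ceph dashboard set-rgw-api-access-key -i <file> and ceph dashboard set-rgw-api-secret-key -i <file>.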

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-03 Thread Matthew Booth
On Fri, 30 Jun 2023 at 08:50, Yin, Congmin wrote: > > Hi Matthew, > > Due to the latency of rbd layers, the write latency of the pwl cache is more > than ten times that of the Raw device. > I replied directly below the 2 questions. > > Best regards. > Congmin Yin > > > -Original Message-

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-03 Thread Matthew Booth
On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote: > >>> This container runs: > >>> fio --rw=write --ioengine=sync --fdatasync=1 > >>> --directory=/var/lib/etcd --size=100m --bs=8000 --name=etcd_perf > >>> --output-format=json --runtime=60 --time_based=1 > >>> > >>> And extracts sync.lat_ns.perc
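For reference, the fio invocation being quoted above, put back on one line:
  fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=100m --bs=8000 --name=etcd_perf --output-format=json --runtime=60 --time_based=1
i.e. 8000-byte sequential writes with an fdatasync after each write, from whose JSON output the sync.lat_ns percentiles are extracted.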

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread Konstantin Shalygin
Hi, > On 3 Jul 2023, at 12:23, mahnoosh shahidi wrote: > > So clients can not get the value which they set in the LocationConstraint > field in the create bucket request as in this doc > ? LocationConstraint in this case is

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread mahnoosh shahidi
Thanks for your response, So clients can not get the value which they set in the LocationConstraint field in the create bucket request as in this doc ? Best Regards, Mahnoosh On Mon, Jul 3, 2023 at 12:35 PM Konstantin Shalyg

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread Konstantin Shalygin
Hi, > On 2 Jul 2023, at 17:17, mahnoosh shahidi wrote: > > Is there any way for clients (without rgw-admin access) to get the > placement target of their S3 buckets? The "GetBucketLocation" api returns > "default" for all placement targets and I couldn't find any other S3 api > for this purpose

[ceph-users] Re: Transmit rate metric based per bucket

2023-07-03 Thread Ondřej Kukla
Well in fact it does. For example in our setup we are parsing the bucket name from the URL. It’s a bit tricky as a client could use both the domain-name and path-based styles, but that is not an issue for us. Alternatively you can parse and analyse logs directly from RGWs which have the bucket
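If the RGW-log route is taken, the ops log records the bucket for every request; enabling it could look roughly like this (option names as in the RGW config reference, the socket path is just an example, and the rgw daemons may need a restart to pick up the change):
  ceph config set client.rgw rgw_enable_ops_log true
  ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-opslog.sock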