[ceph-users] Pacific: access via S3 / Object gateway slow for small files

2021-08-24 Thread E Taka
One can find questions about this topic on the web, but most of them refer to older versions of Ceph, so I am asking specifically about the current version: · Pacific 16.2.5 · 7 nodes (with plenty of cores and RAM) and 111 OSDs · all OSDs added via: ceph orch apply osd --all-available-devices · bucket created in th

[ceph-users] Re: mds in death loop with [ERR] loaded dup inode XXX [2,head] XXX at XXX, but inode XXX already exists at XXX

2021-08-24 Thread Dan van der Ster
Hi, What is the actual backtrace from the crash? We occasionally had dup inode errors like this in the past but they never escalated to a crash. You can see my old thread here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/036294.html The developer at that time suggested some mani

[ceph-users] Re: Pacific: access via S3 / Object gateway slow for small files

2021-08-24 Thread Janne Johansson
On Tue 24 Aug 2021 at 09:12, E Taka <0eta...@gmail.com> wrote: > As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000 files): > > - rsync -a to a Cephfs took 2 min > - s3cmd put --recursive took over 70 min > Users reported that the S3 access is generally slow, not only with s3tool
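The usual culprit with many small files is the fixed per-request overhead of S3, so parallelising the uploads tends to close much of the gap. A rough sketch (not from this thread) using rclone, assuming a remote named "rgw" configured against the RGW endpoint and a bucket named "docs":

    # push the same tree with 32 parallel object PUTs instead of one at a time
    rclone copy /usr/share/doc rgw:docs --transfers 32 --checkers 16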

[ceph-users] Re: Missing OSD in SSD after disk failure

2021-08-24 Thread Eugen Block
Can you check what ceph-volume would do if you did it manually? Something like this host1:~ # cephadm ceph-volume lvm batch --report /dev/vdc /dev/vdd --db-devices /dev/vdb and don't forget the '--report' flag. One more question, did you properly wipe the previous LV on that NVMe? You sho
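For reference, a sketch of those two steps with the hypothetical device names from the example above; --report only prints what would be created, and zap is destructive:

    host1:~ # cephadm ceph-volume lvm batch --report /dev/vdc /dev/vdd --db-devices /dev/vdb
    # if the failed OSD's DB LV is still present on the NVMe, remove it first
    host1:~ # cephadm ceph-volume lvm zap --destroy /dev/<vg>/<old-db-lv>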

[ceph-users] Re: Pacific: access via S3 / Object gateway slow for small files

2021-08-24 Thread Janne Johansson
On Tue 24 Aug 2021 at 09:46, Francesco Piraneo G. wrote: > On 24.08.21 09:32, Janne Johansson wrote: > >> As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000 > >> files): > >> - rsync -a to a Cephfs took 2 min > >> - s3cmd put --recursive took over 70 min > >> Users reporte

[ceph-users] Re: radosgw manual deployment

2021-08-24 Thread Eugen Block
Hi, I assume that the "latest" docs are already referring to Quincy; if you check the Pacific docs (https://docs.ceph.com/en/pacific/mgr/dashboard/), that command is not mentioned. So you'll probably have to use the previous method of configuring the credentials. Regards, Eugen Quoting
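The previous method is presumably the per-key dashboard commands from the Pacific docs, roughly as follows (assuming the RGW access and secret keys have already been written to local files):

    ceph dashboard set-rgw-api-access-key -i access_key.txt
    ceph dashboard set-rgw-api-secret-key -i secret_key.txt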

[ceph-users] Ceph on Windows: unable to map RBD image

2021-08-24 Thread Aristide Bekroundjo
Hi, I am trying to map an RBD image on Windows but it fails with the message below. 1 -1 rbd-wnbd: Could not send device map request. Make sure that the ceph service is running. Error: (5) Accès refusé (access denied). rbd: rbd-wnbd failed with error: C:\Program Files\Ceph\bin\rbd-wnbd: exit status: -22 I get the

[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-08-24 Thread Yanhu Cao
Any progress on this? We have encountered the same problem, using the rbd-nbd option timeout=120. ceph version: 14.2.13 kernel version: 4.19.118-2+deb10u1 On Wed, May 19, 2021 at 10:55 PM Mykola Golub wrote: > > On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote: > > On Wed, May 19, 2021 at
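For anyone searching later: that timeout is typically given at map time. A sketch with a hypothetical pool/image name (option name taken from the message above, exact syntax not verified against 14.2):

    rbd-nbd map mypool/myimage --timeout 120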

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-24 Thread Francesco Piraneo G.
Are there client packages available for Rocky Linux (specifically 8.4) for Pacific? If not, when can we expect them? I also looked at download.ceph.com and I couldn’t find anything relevant. I only saw rh7 and rh8 packages. I bootstrapped a test cluster under Rocky 8 and CentOS without meetin

[ceph-users] Re: Pacific: access via S3 / Object gateway slow for small files

2021-08-24 Thread Francesco Piraneo G.
On 24.08.21 09:32, Janne Johansson wrote: As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000 files): - rsync -a to a Cephfs took 2 min - s3cmd put --recursive took over 70 min Users reported that the S3 access is generally slow, not only with s3tools. Single per-object a

[ceph-users] radosgw manual deployment

2021-08-24 Thread Francesco Piraneo G.
Good morning all, I deployed my radosgw on the first monitor node of my test cluster following the instructions here: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/manually-installing-ceph-object-gateway However the relat
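For context, the keyring and service steps from that guide boil down to something like the following (hypothetical instance name "mon1", default paths for a cluster named "ceph"):

    ceph auth get-or-create client.rgw.mon1 osd 'allow rwx' mon 'allow rw' \
        -o /var/lib/ceph/radosgw/ceph-rgw.mon1/keyring
    systemctl enable --now ceph-radosgw@rgw.mon1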

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-24 Thread Kyriazis, George
Yes, it’s supposed to be, but they have their own package repo and mirrors, separate from Redhat. I’ve also heard that there are some differences between the two under the hood, specifically OpenHPC-related, which make CentOS 8.4 and Rocky Linux 8.4 not compatible. Maybe I’m mistaken, but I th

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-24 Thread Marc
, which make CentOS 8.4 and Rocky Linux 8.4 not compatible. > > Maybe I’m mistaken, but I thought that CentOS included ceph-common in > their own repos, so just doing “yum install ceph-common” worked. This > doesn’t work on Rocky. > Just wondering if adding the rh8 ceph repo will work on Rocky, o
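One way to try the upstream el8 packages on Rocky 8 is to add the download.ceph.com repo directly; a sketch (untested on Rocky):

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph x86_64 packages
    baseurl=https://download.ceph.com/rpm-pacific/el8/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    # then
    dnf install ceph-common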

[ceph-users] August Ceph Tech Talk

2021-08-24 Thread Mike Perez
Hi everyone, We have a Ceph Tech Talk scheduled for this Thursday at 17:00 UTC with Matan Brz on how to use Lua Scripting together with a NATS Lua client to add NATS to the list of bucket notifications endpoints. https://ceph.io/en/community/tech-talks/ More information on this project can be fo

[ceph-users] Re: [EXTERNAL] Re: mds in death loop with [ERR] loaded dup inode XXX [2,head] XXX at XXX, but inode XXX already exists at XXX

2021-08-24 Thread Pickett, Neale T
Full backtrace below. Seems pretty short for a ceph backtrace! I'll get started on a link scan for the time being. It'll keep it from flapping in and out of CEPH_ERR! -1> 2021-08-24T21:17:38.313+ 7fe9a730e700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH

[ceph-users] Re: [EXTERNAL] Re: mds in death loop with [ERR] loaded dup inode XXX [2,head] XXX at XXX, but inode XXX already exists at XXX

2021-08-24 Thread Pickett, Neale T
Aha, I knew it was too short to be true. It seems like a client is trying to delete a file which is triggering all this. There are many many lines looking like -5 and -4 here. -5> 2021-08-24T21:17:38.293+ 7fe9a5b0b700 0 mds.0.cache.dir(0x609) _fetched badness: got (but i already had
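The "link scan" mentioned earlier is presumably the offline cephfs-data-scan step suggested in Dan's old thread; roughly (only to be run with the MDS stopped and the filesystem taken down):

    cephfs-data-scan scan_links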

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-24 Thread Kyriazis, George
If rocky8 had a ceph-common I would go with that. It would (presumably) be tested more, since it comes with the original distro. In any case, I installed the el8 ceph packages and they seem to work … so far. At least I can mount my ceph volume and initial testing looks good. Thanks! George

[ceph-users] tcmu-runner crashing on 16.2.5

2021-08-24 Thread Paul Giralt (pgiralt)
I upgraded to Pacific 16.2.5 about a month ago and everything was working fine. Suddenly for the past few days I’ve started having the tcmu-runner container on my iSCSI gateways just disappear. I’m assuming this is because they have crashed. I deployed the services using cephadm / ceph orch in D
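One way to confirm whether the daemons actually crashed and to pull their logs on a cephadm-managed cluster (the daemon name below is hypothetical, look up the real one first):

    ceph crash ls
    ceph orch ps                                     # find the exact iscsi daemon name
    cephadm logs --name iscsi.<service>.<host>.<id>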

[ceph-users] Re: Ceph on Windows: unable to map RBD image

2021-08-24 Thread Lucian Petrut
Hi, On Windows, the RBD device map commands are dispatched to a centralized service so that the daemons are not tied to the current Windows session. The service gets configured automatically by the MSI installer [1]. However, if you’d like to configure it manually, please check this document [2
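In case the service simply isn't running, it can be checked and started from PowerShell; a sketch assuming the MSI installer registered it under the name "ceph-rbd":

    Get-Service ceph-rbd
    Start-Service ceph-rbd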