[ceph-users] Remove failed multi-part uploads?

2023-01-13 Thread rhys . g . powell
Hello, we are running an older version of Ceph, 14.2.22 Nautilus. We have a radosgw/S3 implementation and had some issues with multi-part uploads failing to complete. We used s3cmd to delete the failed uploads and clean out the bucket, but when reviewing the space utilization of buckets, it
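For reference, a minimal sketch of how leftover multi-part uploads are usually listed and aborted with s3cmd, and how bucket usage can be re-checked on the RGW side afterwards (bucket name, object key and upload id below are placeholders):

  s3cmd multipart s3://mybucket                   # list unfinished multi-part uploads
  s3cmd abortmp s3://mybucket/myobject UPLOAD_ID  # abort one upload from the listing above
  radosgw-admin bucket stats --bucket=mybucket    # re-check the bucket's space accounting

Keep in mind that RGW only reclaims the space after garbage collection has processed the deleted parts, so the numbers may lag behind.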

[ceph-users] Re: MDS error

2023-01-13 Thread afsmaira
Additional information: - We tried to reset both the services and the entire machine - journalctl part: jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.653+ 7fc370b64700 0 log_channel(cluster) log [WRN] : Replacing

[ceph-users] Filesystem is degraded, offline, mds daemon damaged

2023-01-13 Thread bpurvis
I am really hoping you can help. THANKS in advance. I have inherited a Docker swarm running Ceph but I know very little about it. Currently I have an unhealthy Ceph environment that will not mount my data drive. It's a cluster of 4 VM servers: docker01, docker02, docker03, docker-cloud CL has the
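As a first, non-destructive step, the file system and MDS state can be inspected with the commands below (a sketch; fs and daemon names will differ):

  ceph -s
  ceph health detail
  ceph fs status
  ceph mds stat

If a rank is reported as damaged, ceph mds repaired <fs_name>:<rank> exists to clear that flag, but it should only be run once the cause of the damage is understood.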

[ceph-users] ceph orch osd spec questions

2023-01-13 Thread Wyll Ingersoll
Ceph Pacific 16.2.9 We have a storage server with multiple 1.7TB SSDs dedicated to bluestore DB usage. The OSD spec was originally slightly misconfigured: it set the "limit" parameter on the db_devices to 5 (there are 8 SSDs available) and did not specify a block_db_size. ceph
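For context, a drive-group spec along these lines is where limit and block_db_size live (values and names here are illustrative, not the poster's actual spec):

  service_type: osd
  service_id: hdd_with_ssd_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 8
  block_db_size: '200G'

Running ceph orch apply osd -i osd_spec.yml --dry-run shows what the orchestrator would change before anything is applied.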

[ceph-users] User access

2023-01-13 Thread Rhys Powell
rhys.g.pow...@gmail.com

[ceph-users] Re: pg mapping verification

2023-01-13 Thread Christopher Durham
Eugen, Thank you for the tip. While writing a script is OK, it would be nice if there was an official way to do this. -Chris -Original Message- From: Eugen Block To: ceph-users@ceph.io Sent: Thu, Jan 12, 2023 8:58 am Subject: [ceph-users] Re: pg mapping verification Hi, I don't
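One workable, if semi-official, approach is to export the osdmap and let osdmaptool print the resulting PG-to-OSD mappings, which can then be checked against the expected CRUSH placement (the pool id below is a placeholder):

  ceph osd getmap -o osdmap.bin
  osdmaptool osdmap.bin --test-map-pgs-dump --pool 5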

[ceph-users] ._handle_peer_banner peer [v2:***,v1:***] is using msgr V1 protocol

2023-01-13 Thread Frank Schilder
Hi all, on an octopus latest cluster I see a lot of these log messages: Jan 13 20:00:25 ceph-21 journal: 2023-01-13T20:00:25.366+0100 7f47702b8700 -1 --2- [v2:192.168.16.96:6826/5724,v1:192.168.16.96:6827/5724] >> [v2:192.168.16.93:6928/3503064,v1:192.168.16.93:6929/3503064]

[ceph-users] MDS error

2023-01-13 Thread André de Freitas Smaira
Hello! Yesterday we found some errors on our cephadm disks, which are making it impossible to access our HPC cluster: # ceph health detail HEALTH_WARN 3 failed cephadm daemon(s); insufficient standby MDS daemons available [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s) daemon
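A rough sketch of how failed cephadm daemons are usually inspected and restarted follows; the daemon name is a placeholder:

  ceph orch ps --daemon-type mds
  ceph orch daemon restart mds.cephfs.host1.abcdef
  cephadm logs --name mds.cephfs.host1.abcdef   # run on the host carrying the daemon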

[ceph-users] Re: Telemetry service is temporarily down

2023-01-13 Thread Yaarit Hatuka
Hi everyone, Our telemetry service is up and running again. Thanks Adam Kraitman and Dan Mick for restoring the service. We thank you for your patience and appreciate your contribution to the project! Thanks, Yaarit On Tue, Jan 3, 2023 at 3:14 PM Yaarit Hatuka wrote: > Hi everyone, > > We

[ceph-users] Re: Current min_alloc_size of OSD?

2023-01-13 Thread David Orman
I think this would be valuable to have easily accessible during runtime, perhaps submit a report (and patch if possible)? David On Fri, Jan 13, 2023, at 08:14, Robert Sander wrote: > Hi, > > On 13.01.23 at 14:35, Konstantin Shalygin wrote: > > > ceph-kvstore-tool bluestore-kv

[ceph-users] Re: Current min_alloc_size of OSD?

2023-01-13 Thread Robert Sander
Hi, On 13.01.23 at 14:35, Konstantin Shalygin wrote: ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0/ get S min_alloc_size This only works when the OSD is not running. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin
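So the full sequence is roughly the following, assuming a non-containerized, systemd-managed OSD (the unit name differs for cephadm/container deployments):

  systemctl stop ceph-osd@0
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0/ get S min_alloc_size
  systemctl start ceph-osd@0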

[ceph-users] Re: Current min_alloc_size of OSD?

2023-01-13 Thread Konstantin Shalygin
Hi, > On 12 Jan 2023, at 04:35, Robert Sander wrote: > > How can I get the current min_alloc_size of OSDs that were created with > older Ceph versions? Is there a command that shows this info from the on-disk > format of a bluestore OSD? You can see this via kvstore-tool:

[ceph-users] radosgw ceph.conf question

2023-01-13 Thread Boris Behrens
Hi, I am just reading through this document ( https://docs.ceph.com/en/octopus/radosgw/config-ref/) and at the top it states: The following settings may be added to the Ceph configuration file (i.e., > usually ceph.conf) under the [client.radosgw.{instance-name}] section. > And my ceph.conf looks
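For illustration, such a section would look roughly like this; the instance name and values are placeholders, not taken from the poster's configuration:

  [client.radosgw.gateway1]
      rgw_frontends = beast port=7480
      rgw_dns_name = s3.example.com
      log_file = /var/log/ceph/client.radosgw.gateway1.log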

[ceph-users] Re: rbd-mirror ceph quincy Not able to find rbd_mirror_journal_max_fetch_bytes config in rbd mirror

2023-01-13 Thread Eugen Block
Hi, apparently this config option has been removed between N and O releases. I found this revision [1] from 2019 and the pull request [2] in favor of adjusting the journal fetch based on memory target. I didn't read the whole conversation but to me it looks like the docs are outdated and
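A quick way to check whether a given release still knows an option is to ask the cluster itself, for example:

  ceph config help rbd_mirror_journal_max_fetch_bytes
  ceph config ls | grep rbd_mirror

If the option has been removed, config help will fail to find it.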

[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2023-01-13 Thread Manuel Holtgrewe
Dear Xiubo, could you explain how to enable kernel debug logs (I assume this is on the client)? Thanks, Manuel On Fri, May 13, 2022 at 9:39 AM Xiubo Li wrote: > > On 5/12/22 12:06 AM, Stefan Kooman wrote: > > Hi List, > > > > We have quite a few linux kernel clients for CephFS. One of our > >
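For reference, on the client side this is normally done through the kernel's dynamic debug interface (assuming a kernel built with CONFIG_DYNAMIC_DEBUG; the output lands in dmesg and can be very verbose):

  echo 'module ceph +p'    > /sys/kernel/debug/dynamic_debug/control
  echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control
  # and to switch it off again
  echo 'module ceph -p'    > /sys/kernel/debug/dynamic_debug/control
  echo 'module libceph -p' > /sys/kernel/debug/dynamic_debug/control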

[ceph-users] heavy rotation in store.db folder alongside with traces and exceptions in the .log

2023-01-13 Thread Jürgen Stawska
Hi everyone, I'm facing a weird issue with one of my Pacific clusters. Brief intro: - 5 nodes Ubuntu 20.04 on 16.2.7 (ceph01…05) - bootstrapped with cephadm recent image from quay.io (around 1 year ago) - approx. 200TB capacity, 5% used - 5 OSDs (2 HDD / 2 SSD / 1 NVMe) on each node - each node
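To narrow this down, checking the monitor store size and the overall cluster state is a reasonable first step (a sketch for a cephadm deployment; fsid and hostname are placeholders). Monitors cannot trim old maps while PGs are not fully clean, which is a common cause of store.db churn and growth:

  du -sh /var/lib/ceph/<fsid>/mon.<hostname>/store.db
  ceph -s
  ceph tell mon.<hostname> compact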

[ceph-users] Re: RGW error Coundn't init storage provider (RADOS)

2023-01-13 Thread Alexander Y. Fomichev
Hi, I faced a similar error a couple of days ago: radosgw-admin --cluster=cl00 realm create --rgw-realm=data00 --default ... (0 rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration,
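For what it's worth, ERANGE from pool_create usually means the monitor rejected the requested pg_num because it would exceed the per-OSD PG limit. The relevant knobs can be inspected like this (a sketch, not a confirmed diagnosis for this cluster):

  ceph config get mon mon_max_pg_per_osd
  ceph config get mon osd_pool_default_pg_num
  ceph osd pool ls detail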

[ceph-users] Re: OSD crash on Onode::put

2023-01-13 Thread Frank Schilder
Hi Igor, my approach here, before doing something crazy like a daily cron job for restarting OSDs, is to do at least a minimum of threat analysis. How much of a problem is it really? Here I'm also mostly guided by performance loss. As far as I know, the onode cache should be one of the most

[ceph-users] Re: OSD crash on Onode::put

2023-01-13 Thread Frank Schilder
Hi Anthony and Serkan, I think Anthony had the right idea. I forgot that we re-deployed a number of OSDs on existing drives and also did a PG split over Christmas. The relatively few disks that stick out with cache_other usage all seem to be these newly deployed OSDs. So, it looks like that
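For anyone following along, this kind of per-OSD cache usage can be compared via the admin socket, e.g. (the OSD id is a placeholder):

  ceph daemon osd.0 dump_mempools

The output lists items and bytes per mempool, including the bluestore onode and cache_other pools mentioned above.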