[ceph-users] Re: CephFS space usage

2024-03-26 Thread Thorne Lawler
Hi everyone! Just thought I would let everyone know: The issue appears to have been the Ceph NFS service associated with the filesystem. I removed all the files, waited a while, disconnected all the clients, waited a while, then deleted the NFS shares - the disk space and objects abruptly

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-26 Thread David Yang
This is great, we are currently using the smb protocol heavily to export kernel-mounted cephfs. But I encountered a problem: when there are many smb clients enumerating or listing the same directory, the smb server experiences high load and the smb process enters the D (uninterruptible sleep) state. This problem

[ceph-users] Re: stretch mode item not defined

2024-03-26 Thread ronny.lippold
hi anthony ... many thanks for that. i did not understand why the docu missed that part ... anyway i checked the login and the mail address c...@spark5.de should be right :/ one last question ... we have two server rooms. do you think stretch mode is the right way? do you have any other

[ceph-users] Re: Cephadm on mixed architecture hosts

2024-03-26 Thread Iain Stott
Oh thanks John, will give it a try and report back. From: John Mulligan Sent: 26 March 2024 14:24 To: ceph-users@ceph.io Cc: Iain Stott Subject: Re: [ceph-users] Cephadm on mixed architecture hosts On Tuesday,

[ceph-users] Re: Cephadm host keeps trying to set osd_memory_target to less than minimum

2024-03-26 Thread Adam King
For context, the value the autotune goes with takes the value from `cephadm gather-facts` on the host (the "memory_total_kb" field) and then subtracts from that per daemon on the host according to min_size_by_type = { 'mds': 4096 * 1048576, 'mgr': 4096 * 1048576,
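A minimal sketch, in Python, of the subtraction Adam describes; the mds/mgr minimums are from the thread, while the other daemon minimums and the final division among OSDs are assumptions about cephadm's behavior, not quoted from its source:

```
# Sketch of the autotune arithmetic described above.
min_size_by_type = {
    'mds': 4096 * 1048576,    # from the thread
    'mgr': 4096 * 1048576,    # from the thread
    'mon': 1024 * 1048576,    # assumed
    'crash': 128 * 1048576,   # assumed
}

def autotune_osd_memory_target(memory_total_kb, daemon_types):
    """Subtract each non-OSD daemon's minimum from total RAM,
    then split what remains evenly among the OSDs on the host."""
    remaining = memory_total_kb * 1024          # bytes
    osds = daemon_types.count('osd')
    for d in daemon_types:
        if d != 'osd':
            remaining -= min_size_by_type.get(d, 0)
    return remaining // osds if osds and remaining > 0 else None

# A host with 32 GiB RAM, one mgr and four OSDs:
print(autotune_osd_memory_target(32 * 1024 * 1024, ['mgr'] + ['osd'] * 4))
```

If the per-OSD result falls below osd_memory_target's allowed minimum, you get exactly the symptom in the subject: cephadm keeps trying, and failing, to set it.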

[ceph-users] Cephadm stacktrace on copying ceph.conf

2024-03-26 Thread Jesper Agerbo Krogh [JSKR]
Hi. We're currently getting these errors, and I seem to be missing a clear overview of the cause and how to debug it. 3/26/24 9:38:09 PM [ERR] executing _write_files((['dkcphhpcadmin01', 'dkcphhpcmgt028', 'dkcphhpcmgt029', 'dkcphhpcmgt031', 'dkcphhpcosd033', 'dkcphhpcosd034',

[ceph-users] Re: mark direct Zabbix support deprecated? Re: Ceph versus Zabbix: failure: no data sent

2024-03-26 Thread Zac Dover
I have created https://tracker.ceph.com/issues/65161 in order to track the process of updating the Zabbix documentation. Zac Dover Upstream Docs Ceph Foundation On Tuesday, March 26th, 2024 at 5:49 AM, John Jasen wrote: > > > Well, at least on my RHEL Ceph cluster, turns out

[ceph-users] CephFS filesystem mount tanks on some nodes?

2024-03-26 Thread Erich Weiler
Hi All, We have a CephFS filesystem where we are running Reef on the servers (OSD/MDS/MGR/MON) and Quincy on the clients. Every once in a while, one of the clients will stop allowing access to my CephFS filesystem, the error being "permission denied" while trying to access the filesystem on

[ceph-users] 1x port from bond down causes all osd down in a single machine

2024-03-26 Thread Szabo, Istvan (Agoda)
Hi, I wonder what we are missing from the netplan configuration on Ubuntu that Ceph needs in order to tolerate a single failed bond port properly. We are using this bond configuration on Ubuntu 20.04 with Octopus Ceph: bond1: macaddress: x.x.x.x.x.50 dhcp4: no dhcp6: no addresses: -
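For reference, a minimal netplan bond stanza of the kind under discussion; the interface names, address, and active-backup mode here are assumptions, not Istvan's actual configuration:

```
# Illustrative sketch only -- interface names and mode are placeholders
network:
  version: 2
  ethernets:
    ens1f0: {}
    ens1f1: {}
  bonds:
    bond1:
      interfaces: [ens1f0, ens1f1]
      addresses: [192.0.2.10/24]
      parameters:
        mode: active-backup        # 802.3ad (LACP) is the other common choice
        mii-monitor-interval: 100  # detect a dead port quickly
```

Whether the OSDs survive a single port failure comes down to the bond failing over before the OSD heartbeats time out.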

[ceph-users] Re: cephfs client not released caps when running rsync

2024-03-26 Thread Alexander E. Patrakov
Hello Nikita, A valid workaround is to export both instances of CephFS via NFS-Ganesha and run rsync on NFS, not on CephFS directly. On Tue, Mar 26, 2024 at 10:15 PM Nikita Borisenkov wrote: > > We transfer data (300 million small files) using rsync between cephfs > from version 12.2.13 to
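For anyone trying this workaround, a rough sketch of the steps using the built-in NFS module on a recent Ceph; the cluster id, fs name, and paths are placeholders, and the exact flags vary between releases:

```
# Sketch only -- names are placeholders
ceph nfs cluster create rsyncnfs
ceph nfs export create cephfs --cluster-id rsyncnfs --pseudo-path /src --fs-name sourcefs
# then on the host running rsync:
mount -t nfs -o nfsvers=4.1 ganesha-host:/src /mnt/src
```

The idea is that Ganesha, not the rsync host, is then the CephFS client holding the caps.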

[ceph-users] mclock and massive reads

2024-03-26 Thread Luis Domingues
Hello, We have a question about mClock scheduling reads on Pacific (16.2.14 currently). When we do massive reads, for example when draining machines that hold a lot of data on EC pools, we quite frequently observe slow ops on the source OSDs. Those slow ops affect the client services,
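One knob worth checking here (an assumption on my part, not something confirmed in the thread): the mClock profile, which weights client I/O against background work such as recovery and backfill:

```
# Sketch: inspect and adjust the mClock profile on 16.2.x
ceph config get osd osd_mclock_profile
ceph config set osd osd_mclock_profile high_client_ops
```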

[ceph-users] Re: Cephadm on mixed architecture hosts

2024-03-26 Thread John Mulligan
On Tuesday, March 26, 2024 7:22:18 AM EDT Iain Stott wrote: > Hi, > > We are trying to deploy Ceph Reef 18.2.1 using cephadm on mixed architecture > hosts using x86_64 for the mons and aarch64 for the OSDs. > > During deployment we use the following config for the bootstrap process, > where

[ceph-users] Re: Clients failing to advance oldest client?

2024-03-26 Thread Erich Weiler
Thank you! The OSD/mon/mgr/MDS servers are on 18.2.1, and the clients are mostly 17.2.6. -erich On 3/25/24 11:57 PM, Dhairya Parmar wrote: I think this bug has already been worked on in https://tracker.ceph.com/issues/63364, can you tell us which version

[ceph-users] Re: Cephadm on mixed architecture hosts

2024-03-26 Thread Daniel Brown
Iain - I’ve seen this same behavior. I’ve not found a workaround, though I would agree that it would be a “nice to have” feature. > On Mar 26, 2024, at 7:22 AM, Iain Stott wrote: > > Hi, > > We are trying to deploy Ceph Reef 18.2.1 using cephadm on mixed architecture > hosts using

[ceph-users] Re: stretch mode item not defined

2024-03-26 Thread Anthony D'Atri
Yes, you will need to create datacenter buckets and move your host buckets under them. > On Mar 26, 2024, at 09:18, ronny.lippold wrote: > > hi there, need some help please. > > we are planning to replace our rbd-mirror setup and go to stretch mode. > the goal is to have the cluster in 2
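A sketch of the commands that advice implies; the bucket and host names are placeholders:

```
# Create datacenter buckets and re-parent the hosts (names are placeholders)
ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket dc2 datacenter
ceph osd crush move dc1 root=default
ceph osd crush move dc2 root=default
ceph osd crush move host-a datacenter=dc1
ceph osd crush move host-b datacenter=dc2
```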

[ceph-users] cephfs client not released caps when running rsync

2024-03-26 Thread Nikita Borisenkov
We transfer data (300 million small files) using rsync between a cephfs on version 12.2.13 and one on 18.2.1. After roughly the same amount of time each run (about 7 hours in this case), copying stalls for a minute ``` health: HEALTH_WARN 1 clients failing to advance oldest client/flush tid 1 MDSs report

[ceph-users] Re: How can I set osd fast shutdown = true

2024-03-26 Thread Manuel Lausch
I would suggest this way ceph config set global osd_fast_shutdown true Regards Manuel On Tue, 26 Mar 2024 12:12:22 +0530 Suyash Dongre wrote: > Hello, > > I want to set osd fast shutdown = true, how should I achieve this? > > Regards, > Suyash >
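To confirm the change took effect, something like:

```
ceph config set global osd_fast_shutdown true
ceph config get osd osd_fast_shutdown   # should now report "true"
```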

[ceph-users] stretch mode item not defined

2024-03-26 Thread ronny.lippold
hi there, need some help please. we are planning to replace our rbd-mirror setup and go to stretch mode. the goal is to have the cluster in 2 fire compartment server rooms. the start was a default proxmox/ceph setup. now, i followed the howto from:

[ceph-users] Cephadm on mixed architecture hosts

2024-03-26 Thread Iain Stott
Hi, We are trying to deploy Ceph Reef 18.2.1 using cephadm on mixed architecture hosts using x86_64 for the mons and aarch64 for the OSDs. During deployment we use the following config for the bootstrap process, where $REPOSITORY is our docker repo. [global] container_image =
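The preview is cut off here, but for context: one common way to make a single container_image line work across x86_64 and aarch64 (an assumption, not necessarily the fix discussed later in the thread) is to publish the tag as a multi-arch manifest list, e.g.:

```
# Sketch -- repository and tags are placeholders
docker manifest create $REPOSITORY/ceph:v18.2.1 \
    $REPOSITORY/ceph:v18.2.1-amd64 \
    $REPOSITORY/ceph:v18.2.1-arm64
docker manifest push $REPOSITORY/ceph:v18.2.1
```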

[ceph-users] Re: Quincy/Dashboard: Object Gateway not accessible after applying self-signed cert to rgw service

2024-03-26 Thread stephan . budach
Although I did search the list for prior postings regarding the issue I had, I only now came across a thread which addressed it. In short, updating Ceph to 18.2.2 resolved the issue. The first issue was, as I initially suspected, caused by the mgr performing verification on the SSL certs

[ceph-users] Can setting mds_session_blocklist_on_timeout to false minize the session eviction?

2024-03-26 Thread Yongseok Oh
Hi, CephFS is provided as a shared file system service in a private cloud environment of our company, LINE. The number of sessions is approximately more than 5,000, and session evictions occur several times a day. When session eviction occurs, the message 'Cannot send after transport endpoint
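For reference, the setting named in the subject (and its eviction-time sibling) can be changed at runtime; whether doing so actually reduces the impact of evictions is exactly what is being asked here:

```
ceph config set mds mds_session_blocklist_on_timeout false
ceph config set mds mds_session_blocklist_on_evict false
```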

[ceph-users] Lots of log messages from one server

2024-03-26 Thread Albert Shih
Hi, On my active mgr (the only one) in my cluster I repeatedly get something like: Mar 26 08:50:04 cthulhu2 conmon[2737]: 2024-03-26T07:50:04.778+ 7f704bce8700 -1 client.0 error registering admin socket command: (17) File exists Mar 26 08:50:04 cthulhu2 conmon[2737]:

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-26 Thread duluxoz
I don't know Marc, I only know what I had to do to get the thing working :-)

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-26 Thread Marc
is that the RBD Image needs to have a partition entry > created for it - that might be "obvious" to some, but my ongoing belief > is that most "obvious" things aren't, so it's better to be explicit about > such things. > > Are you absolutely sure about this? I think you are missing something

[ceph-users] Re: Best practice in 2024 for simple RGW failover

2024-03-26 Thread Marc
> > The requirements are actually not high: 1. there should be a generally > known address for access. 2. it should be possible to reboot or shut down a > server without the RGW connections being down the entire time. A downtime > of a few seconds is OK. > > Constant load balancing would be

[ceph-users] Best practice in 2024 for simple RGW failover

2024-03-26 Thread E Taka
Hi, The requirements are actually not high: 1. there should be a generally known address for access. 2. it should be possible to reboot or shut down a server without the RGW connections being down the entire time. A downtime of a few seconds is OK. Constant load balancing would be nice, but is
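For requirements like these, one option (assuming a cephadm-managed cluster, which the post does not state) is the built-in ingress service, which fronts the RGWs with keepalived and haproxy behind a single virtual IP; a sketch, with service names, IP, and ports as placeholders:

```
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default
  virtual_ip: 192.0.2.100/24
  frontend_port: 443
  monitor_port: 1967
```

The virtual IP provides the generally known address, and a rebooting host only takes down its keepalived/haproxy pair for the few seconds the failover needs.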

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-26 Thread duluxoz
Hi All, OK, an update for everyone, a note about some (what I believe to be) missing information in the Ceph Doco, a success story, and an admission on my part that I may have left out some important information. So to start with, I finally got everything working - I now have my 4T RBD
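For comparison, the usual kernel-mount sequence works on the raw mapped device without a partition table; a sketch, with pool and image names as placeholders:

```
rbd create mypool/myimage --size 4T
rbd map mypool/myimage        # shows up as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0            # mkfs directly on the raw device is fine
mount /dev/rbd0 /mnt/myimage
```

A partition table only becomes necessary if you want more than one filesystem on the same image.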

[ceph-users] Re: Clients failing to advance oldest client?

2024-03-26 Thread Dhairya Parmar
I think this bug has already been worked on in https://tracker.ceph.com/issues/63364, can you tell us which version you're on? -- *Dhairya Parmar* Associate Software Engineer, CephFS IBM, Inc. On Tue, Mar 26, 2024 at 2:32 AM Erich Weiler wrote: > Hi Y'all, > > I'm seeing this warning via 'ceph

[ceph-users] Re: Ceph object gateway metrics

2024-03-26 Thread Konstantin Shalygin
Hi, You can use the [2] exporter to get usage stats per user and per bucket, including quota usage. k Sent from my iPhone > On 26 Mar 2024, at 01:38, Kushagr Gupta wrote: > > 2. https://github.com/blemmenes/radosgw_usage_exporter

[ceph-users] How can I set osd fast shutdown = true

2024-03-26 Thread Suyash Dongre
Hello, I want to set osd fast shutdown = true, how should I achieve this? Regards, Suyash