[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Thank you Adam! After "orch daemon redeploy", all works as expected.
Tony

From: Adam King
Sent: March 24, 2022 11:50 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: logging with container

Hmm, I'm assuming from "Setting
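For anyone hitting the same symptom: once the logging options are changed, the running containers only pick them up after a redeploy. A minimal sketch, assuming a monitor named mon.ceph-1 (adjust the daemon name to your cluster):

    # recreate the daemon's container so it starts with the new logging settings
    ceph orch daemon redeploy mon.ceph-1
    # logs should then appear under /var/log/ceph/<fsid>/ on the host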

[ceph-users] Re: [ERR] OSD_FULL: 1 full osd(s) - with 73% used

2022-03-24 Thread Nikhilkumar Shelke
Found a doc related to troubleshooting OSDs: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/troubleshooting_guide/troubleshooting-ceph-osds

On Thu, Mar 24, 2022 at 12:43 AM Neeraj Pratap Singh wrote:
> Hi,
> Ceph prevents clients from performing I/O operations on full
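Not from the thread itself, but the usual first step when one OSD reports full while the cluster average is much lower is to look for a utilization outlier; a generic sketch:

    # overall and per-pool usage
    ceph df
    # per-OSD utilization; look for an outlier near the full ratio
    ceph osd df tree
    # identify which OSD raised the flag
    ceph health detail

If a single OSD sits far above the 73% average, rebalancing (e.g. ceph osd reweight-by-utilization) is the usual fix rather than raising the full ratio.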

[ceph-users] Re: logging with container

2022-03-24 Thread Adam King
Hmm, I'm assuming from "Setting "log_to_stderr" doesn't help" that you've already tried all the steps in https://docs.ceph.com/en/latest/cephadm/operations/#disabling-logging-to-journald. Those steps are meant to stop cluster logs from going to the container logs. From my personal testing,
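For readers who land here first, the steps on that docs page amount to flipping these config options (as listed in the cephadm operations documentation):

    ceph config set global log_to_file true
    ceph config set global mon_cluster_log_to_file true
    ceph config set global log_to_stderr false
    ceph config set global mon_cluster_log_to_stderr false
    ceph config set global log_to_journald false
    ceph config set global mon_cluster_log_to_journald false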

[ceph-users] Re: March 2022 Ceph Tech Talk:

2022-03-24 Thread Neha Ojha
Starting now!

On Fri, Mar 18, 2022 at 6:02 AM Mike Perez wrote:
> Hi everyone
>
> On March 24 at 17:00 UTC, hear Kamoltat (Junior) Sirivadhna give a Ceph Tech Talk on how Teuthology, Ceph's integration test framework, works!
>
> https://ceph.io/en/community/tech-talks/
>
> Also, if you

[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Any comments on this? Thanks!
Tony

From: Tony Liu
Sent: March 21, 2022 10:01 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container

Hi Adam, When I do "ceph tell mon.ceph-1 config set log_to_file true", I
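One detail that may matter in this thread: "ceph tell ... config set" only changes the running daemon and is lost on restart, while "ceph config set" persists the value in the mon config database. A short sketch of the difference:

    # runtime-only change on one daemon; gone after a restart
    ceph tell mon.ceph-1 config set log_to_file true
    # persistent change, stored centrally in the mon config store
    ceph config set mon log_to_file true
    # check what the daemon is actually using
    ceph config show mon.ceph-1 log_to_file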

[ceph-users] Adding a new monitor to CEPH setup remains in state probing

2022-03-24 Thread Jose Apr
Hi all, I have a Ceph setup installed: 3 monitors, 3 mgrs, and 3 MDSs (Ceph 15.2.4 Octopus / CentOS Linux release 7.8.2003), plus the OSDs. The idea is to add a new node on an updated OS such as Rocky Linux release 8.5 and then start installing the Ceph Pacific release in order to test the
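Not part of the original message, but the standard checks when a new mon is stuck probing are whether it can reach the existing mons and which peers it sees; a generic sketch, assuming the new daemon is named mon.ceph-4:

    # on the new node: current state and the peers it is probing
    ceph daemon mon.ceph-4 mon_status
    # from the new node: reachability of an existing mon on both messenger ports
    nc -zv <existing-mon-ip> 3300
    nc -zv <existing-mon-ip> 6789

Clock skew and firewall differences between the old CentOS 7 nodes and the new Rocky Linux 8 node are common suspects here.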

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo
Hi Ilya, Thank you for your answer!

On 3/24/22 14:09, Ilya Dryomov wrote:
>> How can we see whether a lock is exclusive or shared? The rbd lock ls command output looks identical for the two cases.
> You can't. The way --exclusive is implemented is the client simply refuses to release the lock
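To illustrate the point, rbd lock ls prints the same shape of output in both cases, something like this (illustrative output only; the ID and address are made up):

    $ rbd lock ls testimg
    There is 1 exclusive lock on this image.
    Locker        ID                         Address
    client.14154  auto 18446462598732840961  192.168.122.10:0/1234567890

Nothing in this listing indicates whether the image was mapped with --exclusive.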

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-24 Thread Ilya Dryomov
On Thu, Mar 24, 2022 at 11:06 AM Budai Laszlo wrote:
>
> Hi all,
>
> is there any possibility to turn an exclusive lock into a shared one?
>
> for instance if I map a device with "rbd map testimg --exclusive" then is there any way to switch that lock to a shared one so I can map the rbd image
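Ilya's full answer is truncated here, but assuming there is no runtime switch between the two modes, the practical route is to unmap and remap without --exclusive so the lock can transition cooperatively; a sketch:

    # on the node holding the exclusive mapping
    rbd unmap testimg
    rbd map testimg
    # a second node can now map the image as well
    rbd map testimg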

[ceph-users] Performance increase with NVMe for WAL/DB and SAS SSD for data

2022-03-24 Thread Pinco Pallino
Hi all, everywhere I look I find performance gains from putting the WAL/DB on SSD with data on HDD, so I'm trying to understand how much of a performance increase there would be using NVMe for the WAL/DB and a regular SATA SSD for data. Unfortunately I've looked back and forth on the internet but I couldn't find any
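For anyone who wants to benchmark this layout themselves, one way to build such an OSD by hand is ceph-volume with an explicit --block.db device; a minimal sketch with placeholder device paths:

    # data on a SATA SSD; DB (and WAL, which co-locates with the DB
    # when no separate --block.wal is given) on an NVMe partition
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1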

[ceph-users] RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo
Hi all, is there any possibility to turn an exclusive lock into a shared one? For instance, if I map a device with "rbd map testimg --exclusive", is there any way to switch that lock to a shared one so I can map the rbd image on another node as well? How can we see whether a lock is

[ceph-users] Re: RBD exclusive lock

2022-03-24 Thread Florian Pritz
On Wed, Mar 23, 2022 at 11:18:18PM +0200, Budai Laszlo wrote:
> After I map on the first host I can see its lock on the image. After that I was expecting the map to fail on the second node, but actually it didn't. The second node was able to map the image and take over the lock.
>
> How
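The behavior described is the exclusive-lock feature working as designed: by default the lock is cooperative and transparently moves to whichever client needs it. A hedged sketch of the contrast (expected outcomes in the comments; exact errors can vary by kernel version):

    # node1: acquire the lock at map time and refuse to release it
    rbd map testimg --exclusive
    # node2: another exclusive map should now fail, since node1 won't release
    rbd map testimg --exclusive
    # node2: a plain map may still succeed, but writes cannot obtain the lock
    rbd map testimg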