[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Thank you Adam! After "orch daemon redeploy", all works as expected.
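
A minimal sketch of that redeploy step, assuming a cephadm-managed cluster; the daemon names below are placeholders taken from `ceph orch ps`:
```
ceph orch ps                          # list daemon names
ceph orch daemon redeploy mon.ceph-1  # redeploy a single daemon
ceph orch redeploy mon                # or redeploy a whole service at once
```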

Tony

From: Adam King 
Sent: March 24, 2022 11:50 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: logging with container

Hmm, I'm assuming from "Setting "log_to_stderr" doesn't help" that you've already
tried all the steps in
https://docs.ceph.com/en/latest/cephadm/operations/#disabling-logging-to-journald.
Those are meant to be the steps for stopping cluster logs from going to the
container logs. From my personal testing, just setting the global config
options made it work for all the daemons without needing to redeploy or set any
of the values at runtime. I verified locally that, after setting log_to_file to
true and following the steps in the posted link, new logs were written to the
/var/log/ceph//mon.host1 file and the journal received no new entries once I
changed the settings. Perhaps, because you've modified the values directly at
runtime for the daemons, they aren't picking up the config options, since
runtime changes override config options? It could be worth redeploying the
daemons after having all 6 of the relevant config options set properly. I'll
also note that I have been using podman; I'm not sure if there is some major
logging difference between podman and docker.
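
For reference, the six options from that doc section are set along these lines (a sketch; the global section is assumed to be the right scope here):
```
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
ceph config set global log_to_stderr false
ceph config set global mon_cluster_log_to_stderr false
ceph config set global log_to_journald false
ceph config set global mon_cluster_log_to_journald false
```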

Thanks,

 - Adam King

On Thu, Mar 24, 2022 at 1:00 PM Tony Liu <tonyliu0...@hotmail.com> wrote:
Any comments on this?

Thanks!
Tony

From: Tony Liu <tonyliu0...@hotmail.com>
Sent: March 21, 2022 10:01 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container

Hi Adam,

When I do "ceph tell mon.ceph-1 config set log_to_file true",
I see the log file is created. That confirms that those options in command line
can only be override by runtime config change.
Could you check mon and mgr logging on your setup?

Can we remove those options in command line and let logging to be controlled
by cluster configuration or configuration file?

Another issue is that, log keeps going to 
/var/lib/docker/containers//-json.log,
which keeps growing up and it's not under logrotate management. How can I stop
logging to container stdout/stderr? Setting "log_to_stderr" doesn't help.
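
One way to cap that file outside of Ceph, assuming the default json-file logging driver is in use, is Docker's own rotation settings in /etc/docker/daemon.json (merge with any existing content and restart dockerd afterwards); this is a generic Docker sketch, not a Ceph-specific fix:
```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```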


Thanks!
Tony

From: Tony Liu <tonyliu0...@hotmail.com>
Sent: March 21, 2022 09:41 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container

Hi Adam,

# ceph config get mgr log_to_file
true
# ceph config get mgr log_file
/var/log/ceph/$cluster-$name.log
# ceph config get osd log_to_file
true
# ceph config get osd log_file
/var/log/ceph/$cluster-$name.log
# ls /var/log/ceph/fa771070-a975-11ec-86c7-e4434be9cb2e/
ceph-osd.10.log  ceph-osd.13.log  ceph-osd.16.log  ceph-osd.19.log  
ceph-osd.1.log  ceph-osd.22.log  ceph-osd.4.log  ceph-osd.7.log  ceph-volume.log
# ceph version
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

"log_to_file" and "log_file" are set the same for mgr and osd, but why there is 
osd log only,
but no mgr log?
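
One thing worth comparing is the value stored in the mon config database versus what the running mgr daemon actually uses; for example (the mgr daemon name below is a placeholder, `ceph orch ps` lists the real one):
```
ceph config get mgr log_to_file                 # value in the config database
ceph config show mgr.ceph-1.abcdef log_to_file  # effective value in the running daemon,
                                                # including command-line defaults and runtime overrides
```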


Thanks!
Tony

From: Adam King <adk...@redhat.com>
Sent: March 21, 2022 08:26 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] logging with container

Hi Tony,

Afaik those container flags just set the defaults and the config options 
override them. Setting the necessary flags 
(https://docs.ceph.com/en/latest/cephadm/operations/#logging-to-files) seemed 
to work for me.

[ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
false
[ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
false
[ceph: root@vm-00 /]# ceph config set global log_to_file true
[ceph: root@vm-00 /]# ceph config set global mon_cluster_log_to_file true
[ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
true
[ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
true
[ceph: root@vm-00 /]# ceph version
ceph version 16.2.7-601-g179a7bca (179a7bca8a84771b0dde09e26f7a2146a985df90) 
pacific (stable)
[ceph: root@vm-00 /]# exit
exit
[root@vm-00 ~]# ls /var/log/ceph/413f7ec8-a91e-11ec-9b02-52540092b5a3/
ceph.audit.log  ceph.cephadm.log  ceph.log  ceph-mgr.vm-00.ukcctb.log  
ceph-mon.vm-00.log  ceph-osd.0.log  ceph-osd.10.log  ceph-osd.2.log  
ceph-osd.4.log  ceph-osd.6.log  ceph-osd.8.log  ceph-volume.log



On Mon, Mar 21, 2022 at 1:06 AM Tony Liu <tonyliu0...@hotmail.com> wrote:
Hi,

After reading through the docs, it's still not very clear to me how logging works
with containers.
This is with the Pacific v16.2 container.

In OSD container, I see this.
```
/usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph 
--defaul

[ceph-users] Re: [ERR] OSD_FULL: 1 full osd(s) - with 73% used

2022-03-24 Thread Nikhilkumar Shelke
Found a doc related to troubleshooting OSDs:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/troubleshooting_guide/troubleshooting-ceph-osds
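
A few commands that help show where the cluster stands relative to the ratios discussed below (a sketch; osd.11 is the OSD from this thread and the reweight line is only an example):
```
ceph df                      # overall and per-pool usage
ceph osd df tree             # per-OSD utilization, crush weight and reweight
ceph osd dump | grep ratio   # full_ratio / backfillfull_ratio / nearfull_ratio
ceph health detail
# If the crush weight should really match a ~0.91 TiB device:
# ceph osd crush reweight osd.11 0.909
```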


On Thu, Mar 24, 2022 at 12:43 AM Neeraj Pratap Singh 
wrote:

> Hi,
> Ceph prevents clients from performing I/O operations on full OSD nodes to
> avoid losing data. It returns the HEALTH_ERR full osds message when the
> cluster reaches the capacity set by the mon_osd_full_ratio parameter. By
> default, this parameter is set to 0.95, which means 95% of the cluster
> capacity.
> If % RAW USED is above 70-75%, there are two options:
> 1. Delete unnecessary data, but this is a short-term solution.
> 2. Or scale the cluster by adding a new OSD node.
>
> On Wed, Mar 23, 2022 at 11:44 PM Rodrigo Werle 
> wrote:
>
> > Thanks Eugen!
> > Actually, as it is an NVMe disk, I thought that the weight could be
> > greater. I tried changing it to 0.9 but it is still saying that the osd is
> > full.
> > This osd got full yesterday. I wiped it out and re-added it. After some
> > time, the "osd full" message happened again. It is like Ceph thinks the osd
> > is still full, as it was before...
> >
> >
> > On Wed, Mar 23, 2022 at 14:38, Eugen Block wrote:
> >
> > > Without having an answer to the question why the OSD is full I'm
> > > wondering why the OSD has a crush weight of 1.2 while its size is
> > > only 1 TB. Was that changed on purpose? I'm not sure if that would
> > > explain the OSD full message, though.
> > >
> > >
> > > Quoting Rodrigo Werle:
> > >
> > > > Hi everyone!
> > > > I'm trying to understand why Ceph marked one osd as full when it is
> > > > 73% used and full_ratio is 0.97.
> > > >
> > > > Follow some information:
> > > >
> > > > # ceph health detail
> > > > (...)
> > > > [ERR] OSD_FULL: 1 full osd(s)
> > > > osd.11 is full
> > > > (...)
> > > >
> > > > # ceph osd metadata
> > > > (...)
> > > >
> > > >> {
> > > >> "id": 11,
> > > >> "arch": "x86_64",
> > > >> "back_iface": "",
> > > >> "bluefs": "1",
> > > >> "bluefs_dedicated_db": "0",
> > > >> "bluefs_dedicated_wal": "0",
> > > >> "bluefs_single_shared_device": "1",
> > > >> "bluestore_bdev_access_mode": "blk",
> > > >> "bluestore_bdev_block_size": "4096",
> > > >> "bluestore_bdev_dev_node": "/dev/dm-41",
> > > >> "bluestore_bdev_devices": "nvme0n1",
> > > >> "bluestore_bdev_driver": "KernelDevice",
> > > >> "bluestore_bdev_partition_path": "/dev/dm-41",
> > > >> "bluestore_bdev_rotational": "0",
> > > >> "bluestore_bdev_size": "1000200994816",
> > > >> "bluestore_bdev_support_discard": "1",
> > > >> "bluestore_bdev_type": "ssd",
> > > >> "ceph_release": "octopus",
> > > >> "ceph_version": "ceph version 15.2.16
> > > >> (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)",
> > > >> "ceph_version_short": "15.2.16",
> > > >> "cpu": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz",
> > > >> "default_device_class": "ssd",
> > > >> "device_ids":
> > > >> "nvme0n1=Samsung_SSD_970_EVO_Plus_1TB_S59ANJ0N308050F",
> > > >> "device_paths":
> > > >> "nvme0n1=/dev/disk/by-path/pci-:01:00.0-nvme-1",
> > > >> "devices": "nvme0n1",
> > > >> "distro": "ubuntu",
> > > >> "distro_description": "Ubuntu 18.04.6 LTS",
> > > >> "distro_version": "18.04",
> > > >> "front_iface": "",
> > > >> "hostname": "pf-us1-dfs6",
> > > >> "journal_rotational": "0",
> > > >> "kernel_description": "#182-Ubuntu SMP Fri Mar 18 15:53:46
> UTC
> > > >> 2022",
> > > >> "kernel_version": "4.15.0-173-generic",
> > > >> "mem_swap_kb": "117440508",
> > > >> "mem_total_kb": "181456152",
> > > >> "network_numa_unknown_ifaces": "back_iface,front_iface",
> > > >> "objectstore_numa_node": "0",
> > > >> "objectstore_numa_nodes": "0",
> > > >> "os": "Linux",
> > > >> "osd_data": "/var/lib/ceph/osd/ceph-11",
> > > >> "osd_objectstore": "bluestore",
> > > >> "osdspec_affinity": "",
> > > >> "rotational": "0"
> > > >> } (...)
> > > >
> > > >
> > > > # cat /var/lib/ceph/osd/ceph-11/type
> > > >
> > > >> bluestore
> > > >
> > > >
> > > > # ls -l /var/lib/ceph/osd/ceph-11 | grep block
> > > >
> > > >> lrwxrwxrwx 1 ceph ceph  50 Mar 22 21:44 block ->
> > > >> /dev/mapper/YpB2cx-HlyU-VPqT-Abaz-Dutx-iMrz-Tty6o1
> > > >
> > > >
> > > > # lsblk
> > > >
> > > >> nvme0n1                                      259:0    0 931.5G  0 disk
> > > >> └─ceph--7ef8f83a--d055--4a59--8d6b--c564544c5a55-osd--block--c7c03dfd--a1b1--4182--9e61--bf264f293f2b
> > > >>                                              253:0    0 931.5G  0 lvm
> > > >>   └─YpB2cx-HlyU-VPqT-Abaz-Dutx-iMrz-Tty6o1   253:41   0

[ceph-users] Re: logging with container

2022-03-24 Thread Adam King
Hmm, I'm assuming from "Setting "log_to_stderr" doesn't help" that you've
already tried all the steps in
https://docs.ceph.com/en/latest/cephadm/operations/#disabling-logging-to-journald.
Those are meant to be the steps for stopping cluster logs from going to the
container logs. From my personal testing, just setting the global config
options made it work for all the daemons without needing to redeploy or set
any of the values at runtime. I verified locally that, after setting log_to_file
to true and following the steps in the posted link, new logs were
getting put in the /var/log/ceph//mon.host1 file but the journal had no
new entries once I changed the settings. Perhaps, because you've modified
the values directly at runtime for the daemons, they aren't picking up the
config options, since runtime changes override config options? It could be
worth redeploying the daemons after having all 6 of the
relevant config options set properly. I'll also note that I have been using
podman; I'm not sure if there is some major logging difference between podman
and docker.

Thanks,

 - Adam King

On Thu, Mar 24, 2022 at 1:00 PM Tony Liu  wrote:

> Any comments on this?
>
> Thanks!
> Tony
> 
> From: Tony Liu 
> Sent: March 21, 2022 10:01 PM
> To: Adam King
> Cc: ceph-users@ceph.io; d...@ceph.io
> Subject: [ceph-users] Re: logging with container
>
> Hi Adam,
>
> When I do "ceph tell mon.ceph-1 config set log_to_file true",
> I see the log file is created. That confirms that those options in command
> line
> can only be override by runtime config change.
> Could you check mon and mgr logging on your setup?
>
> Can we remove those options in command line and let logging to be
> controlled
> by cluster configuration or configuration file?
>
> Another issue is that, log keeps going to
> /var/lib/docker/containers//-json.log,
> which keeps growing up and it's not under logrotate management. How can I
> stop
> logging to container stdout/stderr? Setting "log_to_stderr" doesn't help.
>
>
> Thanks!
> Tony
> 
> From: Tony Liu 
> Sent: March 21, 2022 09:41 PM
> To: Adam King
> Cc: ceph-users@ceph.io; d...@ceph.io
> Subject: [ceph-users] Re: logging with container
>
> Hi Adam,
>
> # ceph config get mgr log_to_file
> true
> # ceph config get mgr log_file
> /var/log/ceph/$cluster-$name.log
> # ceph config get osd log_to_file
> true
> # ceph config get osd log_file
> /var/log/ceph/$cluster-$name.log
> # ls /var/log/ceph/fa771070-a975-11ec-86c7-e4434be9cb2e/
> ceph-osd.10.log  ceph-osd.13.log  ceph-osd.16.log  ceph-osd.19.log
> ceph-osd.1.log  ceph-osd.22.log  ceph-osd.4.log  ceph-osd.7.log
> ceph-volume.log
> # ceph version
> ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific
> (stable)
>
> "log_to_file" and "log_file" are set the same for mgr and osd, but why
> there is osd log only,
> but no mgr log?
>
>
> Thanks!
> Tony
> 
> From: Adam King 
> Sent: March 21, 2022 08:26 AM
> To: Tony Liu
> Cc: ceph-users@ceph.io; d...@ceph.io
> Subject: Re: [ceph-users] logging with container
>
> Hi Tony,
>
> Afaik those container flags just set the defaults and the config options
> override them. Setting the necessary flags (
> https://docs.ceph.com/en/latest/cephadm/operations/#logging-to-files)
> seemed to work for me.
>
> [ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
> false
> [ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
> false
> [ceph: root@vm-00 /]# ceph config set global log_to_file true
> [ceph: root@vm-00 /]# ceph config set global mon_cluster_log_to_file true
> [ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
> true
> [ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
> true
> [ceph: root@vm-00 /]# ceph version
> ceph version 16.2.7-601-g179a7bca
> (179a7bca8a84771b0dde09e26f7a2146a985df90) pacific (stable)
> [ceph: root@vm-00 /]# exit
> exit
> [root@vm-00 ~]# ls /var/log/ceph/413f7ec8-a91e-11ec-9b02-52540092b5a3/
> ceph.audit.log  ceph.cephadm.log  ceph.log  ceph-mgr.vm-00.ukcctb.log
> ceph-mon.vm-00.log  ceph-osd.0.log  ceph-osd.10.log  ceph-osd.2.log
> ceph-osd.4.log  ceph-osd.6.log  ceph-osd.8.log  ceph-volume.log
>
>
>
> On Mon, Mar 21, 2022 at 1:06 AM Tony Liu <tonyliu0...@hotmail.com> wrote:
> Hi,
>
> After reading through the docs, it's still not very clear to me how logging
> works with containers.
> This is with the Pacific v16.2 container.
>
> In OSD container, I see this.
> ```
> /usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph
> --default-log-to-file=false --default-log-to-stderr=true
> --default-log-stderr-prefix=debug
> ```
> When checking the ceph configuration:
> ```
> # ceph config get osd.16 log_file
> /var/log/ceph/$cluster-$name.log
> # ceph config get osd.16 log_to_file
> true
> # ceph config show osd.16 log_to_file
> false
> ```
> Q1, what's the intention of those log settings on the command line? They have
> high priority and override
> the configuration in the file and mon. 

[ceph-users] Re: March 2022 Ceph Tech Talk:

2022-03-24 Thread Neha Ojha
Starting now!

On Fri, Mar 18, 2022 at 6:02 AM Mike Perez  wrote:

> Hi everyone
>
> On March 24 at 17:00 UTC, hear Kamoltat (Junior) Sirivadhna give a
> Ceph Tech Talk on how Teuthology, Ceph's integration test framework,
> works!
>
> https://ceph.io/en/community/tech-talks/
>
> Also, if you would like to present and share with the community what
> you're doing with Ceph or development, please let me know as we are
> looking for content. Thanks!
>
> --
> Mike Perez
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Any comments on this?

Thanks!
Tony

From: Tony Liu 
Sent: March 21, 2022 10:01 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container

Hi Adam,

When I do "ceph tell mon.ceph-1 config set log_to_file true",
I see the log file is created. That confirms that those options in command line
can only be override by runtime config change.
Could you check mon and mgr logging on your setup?

Can we remove those options in command line and let logging to be controlled
by cluster configuration or configuration file?

Another issue is that, log keeps going to 
/var/lib/docker/containers//-json.log,
which keeps growing up and it's not under logrotate management. How can I stop
logging to container stdout/stderr? Setting "log_to_stderr" doesn't help.


Thanks!
Tony

From: Tony Liu 
Sent: March 21, 2022 09:41 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container

Hi Adam,

# ceph config get mgr log_to_file
true
# ceph config get mgr log_file
/var/log/ceph/$cluster-$name.log
# ceph config get osd log_to_file
true
# ceph config get osd log_file
/var/log/ceph/$cluster-$name.log
# ls /var/log/ceph/fa771070-a975-11ec-86c7-e4434be9cb2e/
ceph-osd.10.log  ceph-osd.13.log  ceph-osd.16.log  ceph-osd.19.log  
ceph-osd.1.log  ceph-osd.22.log  ceph-osd.4.log  ceph-osd.7.log  ceph-volume.log
# ceph version
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

"log_to_file" and "log_file" are set the same for mgr and osd, but why there is 
osd log only,
but no mgr log?


Thanks!
Tony

From: Adam King 
Sent: March 21, 2022 08:26 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] logging with container

Hi Tony,

Afaik those container flags just set the defaults and the config options 
override them. Setting the necessary flags 
(https://docs.ceph.com/en/latest/cephadm/operations/#logging-to-files) seemed 
to work for me.

[ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
false
[ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
false
[ceph: root@vm-00 /]# ceph config set global log_to_file true
[ceph: root@vm-00 /]# ceph config set global mon_cluster_log_to_file true
[ceph: root@vm-00 /]# ceph config get osd.0 log_to_file
true
[ceph: root@vm-00 /]# ceph config show osd.0 log_to_file
true
[ceph: root@vm-00 /]# ceph version
ceph version 16.2.7-601-g179a7bca (179a7bca8a84771b0dde09e26f7a2146a985df90) 
pacific (stable)
[ceph: root@vm-00 /]# exit
exit
[root@vm-00 ~]# ls /var/log/ceph/413f7ec8-a91e-11ec-9b02-52540092b5a3/
ceph.audit.log  ceph.cephadm.log  ceph.log  ceph-mgr.vm-00.ukcctb.log  
ceph-mon.vm-00.log  ceph-osd.0.log  ceph-osd.10.log  ceph-osd.2.log  
ceph-osd.4.log  ceph-osd.6.log  ceph-osd.8.log  ceph-volume.log



On Mon, Mar 21, 2022 at 1:06 AM Tony Liu <tonyliu0...@hotmail.com> wrote:
Hi,

After reading through the docs, it's still not very clear to me how logging works
with containers.
This is with the Pacific v16.2 container.

In OSD container, I see this.
```
/usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph 
--default-log-to-file=false --default-log-to-stderr=true 
--default-log-stderr-prefix=debug
```
When checking the ceph configuration:
```
# ceph config get osd.16 log_file
/var/log/ceph/$cluster-$name.log
# ceph config get osd.16 log_to_file
true
# ceph config show osd.16 log_to_file
false
```
Q1, what's the intention of those log settings on the command line? They have high
priority and override
the configuration in the file and mon. Is there any option to avoid that when deploying
the container?
Q2, since log_to_file is set to false by the command line, why is there still
logging in log_file?

The same for mgr and mon.

What I want is to have everything in the log file and minimize the stdout and
stderr from the container.
Because the log file is managed by logrotate, it is unlikely to blow up disk space. But
stdout and stderr
from the container are stored in a single file that is not managed by logrotate. It may
grow into a huge file.
Also, it's easier to check a log file with vi than with "podman logs". And the log file is
also collected and
stored by ELK for central management.

Any comments on how I can achieve what I want?
A runtime override may not be the best option, because it's not persistent.
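
If the Docker json.log mentioned above can't be avoided entirely, one hedged workaround is to rotate it with a local logrotate rule; copytruncate is needed because Docker keeps the file open (a sketch, not an official recommendation):
```
/var/lib/docker/containers/*/*-json.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    copytruncate
}
```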


Thanks!
Tony

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Adding a new monitor to CEPH setup remains in state probing

2022-03-24 Thread Jose Apr

Hi all, I have a CEPH setup installed: 3 monitors, 3 mgrs and 3 mds (CEPH 15.2.4 Octopus / CentOS Linux release 7.8.2003), plus the OSDs.

The idea is to add a new node on an updated OS, Rocky Linux release 8.5, and then install the CEPH Pacific release in order to test the upgrade process from CEPH Octopus to Pacific (I know it isn't a suitable number of monitors for establishing a quorum).

Before upgrading to the Pacific release, I installed on the new node the latest release of Octopus: ceph-mds-15.2.16-0 and ceph-mgr-15.2.16-0 (Octopus 15.2.16 on Rocky Linux) without trouble.

However, when I try to add the new monitor (Octopus 15.2.16) and start the mon daemon, it never manages to join the rest of the monitor daemons and always remains in the "probing" state. The networks are fine (the rest of the daemons use the same network) and the new and old daemons have connectivity.

Below I show the configurations and log traces.

Thanks in advance.
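
A few checks that usually narrow down a monitor stuck in probing (a sketch; the names and addresses are the ones from this post):
```
# From a node already in quorum: is MN03 expected in the monmap yet?
ceph mon dump
# From the new node: are both messenger ports of an existing mon reachable?
nc -zv 10.2.0.5 3300
nc -zv 10.2.0.5 6789
# Worth ruling out clock skew between the nodes as well:
chronyc tracking
# Local view of the stuck daemon (equivalent to the asok query below):
ceph daemon mon.MN03 mon_status
```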

 

 

ceph versions

{
    "mon": {
        "ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)": 1,
        "ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)": 4
    },
    "osd": {
        "ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)": 258
    },
    "mds": {
        "ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)": 1,
        "ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)": 3
    },
    "overall": {
        "ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)": 2,
        "ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)": 268
    }
 

 

ceph --admin-daemon  /var/run/ceph/ceph-mon.MN03.asok mon_status 
{
    "name": "MN03",
    "rank": -1,
    "state": "probing",
    "election_epoch": 0,
    "quorum": [],
    "features": {
        "required_con": "2449958197560098820",
        "required_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus"
        ],
        "quorum_con": "0",
        "quorum_mon": []
    },
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 14,
        "fsid": "c31094f9-f9c2-41fc-9f0c-3a0fad593e72",
        "modified": "2022-03-15T12:55:53.431087Z",
        "created": "2020-02-12T15:30:05.703351Z",
        "min_mon_release": 15,
        "min_mon_release_name": "octopus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "MN00",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.2.0.5:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.2.0.5:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.2.0.5:6789/0",
                "public_addr": "10.2.0.5:6789/0",
                "priority": 0,
                "weight": 0
            },
            {
                "rank": 1,
                "name": "MN01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.2.0.6:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.2.0.6:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.2.0.6:6789/0",
                "public_addr": "10.2.0.6:6789/0",
                "priority": 0,
                "weight": 0
            },
            {
                "rank": 2,
                "name": "MN02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.2.0.7:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.2.0.7:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "1

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo

Hi Ilya,

Thank you for your answer!

On 3/24/22 14:09, Ilya Dryomov wrote:



How can we see whether a lock is exclusive or shared? the rbd lock ls command 
output looks identical for the two cases.

You can't.  The way --exclusive is implemented is the client simply
refuses to release the lock when it gets the request to do so.  This
isn't tracked on the OSD side in any way so "rbd lock ls" doesn't have
that information.


If I understand correctly, the lock itself is an OSD "flag", but whether it is
treated as shared or exclusive is a local decision of the client. Is this correct?

If my previous understanding is correct, then I assume it would not be
impossible to modify the client code so that how it handles lock release
requests can be configured on the fly.

My use case would be an HA cluster where a VM is mapping an rbd image and then
encounters some network issue. Another node of the HA cluster could start
the VM and map the image again, but if the networking is fixed on the first VM,
that VM would keep using the already mapped image. Here, if I could instruct my
second VM to treat the lock as exclusive after an automatic failover, then I'm
protected against data corruption when the networking of the initial VM is fixed.
But I assume that a STONITH kind of fencing could also do the job (if it can be
implemented).
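
A hedged sketch of Ceph-level fencing for that scenario (pool, image and client address are placeholders; on Octopus and earlier the command is spelled "blacklist" rather than "blocklist"):
```
# On the failover node, before (re)mapping the image:
rbd status mypool/testimg                  # lists the watchers, i.e. the old client
ceph osd blocklist add 10.0.0.5:0/123456   # fence the old client's address
ceph osd blocklist ls
rbd map mypool/testimg --exclusive
```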

Kind regards,
Laszlo
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-24 Thread Ilya Dryomov
On Thu, Mar 24, 2022 at 11:06 AM Budai Laszlo  wrote:
>
> Hi all,
>
> is there any possibility to turn an exclusive lock into a shared one?
>
> for instance if I map a device with "rbd map testimg --exclusive" then is
> there any way to switch that lock to a shared one so I can map the rbd image
> on another node as well?

Hi Laszlo,

No, it's not possible.  In order for the lock to be gracefully
released, you would need to unmap that device.

>
> How can we see whether a lock is exclusive or shared? the rbd lock ls command 
> output looks identical for the two cases.

You can't.  The way --exclusive is implemented is the client simply
refuses to release the lock when it gets the request to do so.  This
isn't tracked on the OSD side in any way so "rbd lock ls" doesn't have
that information.

Thanks,

Ilya
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Performance increase with NVMe for WAL/DB and SAS SSD for data

2022-03-24 Thread Pinco Pallino
Hi all,

given that everywhere I look I find performance increases with WAL/DB on SSD and
data on HDD, I'm trying to understand how much the performance increase
would be when using NVMe for WAL/DB and regular SATA SSDs for data.
Unfortunately I've looked back and forth on the internet but I couldn't
find any test or performance analysis of something like this.

Right now I have a full-flash cluster with 15 OSDs across 5 nodes, and the SSDs'
nominal IOPS are 175k read and 120k write.
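
In the absence of published numbers, one way to get a before/after baseline on your own hardware is rados bench against a test pool (a sketch; the pool name is a placeholder and the run writes real data, so use a throwaway pool):
```
# 30 seconds of 4 KiB writes with 16 concurrent ops, keeping the objects:
rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup
# Random reads against the objects written above:
rados bench -p testpool 30 rand -t 16
# Remove the benchmark objects afterwards:
rados -p testpool cleanup
```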

Thanks everyone,

Erik
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo

Hi all,

is there any possibility to turn an exclusive lock into a shared one?

for instance if I map a device with "rbd map testimg --exclusive" then is there
any way to switch that lock to a shared one so I can map the rbd image on another
node as well?

How can we see whether a lock is exclusive or shared? the rbd lock ls command 
output looks identical for the two cases.


Thank you,
Laszlo

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD exclusive lock

2022-03-24 Thread Florian Pritz
On Wed, Mar 23, 2022 at 11:18:18PM +0200, Budai Laszlo  
wrote:
> After I map on the first host I can see its lock on the image. After that I 
> was expecting the map to fail on the second node, but actually it didn't. The 
> second node was able to map the image and take over the lock.
> 
> How is this possible? What am I missing?

Your old client is probably getting blocked from writing to the image:
https://docs.ceph.com/en/latest/rbd/rbd-exclusive-locks/#blocklisting

You can check for that with `ceph osd blocklist ls` or in older versions
`ceph osd blacklist ls`.

Florian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io