[ceph-users] label or pseudo name for cephfs volume path

2024-05-10 Thread Adiga, Anantha
Hi, If a ceph fs subvolume has to be recreated, the uuid will change and we have to change all sources that reference the volume path. Is there a way to provide a label/tag for the volume path that can be used for pv_root_path so that we do not have to
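
The thread does not show a resolution. A hedged workaround, assuming whatever consumes pv_root_path can shell out to the Ceph CLI, is to resolve the path by subvolume name at use time instead of persisting the UUID-suffixed path (subvolume and group names are taken from the March thread further down; the output line is illustrative):
# PV_ROOT_PATH=$(ceph fs subvolume getpath cephfs cluster_A_subvolume cephfs_data_pool_ec21_subvolumegroup)
# echo "$PV_ROOT_PATH"
/volumes/cephfs_data_pool_ec21_subvolumegroup/cluster_A_subvolume/<new-uuid>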

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-03 Thread Adiga, Anantha
:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586 # ceph config get mgr mgr/cephadm/container_image_base docker.io/ceph/daemon Thank you, Anantha -Original Message- From: Eugen Block Sent: Wednesday, April 3, 2024 12:27 AM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: ceph status

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-02 Thread Adiga, Anantha
"Id": "sha256:6e73176320aaccf3b3fb660b9945d0514222bd7a83e28b96e8440c630ba6891f", "RepoTags": [ "ceph/daemon:latest-pacific" ], "RepoDigests": [ "ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-02 Thread Adiga, Anantha
Hi Eugen, Currently there are only three nodes, but I can add a node to the cluster and check it out. I will take a look at the mon logs Thank you, Anantha -Original Message- From: Eugen Block Sent: Tuesday, April 2, 2024 12:19 AM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
17 - a001s018 # ceph orch ls --service_name=mon --export service_type: mon service_name: mon placement: count: 3 hosts: - a001s016 - a001s017 - a001s018 -Original Message- From: Adiga, Anantha Sent: Monday, April 1, 2024 6:06 PM To: Eugen Block Cc: ceph-users@ceph.io Subject: RE
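
For reference, the exported spec shown above can be saved, edited and re-applied; a minimal sketch (the file name is arbitrary):
# ceph orch ls --service_name=mon --export > mon.yaml
# (edit count/hosts as needed)
# ceph orch apply -i mon.yaml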

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
"num": 3 } ], "osd": [ { "features": "0x3f01cfb9fffd", "release": "luminous", "num": 15 } ], &quo

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
f9a8cca8 last_changed 2024-03-31T23:54:18.692983+ created 2021-09-30T16:15:12.884602+ min_mon_release 16 (pacific) election_strategy: 1 0: [v2:10.45.128.28:3300/0,v1:10.45.128.28:6789/0] mon.a001s018 1: [v2:10.45.128.27:3300/0,v1:10.45.128.27:6789/0] mon.a001s017 # -Original Message- From:

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
Thank you. I will try the export and import method first. Thank you, Anantha -Original Message- From: Eugen Block Sent: Monday, April 1, 2024 1:57 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: ceph status not showing correct monitor services I have two
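
The excerpts do not show which export/import procedure Eugen suggested. A hedged sketch for confirming that a running mon has dropped out of the monmap and re-adding it through the orchestrator (mon name and address are placeholders; editing and injecting a monmap by hand with monmaptool is also possible but more intrusive):
# ceph mon getmap -o /tmp/monmap
# monmaptool --print /tmp/monmap
# ceph orch daemon rm mon.a001s016 --force
# ceph orch daemon add mon a001s016:<mon-ip>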

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
dump Did you do any maintenance (apparently OSDs restarted recently) and maybe accidentally removed a MON from the monmap? Zitat von "Adiga, Anantha" : > Hi Anthony, > > Seeing it since last afternoon. It is the same with mgr services, as > "ceph -s" is rep

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
761685606, "ports": [], "service_name": "mon", "started": "2024-03-31T23:55:16.268266Z", "status": 1, "status_desc": "running", "version": "16.2.5" }, Thank you, Anantha F

[ceph-users] ceph status not showing correct monitor services

2024-04-01 Thread Adiga, Anantha
Hi, Why is "ceph -s" showing only two monitors while three monitor services are running? # ceph versions { "mon": {"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 2 }, "mgr": { "ceph version 16.2.5
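
A few read-only checks that help narrow down a mismatch between running mon daemons and the quorum (a sketch; output formats vary slightly by release):
# ceph mon dump
# ceph quorum_status --format json-pretty
# ceph orch ps --daemon-type mon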

[ceph-users] recreating a cephfs subvolume with the same absolute path

2024-03-29 Thread Adiga, Anantha
Hi, ceph fs subvolume getpath cephfs cluster_A_subvolume cephfs_data_pool_ec21_subvolumegroup /volumes/cephfs_data_pool_ec21_subvolumegroup/cluster_A_subvolume/0f90806d-0d70-4fe1-9e2b-f958056ef0c9 If the subvolume got deleted, is it possible to recreate the subvolume with the same absolute
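
The excerpt ends at the question. Re-creating the subvolume in the same group is straightforward, but the returned path will contain a new UUID component, which is what the May 2024 thread at the top of this listing is about (names are taken from the excerpt; size and other options omitted):
# ceph fs subvolume create cephfs cluster_A_subvolume --group_name cephfs_data_pool_ec21_subvolumegroup
# ceph fs subvolume getpath cephfs cluster_A_subvolume cephfs_data_pool_ec21_subvolumegroup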

[ceph-users] cephadm Adding OSD wal device on a new node

2023-12-16 Thread Adiga, Anantha
Hi, After adding a node to the cluster (3 nodes) with cephadm, how do I add OSDs with the same configuration as on the other nodes? The other nodes have 12 drives for data osd-block and 2 drives for wal osd-wal. There are 6 LVs in each wal disk for the 12 data drives. I have added the OSDs with
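
The excerpt cuts off before the author's own commands. A hedged sketch of an OSD service spec that pairs data devices with separate WAL devices (host name and device filters are placeholders; the pre-created WAL LVs described above may instead require explicit device paths or ceph-volume):
service_type: osd
service_id: osd_with_wal
placement:
  hosts:
    - <new-node>
spec:
  data_devices:
    rotational: 1
  wal_devices:
    rotational: 0
Applied with:
# ceph orch apply -i osd_with_wal.yaml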

[ceph-users] Re: nfs export over RGW issue in Pacific

2023-12-07 Thread Adiga, Anantha
Thank you Adam!! Anantha From: Adam King Sent: Thursday, December 7, 2023 10:46 AM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] nfs export over RGW issue in Pacific The first handling of nfs exports over rgw in the nfs module, including the `ceph nfs export create rgw
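
For readers landing here, the rough shape of the command Adam refers to, in the flag form documented for later releases (cluster id, pseudo path and bucket are placeholders; the syntax changed across Pacific point releases, so check the docs for your exact version):
# ceph nfs export create rgw --cluster-id mynfs --pseudo-path /mybucket --bucket mybucket
# ceph nfs export ls mynfs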

[ceph-users] nfs export over RGW issue in Pacific

2023-12-07 Thread Adiga, Anantha
Hi, root@a001s016:~# cephadm version Using recent ceph image ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586 ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable) root@a001s016:~# root@a001s016:~# cephadm shell Inferring fsid

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-30 Thread Adiga, Anantha
Hi Venky, “peer-bootstrap import” is working fine now. It was port 3300 blocked by firewall. Thank you for your help. Regards, Anantha From: Adiga, Anantha Sent: Monday, August 7, 2023 1:29 PM To: Venky Shankar ; ceph-users@ceph.io Subject: RE: [ceph-users] Re: cephfs snapshot mirror
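
Since the root cause here was a blocked msgr2 port, a quick connectivity check from the peer site is worth keeping in mind (mon address is a placeholder; 3300 is msgr2 and 6789 is the legacy v1 port):
# nc -vz <remote-mon-ip> 3300
# nc -vz <remote-mon-ip> 6789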

[ceph-users] Re: radosgw multisite multi zone configuration: current period realm name not same as in zonegroup

2023-08-30 Thread Adiga, Anantha
Update: There was a networking issue between the sites; after fixing it, the issue reported below did not occur. Thank you, Anantha From: Adiga, Anantha Sent: Thursday, August 24, 2023 2:40 PM To: ceph-users@ceph.io Subject: radosgw multisite multi zone configuration: current period realm name

[ceph-users] radosgw multisite multi zone configuration: current period realm name not same as in zonegroup

2023-08-24 Thread Adiga, Anantha
Hi, I have a multi zone configuration with 4 zones. While adding a secondary zone, getting this error: root@cs17ca101ja0702:/# radosgw-admin realm pull --rgw-realm=global --url=http://10.45.128.139:8080 --default --access-key=sync_user --secret=sync_secret request failed: (13) Permission
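
The excerpt ends at the error. A common cause of (13) Permission denied on realm pull is using credentials that do not belong to a --system user on the master zone; a hedged sketch of creating such a user on the master and pulling the realm from the secondary (keys and URL are placeholders):
On the master zone:
# radosgw-admin user create --uid=sync_user --display-name="Synchronization User" --system
# radosgw-admin user info --uid=sync_user   (note access_key and secret_key)
On the secondary site:
# radosgw-admin realm pull --rgw-realm=global --url=http://<master>:8080 --access-key=<key> --secret=<secret> --default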

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-07 Thread Adiga, Anantha
lobal_id_reclaim = False [osd] osd memory target = 23630132019 -Original Message- From: Venky Shankar Sent: Monday, August 7, 2023 9:26 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung On Tue, Aug 8, 2023 at

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-07 Thread Adiga, Anantha
connection aborted Not sure if the --id (CLIENT_ID) is correct.. not able to connect Thank you, Anantha -Original Message- From: Venky Shankar Sent: Monday, August 7, 2023 7:05 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: cephfs snapshot mirror

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-07 Thread Adiga, Anantha
ed for rgw multisite and is functional. Thank you, Anantha -Original Message- From: Venky Shankar Sent: Monday, August 7, 2023 5:46 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung Hi Anantha, On Mon, Aug 7, 20

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-07 Thread Adiga, Anantha
Hi Venky, Thank you very much. Anantha -Original Message- From: Venky Shankar Sent: Monday, August 7, 2023 5:23 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung Hi Anantha, On Tue, Aug 8, 2023 at 1:59 AM

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-07 Thread Adiga, Anantha
version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable) root@fl31ca104ja0201:/# Thank you, Anantha From: Adiga, Anantha Sent: Monday, August 7, 2023 11:21 AM To: 'Venky Shankar' ; 'ceph-users@ceph.io' Subject: RE: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-03 Thread Adiga, Anantha
Attached log file -Original Message- From: Adiga, Anantha Sent: Thursday, August 3, 2023 5:50 PM To: ceph-users@ceph.io Subject: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung Adding additional info: The cluster A and B both have the same name: ceph and each has

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-03 Thread Adiga, Anantha
Adding additional info: Clusters A and B both have the same name, ceph, and each has a single filesystem with the same name, cephfs. Is that the issue? Tried using the peer_add command and it is hanging as well: root@fl31ca104ja0201:/# ls /etc/ceph/ cr_ceph.conf client.mirror_remote.keying

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-03 Thread Adiga, Anantha
AQCfwMlkM90pLBAAwXtvpp8j04IvC8tqpAG9bA== -Original Message- From: Adiga, Anantha Sent: Thursday, August 3, 2023 2:31 PM To: ceph-users@ceph.io Subject: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung Hi Could you please provide guidance on how to diagnose this issue

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-03 Thread Adiga, Anantha
Hi Could you please provide guidance on how to diagnose this issue: In this case, there are two Ceph clusters: cluster A (4 nodes) and cluster B (3 nodes), in different locations. Both are already running RGW multi-site, A is the master. Cephfs snapshot mirroring is being configured on the

[ceph-users] cephfs snapshot mirror peer_bootstrap import hung

2023-08-03 Thread Adiga, Anantha
Hi Could you please provide guidance on how to diagnose this issue: In this case, there are two Ceph clusters: cluster A (4 nodes) and cluster B (3 nodes), in different locations. Both are already running RGW multi-site, A is the master. Cephfs snapshot mirroring is being configured on the
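
For context on the commands being debugged in this thread, the documented bootstrap flow looks roughly like this (filesystem, client and site names are placeholders; the token is created on the target cluster and imported on the source):
On the target (cluster B):
# ceph mgr module enable mirroring
# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b
On the source (cluster A):
# ceph mgr module enable mirroring
# ceph fs snapshot mirror enable cephfs
# ceph fs snapshot mirror peer_bootstrap import cephfs <token>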

[ceph-users] mgr services frequently crash on nodes 2,3,4

2023-08-02 Thread Adiga, Anantha
Hi, Mgr services crash frequently on nodes 2, 3 and 4 with the same condition after the 4th node was added. root@zp3110b001a0104:/# ceph crash stat 19 crashes recorded 16 older than 1 days old: 2023-07-29T03:35:32.006309Z_7b622c2b-a2fc-425a-acb8-dc1673b4c189
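
For triaging and then clearing the crash reports once investigated, a short sketch using the crash module (the crash id is a placeholder):
# ceph crash ls-new
# ceph crash info <crash-id>
# ceph crash archive-all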

[ceph-users] Re: warning: CEPHADM_APPLY_SPEC_FAIL

2023-06-29 Thread Adiga, Anantha
This was resolved with a simple step to delete the service: /# ceph orch rm osd.iops_optimized The WARN goes away. Just FYI: ceph orch help does not list the rm option. Thank you, Anantha From: Adiga, Anantha Sent: Thursday, June 29, 2023 4:38 PM To: ceph-users@ceph.io Subject: [ceph-users] warning

[ceph-users] warning: CEPHADM_APPLY_SPEC_FAIL

2023-06-29 Thread Adiga, Anantha
Hi, I am not finding any reference to clear this warning and stop the service. See below: after creating an OSD with the iops_optimized option, this WARN message appears. Ceph 17.2.6 6/29/23 4:10:45 PM [WRN] Health check failed: Failed to apply 1 service(s):
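
To see which spec cephadm is failing to apply and, if it is unwanted, to drop it (the service name is taken from the follow-up above):
# ceph health detail
# ceph orch ls
# ceph orch ls --service_name=osd.iops_optimized --export
# ceph orch rm osd.iops_optimized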

[ceph-users] Re: ceph orch host label rm : does not update label removal

2023-06-27 Thread Adiga, Anantha
Hello, This issue is resolved. The syntax of providing the labels was not correct. -Original Message- From: Adiga, Anantha Sent: Thursday, June 22, 2023 1:08 PM To: ceph-users@ceph.io Subject: [ceph-users] ceph orch host label rm : does not update label removal Hi , Not sure

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-23 Thread Adiga, Anantha
Hi Nizam, Thanks much for the detail. Regards, Anantha From: Nizamudeen A Sent: Friday, June 23, 2023 12:25 AM To: Adiga, Anantha Cc: Eugen Block ; ceph-users@ceph.io Subject: Re: [ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade Hi, You can

[ceph-users] ceph orch host label rm : does not update label removal

2023-06-22 Thread Adiga, Anantha
Hi, Not sure if the labels are really removed or the update is not working? root@fl31ca104ja0201:/# ceph orch host ls HOST ADDR LABELS STATUS fl31ca104ja0201 XX.XX.XXX.139 ceph clients mdss mgrs monitoring mons osds rgws
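
The follow-up above traces this to command syntax; for reference, labels are added and removed one host and one label per invocation (hostname and label are placeholders):
# ceph orch host label add <hostname> <label>
# ceph orch host label rm <hostname> <label>
# ceph orch host ls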

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-22 Thread Adiga, Anantha
t is, in Quincy when you enable Loki and Promtail, to >>view the daemon logs Ceph board pulls in Grafana dashboard. I will let you >>know once that issue is resolved. Regards, Eugen [2] https://docs.ceph.com/en/latest/cephadm/services/monitoring/#using-custom-images >> Thank

[ceph-users] Re: Error while adding host : Error EINVAL: Traceback (most recent call last): File /usr/share/ceph/mgr/mgr_module.py, line 1756, in _handle_command

2023-06-20 Thread Adiga, Anantha
From: Adam King Sent: Tuesday, June 20, 2023 4:25 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Error while adding host : Error EINVAL: Traceback (most recent call last): File /usr/share/ceph/mgr/mgr_module.py, line 1756, in _handle_command There was a cephadm bug

[ceph-users] Error while adding host : Error EINVAL: Traceback (most recent call last): File /usr/share/ceph/mgr/mgr_module.py, line 1756, in _handle_command

2023-06-20 Thread Adiga, Anantha
Hi, I am seeing this error after an offline host was deleted and while adding the host again. Thereafter, I removed the /var/lib/ceph folder and removed the Ceph Quincy image on the offline host. What is the cause of this issue and the solution? root@fl31ca104ja0201:/home/general# cephadm
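
A hedged sketch of cleanly removing an offline host and re-adding it, assuming the cluster's SSH key also needs to be pushed to the reinstalled host (hostname and address are placeholders):
# ceph orch host rm <hostname> --offline --force
# ceph cephadm get-pub-key > ~/ceph.pub
# ssh-copy-id -f -i ~/ceph.pub root@<hostname>
# ceph orch host add <hostname> <ip-address>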

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-17 Thread Adiga, Anantha
Hi Eugen, Thank you for your response, here is the update. The upgrade to Quincy was done following the cephadm orch upgrade procedure: ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.6 The upgrade completed without errors. After the upgrade, upon creating the Grafana service from Ceph

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-05-18 Thread Adiga, Anantha
s ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e-haproxy-nfs-nfs-1-fl31ca104ja0201-zdbzvv From: Ben Sent: Wednesday, May 17, 2023 6:32 PM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Grafana service fails to start due to bad directory name after Quincy upgrade use thi

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-05-17 Thread Adiga, Anantha
67:167 recursively. Then systemctl daemon-reload and restart the service. Good luck. Ben Adiga, Anantha <anantha.ad...@intel.com> wrote on Wed, May 17, 2023 at 03:57: Hi Upgraded from Pacific 16.2.5 to 17.2.6 on May 8th. However, Grafana fails to start due to a bad folder path :/tmp# journalctl

[ceph-users] Grafana service fails to start due to bad directory name after Quincy upgrade

2023-05-16 Thread Adiga, Anantha
Hi, Upgraded from Pacific 16.2.5 to 17.2.6 on May 8th. However, Grafana fails to start due to a bad folder path. :/tmp# journalctl -u ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201 -n 25 -- Logs begin at Sun 2023-05-14 20:05:52 UTC, end at Tue 2023-05-16 19:07:51 UTC. -- May 16
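
A sketch of the recovery steps suggested in the reply above, i.e. fixing ownership of the Grafana data directory and restarting the unit (fsid and hostname are placeholders; 167 is the ceph UID/GID used inside the containers):
# chown -R 167:167 /var/lib/ceph/<fsid>/grafana.<hostname>
# systemctl daemon-reload
# systemctl restart ceph-<fsid>@grafana.<hostname>.service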

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
. rados -p .rgw.root ls --all You should be able to remove those objects from the pool, but be careful to not delete anything you actually need. Zitat von "Adiga, Anantha" : > Hi, > > An existing multisite configuration was removed. But the radosgw > services still see the

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
fl2site2 * Thank you, Anantha From: Danny Webb Sent: Monday, May 8, 2023 10:54 AM To: Adiga, Anantha ; ceph-users@ceph.io Subject: Re: rgw service fails to start with zone not found

[ceph-users] rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
Hi, An existing multisite configuration was removed. But the radosgw services still see the old zone name and fail to start. journalctl -u ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@rgw.default.default.fl31ca104ja0201.ninovs ... May 08 16:10:48 fl31ca104ja0201 bash[3964341]: debug
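
A few read-only checks to compare the zone the running radosgw expects with what is actually left in the period and in the .rgw.root pool (the cleanup suggested in the reply above should only follow after these):
# radosgw-admin zone list
# radosgw-admin zonegroup list
# radosgw-admin period get
# rados -p .rgw.root ls --all
# ceph config dump | grep rgw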

[ceph-users] Re: ceph orch ps mon, mgr, osd shows <unknown> for version, image and container id

2023-03-31 Thread Adiga, Anantha
Thank you so much Adam. I will check into the older release being used and update the ticket. Anantha From: Adam King Sent: Friday, March 31, 2023 5:46 AM To: Adiga, Anantha Cc: ceph-users@ceph.io Subject: Re: [ceph-users] ceph orch ps mon, mgr, osd shows <unknown> for version, image and container

[ceph-users] Re: ceph orch ps mon, mgr, osd shows <unknown> for version, image and container id

2023-03-30 Thread Adiga, Anantha
mgr.zp3110b001a0102 zp3110b001a0102 running 9m ago 8M-- mon.zp3110b001a0101 zp3110b001a0101 running 3m ago 8M

[ceph-users] ceph orch ps shows version, container and image id as unknown

2023-03-27 Thread Adiga, Anantha
Hi, Has anybody noticed this? ceph orch ps shows version, container and image id as unknown only for mon, mgr and osds. Ceph health is OK and all daemons are running fine. cephadm ls shows values for version, container and image id. root@cr21meg16ba0101:~# cephadm shell ceph orch ps Inferring
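
When orch ps metadata goes stale like this while cephadm ls on the host shows correct values, forcing a refresh is a low-risk first step (a sketch; ceph mgr fail restarts the active mgr so it re-gathers daemon metadata):
# ceph orch ps --refresh
# ceph mgr fail
# ceph orch ps --daemon-type mon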

[ceph-users] Re: Creating a role for quota management

2023-03-07 Thread Adiga, Anantha
Thank you Xiubo, will try that option, looks like it is done with the intention to keep it at the client level. Anantha -Original Message- From: Xiubo Li Sent: Tuesday, March 7, 2023 12:44 PM To: Adiga, Anantha ; ceph-users@ceph.io Subject: Re: [ceph-users] Creating a role for quota
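
The excerpt does not show Xiubo's exact suggestion. A common client-level arrangement is to grant a CephFS client the 'p' capability (required for setting quotas and layouts) and manage quotas with extended attributes from a mount (client name, path and size are placeholders):
# ceph fs authorize cephfs client.quota_admin / rwp
# setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/project_dir
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/project_dir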

[ceph-users] Re: Planning: Ceph User Survey 2020

2020-11-27 Thread Adiga, Anantha
Hi Yuval, Your questions have been added. Thank you, Anantha From: Yuval Lifshitz Sent: Wednesday, November 25, 2020 6:30 AM To: Mike Perez Cc: ceph-users ; Adiga, Anantha ; Paul Mezzanini ; Anthony D'Atri Subject: Re: [ceph-users] Planning: Ceph User Survey 2020 Hi Mike, Could we add more