[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-07 Thread Eugen Block

Hi,

I don't really know how the IP address is determined by the mgr, I  
only remember that there was a change introduced in 16.2.11 [1] to use  
the hostname instead of an IP address. In a 16.2.9 cluster I have all  
storage nodes (including rgw) configured with multiple ip addresses,  
and it chooses the right one although the only information pointing to  
that is in the zonegroup endpoints. I'm wondering if an upgrade to  
16.2.11 would break that in this cluster, too.


The output shows that all daemons are configured. I would also like to know  
whether it is possible to remove those RGWs and redeploy  
them, to see if anything changes.


I don't know how to do that with ansible, but I'm sure there are  
plenty of ansible users here who could chime in. But I would assume  
that it's in the ceph-ansible docs somewhere.


Regards,
Eugen

[1] https://tracker.ceph.com/issues/56970
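
For readers following along, the change tracked in [1] can be illustrated with a small sketch. This is not the actual mgr code; the function name and the selection logic are assumptions for illustration, and only the metadata fields mirror real 'ceph service dump' output:

```python
# Hypothetical sketch of the behavior discussed above: before 16.2.11 the
# dashboard probed the IP taken from the daemon's frontend config; from
# 16.2.11 it prefers the "hostname" field of the service-map metadata.
# The field names mirror 'ceph service dump'; the logic is an assumption.

def rgw_probe_target(metadata: dict, prefer_hostname: bool) -> str:
    """Return the host:port the dashboard would contact for this daemon."""
    # frontend_config#0 looks like: "beast endpoint=10.10.110.199:8080"
    frontend = metadata["frontend_config#0"]
    endpoint = frontend.split("endpoint=", 1)[1].split()[0]
    ip, port = endpoint.rsplit(":", 1)
    if prefer_hostname:
        return f"{metadata['hostname']}:{port}"  # 16.2.11-style
    return f"{ip}:{port}"                        # pre-16.2.11-style

meta = {
    "frontend_config#0": "beast endpoint=10.10.110.199:8080",
    "hostname": "ceph-osd1",
}
print(rgw_probe_target(meta, prefer_hostname=False))  # 10.10.110.199:8080
print(rgw_probe_target(meta, prefer_hostname=True))   # ceph-osd1:8080
```

On a multi-homed host the two results can point at different interfaces, which is exactly the failure mode discussed below.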

Zitat von Gilles Mocellin :


Le 2023-02-06 14:11, Eugen Block a écrit :

What does the active mgr log when you try to access the dashboard?
Please paste your rgw config settings as well.


Ah, sorry to hijack, but I also can't access the Object Storage menus in  
the Dashboard since upgrading from 16.2.10 to 16.2.11.


Here are the MGR logs:

fcadmin@fidcllabs-oct-01:~$ sudo grep 8080 /var/log/ceph/ceph-mgr.fidcllabs-oct-01.log
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-03, port=8080, ssl=False
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-01, port=8080, ssl=False
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-02, port=8080, ssl=False
2023-02-06T08:12:58.275+0000 7ffad4910700  0 [dashboard ERROR rest_client] RGW REST API failed GET, connection error (url=http://fidcllabs-oct-03:8080/admin/metadata/user?myself): [errno: 111] Connection refused
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='fidcllabs-oct-03', port=8080): Max retries exceeded with url: /admin/metadata/user?myself (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffac1e75160>: Failed to establish a new connection: [Errno 111] Connection refused',))
requests.exceptions.ConnectionError: HTTPConnectionPool(host='fidcllabs-oct-03', port=8080): Max retries exceeded with url: /admin/metadata/user?myself (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffac1e75160>: Failed to establish a new connection: [Errno 111] Connection refused',))


The hostname is resolvable, but to an IP on the management network, not  
the one my RGW endpoints listen on (the public network).
In other clusters still on 16.2.10, I see IPs in the corresponding  
logs, not the hostname.


--
Gilles
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io





[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Gilles Mocellin

Le 2023-02-06 14:11, Eugen Block a écrit :

What does the active mgr log when you try to access the dashboard?
Please paste your rgw config settings as well.


Ah, sorry to hijack, but I also can't access the Object Storage menus in the 
Dashboard since upgrading from 16.2.10 to 16.2.11.


Here are the MGR logs:

fcadmin@fidcllabs-oct-01:~$ sudo grep 8080 /var/log/ceph/ceph-mgr.fidcllabs-oct-01.log
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-03, port=8080, ssl=False
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-01, port=8080, ssl=False
2023-02-06T08:12:58.179+0000 7ffad4910700  0 [dashboard INFO rgw_client] Found RGW daemon with configuration: host=fidcllabs-oct-02, port=8080, ssl=False
2023-02-06T08:12:58.275+0000 7ffad4910700  0 [dashboard ERROR rest_client] RGW REST API failed GET, connection error (url=http://fidcllabs-oct-03:8080/admin/metadata/user?myself): [errno: 111] Connection refused
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='fidcllabs-oct-03', port=8080): Max retries exceeded with url: /admin/metadata/user?myself (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffac1e75160>: Failed to establish a new connection: [Errno 111] Connection refused',))
requests.exceptions.ConnectionError: HTTPConnectionPool(host='fidcllabs-oct-03', port=8080): Max retries exceeded with url: /admin/metadata/user?myself (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffac1e75160>: Failed to establish a new connection: [Errno 111] Connection refused',))


The hostname is resolvable, but to an IP on the management network, not the 
one my RGW endpoints listen on (the public network).
In other clusters still on 16.2.10, I see IPs in the corresponding logs, 
not the hostname.
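
The mismatch described above can be checked mechanically. Here is a minimal sketch (not part of Ceph; the function name and the addresses are made up for illustration) that verifies whether a hostname resolves to the IP an RGW endpoint actually listens on:

```python
import socket

def resolves_to_rgw_endpoint(hostname: str, rgw_ip: str,
                             resolver=socket.gethostbyname) -> bool:
    """True if 'hostname' resolves to the IP the RGW frontend listens on.

    On multi-homed hosts (management + public network) this is exactly the
    check that fails in the situation described above: the name resolves,
    but to the wrong interface, so the dashboard gets 'connection refused'.
    """
    try:
        return resolver(hostname) == rgw_ip
    except OSError:
        return False

# Simulate a host whose name resolves to its management IP, while the RGW
# frontend listens on a public-network IP (both addresses are made up):
fake_dns = {"fidcllabs-oct-03": "192.168.1.13"}.get
print(resolves_to_rgw_endpoint("fidcllabs-oct-03", "10.0.0.13",
                               resolver=fake_dns))  # False
```

With the real `socket.gethostbyname` resolver this turns into a quick sanity check one could run on the mgr host.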


--
Gilles


[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Michel Niyoyita
Hello Eugen,

The output shows that all daemons are configured. I would also like to know
whether it is possible to remove those RGWs and redeploy them, to see if
anything changes.

root@ceph-mon1:~# ceph service dump
{
    "epoch": 1740,
    "modified": "2023-02-06T15:21:42.235595+0200",
    "services": {
        "rgw": {
            "daemons": {
                "summary": "",
                "479626": {
                    "start_epoch": 1265,
                    "start_stamp": "2023-02-03T11:41:58.680359+0200",
                    "gid": 479626,
                    "addr": "10.10.110.199:0/1880864062",
                    "metadata": {
                        "arch": "x86_64",
                        "ceph_release": "pacific",
                        "ceph_version": "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)",
                        "ceph_version_short": "16.2.11",
                        "cpu": "Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz",
                        "distro": "ubuntu",
                        "distro_description": "Ubuntu 20.04.5 LTS",
                        "distro_version": "20.04",
                        "frontend_config#0": "beast endpoint=10.10.110.199:8080",
                        "frontend_type#0": "beast",
                        "hostname": "ceph-osd1",
                        "id": "ceph-osd1.rgw0",
                        "kernel_description": "#154-Ubuntu SMP Thu Jan 5 17:03:22 UTC 2023",
                        "kernel_version": "5.4.0-137-generic",
                        "mem_swap_kb": "8388604",
                        "mem_total_kb": "263556752",
                        "num_handles": "1",
                        "os": "Linux",
                        "pid": "47369",
                        "realm_id": "",
                        "realm_name": "",
                        "zone_id": "689f9b30-4380-439e-8e7c-3c2046079a2b",
                        "zone_name": "default",
                        "zonegroup_id": "c2d060fe-bd6c-4bfb-a0cd-596124765015",
                        "zonegroup_name": "default"
                    },
                    "task_status": {}
                },
                "489542": {
                    "start_epoch": 1267,
                    "start_stamp": "2023-02-03T11:42:30.711278+0200",
                    "gid": 489542,
                    "addr": "10.10.110.200:0/3909810130",
                    "metadata": {
                        "arch": "x86_64",
                        "ceph_release": "pacific",
                        "ceph_version": "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)",
                        "ceph_version_short": "16.2.11",
                        "cpu": "Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz",
                        "distro": "ubuntu",
                        "distro_description": "Ubuntu 20.04.5 LTS",
                        "distro_version": "20.04",
                        "frontend_config#0": "beast endpoint=10.10.110.200:8080",
                        "frontend_type#0": "beast",
                        "hostname": "ceph-osd2",
                        "id": "ceph-osd2.rgw0",
                        "kernel_description": "#154-Ubuntu SMP Thu Jan 5 17:03:22 UTC 2023",
                        "kernel_version": "5.4.0-137-generic",
                        "mem_swap_kb": "8388604",
                        "mem_total_kb": "263556752",
                        "num_handles": "1",
                        "os": "Linux",
                        "pid": "392257",
                        "realm_id": "",
                        "realm_name": "",
                        "zone_id": "689f9b30-4380-439e-8e7c-3c2046079a2b",
                        "zone_name": "default",
                        "zonegroup_id": "c2d060fe-bd6c-4bfb-a0cd-596124765015",
                        "zonegroup_name": "default"
                    },
                    "task_status": {}
                },
                "489605": {
                    "start_epoch": 1268,
                    "start_stamp": "2023-02-03T11:42:58.724973+0200",
                    "gid": 489605,
                    "addr": "10.10.110.201:0/59797695",
                    "metadata": {
                        "arch": "x86_64",
                        "ceph_release": "pacific",
                        "ceph_version": "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)",
                        "ceph_version_short": "16.2.11",
                        "cpu": "Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz",
                        "distro": "ubuntu",
                        "distro_description": "Ubuntu 20.04.5 LTS",
                        "distro_version": "20.04",
                        "frontend_config#0": "beast endpoint=10.10.110.201:8080",
                        "frontend_type#0": "beast",
                        "hostname": "ceph-osd3",
                        "id": 

[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Michel Niyoyita
Hello Eugen

Below are the RGW configs, and the logs while I am accessing the dashboard:

root@ceph-mon1:/var/log/ceph# tail -f /var/log/ceph/ceph-mgr.ceph-mon1.log
2023-02-06T15:25:30.037+0200 7f68b15cd700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:25:30] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:25:45.033+0200 7f68b0dcc700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.133 - -
[06/Feb/2023:15:25:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:25:45.033+0200 7f68b1dce700  0 [prometheus INFO
cherrypy.access.140087714875184] :::127.0.0.1 - -
[06/Feb/2023:15:25:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:25:45.037+0200 7f68b35d1700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:25:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:00.033+0200 7f68b3dd2700  0 [prometheus INFO
cherrypy.access.140087714875184] :::127.0.0.1 - -
[06/Feb/2023:15:26:00] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:00.033+0200 7f68b25cf700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.133 - -
[06/Feb/2023:15:26:00] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:00.037+0200 7f68b2dd0700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:26:00] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:15.033+0200 7f68afdca700  0 [prometheus INFO
cherrypy.access.140087714875184] :::127.0.0.1 - -
[06/Feb/2023:15:26:15] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:15.033+0200 7f68b45d3700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.133 - -
[06/Feb/2023:15:26:15] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:15.037+0200 7f68b05cb700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:26:15] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:30.033+0200 7f68b15cd700  0 [prometheus INFO
cherrypy.access.140087714875184] :::127.0.0.1 - -
[06/Feb/2023:15:26:30] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:30.033+0200 7f68b0dcc700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.133 - -
[06/Feb/2023:15:26:30] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:30.037+0200 7f68b1dce700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:26:30] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:45.033+0200 7f68b35d1700  0 [prometheus INFO
cherrypy.access.140087714875184] :::127.0.0.1 - -
[06/Feb/2023:15:26:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:45.033+0200 7f68b3dd2700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.133 - -
[06/Feb/2023:15:26:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"
2023-02-06T15:26:45.037+0200 7f68b25cf700  0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:26:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.7.2"



[mons]
ceph-mon1
ceph-mon2
ceph-mon3

[osds]
ceph-osd1
ceph-osd2
ceph-osd3

[mgrs]
ceph-mon1
ceph-mon2
ceph-mon3

[grafana-server]
ceph-mon1
ceph-mon2
ceph-mon3

[rgws]
ceph-osd1
ceph-osd2
ceph-osd3
ceph-mon1
ceph-mon2
ceph-mon3

[rgwloadbalancers]
ceph-osd1
ceph-osd2
ceph-osd3
ceph-mon1
ceph-mon2
ceph-mon3


ceph.conf:

[client]
rbd_default_features = 1

[client.rgw.ceph-mon1.rgw0]
host = ceph-mon1
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-mon1.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-ceph-mon1.rgw0.log
rgw frontends = beast endpoint=10.10.110.198:8080
rgw frontends = beast endpoint=10.10.110.196:8080
rgw thread pool size = 512

[client.rgw.ceph-osd1]
rgw_dns_name = ceph-osd1

[client.rgw.ceph-osd2]
rgw_dns_name = ceph-osd2

[client.rgw.ceph-osd3]
rgw_dns_name = ceph-osd3

# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster network = 10.10.110.128/26
fsid = cb0caedc-eb5b-42d1-a34f-96facfda8c27
mon host =
mon initial members = ceph-mon1,ceph-mon2,ceph-mon3
mon_allow_pool_delete = True
mon_max_pg_per_osd = 400
osd pool default crush rule = -1
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public network =
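
One thing to double-check in the ceph.conf above: the [client.rgw.ceph-mon1.rgw0] section sets 'rgw frontends' twice, once per endpoint. In most INI-style parsers only one of the duplicate values takes effect, usually the last one; whether Ceph's own config parser behaves the same way is worth verifying. Python's configparser illustrates the common behavior (the snippet is only a stand-in for the real file):

```python
import configparser

snippet = """
[client.rgw.ceph-mon1.rgw0]
rgw frontends = beast endpoint=10.10.110.198:8080
rgw frontends = beast endpoint=10.10.110.196:8080
"""

# strict=False tolerates the duplicate option; the later value overrides
# the earlier one, so only the .196 endpoint survives parsing.
parser = configparser.ConfigParser(strict=False)
parser.read_string(snippet)
print(parser["client.rgw.ceph-mon1.rgw0"]["rgw frontends"])
# beast endpoint=10.10.110.196:8080
```

If the intent was to listen on both addresses, a single 'rgw frontends' line with both endpoints (or one section per daemon) would avoid the ambiguity.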


Best Regards








On Mon, Feb 6, 2023 at 3:13 PM Eugen Block  wrote:

> What does the active mgr log when you try to access the dashboard?
> Please paste your rgw config settings as well.
>
> Zitat von Michel Niyoyita :
>
> > Hello Robert
> >
> > below is the output of ceph versions command
> >
> > root@ceph-mon1:~# ceph versions
> > {
> > "mon": {
> > "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894)
> > 

[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Eugen Block
Just a quick edit: what does the active mgr log when you try to access  
the rgw page in the dashboard?


With 'ceph service dump' you can see the rgw daemons that are  
registered to the mgr. If the daemons are not shown in the dashboard  
you'll have to check the active mgr logs for errors or hints.
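
As a convenience, the registered RGW daemons can be pulled out of the JSON form of that command. A small sketch follows; the helper name is made up, and the field layout mirrors the 'ceph service dump' output shown elsewhere in this thread:

```python
import json
import subprocess

def rgw_daemons(service_dump: dict) -> list:
    """Extract (gid, hostname, addr) for each RGW daemon in a service dump."""
    daemons = service_dump["services"].get("rgw", {}).get("daemons", {})
    return [
        (gid, d["metadata"]["hostname"], d["addr"])
        for gid, d in daemons.items()
        if isinstance(d, dict)  # skip the "summary" string entry
    ]

# On a live cluster you would feed it real output, e.g.:
#   dump = json.loads(subprocess.check_output(
#       ["ceph", "service", "dump", "-f", "json"]))
sample = {"services": {"rgw": {"daemons": {
    "summary": "",
    "479626": {"addr": "10.10.110.199:0/1880864062",
               "metadata": {"hostname": "ceph-osd1"}},
}}}}
print(rgw_daemons(sample))
```

Comparing that list against what the dashboard reports is a quick way to see whether the mgr and the dashboard disagree.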


Zitat von Eugen Block :

What does the active mgr log when you try to access the dashboard?  
Please paste your rgw config settings as well.


Zitat von Michel Niyoyita :


Hello Robert

below is the output of ceph versions command

root@ceph-mon1:~# ceph versions
{
    "mon": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 48
    },
    "mds": {},
    "rgw": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 6
    },
    "overall": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 60
    }
}
root@ceph-mon1:~#

Best Regards

Michel

On Mon, Feb 6, 2023 at 2:57 PM Robert Sander 
wrote:


On 06.02.23 13:48, Michel Niyoyita wrote:


root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
(stable)


This is the version of the command line tool "ceph".

Please run "ceph versions" to show the version of the running Ceph daemons.

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Eugen Block
What does the active mgr log when you try to access the dashboard?  
Please paste your rgw config settings as well.


Zitat von Michel Niyoyita :


Hello Robert

below is the output of ceph versions command

root@ceph-mon1:~# ceph versions
{
    "mon": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 48
    },
    "mds": {},
    "rgw": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 6
    },
    "overall": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 60
    }
}
root@ceph-mon1:~#

Best Regards

Michel

On Mon, Feb 6, 2023 at 2:57 PM Robert Sander 
wrote:


On 06.02.23 13:48, Michel Niyoyita wrote:

> root@ceph-mon1:~# ceph -v
> ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
> (stable)

This is the version of the command line tool "ceph".

Please run "ceph versions" to show the version of the running Ceph daemons.

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Michel Niyoyita
Hello Robert

below is the output of ceph versions command

root@ceph-mon1:~# ceph versions
{
    "mon": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 48
    },
    "mds": {},
    "rgw": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 6
    },
    "overall": {
        "ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)": 60
    }
}
root@ceph-mon1:~#

Best Regards

Michel

On Mon, Feb 6, 2023 at 2:57 PM Robert Sander 
wrote:

> On 06.02.23 13:48, Michel Niyoyita wrote:
>
> > root@ceph-mon1:~# ceph -v
> > ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
> > (stable)
>
> This is the version of the command line tool "ceph".
>
> Please run "ceph versions" to show the version of the running Ceph daemons.
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Linux: Akademie - Support - Hosting
> http://www.heinlein-support.de
>
> Tel: 030-405051-43
> Fax: 030-405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Robert Sander

On 06.02.23 13:48, Michel Niyoyita wrote:


root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
(stable)


This is the version of the command line tool "ceph".

Please run "ceph versions" to show the version of the running Ceph daemons.

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Michel Niyoyita
Hello Eugen,

below is the Version of Ceph I am running

root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
(stable)
root@ceph-mon1:~# ceph orch ls rgw --export --format yaml
Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
root@ceph-mon1:~#


I tried 'ceph orch set backend' as well, but nothing changed.

Best Regards

On Mon, Feb 6, 2023 at 2:37 PM Eugen Block  wrote:

> Please send responses to the mailing-list.
>
> If the orchestrator is available, please share also this output (mask
> sensitive data):
>
> ceph orch ls rgw --export --format yaml
>
> Which ceph version is this? The command 'ceph dashboard
> get-rgw-api-host' was removed between Octopus and Pacific, that's why
> I asked for your ceph version.
>
> I also forgot that mgr/dashboard/RGW_API_HOST was used until Octopus,
> in Pacific it's not applied anymore. I'll need to check how it is
> determined now.
>
> Zitat von Michel Niyoyita :
>
> > Hello Eugen,
> >
> > Thanks for your reply ,
> >
> > I tried the command you shared, but it produced no output.
> >
> > root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/RGW_API_HOST
> > root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/
> > mgr  advanced  mgr/dashboard/ALERTMANAGER_API_HOST   http://10.10.110.196:9093   *
> > mgr  advanced  mgr/dashboard/GRAFANA_API_PASSWORD                                *
> > mgr  advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                       *
> > mgr  advanced  mgr/dashboard/GRAFANA_API_URL         https://10.10.110.198:3000  *
> > mgr  advanced  mgr/dashboard/GRAFANA_API_USERNAME    admin                       *
> > mgr  advanced  mgr/dashboard/PROMETHEUS_API_HOST     http://10.10.110.196:9092   *
> > mgr  advanced  mgr/dashboard/RGW_API_ACCESS_KEY                                  *
> > mgr  advanced  mgr/dashboard/RGW_API_SECRET_KEY                                  *
> > mgr  advanced  mgr/dashboard/RGW_API_SSL_VERIFY      false                       *
> > mgr  advanced  mgr/dashboard/ceph-mon1/server_addr   10.10.110.196               *
> > mgr  advanced  mgr/dashboard/ceph-mon2/server_addr   10.10.110.197               *
> > mgr  advanced  mgr/dashboard/ceph-mon3/server_addr   10.10.110.198               *
> > mgr  advanced  mgr/dashboard/motd                    {"message": "WELCOME TO AOS ZONE 3 STORAGE CLUSTER", "md5": "87149a6798ce42a7e990bc8584a232cd", "severity": "info", "expires": ""}  *
> > mgr  advanced  mgr/dashboard/server_port             8443                        *
> > mgr  advanced  mgr/dashboard/ssl                     true                        *
> > mgr  advanced  mgr/dashboard/ssl_server_port         8443
> >
> >
> > As for the second command, it seems to be invalid:
> >
> > root@ceph-mon1:~# ceph dashboard get-rgw-api-host
> > no valid command found; 10 closest matches:
> > dashboard set-jwt-token-ttl 
> > dashboard get-jwt-token-ttl
> > dashboard create-self-signed-cert
> > dashboard grafana dashboards update
> > dashboard get-account-lockout-attempts
> > dashboard set-account-lockout-attempts 
> > dashboard reset-account-lockout-attempts
> > dashboard get-alertmanager-api-host
> > dashboard set-alertmanager-api-host 
> > dashboard reset-alertmanager-api-host
> > Error EINVAL: invalid command
> > root@ceph-mon1:~#
> >
> >
> > Kindly check the output .
> >
> > Best Regards
> >
> > Michel
> >
> > On Mon, Feb 6, 2023 at 2:06 PM Eugen Block  wrote:
> >
> >> Hi,
> >>
> >> can you paste the output of:
> >>
> >> ceph config dump | grep mgr/dashboard/RGW_API_HOST
> >>
> >> Does it match your desired setup? Depending on the ceph version (and
> >> how ceph-ansible deploys the services) you could also check:
> >>
> >> ceph dashboard get-rgw-api-host
> >>
> >> I'm not familiar with ceph-ansible, but if you shared your rgw
> >> definitions and the respective ceph output we might be able to assist
> >> resolving this.
> >>
> >> Regards,
> >> Eugen
> >>
> >> Zitat von Michel Niyoyita :
> >>
> >> > Hello team,
> >> >
> >> > I have a ceph cluster deployed using ceph-ansible , running on ubuntu
> >> 20.04
> >> > OS which have 6 hosts , 3 hosts for OSD  and 3 hosts used as monitors
> and
> >> > managers , I have deployed RGW on all those hosts  and
> RGWLOADBALENCER on
> >> > top of them , for testing purpose , 

[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Eugen Block

Please send responses to the mailing-list.

If the orchestrator is available, please share also this output (mask  
sensitive data):


ceph orch ls rgw --export --format yaml

Which ceph version is this? The command 'ceph dashboard  
get-rgw-api-host' was removed between Octopus and Pacific, that's why  
I asked for your ceph version.


I also forgot that mgr/dashboard/RGW_API_HOST was used until Octopus,  
in Pacific it's not applied anymore. I'll need to check how it is  
determined now.


Zitat von Michel Niyoyita :


Hello Eugen,

Thanks for your reply ,

I tried the command you shared, but it produced no output.

root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/RGW_API_HOST
root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/
mgr  advanced  mgr/dashboard/ALERTMANAGER_API_HOST   http://10.10.110.196:9093   *
mgr  advanced  mgr/dashboard/GRAFANA_API_PASSWORD                                *
mgr  advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                       *
mgr  advanced  mgr/dashboard/GRAFANA_API_URL         https://10.10.110.198:3000  *
mgr  advanced  mgr/dashboard/GRAFANA_API_USERNAME    admin                       *
mgr  advanced  mgr/dashboard/PROMETHEUS_API_HOST     http://10.10.110.196:9092   *
mgr  advanced  mgr/dashboard/RGW_API_ACCESS_KEY                                  *
mgr  advanced  mgr/dashboard/RGW_API_SECRET_KEY                                  *
mgr  advanced  mgr/dashboard/RGW_API_SSL_VERIFY      false                       *
mgr  advanced  mgr/dashboard/ceph-mon1/server_addr   10.10.110.196               *
mgr  advanced  mgr/dashboard/ceph-mon2/server_addr   10.10.110.197               *
mgr  advanced  mgr/dashboard/ceph-mon3/server_addr   10.10.110.198               *
mgr  advanced  mgr/dashboard/motd                    {"message": "WELCOME TO AOS ZONE 3 STORAGE CLUSTER", "md5": "87149a6798ce42a7e990bc8584a232cd", "severity": "info", "expires": ""}  *
mgr  advanced  mgr/dashboard/server_port             8443                        *
mgr  advanced  mgr/dashboard/ssl                     true                        *
mgr  advanced  mgr/dashboard/ssl_server_port         8443


As for the second command, it seems to be invalid:

root@ceph-mon1:~# ceph dashboard get-rgw-api-host
no valid command found; 10 closest matches:
dashboard set-jwt-token-ttl 
dashboard get-jwt-token-ttl
dashboard create-self-signed-cert
dashboard grafana dashboards update
dashboard get-account-lockout-attempts
dashboard set-account-lockout-attempts 
dashboard reset-account-lockout-attempts
dashboard get-alertmanager-api-host
dashboard set-alertmanager-api-host 
dashboard reset-alertmanager-api-host
Error EINVAL: invalid command
root@ceph-mon1:~#


Kindly check the output.

Best Regards

Michel

On Mon, Feb 6, 2023 at 2:06 PM Eugen Block  wrote:


Hi,

can you paste the output of:

ceph config dump | grep mgr/dashboard/RGW_API_HOST

Does it match your desired setup? Depending on the ceph version (and
how ceph-ansible deploys the services) you could also check:

ceph dashboard get-rgw-api-host

I'm not familiar with ceph-ansible, but if you shared your rgw
definitions and the respective ceph output we might be able to assist
resolving this.

Regards,
Eugen

Zitat von Michel Niyoyita :

> Hello team,
>
> I have a Ceph cluster deployed with ceph-ansible, running on Ubuntu 20.04,
> with 6 hosts: 3 hosts for OSDs and 3 used as monitors and managers. I have
> deployed RGW on all of those hosts, with the RGW load balancer on top of
> them. For testing purposes I switched off one OSD to check that the rest
> kept working. The test went as expected, but unfortunately, after bringing
> the OSD back, the RGW failed to connect through the dashboard. Below is
> the message:
> The Object Gateway Service is not configured. Error connecting to Object
> Gateway. Please consult the documentation
> <https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-the-object-gateway-management-frontend>
> on how to configure and enable the Object Gateway management functionality.
>
> I would like to ask how to solve this issue, or how I can completely
> remove RGW and redeploy it afterwards.
>
>
> root@ceph-mon1:~# ceph -s
>   cluster:
> id: cb0caedc-eb5b-42d1-a34f-96facfda8c27
> health: HEALTH_OK
>
>   services:
> mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 72m)
> mgr: ceph-mon2(active, since 71m), 

[ceph-users] Re: Removing Rados Gateway in ceph cluster

2023-02-06 Thread Eugen Block

Hi,

can you paste the output of:

ceph config dump | grep mgr/dashboard/RGW_API_HOST

Does it match your desired setup? Depending on the ceph version (and  
how ceph-ansible deploys the services) you could also check:


ceph dashboard get-rgw-api-host

I'm not familiar with ceph-ansible, but if you shared your rgw  
definitions and the respective ceph output we might be able to assist  
resolving this.


Regards,
Eugen

Zitat von Michel Niyoyita :


Hello team,

I have a Ceph cluster deployed with ceph-ansible, running on Ubuntu 20.04,
with 6 hosts: 3 hosts for OSDs and 3 used as monitors and managers. I have
deployed RGW on all of those hosts, with the RGW load balancer on top of
them. For testing purposes I switched off one OSD to check that the rest
kept working. The test went as expected, but unfortunately, after bringing
the OSD back, the RGW failed to connect through the dashboard. Below is the
message:
The Object Gateway Service is not configured. Error connecting to Object
Gateway. Please consult the documentation
<https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-the-object-gateway-management-frontend>
on how to configure and enable the Object Gateway management functionality.

I would like to ask how to solve this issue, or how I can completely remove
RGW and redeploy it afterwards.


root@ceph-mon1:~# ceph -s
  cluster:
id: cb0caedc-eb5b-42d1-a34f-96facfda8c27
health: HEALTH_OK

  services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 72m)
mgr: ceph-mon2(active, since 71m), standbys: ceph-mon3, ceph-mon1
osd: 48 osds: 48 up (since 79m), 48 in (since 3d)
rgw: 6 daemons active (6 hosts, 1 zones)

  data:
pools:   9 pools, 257 pgs
objects: 59.49k objects, 314 GiB
usage:   85 TiB used, 348 TiB / 433 TiB avail
pgs: 257 active+clean

  io:
client:   2.0 KiB/s wr, 0 op/s rd, 0 op/s wr

Kindly help

Best Regards

Michel


