Hello Graham,

We hit the same issue after an upgrade from 14.2.16 to 14.2.19. I
tracked it down today and filed a bug report a few hours ago:
https://tracker.ceph.com/issues/50249. The title may need adjusting if
more options than rgw_frontends are affected.
The first Nautilus release I found containing the commit mentioned in
the bug report is 14.2.17.
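
In the meantime, a possible workaround (a sketch on my side, not
verified on your cluster) is to pin the frontend options in the local
ceph.conf on the gateway host, since the daemon still reads that file
at startup even when values from the mon config database are ignored.
The section name must match your daemon id, e.g.:

```shell
# Hypothetical workaround: set rgw_frontends locally instead of relying
# on the mon config database. Adjust the section name to your daemon id.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.rgw.tier2-gw02]
rgw_frontends = beast port=80 ssl_port=443 ssl_certificate=/etc/ceph/civetweb.pem
EOF

# Restart the gateway so it picks up the local config.
systemctl restart ceph-radosgw@rgw.tier2-gw02
```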

Hope this helps.

On Thu, 8 Apr 2021 at 21:00, Graham Allan <g...@umn.edu> wrote:
>
> We just updated one of our ceph clusters from 14.2.15 to 14.2.19, and
> see some unexpected behavior by radosgw - it seems to ignore parameters
> set by the ceph config database. Specifically this is making it start up
> listening only on port 7480, and not the configured 80 and 443 (ssl) ports.
>
> Downgrading ceph on the rgw nodes back to 14.2.15 restores the expected
> behavior (I haven't yet tried any intermediate versions). The host OS is
> CentOS 7, if that matters...
>
> Here's a ceph config dump for one of the affected nodes, along with the
> radosgw startup log:
>
> > # ceph config dump|grep tier2-gw02
> >     client.rgw.tier2-gw02   basic      log_file               /var/log/ceph/radosgw.log   *
> >     client.rgw.tier2-gw02   advanced   rgw_dns_name           s3.msi.umn.edu              *
> >     client.rgw.tier2-gw02   advanced   rgw_enable_usage_log   true
> >     client.rgw.tier2-gw02   basic      rgw_frontends          beast port=80 ssl_port=443 ssl_certificate=/etc/ceph/civetweb.pem   *
> >     client.rgw.tier2-gw02   basic      rgw_thread_pool_size   512
>
>
> > # tail /var/log/ceph/radosgw.log
> > 2021-04-08 11:51:07.956 7f420b78f700 -1 received  signal: Terminated from /usr/lib/systemd/systemd --switched-root --system --deserialize 22  (PID: 1) UID: 0
> > 2021-04-08 11:51:07.956 7f420b78f700  1 handle_sigterm
> > 2021-04-08 11:51:07.956 7f4220bc5900 -1 shutting down
> > 2021-04-08 11:51:07.956 7f420b78f700  1 handle_sigterm set alarm for 120
> > 2021-04-08 11:51:08.010 7f4220bc5900  1 final shutdown
> > 2021-04-08 11:51:08.159 7f2ac6105900  0 deferred set uid:gid to 167:167 (ceph:ceph)
> > 2021-04-08 11:51:08.159 7f2ac6105900  0 ceph version 14.2.19 (bb796b9b5bab9463106022eef406373182465d11) nautilus (stable), process radosgw, pid 88256
> > 2021-04-08 11:51:08.300 7f2ac6105900  0 starting handler: beast
> > 2021-04-08 11:51:08.302 7f2ac6105900  0 set uid:gid to 167:167 (ceph:ceph)
> > 2021-04-08 11:51:08.317 7f2ac6105900  1 mgrc service_daemon_register rgw.tier2-gw02 metadata {arch=x86_64,ceph_release=nautilus,ceph_version=ceph version 14.2.19 (bb796b9b5bab9463106022eef406373182465d11) nautilus (stable),ceph_version_short=14.2.19,cpu=AMD EPYC 7302P 16-Core Processor,distro=centos,distro_description=CentOS Linux 7 (Core),distro_version=7,frontend_config#0=beast port=7480,frontend_type#0=beast,hostname=tier2-gw02.msi.umn.edu,kernel_description=#1 SMP Tue Mar 16 18:28:22 UTC 2021,kernel_version=3.10.0-1160.21.1.el7.x86_64,mem_swap_kb=4194300,mem_total_kb=131754828,num_handles=1,os=Linux,pid=88256,zone_id=default,zone_name=default,zonegroup_id=default,zonegroup_name=default}
>
> BTW I can also change "rgw_frontends" to specify a civetweb frontend
> instead and it will still start the default beast...
>
> I haven't seen anyone else report such a problem so I wonder if this is
> something local to us - like perhaps I'm using "ceph config" incorrectly
> in a way which happened to be accepted before? Has anyone else seen this
> behavior?
>
> Graham
> --
> Graham Allan - g...@umn.edu
> Associate Director of Operations - Minnesota Supercomputing Institute
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io



-- 
Arnaud Lefebvre
Clever Cloud