[ceph-users] Ceph Tech Talk: Secure Token Service in the Rados Gateway

2020-08-13 Thread Mike Perez
Hi everyone, join us on August 27th at 17:00 UTC to hear Pritha Srivastava present this month's Ceph Tech Talk: Secure Token Service in the Rados Gateway. The calendar invite and archive can be found here: https://ceph.io/ceph-tech-talks/ If you're interested or know someone who can present

[ceph-users] Re: Ceph Tech Talk: A Different Scale – Running small ceph clusters in multiple data centers by Yuval Freund

2020-08-13 Thread Mike Perez
Here's the recording for July's Ceph Tech Talk. Thanks Yuval! https://www.youtube.com/watch?list=PLrBUGiINAakM36YJiTT0qYepZTVncFDdc&v=XS7jpFxUYQ0&feature=emb_title On 7/6/20 3:16 PM, Mike Perez wrote: Hi everyone, Get ready for July 23rd at 17:00 UTC: another Ceph Tech Talk, but at a different scale:

[ceph-users] Re: osd fast shutdown provokes slow requests

2020-08-13 Thread Dan van der Ster
OK, I just wanted to confirm you hadn't extended osd_heartbeat_grace or similar. On your large cluster, what is the time from stopping an OSD (with fast shutdown enabled) until you see: cluster [DBG] osd.317 reported immediately failed by osd.202 -- dan On Thu, Aug 13, 2020 at 4:38 PM Manuel
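
A minimal sketch of how that interval could be measured, assuming the cluster log lives at /var/log/ceph/ceph.log on a mon host and osd.317 is the OSD being stopped (both are placeholders):

  # on the OSD host: record the stop time, then stop the daemon
  date --utc +%H:%M:%S.%N; systemctl stop ceph-osd@317
  # on a mon host: find when the failure report landed in the cluster log
  grep 'osd.317 reported immediately failed' /var/log/ceph/ceph.log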

[ceph-users] Re: osd fast shutdown provokes slow requests

2020-08-13 Thread Manuel Lausch
Hi Dan, The only settings in my ceph.conf related to down/out and peering are these:
  mon osd down out interval = 1800
  mon osd down out subtree limit = host
  mon osd min down reporters = 3
  mon osd reporter subtree level = host
The cluster has 44 hosts with 24 OSDs each. Manuel On Thu, 13 Aug
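
A quick sketch of how the effective values could be double-checked against a running monitor, assuming the command runs on that mon's host and mon.ceph01 is a placeholder id:

  # ask the running mon which values it is actually using
  ceph daemon mon.ceph01 config get mon_osd_min_down_reporters
  ceph daemon mon.ceph01 config get mon_osd_reporter_subtree_level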

[ceph-users] Re: osd fast shutdown provokes slow requests

2020-08-13 Thread Dan van der Ster
Hi Manuel, Just to clarify -- do you override any of the settings related to peer-down detection? Heartbeat periods or timeouts or min down reporters or anything like that? Cheers, Dan On Thu, Aug 13, 2020 at 3:46 PM Manuel Lausch wrote: > > Hi, > > I investigated another problem with my
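
One way to answer that question, sketched under the assumption that osd.0 runs on the local host (the id is a placeholder):

  # list every setting that differs from the built-in defaults
  ceph daemon osd.0 config diff
  # or filter the full runtime config for the suspects
  ceph daemon osd.0 config show | grep -E 'heartbeat|down_reporters'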

[ceph-users] osd fast shutdown provokes slow requests

2020-08-13 Thread Manuel Lausch
Hi, I investigated another problem with my Nautilus 14.2.11 cluster (seen with 14.2.10 as well). If I stop the OSDs on one node (systemctl stop ceph-osd.target, or shutdown/reboot), it usually takes several seconds until the cluster detects the OSDs as down, and I run into slow requests. I identified the
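
A minimal sketch of how the behaviour might be reproduced and observed (daemon target and output commands as below; the OSD host is a placeholder):

  # check whether fast shutdown is enabled for the OSDs
  ceph config get osd osd_fast_shutdown
  # on the OSD host: stop all OSDs at once
  systemctl stop ceph-osd.target
  # elsewhere: watch how long the OSDs stay "up" and when slow requests appear
  ceph -w
  ceph health detail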

[ceph-users] RBD pool damaged, repair options?

2020-08-13 Thread Robert Sander
Hi, a customer lost 5 OSDs at the same time (and replaced them with new disks before we could do anything…). 4 PGs were incomplete but could be repaired with ceph-objectstore-tool. The cluster itself is healthy again. Now some RBDs are missing. They are still listed in the rbd_directory object
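
For anyone digging into the same situation, a sketch of how the directory object can be inspected directly with rados (the pool name rbd is a placeholder); rbd_directory keeps name_<image> and id_<image-id> omap keys mapping image names to ids and back:

  # dump the omap keys/values of the pool's directory object
  rados -p rbd listomapvals rbd_directory
  # compare against what rbd can still enumerate
  rbd ls rbd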
