[ceph-users] Re: How many data disks share one meta disks is better

2021-11-19 Thread norman.kern
Hi Anthony, Thanks for your reply. If the SSD goes down, do I have to rebuild the 3-4 OSDs and rebalance the data on them? On 2021/11/20 at 2:27 PM, Anthony D'Atri wrote: On Nov 19, 2021, at 10:25 PM, norman.kern wrote: Hi guys, I have some SATA SSDs (400G) and HDDs (8T). How many HDDs (data)
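For reference, a minimal sketch of the usual recovery path when a shared DB SSD dies: every OSD that kept its block.db on it has to be removed and redeployed, with the data backfilled from the remaining replicas. OSD IDs and device names below are placeholders, not taken from this thread.
  # mark the affected OSDs out so their PGs backfill onto the rest of the cluster
  ceph osd out 10 11 12 13
  # once the data is safe elsewhere, remove each OSD
  ceph osd purge 10 --yes-i-really-mean-it
  # wipe the old data HDDs and, after swapping the SSD, recreate the OSDs
  ceph-volume lvm zap --destroy /dev/sdb
  ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/sdf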

[ceph-users] How many data disks share one meta disks is better

2021-11-19 Thread norman.kern
Hi guys, I have some SATA SSDs (400G) and HDDs (8T). How many HDDs (data) sharing one SSD (DB) works best? And if the SSD breaks down, will it take down all the OSDs that share it? Waiting for your replies.
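A rough sizing sketch, using the commonly cited guideline that block.db should be on the order of 1%-4% of the data device (the exact ratio depends on the workload; these numbers are only illustrative):
  # 400 GB SSD shared by N x 8 TB HDDs:
  #   N = 3  ->  ~133 GB of DB per OSD  (~1.7% of 8 TB)
  #   N = 4  ->  ~100 GB of DB per OSD  (~1.25% of 8 TB)
  # Metadata-heavy workloads (e.g. RGW) tend to want the larger end of the range,
  # and since every OSD sharing the SSD fails with it, fewer HDDs per SSD also
  # limits the blast radius of a single DB-device failure.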

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Zach Heise (SSCC)
Spot on, Ernesto - my output looks basically identical: curl -kv https://144.92.190.200:8443 * Rebuilt URL to: https://144.92.190.200:8443/ *   Trying 144.92.190.200... * TCP_NODELAY set * Connected to 144.92.190.200 (144.92.190.200) port 8443

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Ernesto Puerta
Hi Zach, I remember the Cherrypy webserver (Cheroot 8.5.1) had a hellish deadlock-like issue not that long ago, but that was already fixed in 8.5.2. Could you please run the same curl command with the "-v" flag to get verbose output? You can
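A minimal sketch of the requested check (the address should be whatever `ceph mgr services` reports for the active manager's dashboard):
  # -k: accept the dashboard's (often self-signed) certificate
  # -v: verbose output, shows where the connection stalls (TCP, TLS, or HTTP)
  # --max-time: give up after 10 seconds instead of hanging forever
  curl -kv --max-time 10 https://<active-mgr-host>:8443/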

[ceph-users] Re: Annoying MDS_CLIENT_RECALL Warning

2021-11-19 Thread Patrick Donnelly
On Fri, Nov 19, 2021 at 2:14 AM 胡 玮文 wrote: > > Thanks Dan, > > I chose one of the stuck clients to investigate, as shown below; it currently > holds ~269700 caps, which is pretty high with no obvious reason. I cannot > understand most of the output, and failed to find any documentation about it. >
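For anyone chasing the same warning, a sketch of how per-client cap counts can be inspected on the active MDS (the daemon name and the jq filter are illustrative, not from the thread):
  # list sessions with their cap counts and the client hostname
  ceph daemon mds.<name> session ls | jq '.[] | {id, num_caps, host: .client_metadata.hostname}'
  # the recall behaviour itself is tunable, e.g.
  ceph config get mds mds_recall_max_caps
  ceph config get mds mds_recall_max_decay_rate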

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Zach Heise (SSCC)
Thanks for writing, Ernesto. Output of ceph mgr services: {     "dashboard": "https://144.92.190.200:8443/",     "prometheus": "http://144.92.190.200:9283/" } The Network tab in dev tools, doing a reload, just results in

[ceph-users] Re: erasure coded pool PG stuck inconsistent on ceph Pacific 15.2.13

2021-11-19 Thread Wesley Dillingham
You may also be able to use an upmap (or the upmap balancer) to help make room on the OSD that is too full. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Fri, Nov 19, 2021 at 1:14 PM Wesley Dillingham wrote: > Okay,
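A sketch of both upmap options mentioned above (PG and OSD IDs are placeholders; upmap requires all clients to be Luminous or newer):
  ceph osd set-require-min-compat-client luminous
  # option 1: let the balancer generate the upmaps
  ceph balancer mode upmap
  ceph balancer on
  # option 2: move a single PG off the too-full OSD by hand
  ceph osd pg-upmap-items <pgid> <full-osd-id> <emptier-osd-id>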

[ceph-users] Re: erasure coded pool PG stuck inconsistent on ceph Pacific 15.2.13

2021-11-19 Thread Wesley Dillingham
Okay, now I see your attachment. The pg is in state: "state": "active+undersized+degraded+remapped+inconsistent+backfill_toofull". The reason it can't scrub or repair is that it's degraded, and further it seems that the cluster doesn't have the space to complete that recovery: "backfill_toofull"
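A sketch of the usual checks and stop-gaps for backfill_toofull (the ratio below is only an example; freeing or adding capacity is the real fix):
  # which OSDs are near the nearfull/backfillfull/full thresholds?
  ceph osd df tree
  ceph osd dump | grep ratio
  # temporarily raise the backfillfull threshold so recovery can proceed
  ceph osd set-backfillfull-ratio 0.92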

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-19 Thread Ernesto Puerta
Hi Zach, Thanks for the thorough description. We haven't noticed this issue so far and have some long-running clusters, but let's try to debug it: - First of all, as Kai suggested, let's ensure we're hitting the active manager address (there's a redirection mechanism, but let's ensure it
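A sketch of that first check (which manager is active and what URL the dashboard module advertises):
  # name of the active mgr and the number of standbys
  ceph mgr stat
  # URL the dashboard (and prometheus) module is actually serving on
  ceph mgr services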

[ceph-users] Re: ceph fs Maximum number of files supported

2021-11-19 Thread Yan, Zheng
On Fri, Nov 19, 2021 at 11:36 AM 飞翔 wrote: > > What is the maximum number of files supported per shared CephFS filesystem? > Who can tell me? > We have an FS containing more than 40 billion small files. When an FS contains this many files, OSD stores can become severely fragmented and cause some issues.
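A sketch of how that fragmentation can be checked per OSD via the admin socket (available on recent BlueStore releases, if I recall correctly; the OSD id is a placeholder):
  # returns a fragmentation rating between 0 (none) and 1 (heavily fragmented)
  ceph daemon osd.0 bluestore allocator score block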

[ceph-users] Re: how many developers are working on ceph?

2021-11-19 Thread Martin Verges
Hello Marc, > 3. someone mentioned the option for paid 'bug' fixing, but I have never heard or seen anything about this here. How would one apply for this? Would be good, but this can only be done by the companies working on Ceph. However, I would vote for hiring devs in the Ceph Foundation and do

[ceph-users] Re: OSD spend too much time on "waiting for readable" -> slow ops -> laggy pg -> rgw stop -> worst case osd restart

2021-11-19 Thread Manuel Lausch
Nice. Just now I am building a 16.2.6 release with this patch and will test it. Thanks, Manuel On Thu, 18 Nov 2021 15:02:38 -0600 Sage Weil wrote: > Okay, good news: on the osd start side, I identified the bug (and easily > reproduced it locally). The tracker and fix are: > >
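For anyone who wants to do the same, a rough sketch of cherry-picking a fix onto the v16.2.6 tag and building from source (the commit hash is whatever the fix's PR points to; the scripts are the ones shipped in the ceph repository):
  git clone https://github.com/ceph/ceph.git && cd ceph
  git checkout -b pacific-patched v16.2.6
  git submodule update --init --recursive
  git cherry-pick <fix-commit-sha>
  ./install-deps.sh
  ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
  cmake --build build -j "$(nproc)"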

[ceph-users] how many developers are working on ceph?

2021-11-19 Thread Marc
The recent discussions made me wonder about: 1. how many paid full time developers are currently working on ceph? 2. how many hours are contributed by the community? 3. someone mentioned the option for paid 'bug' fixing, but I have never heard or seen anything about this here. How would one

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-11-19 Thread Marc
> We are using cephadm and think it is OK. We also use Kubernetes, and some manual "docker run" commands at the same time, on the same set of hosts. They work fine together. I think it should be fine to have multiple OC systems, and take the best of each one. Oh yes, and how do you

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-11-19 Thread 胡 玮文
> On Nov 19, 2021, at 02:51, Marc wrote: >> We also use containers for ceph and love it. If for some reason we >> couldn't run ceph this way any longer, we would probably migrate >> everything to a different solution. We are absolutely committed to >> containerization. > I wonder if you are

[ceph-users] Re: The osd-block* file is gone

2021-11-19 Thread 胡 玮文
That one should be automatically created on boot. If not, you should check whether your disk is broken or not connected, maybe by checking the kernel logs. You can share the output of `lsblk` and `lvs`. From: GHui Sent: November 19, 2021 16:43 To: ceph-users
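A sketch of those checks plus a couple more that are usually helpful here (device names and filters are examples):
  lsblk                                            # is the physical disk visible at all?
  lvs -o lv_name,vg_name,lv_tags | grep -i ceph    # are the ceph LVs and their osd tags still there?
  ceph-volume lvm list                             # what ceph-volume thinks is deployed on this host
  dmesg | grep -iE 'error|fail' | tail -n 20       # any I/O or controller errors in the kernel log?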