Re: [ceph-users] Cluster unusable after 50% full, even with index sharding

2018-04-13 Thread Christian Balzer
Hello, On Fri, 13 Apr 2018 11:59:01 -0500 Robert Stanford wrote: > I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals > and spinning disks. Our performance before was acceptable for our purposes > - 300+MB/s simultaneous transmit and receive. Now that we're up to about
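For context on the throughput numbers being discussed, a short rados bench run is the usual way to get a comparable baseline. This is a generic sketch; the pool name is a placeholder, not taken from this thread:
    # write for 60 seconds to a scratch pool, keeping the objects so a read pass can follow
    rados bench -p scratch-pool 60 write --no-cleanup
    # sequential read of the objects just written
    rados bench -p scratch-pool 60 seq
    # remove the benchmark objects afterwards
    rados -p scratch-pool cleanup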

[ceph-users] Cluster unusable after 50% full, even with index sharding

2018-04-13 Thread Robert Stanford
I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals and spinning disks. Our performance before was acceptable for our purposes - 300+MB/s simultaneous transmit and receive. Now that we're up to about 50% of our total storage capacity (65/120TB, say), the write performance i
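A general note on the index sharding mentioned in the subject (not taken from this thread): raising the shard count only affects buckets created afterwards; existing buckets keep their old index layout unless they are resharded. A minimal sketch, assuming a Luminous-era radosgw; the section name and bucket name are examples:
    # ceph.conf on the RGW hosts (section depends on how the radosgw instances are named); new buckets only
    [client.rgw.gateway1]
    rgw_override_bucket_index_max_shards = 64

    # check whether any existing bucket index is over the per-shard object limit
    radosgw-admin bucket limit check
    # per-bucket statistics, including object counts (bucket name is an example)
    radosgw-admin bucket stats --bucket=example-bucket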

Re: [ceph-users] Cluster unusable

2014-12-23 Thread francois.pe...@san-services.com
Hi, I got a recommendation from Stephan to restart the OSDs one by one, so I did. It helped a bit (some IOs completed), but in the end the state was the same as before, and new IOs still hung. Loïc, thanks for the advice on moving osd.0 and osd.4 back into the game. Actually this was d
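For anyone repeating the "restart the OSDs one by one" step, a rough sketch of the usual sequence on a Firefly-era (0.80.x) sysvinit cluster; the daemon id and init syntax are assumptions, adjust to your setup:
    # stop CRUSH from marking restarting OSDs out and triggering rebalancing
    ceph osd set noout
    # restart a single OSD (sysvinit syntax; osd.0 is just an example id)
    service ceph restart osd.0
    # wait until PGs are back to active+clean before touching the next OSD
    ceph -s
    # once all OSDs have been cycled, allow OSDs to be marked out again
    ceph osd unset noout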

Re: [ceph-users] Cluster unusable

2014-12-23 Thread francois.pe...@san-services.com
Here you go: http://www.filedropper.com/cephreport Francois

Re: [ceph-users] Cluster unusable

2014-12-23 Thread francois.pe...@san-services.com
Hi Loïc, Thanks. I'm trying to find where I can make the report available to you.
    [root@qvitblhat06 ~]# ceph report > /tmp/ceph_report
    report 3298035134
    [root@qvitblhat06 ~]# ls -lh /tmp/ceph_report
    -rw-r--r--. 1 root root 4.7M Dec 23 10:38 /tmp/ceph_report
    [root@qvitblhat06 ~]#
(Sorry guys for th
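Since the report turned out to be too large for the list, one option (an assumption on my side; jq is not part of Ceph) is to compress it or pull out just the relevant sections before uploading:
    # the report is plain JSON, so it compresses well
    ceph report > /tmp/ceph_report.json
    # list the top-level sections (e.g. health, osdmap, pgmap) to decide what to share
    jq 'keys' /tmp/ceph_report.json
    gzip -9 /tmp/ceph_report.json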

Re: [ceph-users] Cluster unusable

2014-12-23 Thread Loic Dachary
Hi François, Could you paste the output of ceph report somewhere so we can check the pg dump? (It's probably going to be a little too big for the mailing list.) You can bring osd.0 and osd.4 back into the host to which they belong (instead of being at the root of the crush map) with crush set: http:
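The crush set command Loïc refers to takes the OSD id, its weight, and the target CRUSH location. A minimal sketch, where the weight and host name are placeholders rather than values from this cluster:
    # move osd.0 and osd.4 from the crush root back under their host bucket
    ceph osd crush set osd.0 0.25 root=default host=qvitblhat10
    ceph osd crush set osd.4 0.25 root=default host=qvitblhat10
    # confirm they now appear under the host again
    ceph osd tree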

[ceph-users] Cluster unusable

2014-12-23 Thread Francois Petit
Hi, We use Ceph 0.80.7 for our IceHouse PoC: 3 MONs, 3 OSD nodes (ids 10, 11, 12) with 2 OSDs each, 1.5TB of storage total, 4 pools for RBD, size=2, 512 PGs per pool. Everything was fine until the middle of last week; here's what happened: - OSD node #12 passed away - AFAICR, ceph recovered fine -
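The usual commands for checking whether recovery after a lost OSD node actually completed (generic Ceph CLI, nothing specific to this cluster):
    # overall health and PG state summary
    ceph -s
    # which PGs are degraded, stuck or unclean, and why
    ceph health detail
    # which OSDs are down/out and where they sit in the CRUSH tree
    ceph osd tree
    # PGs stuck inactive, unclean or stale
    ceph pg dump_stuck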