[ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Vickey Singh
Hello Community Happy Valentines Day ;-) I need some advice on using extra RAM on my OSD servers to improve Ceph's write performance. I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so assuming the cluster is not recovering, most of the time the system will have at least ~150GB RAM free. A

Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread ceph
It won't be used for writes: writes are synced (meaning: written to disk, now). On 14/02/2016 10:55, Vickey Singh wrote: > Hello Community > > Happy Valentines Day ;-) > > I need some advice on using extra RAM on my OSD servers to improve Ceph's > write performance. > > I have 20 OSD servers each w

Re: [ceph-users] Reducing the impact of OSD restarts (noout ain't uptosnuff)

2016-02-14 Thread Tom Christensen
To be clear, when you are restarting these osds, how many pgs go into peering state? And do they stay there for the full 3 minutes? Certainly I've seen iops drop to zero or near zero when a large number of pgs are peering. It would be wonderful if we could keep iops flowing even when pgs are peeri
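A quick way to see how many pgs are peering at any moment while an osd restarts (standard ceph CLI; the grep filter is just one possibility):

    # one-line summary of all pg states, refreshed during the restart
    watch -n 1 'ceph pg stat'
    # or count the peering pgs directly
    ceph pg dump | grep -c peering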

Re: [ceph-users] OpenStack Developer Summit - Austin

2016-02-14 Thread Danny Al-Gaaf
Hi all, the presentation voting period for the Austin Summit ends on 17th February, 11:59 PST (18th February 7:59 UTC / 08:59 CET). Here is a list of some very interesting and Ceph-related presentation proposals waiting for your vote (shortened URLs point to the OpenStack voting page)! I'm sure eve

Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Somnath Roy
I doubt it will do much good in the case of a 100% write workload. You can tweak your VM dirty ratio settings to help buffered writes, but the downside is that the more data it has to sync (when eventually flushing the dirty buffers), the more spikiness it will induce. The write behavior won't be smoo
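For reference, the VM dirty settings mentioned above are ordinary kernel sysctls; the values below are purely illustrative, not recommendations:

    # /etc/sysctl.d/90-dirty.conf  (illustrative values only)
    vm.dirty_background_ratio = 5   # start background writeback sooner
    vm.dirty_ratio = 10             # cap dirty pages so each eventual sync is smaller
    # apply without a reboot:
    #   sysctl -p /etc/sysctl.d/90-dirty.conf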

[ceph-users] Help: pool not responding

2016-02-14 Thread Mario Giammarco
Hello, I am using ceph hammer under proxmox. I have a working cluster that I have been using for several months. For reasons yet to be discovered, I am now in this situation: HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean; 7 requests are blocked > 32 sec; 1 osds have slow requests pg 0.
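For anyone in the same state, the usual first diagnostic steps look something like this (standard ceph commands; <pgid> is a placeholder for one of the affected pg ids):

    ceph health detail           # names the exact pgs that are incomplete/stuck
    ceph pg dump_stuck inactive  # shows which osds the stuck pgs map to
    ceph pg <pgid> query         # per-pg detail for one of the pgs listed above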

Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Christian Balzer
Hello, As Somnath writes below, RAM will only indirectly benefit writes. But with the right tuning to keep dentry and other FS-related caches in the SLAB, it can help a lot. As will all the really hot objects that get read frequently and still fit in the pagecache of your storage nodes, as anothe
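The dentry/inode cache tuning referred to here is usually done through vfs_cache_pressure; the value shown is an illustrative assumption, not a recommendation:

    # lower than the default of 100 makes the kernel keep dentry/inode caches longer
    sysctl -w vm.vfs_cache_pressure=50
    # see how much RAM the SLAB caches currently use
    grep Slab /proc/meminfo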

Re: [ceph-users] Reducing the impact of OSD restarts (noout ain't uptosnuff)

2016-02-14 Thread Christian Balzer
Hello, Wall of text, paragraphs make for better reading. ^_- On Sun, 14 Feb 2016 06:25:11 -0700 Tom Christensen wrote: > To be clear when you are restarting these osds how many pgs go into > peering state? And do they stay there for the full 3 minutes? > I can't say that with anything resembli

Re: [ceph-users] Help: pool not responding

2016-02-14 Thread Ferhat Ozkasgarli
Hello Mario, This kind of problem usually happens for one of the following reasons: 1-) One of the OSD nodes has a network problem. 2-) Disk failure 3-) Not enough resources on the OSD nodes 4-) Slow OSD disks This has happened to me before. The problem was a faulty network cable. As soon as I replaced the cable, everyt
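One possible quick check for each of the causes listed above (hostnames and device names are placeholders):

    ping -c 5 <other-osd-node>   # 1) network problems between nodes
    smartctl -a /dev/sdX         # 2) disk failure (needs smartmontools)
    free -m; uptime              # 3) memory / load pressure on the osd nodes
    ceph osd perf                # 4) per-osd commit/apply latency; slow disks stand out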

Re: [ceph-users] Help: pool not responding

2016-02-14 Thread koukou73gr
Have you tried restarting osd.0? -K. On 02/14/2016 09:56 PM, Mario Giammarco wrote: > Hello, > I am using ceph hammer under proxmox. > I have a working cluster that I have been using for several months. > For reasons yet to be discovered, I am now in this situation: > > HEALTH_WARN 4 pgs incomplete; 4 pgs st
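How the restart is actually issued depends on the init system; on a hammer-era node it is usually one of the following (assuming the stock init scripts/units):

    service ceph restart osd.0     # sysvinit/upstart
    systemctl restart ceph-osd@0   # systemd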