Memstore performance improvements v0.90 vs v0.87

2015-01-13 Thread Blinick, Stephen L
In the process of moving to a new cluster (RHEL7 based) I grabbed v0.90, compiled RPMs, and re-ran the simple local-node memstore test I've run on v0.80 - v0.87. It's a single Memstore OSD and a single Rados Bench client running locally on the same node, increasing the queue depth and measuring latency/IOPS.
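
(For anyone reproducing a similar sweep, a minimal sketch of the client side follows; the pool name, runtime, block size, and queue-depth values are assumptions, and the memstore OSD is assumed to be up already.)

    ceph osd pool create bench 128
    for qd in 1 2 4 8 16 32 64 128; do
        # -t sets the number of concurrent ops, i.e. the client queue depth
        rados bench -p bench 60 write -b 4096 -t $qd
    done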

Re: MDS has inconsistent performance

2015-01-13 Thread Michael Sevilla
On Tue, Jan 13, 2015 at 11:13 AM, Gregory Farnum wrote: > On Mon, Jan 12, 2015 at 10:17 PM, Michael Sevilla > wrote: >> I can't get consistent performance with 1 MDS. I have 2 clients create >> 100,000 files (separate directories) in a CephFS mount. I ran the >> experiment 5 times (deleting the p
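
(A hypothetical reproduction of the per-client create workload described above; the mount point, directory name, and per-client file count are assumptions.)

    mkdir -p /mnt/cephfs/client1 && cd /mnt/cephfs/client1
    N=100000                      # file count per client is an assumption
    for i in $(seq 1 $N); do
        touch file$i              # empty files, one directory per client
    done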

Re: MDS has inconsistent performance

2015-01-13 Thread Gregory Farnum
On Mon, Jan 12, 2015 at 10:17 PM, Michael Sevilla wrote: > I can't get consistent performance with 1 MDS. I have 2 clients create > 100,000 files (separate directories) in a CephFS mount. I ran the > experiment 5 times (deleting the pools/fs and restarting the MDS in > between each run). I graphed

Re: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Loic Dachary
Hi, On 13/01/2015 18:05, Miyamae, Takeshi wrote: > Hi Loic, > > Thank you for your quick review. > >> Could you reference the jerasure files (they are in the jerasure plugin >> already) >> instead of including them in your patch? > > We had referenced the jerasure v2.0 files before, bu

RE: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Miyamae, Takeshi
Hi Loic, Thank you for your quick review. > Could you reference the jerasure files (they are in the jerasure plugin > already) > instead of including them in your patch? We had referenced the jerasure v2.0 files before, but changed to including the v1.2 files after the patent issue. Howeve

[PATCH v3] rbd: convert to blk-mq

2015-01-13 Thread Christoph Hellwig
This converts the rbd driver to use the blk-mq infrastructure. Except for switching to a per-request work item this is almost mechanical. This was tested by Alexandre DERUMIER in November, and found to give him 12 iops, although the only comparison available was an old 3.10 kernel which gave 8

Re: [PATCH v2] rbd: convert to blk-mq

2015-01-13 Thread Christoph Hellwig
On Mon, Jan 12, 2015 at 08:10:48PM +0300, Ilya Dryomov wrote: > Why is this call here? Why not above or below? I doubt it makes much > difference, but from a clarity standpoint at least, shouldn't it be > placed after all the checks and allocations, say before the call to > rbd_img_request_submit

Re: New Defects reported by Coverity Scan for ceph

2015-01-13 Thread Sage Weil
Hi Zhiqiang, On Tue, 13 Jan 2015, scan-ad...@coverity.com wrote: > > *** CID 1262557: Using invalid iterator (INVALIDATE_ITERATOR) > /osd/ReplicatedPG.cc: 2071 in ReplicatedPG::cancel_proxy_r

New Defects reported by Coverity Scan for ceph

2015-01-13 Thread scan-admin
Hi, Please find the latest report on new defect(s) introduced to ceph found with Coverity Scan. 2 new defect(s) introduced to ceph found with Coverity Scan. New defect(s) Reported-by: Coverity Scan Showing 2 of 2 defect(s) ** CID 1262557: Using invalid iterator (INVALIDATE_ITERATOR) /osd/
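
(For context, INVALIDATE_ITERATOR usually flags code that keeps using a container iterator after erasing through it. The snippet below is an illustrative C++ reduction of the pattern and its usual fix, not the actual ReplicatedPG code at the line Coverity cites.)

    #include <cstdint>
    #include <map>
    #include <string>

    // Illustrative only -- not the flagged ReplicatedPG code.
    void cancel_all(std::map<uint64_t, std::string> &in_flight)
    {
      for (auto p = in_flight.begin(); p != in_flight.end(); ) {
        // wrong: in_flight.erase(p); ++p;  // p is invalidated by erase()
        p = in_flight.erase(p);             // ok: erase() returns the next valid iterator
      }
    }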

Re: Cache pool latency impact

2015-01-13 Thread Sage Weil
On Tue, 13 Jan 2015, Sage Weil wrote: > > 2) Is there any mechanism (that I might have overlooked) to avoid this > > situation, by throttling the flush/evict operations on the fly? If not, > > shouldn't there be one? > > Hmm, we could have a 'noagent' option (similar to noout, nobackfill, > nos
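
(The flags Sage compares it to are cluster-wide and toggled with "ceph osd set" / "ceph osd unset"; the 'noagent' flag is only a proposal in this thread, but would presumably follow the same pattern, e.g.:)

    ceph osd set nobackfill     # existing flag: pause backfill
    ceph osd unset nobackfill
    ceph osd set noagent        # proposed here: pause cache-tier flush/evict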

Re: Cache pool latency impact

2015-01-13 Thread Sage Weil
On Tue, 13 Jan 2015, Pavan Rallabhandi wrote: > Hi, > > This is regarding cache pools and the impact of the flush/evict on the > client IO latencies. > > I am seeing a direct impact on the client IO latencies (making them worse) > when flush/evict is triggered on the cache pool. In a constant ing

Re: Slow OSD detection

2015-01-13 Thread Sreenath BH
When I run "ceph --admin-daemon <socket path> perf dump" against an OSD admin socket, I get a lot of performance-related data. I see a few values that are of particular interest: 1. filestore : journal_latency - this is a long-running average value 2. osd : op_w_latency - long-running average 3. osd : op_
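
(Since these latency counters are cumulative avgcount/sum pairs, one way to get a recent rather than lifetime average is to sample twice and take deltas; a rough sketch, with the socket path as an assumption:)

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump > sample1.json
    sleep 10
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump > sample2.json
    # recent average latency ~= (sum2 - sum1) / (avgcount2 - avgcount1),
    # e.g. for filestore.journal_latency or osd.op_w_latency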

Re: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Loic Dachary
Hi, It's a great start :-) A few comments: Could you reference the jerasure files (they are in the jerasure plugin already) instead of including them in your patch? In ::prepare, reuse the matrix if possible. If your intention is to improve performance, you should consider using the same
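
(A hypothetical illustration of the matrix-reuse suggestion, with made-up names rather than the real SHEC/jerasure plugin interface: build the coding matrix once in prepare() and keep it for later encode/decode calls.)

    #include <vector>

    // Illustrative only -- not the actual plugin code.
    class ErasureCodeExample {
      int k = 4, m = 2;
      std::vector<int> matrix;            // cached coding matrix
      std::vector<int> build_matrix() {   // stand-in for the real construction
        return std::vector<int>(k * m, 1);
      }
    public:
      void prepare() {
        if (matrix.empty())               // compute once, reuse on every call
          matrix = build_matrix();
      }
    };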

Cache pool latency impact

2015-01-13 Thread Pavan Rallabhandi
Hi, This is regarding cache pools and the impact of the flush/evict on the client IO latencies. I am seeing a direct impact on the client IO latencies (making them worse) when flush/evict is triggered on the cache pool. In a constant ingress of IOs on the cache pool, the write performance is no
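
(For background, the tiering agent's flush/evict activity is driven by per-pool thresholds, so those are the usual place to experiment when agent IO competes with client IO; an illustrative sketch, with the pool name and values as assumptions:)

    ceph osd pool set cachepool cache_target_dirty_ratio 0.4
    ceph osd pool set cachepool cache_target_full_ratio 0.8
    ceph osd pool set cachepool target_max_bytes 100000000000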

RE: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Miyamae, Takeshi
Hi Loic, I'm so sorry. The following is the correct repository. https://github.com/t-miyamae/ceph Best regards, Takeshi Miyamae -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Tuesday, January 13, 2015 7:26 PM To: Miyamae, Takeshi/宮前 剛 Cc: Ceph Development; Shioza

Re: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Loic Dachary
Hi, On 13/01/2015 11:24, Miyamae, Takeshi wrote: > Hi Loic, > >> Although we're late in the Hammer roadmap, it's a good time for an early >> preview. It will help show what needs to be changed to accommodate the SHEC >> plugin. > > Thank you for your advice. > We have uploaded our latest code

RE: Deadline of Github pull request for Hammer release (question)

2015-01-13 Thread Miyamae, Takeshi
Hi Loic, > Although we're late in the Hammer roadmap, it's a good time for an early > preview. It will help show what needs to be changed to accommodate the SHEC > plugin. Thank you for your advice. We have uploaded our latest code to the following fork repository for an early review. https: