In the process of moving to a new cluster (RHEL7-based) I grabbed v0.90,
compiled RPMs, and re-ran the simple local-node memstore test I've run on .80 -
.87. It's a single MemStore OSD and a single rados bench client running locally
on the same node, increasing queue depth and measuring latency/IOPS.
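A sweep of this shape can be driven directly with rados bench by varying the
number of concurrent operations (the effective queue depth). A minimal sketch,
assuming a pool named "bench" and 4 KB writes (both assumptions, not details
taken from this mail):
  # sweep queue depth against a pool named "bench" (hypothetical name)
  for qd in 1 2 4 8 16 32 64; do
      rados bench -p bench 60 write -t $qd -b 4096 --no-cleanup
  done
Each run reports average and max latency alongside throughput, which can then
be plotted against the -t value.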
On Tue, Jan 13, 2015 at 11:13 AM, Gregory Farnum wrote:
> On Mon, Jan 12, 2015 at 10:17 PM, Michael Sevilla wrote:
>> I can't get consistent performance with 1 MDS. I have 2 clients create
>> 100,000 files (separate directories) in a CephFS mount. I ran the
>> experiment 5 times (deleting the p
On Mon, Jan 12, 2015 at 10:17 PM, Michael Sevilla wrote:
> I can't get consistent performance with 1 MDS. I have 2 clients create
> 100,000 files (separate directories) in a CephFS mount. I ran the
> experiment 5 times (deleting the pools/fs and restarting the MDS in
> between each run). I graphed
Hi,
On 13/01/2015 18:05, Miyamae, Takeshi wrote:
> Hi Loic,
>
> Thank you for your quick review.
>
>> Could you reference the jerasure files (they are in the jerasure plugin
>> already)
>> instead of including them in your patch ?
>
> We had used the reference of jerasure v2.0 files before, bu
Hi Loic,
Thank you for your quick review.
> Could you reference the jerasure files (they are in the jerasure plugin
> already)
> instead of including them in your patch ?
We had referenced the jerasure v2.0 files before, but changed to including
the v1.2 files after the patent issue.
Howeve
This converts the rbd driver to use the blk-mq infrastructure. Except
for switching to a per-request work item, this is almost mechanical.
This was tested by Alexandre DERUMIER in November, and found to give
him 12 iops, although the only comparison available was an old
3.10 kernel which gave 8
On Mon, Jan 12, 2015 at 08:10:48PM +0300, Ilya Dryomov wrote:
> Why is this call here? Why not above or below? I doubt it makes much
> difference, but from a clarity standpoint at least, shouldn't it be
> placed after all the checks and allocations, say before the call to
> rbd_img_request_submit
Hi Zhiqiang,
On Tue, 13 Jan 2015, scan-ad...@coverity.com wrote:
>
> *** CID 1262557: Using invalid iterator (INVALIDATE_ITERATOR)
> /osd/ReplicatedPG.cc: 2071 in ReplicatedPG::cancel_proxy_r
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
2 new defect(s) introduced to ceph found with Coverity Scan.
New defect(s) Reported-by: Coverity Scan
Showing 2 of 2 defect(s)
** CID 1262557: Using invalid iterator (INVALIDATE_ITERATOR)
/osd/
On Tue, 13 Jan 2015, Sage Weil wrote:
> > 2) Is there any mechanism (that I might have overlooked) to avoid this
> > situation, by throttling the flush/evict operations on the fly? If not,
> > shouldn't there be one?
>
> Hmm, we could have a 'noagent' option (similar to noout, nobackfill,
> nos
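For context, the existing flags Sage mentions are cluster-wide toggles set
through the CLI; a 'noagent' flag, if added, would presumably follow the same
pattern. A sketch of the existing mechanism (the noagent line is only the
proposal under discussion here, not a shipped command):
  # existing cluster-wide flags are toggled like this
  ceph osd set nobackfill
  ceph osd unset nobackfill
  # the proposal is an analogous flag to pause cache-tier flush/evict, e.g.:
  # ceph osd set noagent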
On Tue, 13 Jan 2015, Pavan Rallabhandi wrote:
> Hi,
>
> This is regarding cache pools and the impact of the flush/evict on the
> client IO latencies.
>
> I am seeing a direct impact on the client IO latencies (making them worse)
> when flush/evict is triggered on the cache pool. In a constant ing
When I run "ceph --admin-daemon <socket> perf dump" pointed at an OSD admin
socket, I get a lot of performance-related data. I see a few values
that are of particular interest:
1. filestore : journal_latency - This is a long running average value
2. osd : op_w_latency - long running average
3. osd : op_
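For concreteness, the query looks roughly like the following; the socket path
is the common default location and an assumption, not quoted from the mail:
  # dump perf counters from osd.0's admin socket (default path assumed)
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
Note that the latency counters in the dump are exposed as avgcount/sum pairs,
so the long-running average is sum divided by avgcount.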
Hi,
It's a great start :-) A few comments:
Could you reference the jerasure files (they are in the jerasure plugin
already) instead of including them in your patch ?
In ::prepare you reuse the matrix, if possible. If your intention is to improve
performance, you should consider using the same
Hi,
This is regarding cache pools and the impact of the flush/evict on the client
IO latencies.
I am seeing a direct impact on the client IO latencies (making them worse) when
flush/evict is triggered on the cache pool. In a constant ingress of IOs on the
cache pool, the write performance is no
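For reference, the tiering agent's behaviour is shaped today by per-pool
thresholds rather than by a rate limit, which is what makes the throttling
question relevant. A sketch of the relevant knobs, with a hypothetical cache
pool name and illustrative values:
  # thresholds that control when the tiering agent flushes/evicts
  # (pool name and values are illustrative only)
  ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
  ceph osd pool set hot-pool cache_target_full_ratio 0.8
  ceph osd pool set hot-pool target_max_bytes 1099511627776
These mainly determine when flush/evict kicks in rather than pacing it, which
is why the thread asks about an on-the-fly throttle.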
Hi Loic,
I'm so sorry. The following is the correct repository.
https://github.com/t-miyamae/ceph
Best regards,
Takeshi Miyamae
-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, January 13, 2015 7:26 PM
To: Miyamae, Takeshi/宮前 剛
Cc: Ceph Development; Shioza
Hi,
On 13/01/2015 11:24, Miyamae, Takeshi wrote:
> Hi Loic,
>
>> Although we're late in the Hammer roadmap, it's a good time for an early
>> preview. It will help show what needs to be changed to accommodate the SHEC
>> plugin.
>
> Thank you for your advice.
> We have uploaded our latest code
Hi Loic,
> Although we're late in the Hammer roadmap, it's a good time for an early
> preview. It will help show what needs to be changed to accommodate the SHEC
> plugin.
Thank you for your advice.
We have uploaded our latest code to the following fork repository for an early
review.
https: