On Sun, Jan 11, 2015 at 09:33:57AM -0800, Sage Weil wrote:
By the way I took another look and I'm not sure that it is worth
duplicating all of the tree logic for a tree view. It seems easier to
either include this optionally in the tree output (the utilization calc is
simpler than the tree
Hi Yuri,
I analyzed the errors from the rbd run at
http://pulpito.ceph.com/loic-2015-01-08_10:36:47-rbd-giant-backports-testing-basic-vps/
and summarized my findings in the description of
http://tracker.ceph.com/issues/10501
It would be great if you could let me know if these look familiar.
Hi Josh,
While looking at errors from a giant (plus backports) run of the RBD suite, I
stumbled upon:
http://tracker.ceph.com/issues/10513
and the error is unclear (at least to me ;-). Would you have an idea? For
information, the backports included are all the pull requests mentioned at
Hi Christoph,
I'll have my production cluster ready around next month,
with much more powerful nodes (each node: 2x10 cores @ 3.1 GHz + 6 Intel
S3500 SSDs).
I'll redo the benchmark and post results as soon as possible.
----- Original Message -----
From: Christoph Hellwig h...@lst.de
To: Alex Elder
On Sun, Jan 11, 2015 at 12:32:09PM -0500, Tejun Heo wrote:
Is this an optimization or something necessary for the following
changes? If the latter, maybe it's a good idea to state why this is
necessary in the description? Otherwise,
It gets rid of a bdi reassignment, and thus makes life a lot
On Sun, Jan 11, 2015 at 01:16:51PM -0500, Tejun Heo wrote:
+struct backing_dev_info *inode_to_bdi(struct inode *inode)
 {
	struct super_block *sb = inode->i_sb;
 #ifdef CONFIG_BLOCK
@@ -75,6 +75,7 @@ static inline struct backing_dev_info *inode_to_bdi(struct inode *inode)
 #endif
This converts the rbd driver to use the blk-mq infrastructure. Except
for switching to a per-request work item this is almost mechanical.
This was tested by Alexandre DERUMIER in November, and found to give
him 12 iops, although the only comparison available was an old
3.10 kernel which gave
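For context, the "per-request work item" pattern the conversion mentions can be sketched in userspace (hypothetical names; the real driver uses the kernel's blk-mq and workqueue APIs, not std::async):

```cpp
#include <future>
#include <vector>

// Hypothetical stand-in for an rbd I/O request.
struct Request {
    int id;
    int length;
};

// Handle one request in its own work item (here: a std::async task),
// mirroring the per-request work item idea instead of funneling all
// requests through a single shared worker.
int handle_request(const Request &rq) {
    // The real driver would issue OSD ops here; we just "complete" it
    // by reporting the transferred length.
    return rq.length;
}

// Dispatch every queued request as an independent work item and wait
// for all completions, returning the total bytes "transferred".
int submit_all(const std::vector<Request> &queue) {
    std::vector<std::future<int>> work;
    for (const auto &rq : queue)
        work.push_back(std::async(std::launch::async, handle_request, rq));
    int total = 0;
    for (auto &w : work)
        total += w.get();  // wait for each completion
    return total;
}
```

Since each request carries its own work item, a slow request no longer blocks the queue behind it, which is the property blk-mq's per-queue dispatch also relies on.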
From: Kent Overstreet k...@daterainc.com
As generic_make_request() is now able to handle arbitrarily sized bios,
it's no longer necessary for each individual block driver to define its
own ->merge_bvec_fn() callback. Remove every invocation completely.
Cc: Jens Axboe ax...@kernel.dk
Cc: Lars
-- All Branches --
Adam Crume adamcr...@gmail.com
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza alfredo.d...@inktank.com
2014-07-08 13:58:35 -0400 wip-8679
2014-09-04 13:58:14 -0400 wip-8366
2014-10-13 11:10:10 -0400 wip-9730
Andreas-Joachim
Hi Ceph,
I consolidated the pull request, issues and commits into a report in the
description of http://tracker.ceph.com/issues/10501 to get a better view of the
upcoming v0.87.1 point release. It is broken down into:
* already merged in https://github.com/ceph/ceph/tree/giant
* included and
Hi Dong,
We hit a failed assert in Transaction::append():
http://tracker.ceph.com/issues/10517
2 2015-01-08 00:48:19.737753 7f6a6a4b0700 0 - 10.214.132.32:6801/6381
10.214.134.104:6801/5896 pipe(0x415a000 sd=116 :6801 s=2 pgs=1750 cs=642 l=0
c=0x40fd020).injecting socket failure
OK.
I roughly checked the problem; it seems to be the same problem I
met in RPGBackend, and that is why we use op_t.append(local_t) instead of
local_t.append(op_t).
Is there a way to reproduce the problem? Then I can make a patch for it.
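A minimal sketch of why the append order matters (hypothetical, using a plain vector to stand in for an ObjectStore::Transaction op list): appending local_t onto op_t keeps the replicated ops first, while the reverse order would replay the local ops first.

```cpp
#include <string>
#include <vector>

// Stand-in for an ObjectStore::Transaction: just an ordered op list.
using Transaction = std::vector<std::string>;

// Append src's ops after dst's, like Transaction::append(): dst's ops
// keep their positions and replay first.
void append(Transaction &dst, const Transaction &src) {
    dst.insert(dst.end(), src.begin(), src.end());
}
```

With op_t = {"write"} and local_t = {"log"}, append(op_t, local_t) yields {"write", "log"}, while append(local_t, op_t) yields the opposite replay order, which is the distinction the message refers to.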
On 13 January 2015 at 09:54, Sage Weil
I can't get consistent performance with 1 MDS. I have 2 clients create
100,000 files (separate directories) in a CephFS mount. I ran the
experiment 5 times (deleting the pools/fs and restarting the MDS in
between each run). I graphed the metadata throughput (requests per
second):
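The experiment described above can be sketched as a small reproduction harness (hypothetical paths and counts; the original used two clients creating 100,000 files each in separate directories on a CephFS mount):

```cpp
#include <chrono>
#include <cstdio>
#include <filesystem>

namespace fs = std::filesystem;

// Create `count` empty files under `dir` and return the achieved
// creation rate in files/second, roughly what the per-client
// metadata-throughput graph measured.
double create_files(const fs::path &dir, int count) {
    fs::create_directories(dir);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < count; ++i) {
        std::FILE *f = std::fopen((dir / std::to_string(i)).string().c_str(), "w");
        if (f)
            std::fclose(f);  // empty file: pure metadata (create) load
    }
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return count / elapsed.count();
}
```

Run against a CephFS mount point (e.g. one directory per client), this exercises the MDS with create operations only, so run-to-run variance in the returned rate mirrors the inconsistency described above.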
On 01/12/2015 09:13 AM, Ilya Dryomov wrote:
Speaking of fsx and thrasher. Last time I tried fsx (both kernel and
librbd) couldn't survive thrashing due to watch/notify troubles. Are
all the new watch/notify pieces that were supposed to fix this in?
There's just one bug left afaik:
On 01/12/2015 06:26 AM, Loic Dachary wrote:
Hi Josh,
While looking at errors from a giant (plus backports) run of the RBD suite, I
stumbled upon:
http://tracker.ceph.com/issues/10513
and the error is unclear (at least to me ;-). Would you have an idea? For
information the backports
On Mon, Jan 12, 2015 at 3:40 PM, Christoph Hellwig h...@lst.de wrote:
This converts the rbd driver to use the blk-mq infrastructure. Except
for switching to a per-request work item this is almost mechanical.
Hi Christoph,
I too am not up to speed on blk-mq but still have a couple of sanity
On Mon, Jan 12, 2015 at 7:59 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 01/12/2015 06:26 AM, Loic Dachary wrote:
Hi Josh,
While looking at errors from a giant (plus backports) run of the RBD
suite, I stumbled upon:
http://tracker.ceph.com/issues/10513
and the error is unclear