On Fri, Aug 28, 2015 at 2:35 PM, Jianhui Yuan zuiwany...@gmail.com wrote:
Hi Haomai,
when we use the async messenger, the client (e.g. ceph -s) always gets stuck in
WorkerPool::barrier for 30 seconds. It seems the wakeup doesn't work.
What are the Ceph and OS versions? It should be a bug we already
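For illustration, the classic way a barrier wait misses a wakeup is blocking on a condition variable without re-checking a predicate: if the notify fires before the wait starts, the waiter sleeps until the timeout expires. A minimal Python sketch of the safe pattern (this is not Ceph's actual WorkerPool code, just the general technique):

```python
import threading

class Barrier:
    """Minimal barrier sketch built on a condition variable.

    Waiting without a predicate loses the wakeup if notify_all()
    fires before wait() starts, leaving the waiter blocked until
    the timeout expires -- the symptom described above."""

    def __init__(self, parties):
        self.lock = threading.Lock()
        self.cond = threading.Condition(self.lock)
        self.count = parties

    def done(self):
        with self.lock:
            self.count -= 1
            if self.count == 0:
                self.cond.notify_all()

    def barrier_wait(self, timeout=30.0):
        with self.lock:
            # Re-checking the predicate in a loop (wait_for does this)
            # is what prevents a lost wakeup; a bare
            # self.cond.wait(timeout) does not.
            return self.cond.wait_for(lambda: self.count == 0, timeout)
```

With the predicate form, a waiter that arrives after the last `done()` returns immediately instead of sleeping out the full timeout.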
Hi Ilya,
We can change the sector size from 512 to 4096. This can reduce the number of
writes.
I did a simple test: mkfs.xfs -f on a 900G device.
Default (512-byte sectors): 1m10s
Physical sector size = 4096: 0m10s
But if we change the sector size, we need the rbd metadata to record it.
Thanks!
Jianpeng
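For reference, a rough sketch of how one might reproduce this comparison by hand. The device path /dev/rbd0 is an assumption, and `-s size=` is mkfs.xfs's sector-size option; verify against your setup before running, since mkfs destroys data:

```shell
# Inspect the logical and physical sector sizes the kernel reports.
blockdev --getss --getpbsz /dev/rbd0

# Default run: mkfs.xfs picks the sector size from the device (512 here).
time mkfs.xfs -f /dev/rbd0

# Force 4096-byte sectors, which should cut the number of small writes.
time mkfs.xfs -f -s size=4096 /dev/rbd0
```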
-----Original Message-----
On 08/27/2015 03:38 PM, Sage Weil wrote:
On Thu, 27 Aug 2015, Joshua Schmid wrote:
On 08/27/2015 02:49 AM, Sage Weil wrote:
Hi Joshua!
Hi Sage,
Overall the ceph-disk changes look pretty good, and it looks like Andrew
and David have both reviewed. My only real concern/request is that
On Thu, Aug 27, 2015 at 3:43 AM, huang jun hjwsm1...@gmail.com wrote:
hi, Ilya
2015-08-26 23:56 GMT+08:00 Ilya Dryomov idryo...@gmail.com:
On Wed, Aug 26, 2015 at 6:22 PM, Haomai Wang haomaiw...@gmail.com wrote:
On Wed, Aug 26, 2015 at 11:16 PM, huang jun hjwsm1...@gmail.com wrote:
hi, all
we
On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
vickey.singh22...@gmail.com wrote:
Hello Ceph Geeks
I am planning to develop a Python plugin that pulls cluster recovery I/O
and client I/O operation metrics, which can then be used with collectd.
For example, I need to take out these
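As a starting point, the rates in question are exposed under `pgmap` in `ceph status --format json`. A hedged Python sketch follows; the sample document and the exact field names (`read_bytes_sec`, `recovering_bytes_per_sec`, etc.) are assumptions that vary between Ceph releases, so check them against your cluster's actual output:

```python
import json

# Hypothetical sample of `ceph status --format json` output; in a real
# collectd plugin you would capture this from the ceph CLI or librados.
SAMPLE = '''{"pgmap": {"read_bytes_sec": 1048576,
                       "write_bytes_sec": 2097152,
                       "op_per_sec": 150,
                       "recovering_bytes_per_sec": 524288,
                       "recovering_objects_per_sec": 12}}'''

def extract_io_metrics(status_json):
    """Pull client and recovery I/O rates out of a ceph status dump."""
    pgmap = json.loads(status_json).get("pgmap", {})
    keys = ("read_bytes_sec", "write_bytes_sec", "op_per_sec",
            "recovering_bytes_per_sec", "recovering_objects_per_sec")
    # Keys absent from the dump (e.g. no recovery in progress) default to 0.
    return {k: pgmap.get(k, 0) for k in keys}

metrics = extract_io_metrics(SAMPLE)
```

Each value in `metrics` can then be dispatched to collectd as a gauge.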
On Fri, Aug 28, 2015 at 10:36 AM, Ma, Jianpeng jianpeng...@intel.com wrote:
Hi,
Thanks for your helpful reply!
to your KSS -;
Shinobu
- Original Message -
From: Joshua Schmid jsch...@suse.de
To: ski...@redhat.com, Sage Weil s...@newdream.net
Cc: Ceph Development ceph-devel@vger.kernel.org
Sent: Friday, August 28, 2015 5:11:34 PM
Subject: Re: Proposal:
On Fri, Aug 28, 2015 at 2:17 AM, Zhengqiankun zheng.qian...@h3c.com wrote:
hi, Yehuda:
I have a question and hope you can help me answer it. Different swift
subusers can be given specific permissions, but why can't a specific
permission be set for an S3 access key?
Probably because no
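For context, here is a hedged sketch of the asymmetry being asked about, using radosgw-admin (the user name `alice` is hypothetical):

```shell
# Swift: the permission (read|write|readwrite|full) attaches to the
# subuser itself at creation time.
radosgw-admin subuser create --uid=alice --subuser=alice:swift --access=read

# S3: a key pair is generated for the user as a whole; there is no
# per-access-key permission field, so the key carries the user's access.
radosgw-admin key create --uid=alice --key-type=s3 --gen-access-key --gen-secret
```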
Oh, yeah, we'll definitely test for correctness for async reads on
filestore, I'm just worried about validating the performance
assumptions. The 3700s might be just fine for that validation though.
-Sam
On Fri, Aug 28, 2015 at 1:01 PM, Blinick, Stephen L
stephen.l.blin...@intel.com wrote:
This
On 08/28/2015 12:16 PM, Loic Dachary wrote:
Hi Abhishek,
We've just had an example of a backport merged into hammer although it did not
follow the procedure: https://github.com/ceph/ceph/pull/5691
It's a key aspect of backports: we're bound to follow procedure, but
developers are allowed to bypass it entirely. It may seem like
Hi Abhishek,
Since v0.94.3 was published after we started work on v0.94.4, part of
http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_start_working_on_a_new_point_release
was not done yet. I've updated the HOWTO page to link to the v0.94.4 page:
This sounds OK, with the synchronous interface to the ObjectStore still
possible based on the return code.
I'd think that the async read interface can be evaluated with any hardware, at
least for correctness, by observing the queue depth to the device during a test
run. Also, I think
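As a rough illustration of the interface shape being discussed, a Python sketch of a read call that completes synchronously when it can and otherwise queues the request for a completion thread. This is not the actual ObjectStore API; the cache, the return convention, and the completion thread are all assumptions for illustration:

```python
import queue
import threading

class AsyncReadStore:
    """Sketch of a read interface with a synchronous fast path.

    Assumed convention (not Ceph's): read() returns the data directly
    when it is immediately available, or None after queueing the
    request, in which case the callback fires from a completion thread."""

    def __init__(self):
        self.cache = {}
        self.q = queue.Queue()
        threading.Thread(target=self._completer, daemon=True).start()

    def _completer(self):
        # Simulated slow path: drain queued reads and invoke callbacks.
        while True:
            key, cb = self.q.get()
            cb(key, b"data-from-disk")

    def read(self, key, callback):
        if key in self.cache:            # synchronous fast path
            return self.cache[key]
        self.q.put((key, callback))      # asynchronous slow path
        return None
```

The caller branches on the return value: non-None means the read already finished, None means wait for the callback, which is the kind of return-code-driven sync/async split described above.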