Hi Sage:
This patch is based on my previous patch "ceph: fix bugs about handling
short-read for sync read mode".
I can't see this patch in ceph-client#testing. Maybe you forgot it?
Thanks!
Jianpeng Ma
For sync_read/write, it may do multiple stripe operations. If one of those
meets an error, we return
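A minimal C sketch of the pattern this (truncated) snippet describes; illustrative only, the stripe_op type and do_read callback are hypothetical and not the actual fs/ceph/file.c code.

#include <linux/types.h>

/* Hypothetical helper type, for illustration only. */
struct stripe_op {
	size_t len;		/* bytes requested from this stripe object */
};

/*
 * A sync read may span several stripe objects: stop at the first error or
 * short read and report how much data was actually transferred.
 */
static ssize_t sketch_sync_read(struct stripe_op *ops, int nr_ops,
				ssize_t (*do_read)(struct stripe_op *op))
{
	ssize_t total = 0;
	int i;

	for (i = 0; i < nr_ops; i++) {
		ssize_t ret = do_read(&ops[i]);

		if (ret < 0)
			/* report the error only if nothing was read yet */
			return total ? total : ret;
		total += ret;
		if (ret < (ssize_t)ops[i].len)
			break;		/* short read: EOF or a hole */
	}
	return total;
}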
In the following we will begin to add memcg dirty page accounting around
__set_page_dirty_{buffers,nobuffers} in the vfs layer, so we'd better use the
vfs interface to avoid exporting those details to filesystems.
Since vfs set_page_dirty() should be called under the page lock, here we don't
need elaborate code
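An illustrative C sketch of the point above (not the actual patch): with the page already locked, a filesystem can simply defer to the generic VFS helper instead of open-coding the dirty accounting itself.

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Caller is expected to hold the page lock, as noted above. */
static void sketch_mark_page_dirty(struct page *page)
{
	WARN_ON_ONCE(!PageLocked(page));
	if (!PageDirty(page))
		set_page_dirty(page);	/* dispatches to a_ops->set_page_dirty */
}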
Hi James,
I'm working on adding jerasure as a plugin to Ceph. If I remember correctly,
the next version of jerasure was scheduled to be released this month. Although
using jerasure-1.2 (as found at
http://web.eecs.utk.edu/~plank/plank/papers/CS-08-627.html) would be perfectly
fine, it may be
Hi Sean,
I tested out the patch and unfortunately had the same results as
Andreas. About 50% of the time the rpoll() thread in Ceph still hangs
when rshutdown() is called. I saw a similar behaviour when increasing
the poll time on the pre-patched version if that's of any relevance.
Thanks
On
On 08/20/2013 11:26 PM, Noah Watkins wrote:
Wido,
I pushed up a patch to
https://github.com/ceph/rados-java/commit/ca16d82bc5b596620609880e429ec9f4eaa4d5ce
That includes a fix for this problem. The fix is a bit hacky, but the
tests pass now. I included more details about the hack in the
Hi,
Looks interesting.
On Tue, Aug 20, 2013 at 1:06 PM, Nulik Nol nulik...@gmail.com wrote:
Hi,
I am creating an email system which will handle a whole company's email,
mostly internal mail. There will be thousands of companies and
hundreds of users per company. So I am planning to use one
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
-Sam
On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson mike.daw...@cloudapt.com wrote:
Sam/Josh,
We upgraded from 0.61.7 to 0.67.1 during a maintenance window this morning,
hoping it would improve
On Wed, 21 Aug 2013, Sha Zhengju wrote:
In the following we will begin to add memcg dirty page accounting around
__set_page_dirty_{buffers,nobuffers} in the vfs layer, so we'd better use the
vfs interface to avoid exporting those details to filesystems.
Since vfs set_page_dirty() should be called under
It's osd recover clone overlap (see http://tracker.ceph.com/issues/5401)
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, 21 August 2013 17:33
To: Mike Dawson
Cc: Stefan Priebe - Profihost AG;
Sam,
Tried it. Injected with 'ceph tell osd.* injectargs --
--no_osd_recover_clone_overlap', then stopped one OSD for ~1 minute.
Upon restart, all my Windows VMs have issues until HEALTH_OK.
The recovery was taking an abnormally long time, so I reverted away from
If the Raring guest was fine, I suspect that the issue is not on the OSDs.
-Sam
On Wed, Aug 21, 2013 at 10:55 AM, Mike Dawson mike.daw...@cloudapt.com wrote:
Sam,
Tried it. Injected with 'ceph tell osd.* injectargs --
--no_osd_recover_clone_overlap', then stopped one OSD for ~1 minute. Upon
Am 21.08.2013 17:32, schrieb Samuel Just:
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
This might sound a bit harsh, but maybe that's due to my limited English skills ;-)
I still think that Ceph's recovery system is broken by design. If an OSD
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery. A
request only waits on recovery if the particular object being read or
written must be recovered. Your issue was that recovering the
particular object being
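Hypothetical C pseudocode of the decision Sam describes; the real OSD code is C++ and none of these names come from Ceph.

#include <stdbool.h>
#include <stdio.h>

struct object_ref {
	const char *oid;
	bool missing_on_primary;	/* does this object still need recovery? */
};

/* Stub standing in for the real recovery machinery. */
static void wait_for_recovery(struct object_ref *obj)
{
	printf("delaying op: %s must be recovered first\n", obj->oid);
	obj->missing_on_primary = false;
}

/*
 * Only an op touching a still-missing object waits; every other op is
 * served immediately, even while the placement group is recovering.
 */
static void handle_client_op(struct object_ref *obj)
{
	if (obj->missing_on_primary)
		wait_for_recovery(obj);
	printf("serving op on %s\n", obj->oid);
}

int main(void)
{
	struct object_ref clean = { "rbd_data.aaa", false };
	struct object_ref dirty = { "rbd_data.bbb", true };

	handle_client_op(&clean);	/* served without waiting */
	handle_client_op(&dirty);	/* waits for this one object only */
	return 0;
}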
Hi Sam,
Am 21.08.2013 21:13, schrieb Samuel Just:
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery.
Sure, but remember that with a random 4K VM workload a lot of objects go out
of date pretty soon.
A request
Hi Ceph devs,
I'm working on structuring/designing plugins for RGW. Following a recent
discussion with Yehuda, I'm looking into how we can statically link 'plugins'
into the RGW core (so, not really plugins, but more of a code separation).
However, the ceph automake structure doesn't use
Wido,
How would you feel about creating two RbdSnapInfo objects? The first
would be something like ceph.rbd.RbdSnapInfo and the second would be
ceph.rbd.jna.RbdSnapInfo. The former is what will be exposed through
the API, and the latter is used only internally. That should address
the hackiness
On Wed, Aug 21, 2013 at 12:45 PM, Roald van Loon roaldvanl...@gmail.com wrote:
from auto-registering the plugins in the RGW core. The only fix for
this is making the RGW core aware of the subdirs/plugins, but I think
that's nasty design. I'd like to have it in my make conf.
This patch will
Hi,
On Wed, Aug 21, 2013 at 10:01 PM, Noah Watkins noah.watk...@inktank.com wrote:
This patch will turn on the option (which should also fix your problem
if I understand correctly?), and should probably be committed anyway
as newer versions of autotools will complain loudly about our current
On Wed, 21 Aug 2013, Noah Watkins wrote:
On Wed, Aug 21, 2013 at 12:45 PM, Roald van Loon roaldvanl...@gmail.com
wrote:
from auto-registering the plugins in the RGW core. The only fix for
this is making the RGW core aware of the subdirs/plugins, but I think
that's nasty design. I'd like
On Wed, Aug 21, 2013 at 10:41 PM, Sage Weil s...@inktank.com wrote:
Yes, the Makefile.am is in dire need of TLC from someone who knows a
bit of autotools-fu. It is only this way because in the beginning I
didn't know any better.
Well, my average knowledge of autotools could at least fix
This is an updated version of the fscache support for the Ceph filesystem.
What's changed since the last patchset:
1. Separated the readpages refactor into its own patches. These were already
accepted into the testing branch.
2. Tracked down the BUG in readahead cleanup code. We were returning
Signed-off-by: Hongyi Jia jiayis...@gmail.com
Tested-by: Milosz Tanski mil...@adfin.com
---
fs/cachefiles/interface.c | 19 +++
fs/cachefiles/internal.h | 1 +
fs/cachefiles/xattr.c | 39 +++
3 files changed, 59 insertions(+)
diff --git
In some cases the ceph readpages code bails without filling all the pages
already marked by fscache. When we return to the readahead code this causes
a BUG.
Signed-off-by: Milosz Tanski mil...@adfin.com
---
fs/ceph/addr.c | 2 ++
fs/ceph/cache.h | 7 +++
2 files changed, 9
Signed-off-by: Hongyi Jia jiayis...@gmail.com
Tested-by: Milosz Tanski mil...@adfin.com
---
fs/fscache/cookie.c | 22 ++
include/linux/fscache-cache.h | 4
include/linux/fscache.h | 17 +
3 files changed, 43 insertions(+)
diff --git
Currently the fscache code expects the netfs to call fscache_readpages_or_alloc
inside the aops readpages callback. It marks all the pages in the list provided
by readahead with PgPrivate2. In cases where the netfs fails to read all the
pages (which is legal), it ends up returning to the
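A minimal C sketch of the cleanup being described; illustrative only, and it assumes a cancel helper of the shape fscache_readpages_cancel(cookie, pages) like the one this series adds on the fscache side.

#include <linux/fs.h>
#include <linux/fscache.h>
#include <linux/list.h>

/*
 * If readpages bails out before submitting reads for every page that the
 * fscache read-or-alloc call marked with PG_private_2, drop those marks
 * first so the readahead code can release the remaining pages without
 * tripping the BUG described above.
 */
static void sketch_readpages_error_cleanup(struct fscache_cookie *cookie,
					   struct list_head *unread_pages,
					   int error)
{
	if (error < 0 && !list_empty(unread_pages))
		fscache_readpages_cancel(cookie, unread_pages);
}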
Adding support for fscache to the Ceph filesystem. This would bring it on par
with some of the other network filesystems in Linux (like NFS, AFS, etc.).
In order to mount the filesystem with fscache, the 'fsc' mount option must be
passed.
Signed-off-by: Milosz Tanski mil...@adfin.com
---
Hello,
It seems that the problem is still present with kernel client 3.10.7 / ceph
0.61.8 (sometimes).
I made a clone from the last snapshot, and it works again :)
Cheers,
Laurent Barbe
On 8 August 2013 at 18:19, Laurent Barbe laur...@ksperis.com wrote:
Hello,
I don't know if it's useful, but
On Wed, Aug 21, 2013 at 11:35 PM, Sage Weil s...@inktank.com wrote:
On Wed, 21 Aug 2013, Sha Zhengju wrote:
In the following we will begin to add memcg dirty page accounting around
__set_page_dirty_{buffers,nobuffers} in the vfs layer, so we'd better use the
vfs interface to avoid exporting those details
It's not really possible at this time to control that limit because
changing the primary is actually fairly expensive and doing it
unnecessarily would probably make the situation much worse (it's
mostly necessary for backfilling, which is expensive anyway). It
seems like forwarding IO on an
On Wed, 21 Aug 2013, majianpeng wrote:
Hi Sage:
This patch is based on my previous patch "ceph: fix bugs about handling
short-read for sync read mode".
I can't see this patch in ceph-client#testing. Maybe you forgot it?
Whoops, I did! I've pulled both patches into the testing branch now.
On 08/22/2013 01:12 PM, Sage Weil wrote:
On Wed, 21 Aug 2013, majianpeng wrote:
Hi Sage:
This patch is based on my previous patch "ceph: fix bugs about handling
short-read for sync read mode".
I can't see this patch in ceph-client#testing. Maybe you forgot it?
Whoops, I did! I've pulled