Re: rbd image association

2013-06-03 Thread Roman Alekseev
On 03.06.2013 11:08, Wolfgang Hennerbichler wrote: On Mon, Jun 03, 2013 at 10:52:14AM +0400, Roman Alekseev wrote: Hi, Is it possible to associate a certain rbd device with the appropriate rbd image (for example: image1 > /dev/rbd0)? We always need to map image1 to /dev/rbd0, image2 to /dev/rbd1

Re: rbd image association

2013-06-03 Thread Wido den Hollander
On 06/03/2013 09:17 AM, Roman Alekseev wrote: On 03.06.2013 11:08, Wolfgang Hennerbichler wrote: On Mon, Jun 03, 2013 at 10:52:14AM +0400, Roman Alekseev wrote: Hi, Is it possible to associate a certain rbd device with the appropriate rbd image (for example: image1 > /dev/rbd0)? We always need

Re: rbd image association

2013-06-03 Thread Wolfgang Hennerbichler
On Mon, Jun 03, 2013 at 11:17:16AM +0400, Roman Alekseev wrote: > Dear Wolfgang, > > I was trying to use the command ln -s /dev/rbd1 image1 but it > creates a wrong symlink. Most likely there is a more specific command to > symlink the image and device correctly. No, you should not have to do anythin

Re: rbd image association

2013-06-03 Thread Roman Alekseev
On 03.06.2013 11:34, Wido den Hollander wrote: udev takes car Do you mean we need to create some rules in the /etc/udev/rules.d/ directory and, after running rbd -p pool map image, my image will be mapped to the appropriate device? If so, could you please provide me with the commands which should b

Re: Speed up 'rbd rm'

2013-06-03 Thread Chris Dunlop
On Thu, May 30, 2013 at 07:04:28PM -0700, Josh Durgin wrote: > On 05/30/2013 06:40 PM, Chris Dunlop wrote: >> On Thu, May 30, 2013 at 01:50:14PM -0700, Josh Durgin wrote: >>> On 05/29/2013 07:23 PM, Chris Dunlop wrote: On Wed, May 29, 2013 at 12:21:07PM -0700, Josh Durgin wrote: > On 05/28

krbd + format=2 ?

2013-06-03 Thread Chris Dunlop
G'day, Sage's recent pull message to Linus said: Please pull the following Ceph patches from git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus This is a big pull. Most of it is the culmination of Alex's work to implement RBD image layering, which is now complete (

Re: [PATCH 0/2] librados: Add RADOS locks to the C/C++ API

2013-06-03 Thread Filippos Giannakos
Hi Josh, On 05/31/2013 10:44 PM, Josh Durgin wrote: On 05/30/2013 06:02 AM, Filippos Giannakos wrote: The following patches export the RADOS advisory locks functionality to the C/C++ librados API. The extra API calls added are inspired by the relevant functions of librbd. This looks good to m
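
A sketch of how the exported lock calls can be used from C. The names below match the lock API that eventually shipped in librados.h (rados_lock_exclusive/rados_unlock); the exact signatures in the posted patches may differ, and io is assumed to be an already-connected ioctx:

    #include <rados/librados.h>
    #include <stdio.h>

    /* Take and release an exclusive advisory lock on one object.
     * Assumes io is an open, connected rados_ioctx_t. */
    int lock_example(rados_ioctx_t io)
    {
        int r = rados_lock_exclusive(io, "myobject", "mylock", "mycookie",
                                     "example lock", NULL /* no expiry */, 0);
        if (r < 0) {
            fprintf(stderr, "lock failed: %d\n", r);
            return r;
        }
        /* ... exclusive access to myobject ... */
        return rados_unlock(io, "myobject", "mylock", "mycookie");
    }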

Re: rbd image association

2013-06-03 Thread Sage Weil
On Mon, 3 Jun 2013, Roman Alekseev wrote: > On 03.06.2013 11:34, Wido den Hollander wrote: > > udev takes car > > Do you mean we need to create some rules in the /etc/udev/rules.d/ directory and > after running rbd -p pool map image my image will be mapped to the appropriate > device? The ceph package
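
For background: the Ceph packages ship a udev rule that invokes a namer helper so each mapped device also gets a stable /dev/rbd/<pool>/<image> symlink, letting scripts ignore which /dev/rbdN a given map happens to receive. A minimal sketch of that kind of rule, assuming the ceph-rbdnamer helper and its two-field "pool image" output (both may differ across versions):

    # 50-rbd.rules (sketch; install location and helper path vary by version)
    KERNEL=="rbd[0-9]*", PROGRAM="/usr/bin/ceph-rbdnamer %n", SYMLINK+="rbd/%c{1}/%c{2}"

With such a rule in place, after rbd -p pool map image the device is reachable at the stable path /dev/rbd/pool/image no matter which rbdN number the kernel assigned.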

Re: rbd image association

2013-06-03 Thread Roman Alekseev
On 03.06.2013 18:49, Sage Weil wrote: On Mon, 3 Jun 2013, Roman Alekseev wrote: On 03.06.2013 11:34, Wido den Hollander wrote: udev takes car Do you mean we need to create some rules in the /etc/udev/rules.d/ directory and after running rbd -p pool map image my image will be mapped with appropriat

Re: rbd image association

2013-06-03 Thread Sage Weil
On Mon, 3 Jun 2013, Roman Alekseev wrote: > On 03.06.2013 18:49, Sage Weil wrote: > > On Mon, 3 Jun 2013, Roman Alekseev wrote: > > > On 03.06.2013 11:34, Wido den Hollander wrote: > > > > udev takes car > > > Do you mean we need to create some rules in the /etc/udev/rules.d/ directory > > > and > > >

ceph branch status

2013-06-03 Thread ceph branch robot
-- All Branches --
Alex Elder: 2013-05-21 14:37:01 -0500 wip-rbd-testing
Babu Shanmugam: 2013-05-30 10:28:23 +0530 wip-rgw-geo-enovance
Dan Mick: 2012-12-18 12:27:36 -0800 wip-rbd-striping; 2013-03-15 17:27:54 -0700 wip-cephtool-stderr
David Zafman

Re: Segmentation faults in ceph-osd

2013-06-03 Thread Emil Renner Berthing
Hi, Unfortunately we keep getting these segmentation faults even when all the cluster contains is objects from the rados benchmark tool. I've opened an issue for it here: http://tracker.ceph.com/issues/5239 On 21 May 2013 23:27, Anders Saaby wrote: > On 21/05/2013, at 21.00, Samuel Just wrote:

Ceph killed by OS because of OOM under high load

2013-06-03 Thread Chen, Xiaoxi
Hi, As my previous mail reported some weeks ago, we are suffering from OSD crashes, OSD flipping, system reboots, etc.; all these stability issues really stop us from digging further into Ceph characterization. The good news is that we seem to have found the cause. I explain our experiment

Re: [ceph-users] Ceph killed by OS because of OOM under high load

2013-06-03 Thread Gregory Farnum
On Mon, Jun 3, 2013 at 8:47 AM, Chen, Xiaoxi wrote: > Hi, > As my previous mail reported some weeks ago, we are suffering from > OSD crashes, OSD flipping, system reboots, etc.; all these stability issues > really stop us from digging further into Ceph characterization. > The good news

Re: [PATCH v1 00/11] locks: scalability improvements for file locking

2013-06-03 Thread Davidlohr Bueso
On Fri, 2013-05-31 at 23:07 -0400, Jeff Layton wrote: > This is not the first attempt at doing this. The conversion to the > i_lock was originally attempted by Bruce Fields a few years ago. His > approach was NAK'ed since it involved ripping out the deadlock > detection. People also really seem to

Re: rationale for a PGLog::merge_old_entry case

2013-06-03 Thread Samuel Just
In all three cases, we know the authoritative log does not contain an entry for oe.soid, therefore: If oe.prior_version > log.tail, we must already have processed an earlier entry for that object resulting in the object being correctly marked missing (or not) (specifically, the entry for oe.prior_

Re: [PATCH v1 00/11] locks: scalability improvements for file locking

2013-06-03 Thread J. Bruce Fields
On Fri, May 31, 2013 at 11:07:23PM -0400, Jeff Layton wrote: > Executive summary (tl;dr version): This patchset represents an overhaul > of the file locking code with an aim toward improving its scalability > and making the code a bit easier to understand. Thanks for working on this, that code cou

Re: [PATCH v1 01/11] cifs: use posix_unblock_lock instead of locks_delete_block

2013-06-03 Thread J. Bruce Fields
On Fri, May 31, 2013 at 11:07:24PM -0400, Jeff Layton wrote: > commit 66189be74 (CIFS: Fix VFS lock usage for oplocked files) exported > the locks_delete_block symbol. There's already an exported helper > function that provides this capability however, so make cifs use that > instead and turn locks

Re: [PATCH v1 02/11] locks: make generic_add_lease and generic_delete_lease static

2013-06-03 Thread J. Bruce Fields
On Fri, May 31, 2013 at 11:07:25PM -0400, Jeff Layton wrote: > Signed-off-by: Jeff Layton ACK.--b. > --- > fs/locks.c |4 ++-- > 1 files changed, 2 insertions(+), 2 deletions(-) > > diff --git a/fs/locks.c b/fs/locks.c > index 7a02064..e3140b8 100644 > --- a/fs/locks.c > +++ b/fs/locks.c >

Re: [PATCH v1 03/11] locks: comment cleanups and clarifications

2013-06-03 Thread J. Bruce Fields
On Fri, May 31, 2013 at 11:07:26PM -0400, Jeff Layton wrote: > Signed-off-by: Jeff Layton > --- > fs/locks.c | 24 +++- > include/linux/fs.h |6 ++ > 2 files changed, 25 insertions(+), 5 deletions(-) > > diff --git a/fs/locks.c b/fs/locks.c > index e3140b8..

[PATCH 1/2] mds: initialize some member variables of MDCache

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" I added some member variables to class MDCache, but forget to initialize them. Fixes: #5236 Signed-off-by: Yan, Zheng --- src/mds/MDCache.cc | 5 + 1 file changed, 5 insertions(+) diff --git a/src/mds/MDCache.cc b/src/mds/MDCache.cc index 8c17172..e2ecba8 100644 --- a/s

[PATCH 2/2] mds: allow purging "dirty parent" stray inode

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" Signed-off-by: Yan, Zheng --- src/mds/MDCache.cc | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/mds/MDCache.cc b/src/mds/MDCache.cc index e2ecba8..7b4f2fa 100644 --- a/src/mds/MDCache.cc +++ b/src/mds/MDCache.cc @@ -9156,7 +9156,7 @@ void MDCache:

[PATCH 7/9] ceph: check migrate seq before changing auth cap

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" We may receive old request reply from the exporter MDS after receiving the importer MDS' cap import message. Signed-off-by: Yan, Zheng --- fs/ceph/caps.c | 8 +--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index 54c290b

[PATCH 0/9] fixes for kclient

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" this patch series are also in: git://github.com/ukernel/linux.git wip-ceph Regards Yan, Zheng -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordo

[PATCH 2/9] libceph: call r_unsafe_callback when unsafe reply is received

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" We can't use !req->r_sent to check if OSD request is sent for the first time, this is because __cancel_request() zeros req->r_sent when OSD map changes. Rather than adding a new variable to ceph_osd_request to indicate if it's sent for the first time, We can call the unsafe cal

[PATCH 4/9] ceph: fix cap release race

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" ceph_encode_inode_release() can race with ceph_open() and release caps wanted by open files. So it should call __ceph_caps_wanted() to get the wanted caps. Signed-off-by: Yan, Zheng --- fs/ceph/caps.c | 22 ++ 1 file changed, 10 insertions(+), 12 deletion

[PATCH 5/9] ceph: reset iov_len when discarding cap release messages

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" Signed-off-by: Yan, Zheng --- fs/ceph/mds_client.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 4f22671..e2d7e56 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -1391,6 +1391,7 @@ static void discard_cap

[PATCH 8/9] ceph: clear migrate seq when MDS restarts

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" Signed-off-by: Yan, Zheng --- fs/ceph/mds_client.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index e2d7e56..ce7a789 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -2455,6 +2455,7 @@ static int encode_caps_

[PATCH 3/9] libceph: fix truncate size calculation

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" check the "not truncated yet" case Signed-off-by: Yan, Zheng --- net/ceph/osd_client.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 6972d17..93efdfb 100644 --- a/net/ceph/osd_client.c +

[PATCH 1/9] libceph: fix safe completion

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" handle_reply() calls complete_request() only if the first OSD reply has ONDISK flag. Signed-off-by: Yan, Zheng --- include/linux/ceph/osd_client.h | 1 - net/ceph/osd_client.c | 16 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/in

[PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" Signed-off-by: Yan, Zheng --- fs/ceph/caps.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index 790f88b..458a66e 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -1982,8 +1982,14 @@ static void kick_flushing_inode_caps(struct c

[PATCH 6/9] ceph: fix race between page writeback and truncate

2013-06-03 Thread Yan, Zheng
From: "Yan, Zheng" The client can receive truncate request from MDS at any time. So the page writeback code need to get i_size, truncate_seq and truncate_size atomically Signed-off-by: Yan, Zheng --- fs/ceph/addr.c | 84 -- 1 file changed

RE: [ceph-users] Ceph killed by OS because of OOM under high load

2013-06-03 Thread Chen, Xiaoxi
Hi Greg, Yes, thanks for your advice; we did turn down osd_client_message_size_cap to 100MB/OSD, and both the journal queue and the filestore queue are set to 100MB as well. That's 300MB/OSD in total, but from top we see: 16527 1 14:49.01 0 7.1 20 0 S 1147m0 5
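
For reference, a sketch of the kind of ceph.conf settings being discussed (option names as in cuttlefish-era Ceph; the byte values are an assumption matching the 100MB figures above):

    [osd]
        osd client message size cap = 104857600  # 100 MB of in-flight client data
        journal queue max bytes = 104857600      # 100 MB journal queue
        filestore queue max bytes = 104857600    # 100 MB filestore queue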