Re: [PATCH 1/3] ceph: take inode lock when finding an inode alias

2011-12-29 Thread Christoph Hellwig
On Wed, Dec 28, 2011 at 06:05:13PM -0800, Sage Weil wrote:
> +/* The following code copied from "fs/dcache.c" */
> +static struct dentry * d_find_any_alias(struct inode *inode)
> +{
> + struct dentry *de;
> +
> + spin_lock(&inode->i_lock);
> + de = __d_find_any_alias(inode);
> + spin_unlock(&inode->i_lock);
> + return de;
> +}
> +/* End of code copied from "fs/dcache.c" */

I would be much happier about just exporting d_find_any_alias.



Re: [PATCH 2/3] ceph: take a reference to the dentry in d_find_any_alias()

2011-12-29 Thread Christoph Hellwig
On Wed, Dec 28, 2011 at 06:05:14PM -0800, Sage Weil wrote:
> From: Alex Elder 
> 
> The ceph code duplicates __d_find_any_alias(), but it currently
> does not take a reference to the returned dentry as it should.
> Replace the ceph implementation with an exact copy of what's
> found in "fs/dcache.c", and update the callers so they drop
> their reference when they're done with it.
> 
> Unfortunately this requires the wholesale copy of the functions
> that implement __dget().  It would be much nicer to just export
> d_find_any_alias() from "fs/dcache.c" instead.

Just exporting it would indeed be much better.



Re: [PATCH 3/3] ceph: enable/disable dentry complete flags via mount option

2011-12-29 Thread Christoph Hellwig
On Wed, Dec 28, 2011 at 06:05:15PM -0800, Sage Weil wrote:
> Enable/disable use of the dentry dir 'complete' flag via a mount option.

Please add documentation describing when a user would want to specify this option.



Ceph based on ext4

2011-12-29 Thread Eric_YH_Chen
Hi, all:

We want to test the stability and performance of Ceph on an ext4 file
system. Here is the output of `mount`; did we set all the attributes
correctly?

/dev/sda1 on /srv/osd.0 type ext4
(rw,noatime,nodiratime,errors=remount-ro,data=writeback,user_xattr)

Following the tutorial at
http://ceph.newdream.net/wiki/Creating_a_new_file_system
we also disabled the journal with `tune2fs -O ^has_journal /dev/sda1`.

Any other suggestions for getting better performance? Thanks!


RE: Hang at 'rbd info '

2011-12-29 Thread Eric_YH_Chen
Hi, Tommi:

The health of ceph is 'HEALTH_OK'.

Do you want to see any other information about the ceph cluster?

I also tried these two commands, but they did not help:

   `ceph osd repair *`
   `ceph osd scrub *`

I also cannot map the rbd image to an rbd device.

Is there any way to fix this? I cannot access the rbd image anymore.

Thanks!

-Original Message-
From: Tommi Virtanen [mailto:tommi.virta...@dreamhost.com] 
Sent: Tuesday, December 20, 2011 3:02 AM
To: Eric YH Chen/WHQ/Wistron
Cc: ceph-devel@vger.kernel.org; Chris YT Huang/WHQ/Wistron; Alex Lee/WHQ/Wistron
Subject: Re: Hang at 'rbd info '

On Sun, Dec 18, 2011 at 19:32,   wrote:
> Hi, all:
>
> I ran into a situation where the Ceph system hangs at 'rbd info'.
> It can only be reproduced on one image.
> How can I know what happened? Is there any log that I can provide to you
> for analysis? Thanks!

That would happen if the object that stores the rbd image header is
unreachable. Is your cluster healthy ("ceph health", "ceph -s")?
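
Something along these lines, for reference (the image name below is just a
placeholder):

  # overall cluster state
  ceph health
  ceph -s
  # retry the query once the cluster reports HEALTH_OK
  rbd info <image-name>
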

Re: Bug #1047 reproduced

2011-12-29 Thread Amon Ott
Hi Greg,

I finally got the test cluster freed up for more Ceph testing.

On Friday 23 December 2011 wrote Gregory Farnum:
> Unfortunately there's not enough info in this log either. If you can
> reproduce it with "mds debug = 20" and put that log somewhere, it
> ought to be enough to work out what's going on, though. Sorry. :(
> -Greg

Here is what the MDS logs with debug 20; no idea if it really helps. The
cluster is still in the broken state. Should I try to reproduce with a
recreated Ceph fs and debug 20? That could produce GBs of logs.
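
(For reference, turning that on amounts to a one-line addition to ceph.conf
on the MDS host; a sketch follows, with the conf path assumed and the key
spelled the way it normally appears, "debug mds":)

  # add an [mds] section with the debug setting, then restart ceph-mds before reproducing
  printf '[mds]\n\tdebug mds = 20\n' >> /etc/ceph/ceph.conf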

Amon Ott
-- 
Dr. Amon Ott
m-privacy GmbH   Tel: +49 30 24342334
Am Köllnischen Park 1Fax: +49 30 24342336
10179 Berlin http://www.m-privacy.de

Amtsgericht Charlottenburg, HRB 84946

Geschäftsführer:
 Dipl.-Kfm. Holger Maczkowsky,
 Roman Maczkowsky

GnuPG-Key-ID: 0x2DD3A649
2011-12-29 12:21:16.289250 4ce1e710 ceph version  (commit:), process ceph-mds, pid 31315
2011-12-29 12:21:16.292369 4ae19b70 mds.-1.0 ms_handle_connect on 192.168.111.2:6789/0
2011-12-29 12:21:20.424101 4ae19b70 mds.-1.0 handle_mds_map standby
2011-12-29 12:21:30.369706 4ae19b70 mds.0.13 handle_mds_map i am now mds.0.13
2011-12-29 12:21:30.369778 4ae19b70 mds.0.13 handle_mds_map state change up:standby --> up:replay
2011-12-29 12:21:30.369809 4ae19b70 mds.0.13 replay_start
2011-12-29 12:21:30.369840 4ae19b70 mds.0.13  recovery set is 
2011-12-29 12:21:30.369868 4ae19b70 mds.0.13  need osdmap epoch 252, have 251
2011-12-29 12:21:30.369890 4ae19b70 mds.0.13  waiting for osdmap 252 (which blacklists prior instance)
2011-12-29 12:21:30.369960 4ae19b70 mds.0.cache handle_mds_failure mds.0 : recovery peers are 
2011-12-29 12:21:30.455693 4ae19b70 mds.0.13 ms_handle_connect on 192.168.111.2:6801/4849
2011-12-29 12:21:30.455803 4ae19b70 mds.0.13 ms_handle_connect on 192.168.111.4:6801/5054
2011-12-29 12:21:30.456156 4ae19b70 mds.0.13 ms_handle_connect on 192.168.111.3:6801/5187
2011-12-29 12:21:30.616751 4ae19b70 mds.0.cache creating system inode with ino:100
2011-12-29 12:21:30.616984 4ae19b70 mds.0.cache creating system inode with ino:1
2011-12-29 12:21:33.246212 4860db70 mds.0.13 replay_done
2011-12-29 12:21:33.246270 4860db70 mds.0.13 making mds journal writeable
2011-12-29 12:21:33.391477 4ae19b70 mds.0.13 handle_mds_map i am now mds.0.13
2011-12-29 12:21:33.391531 4ae19b70 mds.0.13 handle_mds_map state change up:replay --> up:reconnect
2011-12-29 12:21:33.391556 4ae19b70 mds.0.13 reconnect_start
2011-12-29 12:21:33.391579 4ae19b70 mds.0.13 reopen_log
2011-12-29 12:21:33.391637 4ae19b70 mds.0.server reconnect_clients -- 7 sessions
2011-12-29 12:22:21.294278 49414b70 mds.0.server reconnect gave up on client.4531 192.168.111.4:0/257053315
2011-12-29 12:22:21.294334 49414b70 mds.0.server reconnect gave up on client.5000 192.168.111.2:0/1865366646
2011-12-29 12:22:21.294363 49414b70 mds.0.server reconnect gave up on client.5113 192.168.111.1:0/4105687565
2011-12-29 12:22:21.294391 49414b70 mds.0.server reconnect gave up on client.5202 192.168.111.3:0/705815956
2011-12-29 12:22:21.294420 49414b70 mds.0.server reconnect gave up on client.5414 192.168.111.2:0/1736181750
2011-12-29 12:22:21.294447 49414b70 mds.0.server reconnect gave up on client.5417 192.168.111.3:0/3526419788
2011-12-29 12:22:21.294474 49414b70 mds.0.server reconnect gave up on client.5420 192.168.111.4:0/1587151157
2011-12-29 12:22:21.294509 49414b70 mds.0.13 reconnect_done
2011-12-29 12:22:21.419600 4ae19b70 mds.0.13 handle_mds_map i am now mds.0.13
2011-12-29 12:22:21.419647 4ae19b70 mds.0.13 handle_mds_map state change up:reconnect --> up:rejoin
2011-12-29 12:22:21.419673 4ae19b70 mds.0.13 rejoin_joint_start
2011-12-29 12:22:21.443395 4ae19b70 mds.0.13 rejoin_done
2011-12-29 12:22:21.554730 4ae19b70 mds.0.13 handle_mds_map i am now mds.0.13
2011-12-29 12:22:21.554772 4ae19b70 mds.0.13 handle_mds_map state change up:rejoin --> up:active
2011-12-29 12:22:21.554797 4ae19b70 mds.0.13 recovery_done -- successful recovery!
2011-12-29 12:22:21.557761 4ae19b70 mds.0.13 active_start
2011-12-29 12:22:21.561821 4ae19b70 mds.0.13 cluster recovered.
2011-12-29 12:22:21.569550 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80832 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569608 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80833 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569643 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80835 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569675 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80836 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569705 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80839 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569739 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80840 <= 81240, already committed, sending ack.
2011-12-29 12:22:21.569770 4ae19b70 mds.0.tableserver(anchortable) got commit for tid 80841 <= 81

Final bits for 3.2, take 2

2011-12-29 Thread Sage Weil
On second thought, I think it'll be cleaner to just disable the dcache
trickery for now until we've flushed out all the bugs, and wait until the
next release to fix up the d_find_any_alias() export and add the new mount
options...

sage


From a4d46363ce96c8fd7534c6f79051c78b52464132 Mon Sep 17 00:00:00 2001
From: Sage Weil 
Date: Thu, 29 Dec 2011 08:05:14 -0800
Subject: [PATCH] ceph: disable use of dcache for readdir etc.

Ceph attempts to use the dcache to satisfy negative lookups and readdir
when the entire directory contents are in cache.  Disable this behavior
until lingering bugs in this code are shaken out; we'll re-enable these
hooks once things are fully stable.

Signed-off-by: Sage Weil 
---
 fs/ceph/dir.c |   29 +++--
 1 files changed, 3 insertions(+), 26 deletions(-)

diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
index 3eeb976..9895400 100644
--- a/fs/ceph/dir.c
+++ b/fs/ceph/dir.c
@@ -1094,42 +1094,19 @@ static int ceph_snapdir_d_revalidate(struct dentry *dentry,
 /*
  * Set/clear/test dir complete flag on the dir's dentry.
  */
-static struct dentry * __d_find_any_alias(struct inode *inode)
-{
-   struct dentry *alias;
-
-   if (list_empty(&inode->i_dentry))
-   return NULL;
-   alias = list_first_entry(&inode->i_dentry, struct dentry, d_alias);
-   return alias;
-}
-
 void ceph_dir_set_complete(struct inode *inode)
 {
-   struct dentry *dentry = __d_find_any_alias(inode);
-   
-   if (dentry && ceph_dentry(dentry)) {
-   dout(" marking %p (%p) complete\n", inode, dentry);
-   set_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
-   }
+   /* not yet implemented */
 }
 
 void ceph_dir_clear_complete(struct inode *inode)
 {
-   struct dentry *dentry = __d_find_any_alias(inode);
-
-   if (dentry && ceph_dentry(dentry)) {
-   dout(" marking %p (%p) NOT complete\n", inode, dentry);
-   clear_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
-   }
+   /* not yet implemented */
 }
 
 bool ceph_dir_test_complete(struct inode *inode)
 {
-   struct dentry *dentry = __d_find_any_alias(inode);
-
-   if (dentry && ceph_dentry(dentry))
-   return test_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
+   /* not yet implemented */
return false;
 }
 
-- 
1.7.0



Re: Hang at 'rbd info '

2011-12-29 Thread Yehuda Sadeh Weinraub
On Thu, Dec 29, 2011 at 1:49 AM,   wrote:
> Hi, Tommi:
>
>   The health of ceph is 'HEALTH_OK'.
>
>   Do you want to see any other information about the ceph cluster?
>
>   I also tried these two commands, but they did not help:
>
>   `ceph osd repair *`
>   `ceph osd scrub *`
>
>   I also cannot map the rbd image to an rbd device.
>
>   Is there any way to fix this? I cannot access the rbd image anymore.
>

What does 'ceph -s' say?

Yehuda


Re: [PATCH 2/3] ceph: take a reference to the dentry in d_find_any_alias()

2011-12-29 Thread Alex Elder
On Thu, 2011-12-29 at 04:11 -0500, Christoph Hellwig wrote:
> On Wed, Dec 28, 2011 at 06:05:14PM -0800, Sage Weil wrote:
> > From: Alex Elder 
> > 
> > The ceph code duplicates __d_find_any_alias(), but it currently
> > does not take a reference to the returned dentry as it should.
> > Replace the ceph implementation with an exact copy of what's
> > found in "fs/dcache.c", and update the callers so they drop
> > their reference when they're done with it.
> > 
> > Unfortunately this requires the wholesale copy of the functions
> > that implement __dget().  It would be much nicer to just export
> > d_find_any_alias() from "fs/dcache.c" instead.
> 
> Just exporting it would indeed be much better.

Yes, that's the plan.  We were originally hoping to get
this into 3.2 and I figured it was too late in the cycle
to be suggesting an (albeit pretty harmless) change to
the dcache code.

I will respin the patches.  I'll wait an hour or two
to see what Sage plans to do with these.

-Alex



Re: [PATCH 2/3] ceph: take a reference to the dentry in d_find_any_alias()

2011-12-29 Thread Christoph Hellwig
On Thu, Dec 29, 2011 at 08:34:38AM -0600, Alex Elder wrote:
> Yes, that's the plan.  We were originally hoping to get
> this into 3.2 and I figured it was too late in the cycle
> to be suggesting an (albeit pretty harmless) change to
> the dcache code.
> 
> I will respin the patches.  I'll wait an hour or two
> to see what Sage plans to do with these.

I don't think getting the export in should be a problem.  Cc Al to make
sure, but IMHO this is still easily doable for 3.2.  It's a much smaller
change than duplicating all that code, too.


[PATCH 0/2] Add Ceph integration with OCF-compliant HA resource managers

2011-12-29 Thread Florian Haas
Hi everyone,

please consider reviewing the following patches. These add
OCF-compliant cluster resource agent functionality to Ceph, allowing
MDS, OSD and MON to run as cluster resources under compliant managers
(such as Pacemaker, http://www.clusterlabs.org).
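
As a usage sketch only (the resource IDs here are made up, and the
ocf:ceph:osd, ocf:ceph:mds and ocf:ceph:mon agent names follow the provider
layout installed by patch 1/2), configuring them under Pacemaker's crm
shell would look roughly like:

  crm configure primitive p_ceph_mon ocf:ceph:mon op monitor interval=30s
  crm configure primitive p_ceph_osd ocf:ceph:osd op monitor interval=30s
  crm configure primitive p_ceph_mds ocf:ceph:mds op monitor interval=30s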

None of this is built or installed by default; you must enable it
with the "--with-ocf" configure flag.
conditional ("--with ocf") which rolls the resource agents into a
separate subpackage, "ceph-resource-agents".
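
Concretely, something like the following (the autogen/make steps and the
exact rpmbuild invocation are the usual ones and not part of this series):

  # autotools build with the resource agents enabled
  ./autogen.sh && ./configure --with-ocf && make

  # RPM build producing the ceph-resource-agents subpackage
  rpmbuild -ba ceph.spec --with ocf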

These patches require the tiny patch to the init script that I posted
here a few days ago.

Just in case you're interested, all the above changes (including the
init script patch) since commit e18b1c9734e88e3b779ba2d70cdd54f8fb94743d:

  rgw: removing swift user index when removing user (2011-12-28 17:00:19 -0800)

are also available in my GitHub repo at:
  git://github.com/fghaas/ceph ocf-ra

Florian Haas (3):
  init script: be LSB compliant for exit code on status
  Add OCF-compliant resource agent for Ceph daemons
  Spec: conditionally build ceph-resource-agents package

 ceph.spec.in|   22 ++
 configure.ac|8 ++
 src/Makefile.am |4 +-
 src/init-ceph.in|7 ++-
 src/ocf/Makefile.am |   23 +++
 src/ocf/ceph.in |  177 +++
 6 files changed, 238 insertions(+), 3 deletions(-)
 create mode 100644 src/ocf/Makefile.am
 create mode 100644 src/ocf/ceph.in

Hope this is useful. All feedback is much appreciated. Thanks!

Cheers,
Florian


[PATCH 2/2] Spec: conditionally build ceph-resource-agents package

2011-12-29 Thread Florian Haas
Put OCF resource agents in a separate subpackage,
to be enabled with a separate build conditional
(--with ocf).

Make the subpackage depend on the resource-agents
package, which provides the ocf-shellfuncs library
that the Ceph RAs use.

Signed-off-by: Florian Haas 
---
 ceph.spec.in |   22 ++
 1 files changed, 22 insertions(+), 0 deletions(-)

diff --git a/ceph.spec.in b/ceph.spec.in
index b0f3c3a..3950fd1 100644
--- a/ceph.spec.in
+++ b/ceph.spec.in
@@ -1,5 +1,6 @@
 %define with_gtk2 %{?_with_gtk2: 1} %{!?_with_gtk2: 0}
 
+%bcond_with ocf
 # it seems there is no usable tcmalloc rpm for x86_64; parts of
 # google-perftools don't compile on x86_64, and apparently the
 # decision was to not build the package at all, even if tcmalloc
@@ -130,6 +131,19 @@ gcephtool is a graphical monitor for the clusters running the Ceph distributed
 file system.
 %endif
 
+%if %{with ocf}
+%package resource-agents
+Summary:   OCF-compliant resource agents for Ceph daemons
+Group: System Environment/Base
+License:   LGPLv2
+Requires:  %{name} = %{version}
+Requires:  resource-agents
+%description resource-agents
+Resource agents for monitoring and managing Ceph daemons
+under Open Cluster Framework (OCF) compliant resource
+managers such as Pacemaker.
+%endif
+
 %package -n librados2
 Summary:   RADOS distributed object store client library
 Group: System Environment/Libraries
@@ -211,6 +225,7 @@ MY_CONF_OPT="$MY_CONF_OPT --without-gtk2"
--docdir=%{_docdir}/ceph \
--without-hadoop \
$MY_CONF_OPT \
+   %{?_with_ocf} \
%{?with_tcmalloc:--with-tcmalloc} %{!?with_tcmalloc:--without-tcmalloc}
 
 # fix bug in specific version of libedit-devel
@@ -415,6 +430,13 @@ fi
 %endif
 
 
#
+%if %{with ocf}
+%files resource-agents
+%defattr(0755,root,root,-)
+/usr/lib/ocf/resource.d/%{name}/*
+%endif
+
+#
 %files -n librados2
 %defattr(-,root,root,-)
 %{_libdir}/librados.so.*
-- 
1.7.5.4



[PATCH 1/2] Add OCF-compliant resource agent for Ceph daemons

2011-12-29 Thread Florian Haas
Add a wrapper around the ceph init script that makes
MDS, OSD and MON configurable as Open Cluster Framework
(OCF) compliant cluster resources. Allows Ceph
daemons to tie in with cluster resource managers that
support OCF, such as Pacemaker (http://www.clusterlabs.org).

Disabled by default, configure --with-ocf to enable.

Signed-off-by: Florian Haas 
---
 configure.ac|8 ++
 src/Makefile.am |4 +-
 src/ocf/Makefile.am |   23 +++
 src/ocf/ceph.in |  177 +++
 4 files changed, 210 insertions(+), 2 deletions(-)
 create mode 100644 src/ocf/Makefile.am
 create mode 100644 src/ocf/ceph.in

diff --git a/configure.ac b/configure.ac
index 60f998c..e334a24 100644
--- a/configure.ac
+++ b/configure.ac
@@ -277,6 +277,12 @@ AM_CONDITIONAL(WITH_LIBATOMIC, [test "$HAVE_ATOMIC_OPS" = "1"])
 #[],
 #[with_newsyn=no])
 
+AC_ARG_WITH([ocf],
+[AS_HELP_STRING([--with-ocf], [build OCF-compliant cluster resource agent])],
+,
+[with_ocf=no])
+AM_CONDITIONAL(WITH_OCF, [ test "$with_ocf" = "yes" ])
+
 # Checks for header files.
 AC_HEADER_DIRENT
 AC_HEADER_STDC
@@ -375,6 +381,8 @@ AM_PATH_PYTHON([2.4],
 AC_CONFIG_HEADERS([src/acconfig.h])
 AC_CONFIG_FILES([Makefile
src/Makefile
+   src/ocf/Makefile
+   src/ocf/ceph
man/Makefile
ceph.spec])
 AC_OUTPUT
diff --git a/src/Makefile.am b/src/Makefile.am
index 748425e..8026e17 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -1,6 +1,6 @@
 AUTOMAKE_OPTIONS = gnu
-SUBDIRS =
-DIST_SUBDIRS = gtest
+SUBDIRS = ocf
+DIST_SUBDIRS = gtest ocf
 CLEANFILES =
 bin_PROGRAMS =
 # like bin_PROGRAMS, but these targets are only built for debug builds
diff --git a/src/ocf/Makefile.am b/src/ocf/Makefile.am
new file mode 100644
index 000..9be40ec
--- /dev/null
+++ b/src/ocf/Makefile.am
@@ -0,0 +1,23 @@
+EXTRA_DIST = ceph.in Makefile.in
+
+if WITH_OCF
+# The root of the OCF resource agent hierarchy
+# Per the OCF standard, it's always "lib",
+# not "lib64" (even on 64-bit platforms).
+ocfdir = $(prefix)/lib/ocf
+
+# The ceph provider directory
+radir = $(ocfdir)/resource.d/$(PACKAGE_NAME)
+
+ra_SCRIPTS = ceph
+
+install-data-hook:
+   $(LN_S) ceph $(DESTDIR)$(radir)/osd
+   $(LN_S) ceph $(DESTDIR)$(radir)/mds
+   $(LN_S) ceph $(DESTDIR)$(radir)/mon
+
+uninstall-hook:
+   rm -f $(DESTDIR)$(radir)/osd
+   rm -f $(DESTDIR)$(radir)/mds
+   rm -f $(DESTDIR)$(radir)/mon
+endif
diff --git a/src/ocf/ceph.in b/src/ocf/ceph.in
new file mode 100644
index 000..9db1bc9
--- /dev/null
+++ b/src/ocf/ceph.in
@@ -0,0 +1,177 @@
+#!/bin/sh
+
+# Initialization:
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+# Convenience variables
+# When sysconfdir isn't passed in as a configure flag,
+# it's defined in terms of prefix
+prefix=@prefix@
+CEPH_INIT=@sysconfdir@/init.d/ceph
+
+ceph_meta_data() {
+local longdesc
+local shortdesc
+case $__SCRIPT_NAME in
+   "osd")
+   longdesc="Wraps the ceph init script to provide an OCF resource agent that manages and monitors the Ceph OSD service."
+   shortdesc="Manages a Ceph OSD instance."
+   ;;
+   "mds")
+   longdesc="Wraps the ceph init script to provide an OCF resource agent that manages and monitors the Ceph MDS service."
+   shortdesc="Manages a Ceph MDS instance."
+   ;;
+   "mon")
+   longdesc="Wraps the ceph init script to provide an OCF resource agent that manages and monitors the Ceph MON service."
+   shortdesc="Manages a Ceph MON instance."
+   ;;
+esac
+
+cat <<EOF
+<?xml version="1.0"?>
+<resource-agent name="${__SCRIPT_NAME}">
+  <version>0.1</version>
+  <longdesc lang="en">${longdesc}</longdesc>
+  <shortdesc lang="en">${shortdesc}</shortdesc>
+  <parameters/>
+  <actions>
+    <action name="start" timeout="20" />
+    <action name="stop" timeout="20" />
+    <action name="monitor" timeout="20" interval="10" />
+    <action name="validate-all" timeout="20" />
+    <action name="meta-data" timeout="5" />
+  </actions>
+</resource-agent>
+EOF
+}
+
+ceph_action() {
+local init_action
+init_action="$1"
+
+case ${__SCRIPT_NAME} in
+   osd|mds|mon)
+   ocf_run $CEPH_INIT $init_action ${__SCRIPT_NAME}
+   ;;
+   *)
+   ocf_run $CEPH_INIT $init_action
+   ;;
+esac
+}
+
+ceph_validate_all() {
+# Do we have the ceph init script?
+check_binary @sysconfdir@/init.d/ceph
+
+# Do we have a configuration file?
+[ -e @sysconfdir@/ceph/ceph.conf ] || exit $OCF_ERR_INSTALLED
+}
+
+ceph_monitor() {
+local rc
+
+ceph_action status
+
+# 0: running, and fully caught up with master
+# 3: gracefully stopped
+# any other: error
+case "$?" in
+0)
+rc=$OCF_SUCCESS
+ocf_log debug "Resource is running"
+;;
+3)
+rc=$OCF_NOT_RUNNING
+ocf_log debug "Resource is not running"
+;;
+*)
+ocf_log err "Resource has failed"
+rc=$OCF_ERR_GENERIC
+esac
+
+return $rc
+}
+
+ceph_start() {
+# if resource is already running, bail out early
+if ceph_monitor; then
+ocf_log info "Resource is already running"
+return $OCF_SUCCESS

[GIT PULL] Ceph fixes for 3.2 final

2011-12-29 Thread Sage Weil
Hi Linus,

Please pull this Ceph patch from

  git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus

There are a few lingering issues in Ceph's use of the dcache for readdir 
that we weren't able to nail down, so this patch disables it for now.  
There will be a series of patches for the next release that reenable this 
(via a mount option) and clean things up a bit.

Thanks!
sage


Sage Weil (1):
  ceph: disable use of dcache for readdir etc.

 fs/ceph/dir.c |   29 +++--
 1 files changed, 3 insertions(+), 26 deletions(-)