From: "Yan, Zheng"
When checking whether an inode's SnapRealm differs from the readdir
SnapRealm, we should use find_snaprealm() to get the inode's SnapRealm.
Without this fix, I got lots of "ceph_add_cap: couldn't find snap
realm 100" errors from the kernel client.
Signed-off-by: Yan, Zheng
---
src/mds/CInode.cc |
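To illustrate the idea with a toy model in C (hypothetical code, not the
actual CInode.cc diff; the real SnapRealm handling is far more involved):
an inode only carries a realm of its own when a realm is rooted on it, so
the containing realm has to be found by walking up the ancestry.

#include <stdio.h>

/* Toy model: the effective realm of an inode is inherited from the
 * nearest ancestor that has a realm rooted on it. */
struct snaprealm { int id; };
struct inode {
    struct inode *parent;
    struct snaprealm *own_realm;  /* non-NULL only if a realm is rooted here */
};

/* Rough equivalent of find_snaprealm(): walk up to the containing realm. */
static struct snaprealm *find_snaprealm(struct inode *in)
{
    for (; in; in = in->parent)
        if (in->own_realm)
            return in->own_realm;
    return NULL;
}

int main(void)
{
    struct snaprealm realm100 = { 100 };
    struct inode dir = { NULL, &realm100 };   /* realm rooted on the dir */
    struct inode file = { &dir, NULL };       /* plain inode inside it */

    /* Buggy comparison: uses the realm rooted on the inode itself, which
     * is usually NULL, so plain inodes look like realm mismatches. */
    int wrong = (file.own_realm != dir.own_realm);
    /* Fixed comparison: uses the realm that actually contains the inode. */
    int right = (find_snaprealm(&file) != find_snaprealm(&dir));

    printf("wrong=%d right=%d\n", wrong, right);  /* prints wrong=1 right=0 */
    return 0;
}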
Dear Josh and Travis:
I am trying to set up the openstack+ceph environment too, but I am not using
devstack.
I deployed glance, cinder, nova, and keystone onto different servers.
All the basic functions work fine: I can import images, create volumes, and
create virtual machines.
It seems the glance and
I have also tried the open bug which is pending. ceph-osd
starts up with a zfs volume, but some time after the ceph service is up the
OSDs stop working. I have been working with releases from
ceph-0.30 through the latest 0.54 to check zfs compatibility.
Kindly let me know if this
Hi Sage,
Thanks for replying back. Once a zpool is created, if I mount it on
/var/lib/ceph/osd/ceph-0, cephfs doesn't recognize it as a superblock
and hence it fails. I'm trying to build this on our cloud storage, since
btrfs has not been stable nor has it come up with online dedup. I have
n
I understood. Thank you.
-SunJie
2012/10/25 Gregory Farnum :
> Sorry, I was unclear — I meant I think[1] it was fixed in our linux branch,
> for future kernel releases. The messages you're seeing are just logging a
> perfectly normal event that's part of the Ceph protocol.
> -Greg
> [1]: I'd have
Based on the feedback I received, I changed this patch to use the
username in the remote object.
I've also updated the commit comment to reference a user and not an owner.
The pull request is in teuthology branch wip-buck if someone could take
another look at it.
Best,
-Joe Buck
On 10/24/201
On 25/10/12 17:55, Mark Nelson wrote:
On Wed, Oct 24, 2012 at 10:58 PM, Dan Mick wrote:
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42
The other alternative is to just set the pool(s) replication size to 1,
if you just want a single osd for (say) testing:
$ ceph osd pool set <poolname> size 1
On 10/25/2012 04:28 PM, Dan Mick wrote:
static void ceph_fault(struct ceph_connection *con)
        __releases(con->mutex)
{
        pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
               ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg);
Perhaps this should become pr_info() or something. Sage?
Yea
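For reference, the proposed change would keep the same message at a lower
log level, along these lines (a sketch, not a tested patch):

        pr_info("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
                ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg);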
Hi All,
I have 8 servers available to test ceph on which are a bit
over-powered/under-disked, and I'm trying to develop a plan for how to lay
out services and how to populate the available disk slots.
The hardware is dual socket Intel E5640 chips (8 core total/ node) with 48G
RAM, dual 10G etherne
On Thu, 25 Oct 2012, Noah Watkins wrote:
> I pushed out wip-client-unmount for review; please take a look. One question: would
> we want a new messenger nonce for remounts, as opposed to messenger
> nonce per context?
This looks good to me. The one change I'd make is to document
ceph_shutdown() as deprecated
I pushed out wip-client-unmount for review; please take a look. One question: would
we want a new messenger nonce for remounts, as opposed to messenger
nonce per context?
- Noah
On Thu, Oct 25, 2012 at 9:23 AM, Sage Weil wrote:
> On Thu, 25 Oct 2012, Noah Watkins wrote:
>> I was just taking a look at this for
On Thu, Oct 25, 2012 at 1:05 PM, Cláudio Martins wrote:
>
> Hello,
>
> The text at
>
> http://ceph.com/docs/master/cluster-ops/pools/
>
> appears to have a slight inconsistency. At the top it says
>
> "Replicas: You can set the desired number of copies/replicas of an object. A
> typical config
Thanks for the pointers Josh.
Stupidly, I had not looked at those docs. I forgot all about them
since they didn't use to be there. I was only using OpenStack docs
and not the Ceph ones. Looks like they are filled with great
information. You answered all my questions! Thanks again.
- Travis
On 10/25/2012 09:27 AM, Travis Rhoden wrote:
Josh,
Do you mind if I ask you a few follow-up questions? I can ask on the
OpenStack ML if needed, but I think you are the most knowledgeable
person for these...
I don't mind. ceph-devel is fine for these ceph-related questions.
1. To get "effici
Josh,
Do you mind if I ask you a few follow-up questions? I can ask on the
OpenStack ML if needed, but I think you are the most knowledgeable
person for these...
1. To get "efficient volumes from images" (i.e. volumes that are a COW
copy of the image), do the images and volumes need to live in t
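For context, here is what such a COW volume looks like at the librbd C
level, as a minimal sketch (error handling trimmed; the pool names
"images" and "volumes" and the image/snapshot names are made up, and the
parent snapshot must already be protected):

#include <rados/librados.h>
#include <rbd/librbd.h>

/* Clone a protected format-2 image snapshot into a copy-on-write volume. */
int make_cow_volume(rados_t cluster)
{
    rados_ioctx_t images, volumes;
    rados_ioctx_create(cluster, "images", &images);
    rados_ioctx_create(cluster, "volumes", &volumes);

    int order = 0;  /* 0 = default object size */
    int r = rbd_clone(images, "img1", "snap1", volumes, "vol1",
                      RBD_FEATURE_LAYERING, &order);

    rados_ioctx_destroy(images);
    rados_ioctx_destroy(volumes);
    return r;  /* on success the clone shares unmodified data with the parent */
}

Note that rbd_clone() takes separate ioctxs for parent and child, so the
image and the volume need not live in the same pool.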
On Thu, 25 Oct 2012, Noah Watkins wrote:
> I was just taking a look at this for libcephfs. Does it make sense to
> have ceph_release free the CephContext, and ceph_unmount everything
> else: the client, messenger, and mon client?
Yeah... _create and _release should create/release the ceph_mount_info an
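Sketched from the caller's side, assuming the split lands with these names
(ceph_release() being the new piece under discussion):

#include <cephfs/libcephfs.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    if (ceph_create(&cmount, NULL) < 0)   /* allocate the handle + CephContext */
        return 1;
    if (ceph_mount(cmount, "/") < 0) {    /* start client, messenger, mon client */
        ceph_release(cmount);             /* nothing mounted; just free the context */
        return 1;
    }
    /* ... use the filesystem ... */
    ceph_unmount(cmount);                 /* tear down client, messenger, mon client */
    ceph_release(cmount);                 /* free the CephContext and the handle */
    return 0;
}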
I was just taking a look at this for libcephfs. Does it make sense to
have ceph_release free the CephContext, and ceph_unmount everything
else: the client, messenger, and mon client?
- Noah
On Wed, Oct 24, 2012 at 4:32 PM, Noah Watkins wrote:
> On the Java, side being able to do an unmount to free re
Awesome, thanks Josh. I misspoke -- my client was 0.48.1. Glad
upgrading to 0.48.2 will do the trick! Thanks again.
On Thu, Oct 25, 2012 at 11:42 AM, Josh Durgin wrote:
> On 2012-10-25 08:22, Travis Rhoden wrote:
>>
>> I've been trying to take advantage of the code additions made by Josh
>> Dur
On Thu, 25 Oct 2012, Alex Elder wrote:
> On 10/24/2012 06:20 PM, Sage Weil wrote:
> > The ceph_con_in_msg_alloc() method drops con->mutex while it allocates a
> > message. If that races with a timeout that resends a zillion messages and
> > resets the connection, and the ->alloc_msg() method return
On 2012-10-25 08:22, Travis Rhoden wrote:
I've been trying to take advantage of the code additions made by Josh
Durgin to OpenStack Folsom for combining boot-from-volume and Ceph
RBD. First off, nice work Josh! I'm hoping you folks can help me
out
with something strange I am seeing. The que
On Thu, 25 Oct 2012, Alex Elder wrote:
> On 10/25/2012 07:15 AM, Gregory Farnum wrote:
> > Sorry, I was unclear — I meant I think[1] it was fixed in our linux
> > branch, for future kernel releases. The messages you're seeing are
> > just logging a perfectly normal event that's part of the Ceph
> >
[moved to ceph-devel]
On Thu, 25 Oct 2012, Raghunandhan wrote:
> Hi All,
>
> I have been working with ceph for quite a long time and trying to stitch zfs with
> ceph. I was able to do it to certain extent as follows:
> 1. zpool creation
> 2. set dedup
> 3. create a mountable volume of zfs (zfs create)
>
On Thu, 25 Oct 2012, Roman Alekseev wrote:
> Hi,
>
> I've a simple installation of ceph on a Debian server with the
> following configuration:
>
> [global]
> debug ms = 0
> [osd]
> osd journal size = 1000
> filestore xattr use omap = true
>
> [mon.a]
>
>
On 10/25/2012 07:15 AM, Gregory Farnum wrote:
> Sorry, I was unclear — I meant I think[1] it was fixed in our linux
> branch, for future kernel releases. The messages you're seeing are
> just logging a perfectly normal event that's part of the Ceph
> protocol. -Greg [1]: I'd have to check to make s
On 10/24/2012 06:20 PM, Sage Weil wrote:
> The ceph_con_in_msg_alloc() method drops con->mutex while it allocates a
> message. If that races with a timeout that resends a zillion messages and
> resets the connection, and the ->alloc_msg() method returns a NULL message,
> it will call ceph_msg_put(N
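The shape of the race, as a toy C sketch (hypothetical, not the kernel
messenger code; `generation` stands in for the connection state that can
change while the mutex is dropped):

#include <pthread.h>
#include <stdlib.h>

struct conn {
    pthread_mutex_t mutex;
    int generation;   /* bumped whenever the connection is reset */
    void *in_msg;
};

static void *alloc_incoming(struct conn *con, size_t len)
{
    pthread_mutex_lock(&con->mutex);
    int gen = con->generation;
    pthread_mutex_unlock(&con->mutex);   /* lock dropped for the allocation */

    void *msg = malloc(len);             /* a reset can sneak in here */

    pthread_mutex_lock(&con->mutex);
    if (con->generation != gen) {
        /* connection was reset while unlocked: discard the message
         * instead of attaching it to state that no longer exists */
        free(msg);
        msg = NULL;
    } else {
        con->in_msg = msg;
    }
    pthread_mutex_unlock(&con->mutex);
    return msg;
}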
Hi all,
In looking at the design of a storage brick (just OSDs), I have found a dual
power hardware solution that allows for 10 hot-swap drives and has a
motherboard with 2 SATA III 6G ports (for the SSDs) and 8 SATA II 3G (for
physical drives). No RAID card. This seems a good match to me given m
From: "Yan, Zheng"
We should allow Locker::try_eval(MDSCacheObject *, int) to evaluate
locks in replica objects. Otherwise the locks in replica objects
may get stuck in unstable states forever.
Signed-off-by: Yan, Zheng
---
src/mds/Locker.cc | 15 ++-
1 file changed, 6 insertions(+), 9
From: "Yan, Zheng"
Stray dir inodes are no longer base inodes; they are in the mdsdir,
and the mdsdir is the base inode.
Signed-off-by: Yan, Zheng
---
src/mds/MDCache.cc | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/src/mds/MDCache.cc b/src/mds/MDCac
From: "Yan, Zheng"
Commit f8110c (Allow export subtrees in other MDS' stray directory)
makes the "directory in stray" check always return false. This is
because the directory in question is a grandchild of the mdsdir.
Signed-off-by: Yan, Zheng
---
src/mds/Migrator.cc | 4 ++--
1 file changed, 2 inse
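A toy sketch of the breakage (hypothetical, not the actual Migrator code;
it leans on the previous patch's note that strays are no longer base
inodes and the mdsdir now is):

/* Toy model (hypothetical): strays used to be base inodes, so "is my
 * parent a base inode?" identified a dir inside a stray dir.  Now the
 * base inode is the mdsdir and strays are its children, so the dir in
 * question is a grandchild and the old test never fires. */
struct inode { struct inode *parent; int is_base; };

static int in_stray_dir_old(const struct inode *in)
{
    return in->parent && in->parent->is_base;      /* now always false */
}

static int in_stray_dir_new(const struct inode *in)
{
    return in->parent && in->parent->parent &&
           in->parent->parent->is_base;            /* grandparent = mdsdir */
}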
From: "Yan, Zheng"
The stray migration/reintegration generates a source path that will
be rooted in a (possibly remote) MDS's MDSDIR; adjust the check in
handle_client_rename()
Signed-off-by: Yan, Zheng
---
src/leveldb | 2 +-
src/mds/Server.cc | 2 +-
2 files changed, 2 insertions(+), 2
Sorry, I was unclear — I meant I think[1] it was fixed in our linux branch, for
future kernel releases. The messages you're seeing are just logging a perfectly
normal event that's part of the Ceph protocol.
-Greg
[1]: I'd have to check to make sure. Sage, Alex, am I remembering that
correctly?
On 10/25/2012 08:25 AM, jie sun wrote:
I am about to support a service where a VM could mount a new device dynamically.
I'm afraid that sometimes the VM crashes (not totally crashed, but it just
can't be connected to, or the host can't be connected to so that I
can't remove the VM at all) but the block d