Re: Braindump: multiple clusters on the same hardware

2012-10-18 Thread Jimmy Tang
Hi All, given that there is the capability of running two clusters on the same hosts (monitors), are there plans to add cluster-to-cluster features? E.g. would it be possible to mount two separate Ceph clusters from one host? Jimmy. On 17 Oct 2012, at 20:35, Tommi Virtanen wrote: You can
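
A rough illustration of what that could look like, assuming the per-cluster conf and naming support discussed in the braindump (the cluster name "backup", monitor hostnames, and mount points are all invented here, and cephx auth options are omitted):

    # Each cluster reads its own conf: /etc/ceph/ceph.conf vs. /etc/ceph/backup.conf
    ceph -s                      # status of the default "ceph" cluster
    ceph --cluster backup -s     # status of the second cluster

    # Kernel CephFS mounts, one per cluster, keyed by monitor address:
    mount -t ceph mon1.example.com:6789:/ /mnt/ceph -o name=admin
    mount -t ceph mon2.example.com:6789:/ /mnt/backup -o name=admin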

Re: rbd map error with new rbd format

2012-10-18 Thread Eric_YH_Chen
Hi, Josh: Yeah, format 2 and layering support is in progress for kernel rbd, but not ready yet. The userspace side is all ready in the master branch, but it takes more time to implement in the kernel. Btw, instead of --new-format you should use --format 2. It's in the man page in the master
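
A quick illustration of the userspace side that is already in master, per the above (the pool/image names and 1G size are made up for this example):

    # Create a format 2 image, which is what enables layering:
    rbd create --format 2 --size 1024 mypool/myimage
    # Mapping it with the kernel client will not work until the kernel
    # side catches up; format 1 images map fine in the meantime:
    rbd map mypool/myimage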

Re: Braindump: multiple clusters on the same hardware

2012-10-18 Thread Jimmy Tang
What I actually meant to ask was, is it possible to copy objects or pools from one ceph cluster to another (for disaster recovery reasons) and if this feature is planned or even considered? Jimmy.
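
In the meantime, a crude manual copy between clusters is possible by piping an export into an import; an untested sketch (the conf paths, pool, and image names are invented, and exact stdin/stdout support in rbd export/import may vary by version):

    # Stream an rbd image from cluster A straight into cluster B:
    rbd -c /etc/ceph/a.conf export mypool/myimage - | \
        rbd -c /etc/ceph/b.conf import - mypool/myimage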

Re: Help...MDS Continuously Segfaulting

2012-10-18 Thread Nick Couchman
Hopefully this is what you're looking for... (gdb) bt #0 ESession::replay (this=0x7fffcc49a7c0, mds=0x127d5f0) at mds/journal.cc:828 #1 0x006a2446 in MDLog::_replay_thread (this=0x1281390) at mds/MDLog.cc:580 #2 0x004cf5ed in MDLog::ReplayThread::entry (this=<optimized out>) at

Re: Help...MDS Continuously Segfaulting

2012-10-18 Thread Gregory Farnum
Yep, thanks! I'll have to go through and see if I can figure out what's going on there. On Thu, Oct 18, 2012 at 8:56 AM, Nick Couchman nick.couch...@seakr.com wrote: Hopefully this is what you're looking for... (gdb) bt #0 ESession::replay (this=0x7fffcc49a7c0, mds=0x127d5f0) at

Re: Braindump: multiple clusters on the same hardware

2012-10-18 Thread Tommi Virtanen
On Thu, Oct 18, 2012 at 7:40 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote: What I actually meant to ask was, is it possible to copy objects or pools from one ceph cluster to another (for disaster recovery reasons) and if this feature is planned or even considered? That's the async replication for

Braindump/announce: ceph-deploy

2012-10-18 Thread Tommi Virtanen
Hi. We've been working on the Chef cookbook and Crowbar barclamp for Ceph for a while now. At the same time, Clint Byrum and James Page have been working on the Juju Charm, and I've seen at least two separate efforts for Puppet scripts. All this time, I've repeatedly gotten one item of feedback,
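
To give a flavor of the intended workflow, a sketch of what a minimal ceph-deploy run could look like (the hostnames and device paths are invented, and the exact subcommand names may differ from what ships in this first cut):

    ceph-deploy new mon1                        # write an initial ceph.conf
    ceph-deploy install mon1 osd1 osd2          # push packages to the hosts
    ceph-deploy mon create mon1                 # bring up the monitor
    ceph-deploy gatherkeys mon1                 # fetch the bootstrap keys
    ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb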

Re: Help...MDS Continuously Segfaulting

2012-10-18 Thread Gregory Farnum
Okay, looked at this a little bit. Can you describe what was happening before you got into this failed-replay loop? (So, why was it in replay at all?) I see that the monitor marked it as laggy for some reason; was the cluster under load, did the monitors break, or was it something else? I can see why it's

Re: Unable to build Ceph from source code

2012-10-18 Thread Dan Mick
On 10/17/2012 09:58 PM, hemant surale wrote: Hi Community, I have tried to build Ceph from source code, i.e. the v0.48 tarball. After following all the steps given on the official site, when I execute service ceph start it gives the following error: Error
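
For reference, the classic autotools sequence for a v0.48 tarball looks roughly like this (build dependencies are distro-specific and omitted, and mkcephfs needs a ceph.conf describing your daemons first):

    tar xzf ceph-0.48.tar.gz && cd ceph-0.48
    ./autogen.sh      # only needed for a git checkout; tarballs ship configure
    ./configure
    make
    sudo make install
    # initialize the cluster described in /etc/ceph/ceph.conf, then start it:
    sudo mkcephfs -a -c /etc/ceph/ceph.conf
    sudo service ceph -a start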

Re: v0.53 released

2012-10-18 Thread Josh Durgin
On 10/17/2012 04:26 AM, Oliver Francke wrote: Hi Sage, *, after having some trouble with the journals - had to erase the partition and redo a ceph... --mkjournal - I started my testing... Everything fine. This would be due to the change in default osd journal size. In 0.53 it's 1024MB, even
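
For anyone hitting the same journal-size change, the usual recovery is to set the size explicitly and recreate the journal; a sketch (osd.0 and the [osd] section placement are assumptions on my part):

    # in ceph.conf:
    #   [osd]
    #   osd journal size = 1024    ; in MB
    service ceph stop osd.0
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0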