Re: [GIT PULL] Ceph updates and fixes for 3.13

2013-11-24 Thread Dave (Bob)
I have just tried ceph 0.72.1 and kernel 3.13.0-rc1. There seems to be a problem with ceph file system access from this kernel. I mount a ceph file system running on another machine, and that seems to go OK. I create a directory on that mount; that seems to go OK. I call 'ls -l' on that mount and all looks good.
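For context, the failing sequence can be sketched as a few shell commands. This is a reproduction sketch only: the monitor address, mount point, and secret file path are hypothetical, not taken from the original report.

```shell
# Mount a CephFS exported by a monitor on another machine
# (10.0.0.1, /mnt/ceph and the secretfile path are placeholders):
sudo mkdir -p /mnt/ceph
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/ceph \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# These steps appeared to succeed in the report:
mkdir /mnt/ceph/testdir
ls -l /mnt/ceph
```

The report cuts off before describing the actual failure, so the commands above only cover the part that "seems to go OK".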

Re: Mourning the demise of mkcephfs

2013-11-14 Thread Dave (Bob)
http://eu.ceph.com/docs/master/ is the location of the documentation that I remember, but have not been able to find recently. Thank you Wido. David

Re: Mourning the demise of mkcephfs

2013-11-14 Thread Dave (Bob)
think that this day is now very close. Very warm regards, David On 13/11/2013 03:54, Mark Kirkwood wrote: > On 13/11/13 16:33, Mark Kirkwood wrote: >> On 13/11/13 04:53, Alfredo Deza wrote: >>> On Mon, Nov 11, 2013 at 12:51 PM, Dave (Bob) >>> wrote: …

Out-of-tree build of ceph

2013-11-14 Thread Dave (Bob)
This is a very minor point, but this list is very responsive and helpful, and I am trying to be helpful myself. I find that an out-of-tree build of ceph fails because 'ceph_ver.c' can't be found in the build tree. I simply 'cp ceph-0.72/src/ceph_ver.c build/src' to work around the problem for my …
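A minimal sketch of that workaround, assuming an autotools-style out-of-tree build of the ceph 0.72 source tarball; the directory layout here is illustrative, matching the 'cp' command quoted in the message.

```shell
# Configure ceph in a separate build directory, next to the source tree:
mkdir build && cd build
../ceph-0.72/configure

# The build fails because ceph_ver.c is expected under the build tree's
# src/ directory; copying it in by hand works around the problem:
cp ../ceph-0.72/src/ceph_ver.c src/

make
```

The proper fix would be in the build system itself (so the file is found in the source directory), but the manual copy is a reasonable stopgap.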

Re: Mourning the demise of mkcephfs

2013-11-12 Thread Dave (Bob)
… \ > + || :" > fi > fi > On Tue, Nov 12, 2013 at 3:22 PM, Wido den Hollander wrote: >> On 11/11/2013 06:51 PM, Dave (Bob) wrote: >>> The utility mkcephfs seemed to work, it was very simple to use and >>> apparently effective. >>> …

Re: Mourning the demise of mkcephfs

2013-11-12 Thread Dave (Bob)
On 12/11/2013 07:22, Wido den Hollander wrote: > On 11/11/2013 06:51 PM, Dave (Bob) wrote: >> The utility mkcephfs seemed to work, it was very simple to use and >> apparently effective. >> >> It has been deprecated in favour of something called ceph-deploy, which …

Mourning the demise of mkcephfs

2013-11-11 Thread Dave (Bob)
The utility mkcephfs seemed to work, it was very simple to use and apparently effective. It has been deprecated in favour of something called ceph-deploy, which does not work for me. I've ignored the deprecation messages until now, but in going from 70 to 72 I find that mkcephfs has finally gone.
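For readers hitting the same transition: a rough sketch of the ceph-deploy workflow that replaced mkcephfs around this release. Hostnames and device paths are placeholders, and the exact subcommands varied between ceph-deploy versions, so treat this as an outline rather than a recipe.

```shell
# Run from an admin node with SSH access to the target hosts:
ceph-deploy new mon1                 # generate an initial ceph.conf and monitor keyring
ceph-deploy install mon1 osd1 osd2   # install ceph packages on the cluster hosts
ceph-deploy mon create-initial       # create and start the initial monitor(s)

# Prepare and activate an OSD on each storage host
# (osd1:/dev/sdb etc. are hypothetical host:disk pairs):
ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb
```

mkcephfs did roughly the same job from a single static ceph.conf on one host, which is why its removal in the 0.70→0.72 transition was felt so sharply by single-machine users.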

waiting for 1 open ops to drain

2013-03-20 Thread Dave (Bob)
I am using ceph 0.58 and kernel 3.9-rc2 and btrfs on my osds. I have an osd that starts up but blocks with the log message 'waiting for 1 open ops to drain'. This never happens, and I can't get the osd 'up'. I need to clear this problem. I have recently had an osd go problematic and I have recreated …

Re: Can't start ceph mon

2012-11-20 Thread Dave (Bob)
>Do you have other monitors in working order? The easiest way to handle >it if that's the case is just to remove this monitor from the cluster >and add it back in as a new monitor with a fresh store. If not we can >look into reconstructing it. >-Greg >Also, if you still have it, could you zip up …

Re: Ceph Bug #2563

2012-10-12 Thread Dave (Bob)
… our > machines. > > John, can you add a warning to whatever install/configuration/whatever > docs are appropriate? > -Greg > > On Tue, Oct 9, 2012 at 12:50 PM, Dave (Bob) wrote: >> Greg, >> >> Thank you very much for your prompt reply. >> >> Yes, I am …

Ceph Bug #2563

2012-10-09 Thread Dave (Bob)
I have a problem with this leveldb corruption issue. My logs show the same failure as is shown in Ceph's redmine as bug #2563. I am using linux-3.6.0 (x86_64) and ceph-0.52. I am using btrfs on my 4 OSDs. Each osd is using a partition on a disk drive; there are 4 disk drives, all on the same machine …