I have just tried ceph 0.72.1 and kernel 3.13.0-rc1.
There seems to be a problem with ceph file system access from this kernel.
I mount a ceph filesystem served from another machine, and that seems to go OK.
I create a directory on that mount, and that seems to go OK.
I call 'ls -l' on that mount and all looks good.
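For reference, the sequence I am testing is roughly the following (the monitor
address, secret file and mount point are placeholders, not my actual values):

  # mount CephFS with the kernel client on the 3.13.0-rc1 machine
  mount -t ceph 192.168.0.10:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
  mkdir /mnt/ceph/testdir
  ls -l /mnt/ceph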
http://eu.ceph.com/docs/master/
is the location of the documentation that I remember, but I have not been
able to find it recently.
Thank you Wido.
David
think that this day is now very close.
Very warm regards,
David
On 13/11/2013 03:54, Mark Kirkwood wrote:
> On 13/11/13 16:33, Mark Kirkwood wrote:
>> On 13/11/13 04:53, Alfredo Deza wrote:
>>> On Mon, Nov 11, 2013 at 12:51 PM, Dave (Bob)
>>> wrote:
This is a very minor point, but this list is very responsive and
helpful, and I am trying to be helpful myself.
I find that an out-of-tree build of ceph fails because 'ceph_ver.c'
can't be found in the build tree.
I simply 'cp ceph-0.72/src/ceph_ver.c build/src' to work around the
problem for my
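For anyone who wants to reproduce it, the out-of-tree build I am attempting
looks roughly like this (directory names are illustrative, not my exact layout):

  # configure and build in a directory separate from the source tree
  mkdir build && cd build
  ../ceph-0.72/configure
  # workaround: ceph_ver.c is not found in the build tree, so copy it in by hand
  cp ../ceph-0.72/src/ceph_ver.c src/
  make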
> On Tue, Nov 12, 2013 at 3:22 PM, Wido den Hollander wrote:
>> On 11/11/2013 06:51 PM, Dave (Bob) wrote:
>>> The utility mkcephfs seemed to work, it was very simple to use and
>>> apparently effective.
>>>
On 12/11/2013 07:22, Wido den Hollander wrote:
> On 11/11/2013 06:51 PM, Dave (Bob) wrote:
>> The utility mkcephfs seemed to work, it was very simple to use and
>> apparently effective.
>>
>> It has been deprecated in favour of something called ceph-deploy, which
>> does not work for me.
The utility mkcephfs seemed to work, it was very simple to use and
apparently effective.
It has been deprecated in favour of something called ceph-deploy, which
does not work for me.
I've ignored the deprecation messages until now, but in going from 0.70 to
0.72 I find that mkcephfs has finally gone.
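For context, what I am trying with ceph-deploy is essentially the documented
quick-start sequence, something like this (the hostname and disk are
placeholders for my real ones):

  ceph-deploy new node1
  ceph-deploy install node1
  ceph-deploy mon create-initial
  ceph-deploy osd create node1:sdb
  ceph-deploy admin node1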
I am using ceph 0.58 and kernel 3.9-rc2 and btrfs on my osds.
I have an osd that starts up but blocks with the log message 'waiting
for 1 open ops to drain'.
The drain never completes, and I can't get the osd 'up'.
I need to clear this problem. I have recently had an osd go problematic
and I have recre
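In case it helps to suggest a next step, this is roughly what I am looking at
(the osd id and paths are from my layout, and I am not certain the admin
socket command below is available on 0.58):

  ceph -s
  ceph osd tree
  # the osd log is where 'waiting for 1 open ops to drain' keeps repeating
  tail -n 50 /var/log/ceph/ceph-osd.0.log
  # if supported on this version, show the ops that have not drained
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight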
>Do you have other monitors in working order? The easiest way to handle
>it if that's the case is just to remove this monitor from the cluster
>and add it back in as a new monitor with a fresh store. If not we can
>look into reconstructing it.
>-Greg
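If I understand the suggestion correctly, the procedure would be roughly the
following (the monitor id, address and keyring path are placeholders, and the
exact commands may differ on my version):

  # remove the broken monitor from the cluster
  ceph mon remove a
  # rebuild its store from the current monmap
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i a --mkfs --monmap /tmp/monmap --keyring /etc/ceph/keyring
  # add it back and restart the daemon
  ceph mon add a 192.168.0.10:6789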
>Also, if you still have it, could you zip up
> our machines.
>
> John, can you add a warning to whatever install/configuration/whatever
> docs are appropriate?
> -Greg
>
> On Tue, Oct 9, 2012 at 12:50 PM, Dave (Bob) wrote:
>> Greg,
>>
>> Thank you very much for your prompt reply.
>>
>> Yes, I am
I have a problem with this leveldb corruption issue. My logs show the
same failure that is recorded in Ceph's Redmine as bug #2563.
I am using linux-3.6.0 (x86_64) and ceph-0.52.
I am using btrfs on my 4 OSDs. Each osd is using a partition on a disk drive;
there are 4 disk drives, all on the same machine.
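For completeness, the osd section of my ceph.conf looks roughly like this
(host and device names are illustrative rather than my exact values):

  [osd]
          osd mkfs type = btrfs
          osd mount options btrfs = rw,noatime

  [osd.0]
          host = store1
          devs = /dev/sda2
  [osd.1]
          host = store1
          devs = /dev/sdb2
  [osd.2]
          host = store1
          devs = /dev/sdc2
  [osd.3]
          host = store1
          devs = /dev/sdd2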