Hi All,
Please, I got this result after I did an upgrade to 0.56.3. I'm not sure
if it's a problem with the upgrade or something else.
# ceph osd tree
# id weight type name up/down reweight
-1 96 root default
-3 96 rack unknownrack
-2 4
I set up the Ceph cluster on 28 nodes.
24 nodes for OSDs. Each storage node has 16 drives, with RAID0 across
every 4 drives, so I have 4 OSD daemons on each node. Each OSD daemon
is allocated one RAID volume, for a total of 96 OSD daemons in the
entire cluster.
3 nodes for mon ...
1 node for mds ...
I mounted a sh ...
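For illustration, with 4 OSD daemons per storage node as described
above, a minimal bobtail-era ceph.conf entry for one node might look
like the sketch below; the hostname and device paths are assumptions,
not taken from this thread:

    [osd.0]
        host = storage01    # hypothetical storage node name
        devs = /dev/sda1    # first 4-drive RAID0 volume
    [osd.1]
        host = storage01
        devs = /dev/sdb1    # second RAID0 volume
    # osd.2 and osd.3 follow the same pattern, one per RAID volume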
Hi ... I am now testing CephFS on an Ubuntu client before going back to CentOS.
A quick question about this command:
mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=admin,secretfile=/etc/ceph/admin.secret ?
mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=ceph,secretfile=/etc/ceph/cep
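For context, the file passed to -o secretfile normally holds the bare
cephx key for the identity given by name=; a sketch of how it is
usually produced, assuming the standard client.admin identity:

    # write the client.admin key into the secret file
    ceph auth get-key client.admin > /etc/ceph/admin.secret

The name= option has to match the cephx identity the key belongs to,
so name=admin pairs with the client.admin key; name=ceph would need a
client.ceph key to have been created beforehand.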
> Did you build your own kernel?
>
> See
> http://ceph.com/docs/master/install/os-recommendations/#linux-kernel
>
>
> On 02/05/2013 09:37 PM, femi anjorin wrote:
>>
>> Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS 6.3
>>
>> Please can somebody help ... This
status 1
mount error: ceph filesystem not supported by the system
Regards,
Femi
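For what it's worth, the kernel CephFS client was only merged in Linux
2.6.34, so the stock el6 2.6.32 kernel mentioned above does not ship a
ceph module, which is exactly what the mount error reports. A quick
way to check any kernel, as a sketch:

    uname -r                      # running kernel version
    modprobe ceph                 # try to load the cephfs kernel module
    grep ceph /proc/filesystems   # listed only once the module is loaded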
On Tue, Feb 5, 2013 at 1:49 PM, femi anjorin wrote:
>
> Hi ...
>
> Thanks. I set --debug ms = 0. The result is HEALTH_OK ... but I get an
> error when trying to set up client access to the
error: ceph filesystem not supported by the system
Regards,
Femi.
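Regarding the --debug ms = 0 setting above: it can also be made
persistent in ceph.conf rather than passed on the command line each
time; a minimal sketch:

    [global]
        debug ms = 0    # silence messenger debug chatter in the CLI tools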
On Mon, Feb 4, 2013 at 3:27 PM, Joao Eduardo Luis wrote:
> This wasn't obvious due to all the debug output, but here's why
> 'ceph health' wasn't replying with HEALTH_OK:
>
>
> On
HEALTH_OK
2013-02-04 14:33:24.878592 7f1200b65760 1 -- 172.16.0.25:0/23252 mark_down_all
2013-02-04 14:33:24.878914 7f1200b65760 1 -- 172.16.0.25:0/23252
shutdown complete.
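The two lines after HEALTH_OK are just the client messenger shutting
down, printed because debug ms is still at 1 here; for a one-off
command they can be suppressed as in the thread above, e.g.:

    ceph --debug-ms 0 health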
On Mon, Feb 4, 2013 at 2:29 PM, Joao Eduardo Luis wrote:
> On 02/04/2013 12:21 PM, femi anjorin wrote:
>>
>>
Hi,
Can anyone help with this?
I am running a cluster of 6 servers, each with 16 hard drives. I
mounted all the hard drives on the recommended mount points
/var/lib/ceph/osd/ceph-n. It looks like this:
/dev/sda1 on /var/lib/ceph/osd/ceph-0
/dev/sdb1 on /var/lib/ceph/osd/ceph-1
/dev/sdc1 on /var/lib/c
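For completeness, a typical way to prepare and mount each data disk at
those paths, assuming XFS (the commonly recommended filesystem for
production at the time); the device and OSD numbers are illustrative:

    mkfs.xfs -f /dev/sda1
    mkdir -p /var/lib/ceph/osd/ceph-0
    mount -o noatime /dev/sda1 /var/lib/ceph/osd/ceph-0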
/mem usage.
>
> Cheers,
> Martin
>
>
> On Tue, Jan 29, 2013 at 3:56 AM, femi anjorin wrote:
>>
>> Please can anyone advise on how exactly a Ceph production
>> environment should look, and what the configuration files should
>> be? My hardware
Hi,
Please, with regards to my questions on the Ceph production environment,
I would like to give you these details.
I would like to test write, read, and delete operations on the Ceph
storage cluster in a production environment.
I would also like to check the self-healing and self-managing
functionality.
I would like to know in
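One common way to exercise the write and read paths on a cluster like
this is the bundled rados bench tool; a sketch, using a hypothetical
pool named testpool:

    rados -p testpool bench 60 write   # 60-second write benchmark
    rados -p testpool bench 60 seq     # sequential reads of the objects written above
    rados -p testpool ls               # list the benchmark objects; removing
                                       # them with 'rados rm' exercises deletes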