OSDs DOWN after upgrade to 0.56.3

2013-02-15 Thread femi anjorin
Hi All, please, I got this result after I upgraded to 0.56.3. I'm not sure whether it's a problem with the upgrade or something else.

# ceph osd tree

# id    weight  type name               up/down reweight
-1      96      root default
-3      96              rack unknownrack
-2      4
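
A first triage pass for down OSDs after an upgrade might look like the sketch below; the osd id and log path are illustrative and assume bobtail-era sysvinit defaults, they are not taken from the thread:

    ceph -s                                  # overall cluster health and osd up/in counts
    ceph osd tree | grep -w down             # which osds the monitors consider down
    service ceph start osd.0                 # try restarting one of them (id illustrative)
    tail -50 /var/log/ceph/ceph-osd.0.log    # check why the daemon stopped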

Fwd: optimizing ceph-fuse performance

2013-02-13 Thread femi anjorin
I set up a Ceph cluster on 28 nodes: 24 nodes for OSDs. Each storage node has 16 drives, with RAID0 across each set of 4 drives, so I have 4 OSD daemons on each node, each allocated one RAID volume; 96 OSD daemons in the entire cluster. 3 nodes for mon ... 1 node for mds ... I mounted a sh
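
For reference, the ceph-fuse mount being benchmarked in this thread would typically be established as below; the monitor address and mount point are assumptions, not quoted from the message:

    mkdir -p /mnt/mycephfs
    ceph-fuse -m 192.168.0.1:6789 /mnt/mycephfs   # userspace CephFS client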

Re: CEPHFS mount error !!!

2013-02-07 Thread femi anjorin
Hi ... I am now testing CephFS on an Ubuntu client before going back to CentOS. A quick question about this command:

mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
?
mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=ceph,secretfile=/etc/ceph/cep
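
The name= option selects which client entity authenticates, so the key in the secretfile must belong to that entity (name=admin pairs with client.admin's key). A minimal sketch of producing the secret file; the keyring path is an assumption:

    # print the named client's key from the keyring into the secret file
    ceph-authtool -p -n client.admin /etc/ceph/keyring > /etc/ceph/admin.secret
    chmod 600 /etc/ceph/admin.secret
    mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret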

Re: CEPHFS mount error !!!

2013-02-05 Thread femi anjorin
> Did you build your own kernel?
>
> See
> http://ceph.com/docs/master/install/os-recommendations/#linux-kernel
>
>
> On 02/05/2013 09:37 PM, femi anjorin wrote:
>>
>> Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS 6.3
>>
>> Please can somebody help ... This
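
A quick way to confirm whether the running kernel has a CephFS client at all (the stock CentOS 6.3 kernel, 2.6.32, predates the range recommended on that page):

    uname -r                      # running kernel version
    modprobe ceph                 # fails if the kernel was built without the cephfs client
    grep ceph /proc/filesystems   # 'ceph' appears here once the module is loaded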

CEPHFS mount error !!!

2013-02-05 Thread femi anjorin
status 1
mount error: ceph filesystem not supported by the system

Regards, Femi

On Tue, Feb 5, 2013 at 1:49 PM, femi anjorin wrote:
>
> Hi ...
>
> Thanks. I set --debug ms = 0. The result is HEALTH_OK ... but I get an
> error when trying to set up client access to the

Re: CEPH HEALTH NOT OK ...CEPHFS mount error !!!

2013-02-05 Thread femi anjorin
error: ceph filesystem not supported by the system

Regards, Femi.

On Mon, Feb 4, 2013 at 3:27 PM, Joao Eduardo Luis wrote:
> This wasn't obvious due to all the debug being outputted, but here's why
> 'ceph health' wasn't replying with HEALTH_OK:
>
>
> On

Re: CEPH HEALTH NOT OK ceph version 0.56.2.!!!

2013-02-04 Thread femi anjorin
HEALTH_OK
2013-02-04 14:33:24.878592 7f1200b65760  1 -- 172.16.0.25:0/23252 mark_down_all
2013-02-04 14:33:24.878914 7f1200b65760  1 -- 172.16.0.25:0/23252 shutdown complete.

On Mon, Feb 4, 2013 at 2:29 PM, Joao Eduardo Luis wrote:
> On 02/04/2013 12:21 PM, femi anjorin wrote:
>>
>>
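
Those messenger lines come from a raised "debug ms" level; besides the --debug-ms command-line override used in the thread, it can be silenced persistently in ceph.conf, as in this sketch:

    [global]
            debug ms = 0    ; suppress messenger debug output for daemons and clients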

HEALTH_ERR 18624 pgs stuck inactive; 18624 pgs stuck unclean; no osds

2013-01-30 Thread femi anjorin
Hi, can anyone help with this? I am running a cluster of 6 servers, each with 16 hard drives. I mounted all the hard drives on the recommended mount points /var/lib/ceph/osd/ceph-n, so it looks like this:

/dev/sda1 on /var/lib/ceph/osd/ceph-0
/dev/sdb1 on /var/lib/ceph/osd/ceph-1
/dev/sdc1 on /var/lib/c
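
"no osds" means the monitors' osdmap is empty: mounting the data directories is not enough, each OSD still has to be created and registered. A minimal manual sketch for one OSD, following the bobtail-era procedure; the id 0, host name, and paths are illustrative:

    ceph osd create                        # allocate the next osd id
    ceph-osd -i 0 --mkfs --mkkey           # initialise the data dir at /var/lib/ceph/osd/ceph-0
    ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-0/keyring
    ceph osd crush set 0 osd.0 1.0 pool=default host=server1   # place it in the crush map
    service ceph start osd.0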

Re: Ceph Production Environment Setup?

2013-01-29 Thread femi anjorin
/mem usage.
>
> Cheers,
> Martin
>
>
> On Tue, Jan 29, 2013 at 3:56 AM, femi anjorin wrote:
>>
>> Please can anyone advise on what exactly a Ceph production
>> environment should look like, and what the configuration files should
>> be? My hardwares
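
For the configuration-file part of the question, a minimal bobtail-style ceph.conf sketch; all host names and addresses are placeholders, not a recommendation from the thread:

    [global]
            auth supported = cephx
    [mon.a]
            host = mon1
            mon addr = 192.168.0.1:6789
    [mds.a]
            host = mds1
    [osd.0]
            host = osd1
    [osd.1]
            host = osd2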

Fwd: Ceph Production Environment Setup and Configurations?

2013-01-28 Thread femi anjorin
Hi, with regard to my questions on the Ceph production environment, I would like to give you these details. I would like to test write, read and delete operations on a Ceph storage cluster in a production environment. I would also like to check the self-healing and management functionality. I would like to know in
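
The write/read/delete test described here can be exercised directly against the object store with the rados tool; a sketch with illustrative pool, object, and file names:

    rados mkpool testpool
    rados -p testpool put obj1 /tmp/infile     # write
    rados -p testpool get obj1 /tmp/outfile    # read
    rados -p testpool rm obj1                  # delete
    rados -p testpool bench 60 write           # sustained write benchmark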