Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-11 Thread B L
Thank you Vickie, and thanks to the Ceph community for showing continued support. Best of luck to all! > On Feb 11, 2015, at 3:58 AM, Vickie ch wrote: > > Hi > The weight reflects the space (capacity) of the disk. > For example, the weight of a 100GB OSD disk is 0.100 (100GB/1TB). > > > Best wis

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi, the weight reflects the space (capacity) of the disk. For example, the weight of a 100GB OSD disk is 0.100 (100GB/1TB). Best wishes, Vickie 2015-02-10 22:25 GMT+08:00 B L : > Thanks, everyone!! > > After applying the re-weighting command (*ceph osd crush reweight osd.0 > 0.0095*), my cluster is ge
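A worked example of that rule of thumb, assuming the usual convention that the CRUSH weight is simply the disk capacity expressed in TB (the exact command and value here are illustrative, not from the thread):

    100 GB disk  ->  100/1000 = 0.100   (Vickie's example)
    8 GB disk    ->  8/1000   = 0.008   (below the 0.01 minimum Udo mentions elsewhere in this thread)

    ceph osd crush reweight osd.0 0.008   # hypothetical value for an 8 GB OSD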

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks, everyone!! After applying the re-weighting command (ceph osd crush reweight osd.0 0.0095), my cluster is getting healthy now :)) But I have one question: what if I have hundreds of OSDs, shall I do the re-weighting on each device, or is there some way to make this happen automatically?
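As far as I know there is no single command in this release that reweights every OSD in one go. In practice ceph-disk/ceph-deploy pick an initial weight from the disk size at creation time (which only helps once the disks are large enough to get a non-zero weight), and for an existing cluster a small shell loop works. A sketch, assuming six OSDs numbered osd.0 to osd.5 and the same 0.0095 weight used here:

    for i in 0 1 2 3 4 5; do
        ceph osd crush reweight osd.$i 0.0095
    done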

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
Oh, I mixed up the positions of the OSD name and the weight; it should be ceph osd crush reweight osd.0 0.0095 and so on. Regards, Vikhyat On 02/10/2015 07:31 PM, B L wrote: Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn
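For the archive, a minimal before/after of the two invocations, using osd.0 and the 0.0095 weight from this thread:

    ceph osd crush reweight 0.0095 osd.0   # wrong: weight where the name should go -> Error EINVAL
    ceph osd crush reweight osd.0 0.0095   # right: name first, then weight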

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Udo Lembke
Hi, use:

    ceph osd crush set 0 0.01 pool=default host=ceph-node1
    ceph osd crush set 1 0.01 pool=default host=ceph-node1
    ceph osd crush set 2 0.01 pool=default host=ceph-node3
    ceph osd crush set 3 0.01 pool=default host=ceph-node3
    ceph osd crush set 4 0.01 pool=default host=ceph-node2
    ceph osd crush set 5 0.01 pool=default host=ceph-node2

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Micha Kersloot
Vikhyat Umrao" , "Udo Lembke" > Cc: ceph-users@lists.ceph.com > Sent: Tuesday, February 10, 2015 3:01:34 PM > Subject: Re: [ceph-users] Placement Groups fail on fresh Ceph cluster > installation with all OSDs up and in > Thanks Vikhyat, > As suggested .. > cep

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn't represent a float osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map Error EINVAL: invalid command What do you think? > On Feb 10, 2015, at 3:18 PM, Vikhy

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
Hello, your OSDs do not have weights; please assign some weight to your Ceph cluster OSDs, as Udo said in his last comment. osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map sudo ceph osd crush reweight 0.0095 osd.0 to osd.5. Regards, Vikhya

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hello Udo, Thanks for your answer .. 2 questions here: 1- Does what you say mean that I have to remove my drive devices (8GB each) and add new ones with at least 10GB? 2- Shall I manually re-weight after disk creation and preparation using this command (ceph osd reweight osd.2 1.0), or things w
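Note that (as far as I understand it) these are two different knobs: ceph osd reweight sets the 0.0-1.0 override factor shown in the last column of ceph osd tree, while ceph osd crush reweight sets the CRUSH weight that reflects capacity, which is the value that is zero here. A quick sketch with hypothetical values:

    ceph osd reweight 2 1.0               # override factor, range 0.0-1.0 (takes the numeric id)
    ceph osd crush reweight osd.2 0.0095  # CRUSH weight, roughly the capacity in TB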

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Owen Synge
Hi, to add to Udo's point, do remember that by default journals take ~6GB. For this reason I suggest making virtual disks larger than 20GB for testing, although that's slightly bigger than absolutely necessary. Best regards Owen On 02/10/2015 01:26 PM, Udo Lembke wrote: > Hi, > you will get fu
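If disk space for the test VMs is tight, another option (a sketch, not something suggested in this thread) is to shrink the journal in ceph.conf before the OSDs are created; fine for a throwaway test cluster, not something to do in production:

    [osd]
    ; journal size in MB; 1 GB instead of the default of several GB
    osd journal size = 1024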

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Udo Lembke
Hi, you will get further trouble, because your weights are not correct. You need a weight >= 0.01 for each OSD. This means your OSDs must be 10GB or greater! Udo On 10.02.2015 12:22, B L wrote: > Hi Vickie, > > My OSD tree looks like this: > > ceph@ceph-node3:/home/ubuntu$ ceph osd tree > # i
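A rough sketch of the arithmetic behind that threshold, assuming the CRUSH weight is the disk capacity in TB kept to two decimal places (which is presumably why the 8GB OSDs in this thread all show weight 0):

    8 GB   ->  8/1024  ≈ 0.0078  ->  effectively 0.00, so CRUSH assigns the OSD no data
    10 GB  -> 10/1024  ≈ 0.0098  ->  ≈ 0.01, the smallest weight that survives the rounding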

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hello Vickie, after changing the size and min_size on all the existing pools, the cluster seems to be working and I can store objects in the cluster, but it still reports as unhealthy: cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs degraded; 256 pgs stuck
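To see exactly which PGs are degraded or stuck, and which OSDs they are (or are not) mapped to, the per-PG detail commands help, for example:

    ceph health detail
    ceph pg dump_stuck unclean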

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I changed the size and min_size as you suggested while watching ceph -w in a different window, and I got this: ceph@ceph-node1:~$ ceph -w cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_n

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I will try to change the replication size now as you suggested .. but how is that related to the non-healthy cluster? > On Feb 10, 2015, at 1:22 PM, B L wrote: > > Hi Vickie, > > My OSD tree looks like this: > > ceph@ceph-node3:/home/ubuntu$ ceph osd tree > # id weight type name up/d

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hi Vickie, My OSD tree looks like this:

    ceph@ceph-node3:/home/ubuntu$ ceph osd tree
    # id    weight  type name          up/down  reweight
    -1      0       root default
    -2      0           host ceph-node1
    0       0               osd.0      up       1
    1       0               osd.1      up

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos: BTW, if your cluster is just for testing, you may try to reduce the replica size and min_size. "ceph osd pool set rbd size 2;ceph osd pool set data size 2;ceph osd pool set metadata size 2 " "ceph osd pool set rbd min_size 1;ceph osd pool set data min_size 1;ceph osd pool set metadata min_size 1"
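To confirm the new values actually took effect on each pool, the settings can be read back, for example:

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size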

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos: So you have 3 OSD servers and each of them has 2 disks. I have a question: what is the result of "ceph osd tree"? It looks like the OSD status is "down". Best wishes, Vickie 2015-02-10 19:00 GMT+08:00 B L : > Here is the updated direct copy/paste dump > > ceph@ceph-node1:~$ ceph osd dump > epoc

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Here is the updated direct copy/paste dump ceph@ceph-node1:~$ ceph osd dump epoch 25 fsid 17bea68b-1634-4cd1-8b2a-00a60ef4761d created 2015-02-08 16:59:07.050875 modified 2015-02-09 22:35:33.191218 flags pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
> On Feb 10, 2015, at 12:37 PM, B L wrote: > > Hi Vickie, > > Thanks for your reply! > > You can find the dump in this link: > > https://gist.github.com/anonymous/706d4a1ec81c93fd1eca > > > Thanks! > B. > > >> On Feb 10, 2015, at 12

[ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Having a problem with my fresh, non-healthy cluster; my cluster status summary shows this: ceph@ceph-node1:~$ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 > pgp_num 64 m
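One part of that warning is independent of the zero-weight problem discussed above: "pool data pg_num 128 > pgp_num 64" is usually cleared simply by raising pgp_num to match pg_num for the affected pool, for example:

    ceph osd pool set data pgp_num 128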