Re: [ceph-users] Full OSD questions

2013-09-24 Thread Gregory Farnum
On Sun, Sep 22, 2013 at 5:25 AM, Gaylord Holder wrote: > > > On 09/22/2013 02:12 AM, yy-nm wrote: >> >> On 2013/9/10 6:38, Gaylord Holder wrote: >>> >>> Indeed, that pool was created with the default pg_num of 8. >>> >>> 8 pg_num * 2T/OSD / 2 repl ~ 8TB, which is about how far I got. >>> >>> I bumped up

Re: [ceph-users] Full OSD questions

2013-09-22 Thread Gaylord Holder
On 09/22/2013 02:12 AM, yy-nm wrote: On 2013/9/10 6:38, Gaylord Holder wrote: Indeed, that pool was created with the default pg_num of 8. 8 pg_num * 2T/OSD / 2 repl ~ 8TB, which is about how far I got. I bumped up the pg_num to 600 for that pool and nothing happened. I bumped up the pgp_num to 600

Re: [ceph-users] Full OSD questions

2013-09-21 Thread yy-nm
On 2013/9/10 6:38, Gaylord Holder wrote: Indeed, that pool was created with the default pg_num of 8. 8 pg_num * 2T/OSD / 2 repl ~ 8TB, which is about how far I got. I bumped up the pg_num to 600 for that pool and nothing happened. I bumped up the pgp_num to 600 for that pool and ceph started shifting

Re: [ceph-users] Full OSD questions

2013-09-09 Thread Gaylord Holder
Indeed, that pool was created with the default pg_num of 8. 8 pg_num * 2T/OSD / 2 repl ~ 8TB, which is about how far I got. I bumped up the pg_num to 600 for that pool and nothing happened. I bumped up the pgp_num to 600 for that pool and ceph started shifting things around. Can you explain the dif
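
A minimal sketch of the two commands being discussed, assuming a pool named rbd (the pool name is a placeholder); pg_num creates the placement groups, while pgp_num tells CRUSH how many of them to actually use for placement, which is why data only moved after the second step:

  # split the pool into more placement groups
  ceph osd pool set rbd pg_num 600

  # let placement use the new groups; this is what triggers rebalancing
  ceph osd pool set rbd pgp_num 600

  # verify the new values
  ceph osd pool get rbd pg_num
  ceph osd pool get rbd pgp_num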

Re: [ceph-users] Full OSD questions

2013-09-09 Thread Timofey
I don't see anything very bad. Try renaming your racks from numbers to unique strings, for example change rack 1 { to rack rack1 { and so on. On 09.09.2013, at 23:56, Gaylord Holder wrote: > Thanks for your assistance. > > Crush map: > > # begin crush map > tunable choose_local_tries 0 > tunable choose_
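
A rough sketch of the usual decompile/edit/recompile cycle for making that kind of CRUSH map change; the file names below are placeholders:

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # edit crushmap.txt: change "rack 1 {" to "rack rack1 {" and update any
  # references to the old bucket name elsewhere in the map

  # recompile and inject the edited map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new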

Re: [ceph-users] Full OSD questions

2013-09-09 Thread Samuel Just
This is usually caused by having too few PGs. Each pool with a significant amount of data needs at least around 100 PGs/OSD. -Sam On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder wrote: > I'm starting to load up my ceph cluster. > > I currently have 12 2TB drives (10 up and in, 2 defined but down
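
As a rough worked example of that guideline, using the numbers from this thread (10 OSDs up and in, 2x replication, one data-heavy pool) rather than any official formula:

  100 PGs/OSD * 10 OSDs / 2 replicas = 500 PGs
  # often rounded up to the next power of two, i.e. pg_num = 512

which lands in the same ballpark as the 600 chosen later in the thread.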

[ceph-users] Full OSD questions

2013-09-09 Thread Gaylord Holder
I'm starting to load up my ceph cluster. I currently have 12 2TB drives (10 up and in, 2 defined but down and out). rados df says I have 8TB free, but I have 2 nearly full OSDs. I don't understand how/why these two disks are filled while the others are relatively empty. How do I tell ceph t
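
A few read-only commands that help show where data is actually landing; this is a sketch and nothing in it is specific to this cluster:

  # per-pool object and space usage
  rados df

  # OSD hierarchy, weights, and up/in status
  ceph osd tree

  # full PG-to-OSD mapping and per-OSD stats, useful for spotting skew
  ceph pg dump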