On Sun, Sep 22, 2013 at 5:25 AM, Gaylord Holder wrote:
>
> On 09/22/2013 02:12 AM, yy-nm wrote:
>>
>> On 2013/9/10 6:38, Gaylord Holder wrote:
>>>
>>> Indeed, that pool was created with the default 8 pg_nums.
>>>
>>> 8 pg_num * 2T/OSD / 2 repl ~ 8TB, which is about how far I got.
>>>
>>> I bumped up the pg_num to 600 for that pool and nothing happened.
>>> I bumped up the pgp_num to 600 for that pool and ceph started shifting
>>> things around.
>>>
>>> Can you explain the difference between pg_num and pgp_num?
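For reference, those two bumps correspond to commands of this general form (the pool name is just a placeholder):

  ceph osd pool set <pool> pg_num 600
  ceph osd pool set <pool> pgp_num 600

pg_num is how many placement groups the pool is split into; pgp_num is how many of those groups CRUSH actually uses when placing data, which is why the data only started moving once pgp_num was raised to match.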
I don't see anything very bad.
Try renaming your racks from numbers to unique strings, for example change
rack 1 {
to
rack rack1 {
and so on.
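For completeness, the usual cycle for that kind of CRUSH map edit is to pull the map out, decompile it, edit the text, and push it back, roughly as follows (file names are arbitrary):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt, e.g. change "rack 1 {" to "rack rack1 {"
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new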
On 09.09.2013, at 23:56, Gaylord Holder wrote:
> Thanks for your assistance.
>
> Crush map:
>
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_
This is usually caused by having too few pgs. Each pool with a
significant amount of data needs at least around 100pgs/osd.
-Sam
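As a rough sketch of that rule of thumb for this cluster (reading it as roughly OSDs * 100 / replicas): with 10 OSDs up and in and 2x replication, that works out to about 10 * 100 / 2 = 500 PGs for the data-heavy pool. You can check what a pool currently has with

  ceph osd pool get <pool> pg_num
  ceph osd dump | grep pool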
On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder wrote:
> I'm starting to load up my ceph cluster.
>
> I currently have 12 2TB drives (10 up and in, 2 defined but down and out).
>
> rados df
> says I have 8TB free, but I have 2 nearly full OSDs.
>
> I don't understand how/why these two disks are filled while the others
> are relatively empty.
>
> How do I tell ceph t
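A couple of commands that may help narrow down why only two OSDs are filling up (assuming the imbalance comes from CRUSH weights or a low PG count rather than anything more exotic):

  ceph osd tree    # shows the CRUSH hierarchy and the weight assigned to each OSD
  ceph pg dump     # the per-OSD stats near the end show used/available space on each OSD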