Hi - As per the OSD calculation: no. of OSDs * 100 / pool size => 96 * 100 / 3
= 3200, rounded up to the next power of two => 4096.
So 4096 is the correct pg_num.
In this case the PG count matches the recommendation.
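A quick sanity check of that formula as a small Python sketch (the
round-up-to-the-next-power-of-two step is my reading of 3200 => 4096,
and the helper name is mine):

# Rule of thumb: pg_num ~= (no. of OSDs * 100) / pool size,
# then round up to the next power of two.
def recommended_pg_num(num_osds, pool_size):
    raw = num_osds * 100 / pool_size   # 96 * 100 / 3 = 3200
    pg_num = 1
    while pg_num < raw:                # round up to a power of two
        pg_num *= 2
    return pg_num                      # -> 4096 for 96 OSDs, size 3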
On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli wrote:
> As the message states, you must increase the placement group number for the
> pool, because 108T of data requires a larger pg number.
On Feb 3, 2016 8:09 PM, "M Ranga Swami Reddy" wrote:
> Hi,
>
> I am using ceph for my storage cluster, and its health shows a WARN state
> with too few pgs.
>
Fwd: HEALTH_WARN pool vol has too few pgs
Current pg_num: 4096. As per the PG num formula, no. of OSDs * 100 / pool size ->
184 * 100/3 = 6133, so I can increase to 8192 (the next power of two). Will
this solve the problem?
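If so, I assume the change itself is just the usual pool set (pool name taken
from the output below), with pgp_num bumped to match so the data actually
rebalances:

ceph osd pool set volumes pg_num 8192
ceph osd pool set volumes pgp_num 8192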
Thanks
Swami
On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli <ozkasga...@gmail.com> wrote:
> As the message states, you must increase the placement group number for the pool.
> Regards,
> Somnath
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of M
> Ranga Swami Reddy
> Sent: Wednesday, February 03, 2016 9:48 PM
> To: Ferhat Ozkasgarli
> Cc: ceph-users
> Subject: Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs
Hi,
I am using ceph for my storage cluster, and its health shows a WARN state
with too few pgs.
==
health HEALTH_WARN pool volumes has too few pgs
==
The volumes pool has 4096 pgs
--
ceph osd pool get volumes pg_num
pg_num: 4096
---
and
ceph df
NAME ID USED %USED