Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-05 Thread M Ranga Swami Reddy
Hi - As per the PG calculation: (number of OSDs * 100) / pool size (replica
count) => (96 * 100) / 3 = 3200, rounded up to the next power of two => 4096.
So 4096 is the correct pg_num; in this case the PG count already matches the
recommendation.
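For reference, a minimal sketch of that calculation against the live pool
(assuming a replicated pool with size 3 and the usual target of ~100 PGs per
OSD):
--
# target pg_num = (num_osds * 100) / replica_size, rounded up to a power of two
# (96 * 100) / 3 = 3200  ->  next power of two = 4096
ceph osd pool get volumes size     # confirm the replica count used above
ceph osd pool get volumes pg_num   # confirm the current pg_num
--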


On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli  wrote:
> As the message states, you must increase the placement group number for the
> pool, because 108T of data requires a larger pg_num.
>
> On Feb 3, 2016 8:09 PM, "M Ranga Swami Reddy"  wrote:
>> [...]


Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread Ferhat Ozkasgarli
As the message states, you must increase the placement group number for the
pool, because 108T of data requires a larger pg_num.
On Feb 3, 2016 8:09 PM, "M Ranga Swami Reddy"  wrote:

> [...]


Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread Somnath Roy
You can increase it, but that will trigger rebalancing, and depending on the
amount of data it will take some time for the cluster to come back to a clean
state. Client IO performance will be affected during this.
BTW, this is not really an error; it is a warning, because performance on that
pool will suffer from the low PG count.
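A sketch of what that increase could look like, assuming a pre-Nautilus
release where pgp_num must be raised by hand (stepping in smaller increments
than shown here spreads the rebalance load further):
--
# optional: throttle backfill so client IO suffers less during data movement
ceph tell osd.* injectargs '--osd-max-backfills 1'
# raise pg_num first, then pgp_num to actually start remapping data
ceph osd pool set volumes pg_num 8192
ceph osd pool set volumes pgp_num 8192
ceph -s   # watch until all PGs are active+clean again
--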

Thanks & Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of M 
Ranga Swami Reddy
Sent: Wednesday, February 03, 2016 9:48 PM
To: Ferhat Ozkasgarli
Cc: ceph-users
Subject: Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

Current pg_num: 4096.  As per the PG num formula, (number of OSDs * 100) / pool size ->
184 * 100 / 3 = 6133, so I can increase to 8192. Will this solve the problem?

Thanks
Swami

On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli <ozkasga...@gmail.com> wrote:
> [...]


Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread M Ranga Swami Reddy
Yes, if I change the pg_num on the current pool, a cluster rebalance starts.
Alternatively, I plan to do the following:
1. Create a new pool with the pg_num recommended by the PG calc.
2. Copy the current pool to the new pool (during this step, IO should be stopped).
3. Rename the current pool to current.old and rename the new pool to the
current pool's name.

After the 3rd step - I guess the cluster should be fine, without a rebalance.
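A minimal sketch of those three steps with stock tools (pool names and the
8192 figure are illustrative; note that rados cppool copies objects only, not
snapshots, and every client must stay stopped for the duration):
--
ceph osd pool create volumes.new 8192 8192   # step 1: new pool at the target pg_num
rados cppool volumes volumes.new             # step 2: copy all objects (IO stopped)
ceph osd pool rename volumes volumes.old     # step 3a
ceph osd pool rename volumes.new volumes     # step 3b
--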

Thanks
Swami


On Thu, Feb 4, 2016 at 11:38 AM, Somnath Roy <somnath@sandisk.com> wrote:
> [...]


Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread M Ranga Swami Reddy
Current pg_num: 4096.  As per the PG num formula, (number of OSDs * 100) / pool size ->
184 * 100 / 3 = 6133, so I can increase to 8192. Will this solve the problem?

Thanks
Swami

On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli  wrote:
> [...]


[ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread M Ranga Swami Reddy
Hi,

I am using Ceph for my storage cluster, and its health shows a WARN state
with "too few pgs".

==
health HEALTH_WARN pool volumes has too few pgs
==
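
The warning and the thresholds behind it can be inspected like this (the grep
assumes a Hammer-era monitor, where this per-pool check is governed by the
mon_pg_warn_* options such as mon_pg_warn_max_object_skew):
--
ceph health detail                                    # names the pool and the objects-per-PG skew
ceph daemon mon.<id> config show | grep mon_pg_warn   # <id> is your monitor's name, e.g. a
--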

The volume pool has 4096 pgs
--
ceph osd pool get volumes pg_num
pg_num: 4096
---

and
>ceph df
NAME   ID USED  %USED MAX AVAIL OBJECTS
volumes4  2830G  0.82  108T  763509
--

How do we fix this, without downtime?

Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com