Never. I would only consider increasing it if you were increasing your
target max bytes or target full ratio.
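For reference, the current values can be checked with something along these
lines; the pool name cephfs_data_cache is taken from the original post and
may differ on your cluster:
# ceph osd pool get cephfs_data_cache target_max_bytes
# ceph osd pool get cephfs_data_cache cache_target_full_ratio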
On Sun, May 28, 2017, 11:14 PM Konstantin Shalygin wrote:
>
> On 05/29/2017 10:08 AM, David Turner wrote:
>
> If you aren't increasing your target max bytes and target full ratio, I
> wouldn't bother increasing your pgs on the cache pool.
If you aren't increasing your target max bytes and target full ratio, I
wouldn't bother increasing your pgs on the cache pool. It will not gain
any increased size at all as its size is dictated by those settings and not
the total size of the cluster. It will remain as redundant as always.
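If the goal actually were a bigger cache, the knobs referred to above would
be raised on the cache pool itself, roughly like this (the 1 TiB value and
the 0.9 ratio are only illustrations):
# ceph osd pool set cephfs_data_cache target_max_bytes 1099511627776
# ceph osd pool set cephfs_data_cache cache_target_full_ratio 0.9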
On 05/28/2017 09:43 PM, David Turner wrote:
What are your pg numbers for each pool? Your % used in each pool? And
number of OSDs?
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    89380G      74755G      14625G           16.36
POOLS:
    NAME     ID     USED     %USED
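The truncated listing above looks like ceph df output; the per-pool PG
counts and the OSD count that were asked for would come from commands along
these lines:
# ceph df
# ceph osd pool ls detail
# ceph osd stat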
What are your pg numbers for each pool? Your % used in each pool? And
number of OSDs?
On Sun, May 28, 2017, 10:30 AM Konstantin Shalygin wrote:
>
> > You can also just remove the caching from the pool, increase the pgs,
> > then set it back up as a cache pool. It'll require downtime if it's
> > in front of an EC rbd pool or EC cephfs on Jewel or Hammer, but it
> > won't take long as all of the objects will be gone.
You can also just remove the caching from the pool, increase the pgs, then
set it back up as a cache pool. It'll require downtime if it's in front of
an EC rbd pool or EC cephfs on Jewel or Hammer, but it won't take long as
all of the objects will be gone.
Why do you need to increase the PG count?
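A rough sketch of the remove-and-reattach sequence described above,
assuming the base pool is cephfs_data and the cache pool is
cephfs_data_cache as in the original post (the exact cache-mode flags vary
a little between releases, so check the docs for yours):
# ceph osd tier cache-mode cephfs_data_cache forward --yes-i-really-mean-it
# rados -p cephfs_data_cache cache-flush-evict-all
# ceph osd tier remove-overlay cephfs_data
# ceph osd tier remove cephfs_data cephfs_data_cache
# ceph osd pool set cephfs_data_cache pg_num 256
# ceph osd pool set cephfs_data_cache pgp_num 256
# ceph osd tier add cephfs_data cephfs_data_cache
# ceph osd tier cache-mode cephfs_data_cache writeback
# ceph osd tier set-overlay cephfs_data cephfs_data_cache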
I don't recall finding a definitive answer - though it was some time ago.
IIRC, it did work but made the pool fragile; I remember having to rebuild
the pools for my test rig soon after. Don't quite recall the root cause,
though - could have been newbie operator error on my part.
I mistakenly created a cache pool with way too few PGs. It's attached as a
write-back cache to an erasure-coded pool, has data in it, etc.; the
cluster is running Infernalis. Normally I can increase pg_num live, but
when I try in this case I get:
# ceph osd pool set cephfs_data_cache pg_num 256
Error EPERM: splits in cache pools must be followed by scrubs and
leave sufficient free space to avoid overfilling. use
--yes-i-really-mean-it to force.
Is there something I need to do before increasing PGs on a cache pool? Can
this be (safely) forced?
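For what it's worth, the error text itself points at the escape hatch; if
you do decide to force the split, it would look like the following, and the
warning about scrubbing afterwards (ceph pg scrub <pgid> on the cache
pool's PGs) and leaving enough free space still applies:
# ceph osd pool set cephfs_data_cache pg_num 256 --yes-i-really-mean-it
# ceph osd pool set cephfs_data_cache pgp_num 256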