Re: [ceph-users] Changing pg_num on cache pool

2017-05-28 Thread David Turner
Never. I would only consider increasing it if you were increasing your target max bytes or target full ratio.

On Sun, May 28, 2017, 11:14 PM Konstantin Shalygin wrote:
> On 05/29/2017 10:08 AM, David Turner wrote:
> > If you aren't increasing your target max bytes and target
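The knobs David refers to are per-pool cache-tiering settings. A hedged sketch of how they are set, assuming the `cephfs_data_cache` pool named later in the thread; the 200 GiB cap and 0.8 ratio are illustrative values, not numbers from this cluster:

```shell
# Cap the amount of data the cache pool may hold (illustrative: ~200 GiB)
ceph osd pool set cephfs_data_cache target_max_bytes $((200 * 1024 * 1024 * 1024))

# Begin aggressively flushing/evicting when the cache reaches 80% of that cap
ceph osd pool set cephfs_data_cache cache_target_full_ratio 0.8
```

More PGs only help spread the same amount of cached data more evenly; if the cap isn't growing, the cache's working set doesn't either, which is the point being made above.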

Re: [ceph-users] Changing pg_num on cache pool

2017-05-28 Thread Konstantin Shalygin
On 05/28/2017 09:43 PM, David Turner wrote:
> What are your pg numbers for each pool? Your % used in each pool? And
> number of OSDs?

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    89380G     74755G     14625G           16.36
POOLS:
    NAME    ID    USED

Re: [ceph-users] Changing pg_num on cache pool

2017-05-28 Thread David Turner
What are your pg numbers for each pool? Your % used in each pool? And number of OSDs?

On Sun, May 28, 2017, 10:30 AM Konstantin Shalygin wrote:
> > You can also just remove the caching from the pool, increase the pgs,
> > then set it back up as a cache pool. It'll require
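The information being asked for can be gathered with standard Ceph commands; a sketch, run against a live cluster (output obviously depends on the cluster):

```shell
# Overall and per-pool usage, including %USED per pool
ceph df

# Number of OSDs (up/in counts)
ceph osd stat

# pg_num for each pool
for pool in $(ceph osd pool ls); do
    printf '%s: ' "$pool"
    ceph osd pool get "$pool" pg_num
done
```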

Re: [ceph-users] Changing pg_num on cache pool

2017-05-28 Thread Konstantin Shalygin
> You can also just remove the caching from the pool, increase the pgs, then
> set it back up as a cache pool. It'll require downtime if it's in front of
> an EC rbd pool or EC cephfs on Jewel or Hammer, but it won't take long as
> all of the objects will be gone. Why do you need to increase the

Re: [ceph-users] Changing pg_num on cache pool

2017-05-28 Thread David Turner
You can also just remove the caching from the pool, increase the pgs, then set it back up as a cache pool. It'll require downtime if it's in front of an EC rbd pool or EC cephfs on Jewel or Hammer, but it won't take long as all of the objects will be gone.

Why do you need to increase the PG
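The remove-and-recreate approach described above maps onto the standard cache-tiering commands. A hedged sketch, assuming a base pool `cephfs_data` behind the `cephfs_data_cache` pool from this thread (the base pool name is an assumption), to be run against a live cluster during the downtime window:

```shell
# Stop new writes landing in the cache, then flush/evict everything in it
ceph osd tier cache-mode cephfs_data_cache forward
rados -p cephfs_data_cache cache-flush-evict-all

# Detach the (now empty) cache tier from the base pool
ceph osd tier remove-overlay cephfs_data
ceph osd tier remove cephfs_data cephfs_data_cache

# Splitting an ordinary pool is unrestricted
ceph osd pool set cephfs_data_cache pg_num 256
ceph osd pool set cephfs_data_cache pgp_num 256

# Re-attach it as a writeback cache tier
ceph osd tier add cephfs_data cephfs_data_cache
ceph osd tier cache-mode cephfs_data_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_data_cache
```

Since the pool is empty when the split happens, the split itself is nearly instant, which is why "it won't take long" above.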

Re: [ceph-users] Changing pg_num on cache pool

2017-05-27 Thread Michael Shuey
I don't recall finding a definitive answer - though it was some time ago. IIRC, it did work but made the pool fragile; I remember having to rebuild the pools for my test rig soon after. Don't quite recall the root cause, though - could have been newbie operator error on my part. May have also

Re: [ceph-users] Changing pg_num on cache pool

2017-05-27 Thread Konstantin Shalygin
# ceph osd pool set cephfs_data_cache pg_num 256
Error EPERM: splits in cache pools must be followed by scrubs and leave sufficient free space to avoid overfilling. use --yes-i-really-mean-it to force.

Is there something I need to do, before increasing PGs on a cache pool? Can this be
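The error text itself names the escape hatch: the split can be forced, with the stated caveats that you scrub afterwards and leave the cache enough free space. A sketch of what forcing looks like (untested against this cluster):

```shell
# Force the split despite the cache-pool warning
ceph osd pool set cephfs_data_cache pg_num 256 --yes-i-really-mean-it
ceph osd pool set cephfs_data_cache pgp_num 256 --yes-i-really-mean-it

# Then scrub the pool's PGs as the error message requires; in releases of
# this era that means per-PG scrubs (ceph pg scrub <pgid>), while newer
# releases grew a pool-wide `ceph osd pool scrub` command.
```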

[ceph-users] Changing pg_num on cache pool

2016-05-03 Thread Michael Shuey
I mistakenly created a cache pool with way too few PGs. It's attached as a write-back cache to an erasure-coded pool, has data in it, etc.; cluster's using Infernalis. Normally, I can increase pg_num live, but when I try in this case I get:

# ceph osd pool set cephfs_data_cache pg_num 256
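For context on "way too few PGs": the rule of thumb in the Ceph docs of this era targets on the order of 100 PGs per OSD, divided by the pool's replica count, rounded up to a power of two. A small sketch of that arithmetic; the 24 OSDs and 3x replication are illustrative assumptions, not values from this thread:

```shell
osds=24        # assumed OSD count, not from the thread
replicas=3     # assumed replication factor
target=$(( osds * 100 / replicas ))

# Round up to the next power of two
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"   # 1024 for these inputs
```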