I don't recall finding a definitive answer - though it was some time ago.
IIRC, it did work but made the pool fragile; I remember having to rebuild
the pools for my test rig soon after. Don't quite recall the root cause,
though - could have been newbie operator error on my part. May have also had
Sorry for the late reply - been traveling.
I'm doing exactly that right now, using the ceph-docker container.
It's just in my test rack for now, but hardware arrived this week to
seed the production version.
I'm using separate containers for each daemon, including a container
for each OSD. I've
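Roughly, the per-daemon launch looks like this with the ceph/daemon
image (the monitor IP, public network, and OSD device below are
placeholders, not my real config):

  # one monitor container (host networking so peers can reach it)
  docker run -d --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -e MON_IP=192.168.0.10 \
    -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
    ceph/daemon mon

  # one container per OSD, each pinned to its own disk
  docker run -d --net=host --privileged=true \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -v /dev:/dev \
    -e OSD_DEVICE=/dev/sdb \
    ceph/daemon osd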
> I was also wondering about the pros and cons, performance-wise, of having a
> pool size of 3 vs 2. It seems there would be a benefit for reads (1.5 times
> the bandwidth) but a penalty for writes because the primary has to forward
> to 2 nodes instead of 1. Does that make sense?
>
> -Roland
Reads will be limited to 1/3 of the total bandwidth. Each PG has a
"primary" - that's the first replica (and the only one, if it's up & in)
consulted on a read. The other replicas will still exist, but they'll only
take writes (and only after the primary forwards the data along). If
you have multiple PGs
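To put rough numbers on it: with size 3, every client write turns into
3x the data on the back end (the primary's copy plus two forwarded
copies), versus 2x at size 2, while reads of any one object only ever
hit its primary. The replica count is just the pool's "size" attribute;
a quick sketch of checking and changing it (pool name "rbd" is only an
example):

  ceph osd pool get rbd size
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2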
I'm preparing to use it in production, and have been contributing
fixes for bugs I find. It's getting fairly solid, but it does need to
be moved to Jewel before we really scale it out.
--
Mike Shuey
On Wed, May 4, 2016 at 8:50 AM, Daniel Gryniewicz wrote:
> On 05/03/2016 04:17 PM, Vincenzo Pii wrote:
I mistakenly created a cache pool with way too few PGs. It's attached
as a write-back cache to an erasure-coded pool, has data in it, etc.;
cluster's using Infernalis. Normally, I can increase pg_num live, but
when I try in this case I get:
# ceph osd pool set cephfs_data_cache pg_num 256
Error
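If it matters, the fallback I'm considering is to drain and rebuild the
tier. A sketch of that, assuming the backing erasure-coded pool is named
cephfs_data (only the cache pool name above is real; the rest are
guesses for illustration):

  # flush/evict everything, then detach the tier
  rados -p cephfs_data_cache cache-flush-evict-all
  ceph osd tier cache-mode cephfs_data_cache forward
  ceph osd tier remove-overlay cephfs_data
  ceph osd tier remove cephfs_data cephfs_data_cache

  # recreate the cache pool with more PGs and re-attach it
  ceph osd pool delete cephfs_data_cache cephfs_data_cache \
      --yes-i-really-really-mean-it
  ceph osd pool create cephfs_data_cache 256 256
  ceph osd tier add cephfs_data cephfs_data_cache
  ceph osd tier cache-mode cephfs_data_cache writeback
  ceph osd tier set-overlay cephfs_data cephfs_data_cache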