On Tue, 16 Apr 2019 at 17:05, Paul Emmerich wrote:
>
> No, the problem is that a storage system should never tell a client
> that it has written data if it cannot guarantee that the data is still
> there if one device fails.
[...]
Ah, now I got your point.
Anyway, it should be the user's choice [...]
No, the problem is that a storage system should never tell a client
that it has written data if it cannot guarantee that the data is still
there if one device fails.
Scenario: one OSD is down for whatever reason and another one fails.
You've now lost all writes that happened while one OSD was down.
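The failure window Paul describes can be sketched in a few lines of Python (my own illustration, not Ceph code; the k=2/m=1 numbers are just for the example):

```python
# Minimal sketch (an illustration, not Ceph code): why acking writes with
# min_size = k, i.e. with zero redundancy left, risks losing data.
K, M = 2, 1                       # EC profile k=2, m=1 -> 3 shards per object

def acting_shards(osds_up):
    """Shards actually written while `osds_up` OSDs are available."""
    return min(osds_up, K + M)

# Healthy cluster: every write lands on k+m = 3 shards.
assert acting_shards(3) == 3

# One OSD down: writes are still acked, but with only k = 2 shards,
# i.e. no surviving redundancy for the new data.
degraded = acting_shards(2)
assert degraded == K              # no parity margin left

# A second failure now drops us below k: those writes are unrecoverable.
survivors = degraded - 1
print("recoverable:", survivors >= K)   # -> recoverable: False
```
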
On Tue, 16 Apr 2019 at 16:52, Paul Emmerich wrote:
> On Tue, Apr 16, 2019 at 11:50 AM Igor Podlesny wrote:
> > On Tue, 16 Apr 2019 at 14:46, Paul Emmerich wrote:
[...]
> > Looked at it, didn't see any explanation of your point of view. If
> > there're 2 active data instances (and the 3rd is missing) [...]
On Tue, 16 Apr 2019 at 14:46, Paul Emmerich wrote:
> Sorry, I just realized I didn't answer your original question.
[...]
No problemo. -- I've figured out the answer to my own question earlier anyways.
And actually gave a hint today
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-Apri
Sorry, I just realized I didn't answer your original question.
ceph df does take erasure coding settings into account and shows the
correct free space.
However, it also takes the current data distribution into account,
i.e., it reports the amount of data you can write until the first OSD
is full, assuming you [...]
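The two capacity figures Paul distinguishes can be approximated like this (my own back-of-the-envelope arithmetic with made-up OSD fill levels, not ceph's actual accounting; the distribution-aware figure assumes writes stay evenly balanced):

```python
# Rough sketch (my own arithmetic, not ceph's code) of the two effects:
# capacity is scaled by the EC ratio k/(k+m), and the fullest OSD bounds
# how much can still be written before the first device fills up.
k, m = 4, 2
osd_size_tb = 1.0
osd_used = [0.30, 0.35, 0.60, 0.32, 0.31, 0.33]   # fill fraction per OSD

# Naive usable space: total raw free, scaled by the EC data/total ratio.
raw_free = sum(osd_size_tb * (1.0 - u) for u in osd_used)
naive_free = raw_free * k / (k + m)

# Distribution-aware: with balanced writes, the fullest OSD's headroom
# caps every other device too.
headroom = min(1.0 - u for u in osd_used)          # headroom of fullest OSD
dist_free = headroom * osd_size_tb * len(osd_used) * k / (k + m)

print(round(naive_free, 3), round(dist_free, 3))   # -> 2.527 1.6
```
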
And as to the min_size choice -- since you've replied to exactly that
part of my message only.
On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote:
> On Fri, Apr 12, 2019 at 9:30 PM Igor Podlesny wrote:
> > For e. g., an EC pool with default profile (2, 1) has bogus "sizing"
> > params (size=3, min_size=3). [...]
On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote:
>
> Please don't use an EC pool with 2+1, that configuration makes no sense.
That's rather ironic, given that (2, 1) is the default EC profile and
is, moreover, described in the Ceph documentation.
> min_size 3 is the default for that pool, yes. That means your data
> will be unavailable if any OSD is offline.
Please don't use an EC pool with 2+1, that configuration makes no sense.
min_size 3 is the default for that pool, yes. That means your data
will be unavailable if any OSD is offline.
Reducing min_size to 2 means you are accepting writes when you cannot
guarantee durability, which will cause problems [...]
For e. g., an EC pool with default profile (2, 1) has bogus "sizing"
params (size=3, min_size=3).
min_size=3 is wrong as far as I know, and it's been fixed in recent
releases (but not in Luminous).
But besides that, it looks like pool usage isn't calculated according
to EC overhead, but as if it were [...]
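For comparison, the raw overheads of the two accountings look like this (my own arithmetic, not figures from the thread):

```python
# Quick sanity check (my numbers, not from the list thread): raw space
# consumed by 1 TiB of logical data under the k=2, m=1 EC profile versus
# 3-way replication, which is what a wrong accounting would resemble.
k, m = 2, 1
logical_tib = 1.0

ec_raw = logical_tib * (k + m) / k     # EC overhead: 1.5x
rep_raw = logical_tib * 3              # size=3 replication: 3.0x

print(ec_raw, rep_raw)                 # -> 1.5 3.0
```
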