On 3/4/24 08:40, Maged Mokhtar wrote:
On 04/03/2024 15:37, Frank Schilder wrote:
Fast write enabled would mean that the primary OSD sends #size copies to the
entire active set (including itself) in parallel and sends an ACK to the
client as soon as min_size ACKs have been received from the peers (including
itself). In this way, one can tolerate (size-min_size) slow(er) [...]
On 04/03/2024 13:35, Marc wrote:
> Fast write enabled would mean that the primary OSD sends #size copies to the
> entire active set (including itself) in parallel and sends an ACK to the
> client as soon as min_size ACKs have been received from the peers (including
> itself). In this way, one can tolerate (size-min_size) slow(er) [...]
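
For readers skimming the thread, here is a minimal sketch of the latency argument behind "fast write" (plain Python with made-up numbers, not Ceph code): if the ack goes out after min_size replies instead of after all #size replies, the client waits for the min_size-th fastest OSD rather than the slowest one, so a single straggler no longer dictates write latency.

```python
import random

def client_latency(replica_latencies_ms, acks_needed):
    # Time until `acks_needed` of the parallel writes have acknowledged.
    return sorted(replica_latencies_ms)[acks_needed - 1]

size, min_size = 3, 2
random.seed(1)

# Hypothetical per-OSD write times: mostly ~5 ms, occasionally a 50 ms straggler.
def sample_osd_ms():
    return 50.0 if random.random() < 0.05 else random.uniform(4.0, 6.0)

samples = [[sample_osd_ms() for _ in range(size)] for _ in range(10_000)]

wait_all = sum(client_latency(s, size) for s in samples) / len(samples)
wait_min = sum(client_latency(s, min_size) for s in samples) / len(samples)

print(f"ack after all {size} copies:      {wait_all:.1f} ms average")
print(f"ack after {min_size} (min_size) copies: {wait_min:.1f} ms average")
```
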
Sent: Wednesday, February 21, 2024 1:10 PM
To: list Linux fs Ceph
Subject: [ceph-users] Re: Performance improvement suggestion
> 1. Write object A from client.
> 2. Fsync to primary device completes.
> 3. Ack to client.
> 4. Writes sent to replicas.
[...]
As mentioned in the discussion, this proposal is the opposite of the current
policy, which is to wait for all replicas to be written before writes are
acknowledged.
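
To make the ordering in the quoted steps concrete, here is a toy sketch of the proposed flow next to the current one (plain Python with a hypothetical FakeOSD class, not Ceph internals). The comment marks the window in which an acknowledged object exists on a single device only.

```python
class FakeOSD:
    """Toy stand-in for an OSD that just stores objects in a dict."""
    def __init__(self, name):
        self.name, self.store = name, {}
    def write_and_fsync(self, key, data):
        self.store[key] = data

def current_write(key, data, primary, replicas):
    # Current policy: the client is acked only after the primary and every
    # replica have persisted the write (in Ceph these writes run in parallel).
    primary.write_and_fsync(key, data)
    for r in replicas:
        r.write_and_fsync(key, data)
    return "ack"                      # object is on all `size` OSDs at ack time

def proposed_write(key, data, primary, replicas):
    # Quoted steps 1-4: ack as soon as the primary has fsynced, replicate later.
    primary.write_and_fsync(key, data)
    ack = "ack"                       # <-- client continues from here
    # Window: the object exists on exactly one device; losing the primary
    # before the loop below finishes loses an acknowledged write.
    for r in replicas:
        r.write_and_fsync(key, data)
    return ack

primary, replicas = FakeOSD("osd.0"), [FakeOSD("osd.1"), FakeOSD("osd.2")]
print(current_write("obj.A", b"payload", primary, replicas))
```
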
Hi,
I just want to echo what the others are saying.
Keep in mind that RADOS needs to guarantee read-after-write consistency for
the higher level apps to work (RBD, RGW, CephFS). If you corrupt VM block
devices, S3 objects or bucket metadata/indexes, or CephFS metadata, you're
going to suffer some [...]
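
A small illustration of the read-after-write point (again plain Python with hypothetical names): if the ack is sent before replication and the primary is then lost, the surviving OSDs no longer hold an acknowledged write.

```python
# Toy acting set: three stores, the object only reaches the primary before the ack.
stores = {"osd.0": {}, "osd.1": {}, "osd.2": {}}

stores["osd.0"]["obj.A"] = b"new data"   # write lands on the primary only
acked = True                             # proposal: client already got its ack

del stores["osd.0"]                      # primary dies before replicating
new_primary = stores["osd.1"]            # a replica takes over

print(acked, new_primary.get("obj.A"))   # True None -> acknowledged data is gone
```
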
> I'd not be comfortable to enable "mon_allow_pool_size_one" at a
> specific pool.
>
> It would be better if this feature could make a replica at a second time
> on selected pool.
> Thanks.
> Rafael.
>
> From: "Anthony D'Atri"
> Sent: 2024/02/01 15:00:59
> To: quag...@bol.com.br
> Cc: ceph-users@ceph.io
> Subject: [ceph-users] Re: Performance improvement suggestion
[...] procedures.
>
> Remembering: it's just a suggestion.
> If this type of functionality is not interesting, it is ok.
>
> Rafael.
>
> --
>
> From: "Anthony D'Atri"
> Sent: 2024/02/01 12:10:30
> To: quag...@bol.com.br
> Cc: ceph-users@ceph.io
> Subject: [ceph-users] Re: Performance improvement suggestion
> I didn't say I would accept the risk of losing data.
That's implicit in what you suggest, though.
> I just said that it would be interesting if the objects were first
> recorded only in the primary OSD.
What happens when that host / drive smokes before it can replicate? What
happens [...]
De: "Janne Johansson"
Enviada: 2024/02/01 04:08:05
Para: anthony.da...@gmail.com
Cc: acozy...@gmail.com, quag...@bol.com.br, ceph-users@ceph.io
Assunto: Re: [ceph-users] Re: Performance improvement suggestion
> I’ve heard conflicting asserts on whether the write returns wi
Hi Anthony,
Thanks for your reply.
I didn't say I would accept the risk of losing data.
I just said that it would be interesting if the objects were first recorded only in the primary OSD.
This way it would greatly increase performance (both for IOPS and throughput).
[...]
> I've heard conflicting asserts on whether the write returns when min_size
> shards have been persisted, or all of them.
I think it waits until all replicas have written the data, but from
simplistic tests with fast network and slow drives, the extra time
taken to write many copies is not linear [...]
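
A quick back-of-the-envelope version of the "not linear" observation, with assumed per-replica write times: because the copies are written in parallel, the client roughly waits for the slowest copy, not for the sum of all copies.

```python
per_copy_ms = [5.2, 5.6, 6.1]        # assumed write times for three replicas

serial_estimate = sum(per_copy_ms)   # ~16.9 ms if copies were written one by one
parallel_estimate = max(per_copy_ms) # ~6.1 ms when they are written in parallel

print(f"serial:   {serial_estimate:.1f} ms")
print(f"parallel: {parallel_estimate:.1f} ms  (extra copies add little latency)")
```
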
I've heard conflicting asserts on whether the write returns when min_size
shards have been persisted, or all of them.
> On Jan 31, 2024, at 2:58 PM, Can Özyurt wrote:
>
> I never tried this myself but "min_size = 1" should do what you want to
> achieve.
Would you be willing to accept the risk of data loss?
> On Jan 31, 2024, at 2:48 PM, quag...@bol.com.br wrote:
>
> Hello everybody,
> I would like to make a suggestion for improving performance in Ceph
> architecture.
> I don't know if this group would be the best place or if my proposal is correct. [...]
I never tried this myself but "min_size = 1" should do what you want to achieve.
On Wed, 31 Jan 2024 at 22:48, quag...@bol.com.br wrote:
>
> Hello everybody,
> I would like to make a suggestion for improving performance in Ceph
> architecture.
> I don't know if this group would be the best place or if my proposal is correct. [...]
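
For completeness: min_size is a per-pool setting, so Can's suggestion can be tried on a single pool (the CLI form is "ceph osd pool set <pool> min_size 1"). Below is a hedged sketch doing the same through the python-rados bindings; the pool name is made up, and the data-loss caveats raised elsewhere in the thread apply.

```python
import json
import rados  # python-rados bindings shipped with Ceph

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Equivalent of `ceph osd pool set testpool min_size 1` (pool name is hypothetical).
    cmd = json.dumps({"prefix": "osd pool set",
                      "pool": "testpool",
                      "var": "min_size",
                      "val": "1"})
    ret, out, err = cluster.mon_command(cmd, b"")
    print(ret, err)   # ret == 0 on success
finally:
    cluster.shutdown()
```
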
Hello everybody,
I would like to make a suggestion for improving performance in Ceph architecture.
I don't know if this group would be the best place or if my proposal is correct.
My suggestion would be in the item https://docs.ceph.com/en/latest/architecture/, at the end of the topic [...]