We had a distributed replicated volume of 3 x 7 HDDs. The volume was
used for a small-file workload with heavy IO, and we decided to replace
the bricks with SSDs because the disks were saturated with IO. We
started swapping the bricks one by one, and that is when the fun
started: some files lost their attributes, and we had to fix them
manually by removing the file and its gfid from the brick and copying
the file back into the volume.
This issue affected 5 of the 21 bricks.
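
For reference, the manual fix looked roughly like the sketch below; the
brick path, file path, and mount point are just examples, not our
actual layout:

  # Check the gfid xattr of the affected file on the brick backend
  getfattr -n trusted.gfid -e hex /bricks/brick1/dir/file

  # Remove the file from the brick together with its gfid hard link,
  # which lives under .glusterfs/<first two hex>/<next two hex>/<gfid>
  rm /bricks/brick1/dir/file
  rm /bricks/brick1/.glusterfs/xx/yy/<gfid>

  # Copy a good copy of the file back in through the client mount so
  # Gluster recreates it with the proper attributes on all replicas
  cp /backup/dir/file /mnt/glustervol/dir/file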
On another volume, we had a disk failure, and during the replace-brick
process the mount point of one of the clients crashed.
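The replacement itself was done with the usual replace-brick command,
roughly along these lines (volume name, hostname, and brick paths below
are placeholders):

  # Swap the failed brick for a new one and let self-heal repopulate it
  gluster volume replace-brick myvol server1:/bricks/old/brick \
      server1:/bricks/new/brick commit force

  # Watch the heal progress on the new brick
  gluster volume heal myvol info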


On Mon, Jun 22, 2020 at 10:55 AM Gionatan Danti <g.da...@assyoma.it> wrote:

> On 2020-06-21 20:41, Mahdi Adnan wrote:
> > Hello Gionatan,
> >
> > Using Gluster bricks in a RAID configuration might be safer and
> > require less work from Gluster admins, but it is a waste of disk
> > space.
> > Gluster bricks are replicated (assuming you're creating a
> > distributed-replicated volume), so when a brick goes down it should
> > be easy to recover and should not affect the clients' IO.
> > We are using JBOD in all of our Gluster setups; overall, performance
> > is good, and replacing a brick works "most" of the time without
> > issues.
>
> Hi Mahdi,
> thank you for reporting. I am interested in the "most of the time
> without issues" statement. Can you elaborate on what happened the few
> times when it did not work correctly?
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it [1]
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8
>


-- 
Respectfully
Mahdi