Hey Erik,
I actually meant that there is no point in using controllers with fast
storage like SAS SSDs or NVMes.
They (the controllers) usually have 1-2 GB of RAM to buffer writes until the
RISC processor analyzes the requests and stacks them - thus JBOD (in 'replica
3') makes much more sense
> For NVMe/SSD - a RAID controller is pointless, so JBOD makes most sense.
I am game for an education lesson here. We're still using spinning drives
with big RAID caches, but we keep discussing SSDs in the context of RAID. I
have read that for many real-world workloads, RAID0 makes no sense with
modern SSDs
We had a distributed replicated volume of 3 x 7 HDDs. The volume was used
for a small-file workload with heavy IO. We decided to replace the
bricks with SSDs because of IO saturation on the disks, so we started by
swapping the bricks one by one, and the fun started: some files lost their
attributes and
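When swapping bricks one by one in a live replica volume, it is worth letting self-heal finish before touching the next brick. A minimal check, with vol01 as a hypothetical volume name; wait until every brick reports "Number of entries: 0":
# gluster volume heal vol01 info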
Hi Felix,
Have you deleted the session with reset-sync-time and recreated the session?
If yes, the crawling starts from the beginning.
This happens in the following way:
It begins with a hybrid crawl, as data is already in master before
re-creating the geo-rep session. If the geo-rep session is created before cr
Hi Hubert,
keep in mind RH recommends disks of size 2-3 TB, not 10. I guess that has
changed the situation.
For NVMe/SSD - a RAID controller is pointless, so JBOD makes most sense.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 7:58:56 GMT+03:00, Hu Bert wrote:
>On Sun., 21 June 2020 at
Dear Shwetha,
sorry, one more question, since I am trying to collect some more information
which may be helpful for other gluster-users.
Does the suggested command
# setfattr -n glusterfs.geo-rep.trigger-sync -v "1" <file-path>
also work regardless of the current mode, i.e. history, hybrid or
changelog crawl?
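As an aside: the current crawl type is visible in the CRAWL STATUS column of the session status, so it can be checked before triggering. A quick way to look, with master-vol and slave-host::slave-vol standing in for the real session names:
# gluster volume geo-replication master-vol slave-host::slave-vol status detail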
Dear Shwetha,
thanks for your reply. I mounted the volume via FUSE on the
gluster storage server and ran the command:
setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
Before that, I mounted the volume with my default options:
mount -t glusterfs -o acl glusterStorageServer:/volName /mnt/mo
Hi Felix,
The file path is the path from the mount point. You need not include any other
options.
Regards,
Shwetha
On Mon, Jun 22, 2020 at 3:15 PM Felix Kölzow wrote:
> Dear Shwetha,
>
> > One more alternative would be to trigger sync on individual files,
> > # setfattr -n glusterfs.geo-rep.trigger-sy
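Putting Shwetha's answer together, a minimal sketch; the mount point /mnt/master and the file name are purely illustrative, not from the thread:
# mount -t glusterfs master-host:/master-vol /mnt/master
# setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/dir1/file1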
Dear Shwetha,
One more alternative would be to trigger sync on individual files,
# setfattr -n glusterfs.geo-rep.trigger-sync -v "1" <file-path>
So, how to do it exactly and what is <file-path>? Is it a gluster mount
point with certain mount options,
or is this the brick path? Furthermore, does it work for directories?
Deleting the geo-rep session with reset-sync-time makes sure that the sync
time (stime) is reset. The sync time is the last time a sync happened from
master to slave.
Regards,
Shwetha
On Mon, Jun 22, 2020 at 2:42 PM Shwetha Acharya wrote:
> Hi Felix,
>
> Index here is stime or sync time. Once we set syn
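For the curious: the stime is stored as an extended attribute on each brick root. The key embeds the master and slave volume UUIDs, so the name below is a pattern rather than a literal key, and the brick path is hypothetical:
# getfattr -d -m . -e hex /bricks/brick1
Look for an entry of the form trusted.glusterfs.<master-uuid>.<slave-uuid>.stime in the output.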
Hi Felix,
Index here is stime or sync time. Once we set the sync time to 0, the crawl
happens from the beginning, as long as stime < xtime (xtime is the last
modified time of a file or directory on master).
We can achieve it using the following steps:
# gluster volume geo-replication master-vol slave-ip::slave-vol stop
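The message is truncated here; based on the reset-sync-time mechanism described above, the remaining steps presumably look like this (same placeholder session names; push-pem assumed from the usual setup):
# gluster volume geo-replication master-vol slave-ip::slave-vol delete reset-sync-time
# gluster volume geo-replication master-vol slave-ip::slave-vol create push-pem force
# gluster volume geo-replication master-vol slave-ip::slave-vol start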
On Sun., 21 June 2020 at 19:43, Gionatan Danti wrote:
> For the RAID6/10 setup, I found no issues: simply replace the broken
> disk without involving Gluster at all. However, this also means facing
> the "iops wall" I described earlier for a single-brick node. Going
> full-Gluster with JBODs would
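For contrast, replacing a failed JBOD disk does involve Gluster: the dead brick must be swapped out and the new one healed from its replicas. A rough sketch, with volume name, host and brick paths all hypothetical:
# gluster volume replace-brick vol01 server1:/bricks/dead/brick server1:/bricks/new/brick commit force
Afterwards, self-heal repopulates the new brick; progress can be watched with the heal info command shown earlier.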
On 2020-06-22 06:58 Hu Bert wrote:
On Sun., 21 June 2020 at 19:43, Gionatan Danti wrote:
For the RAID6/10 setup, I found no issues: simply replace the broken
disk without involving Gluster at all. However, this also means facing
the "iops wall" I described earlier for a single-brick node
On 2020-06-21 20:41 Mahdi Adnan wrote:
Hello Gionatan,
Using Gluster bricks in a RAID configuration might be safer and
require less work from Gluster admins, but it is a waste of disk
space.
Gluster bricks are replicated (assuming you're creating a
distributed-replica volume), so when a brick
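To put a number on that overhead (my own illustrative arithmetic, not from the thread): with 12 disks per node in RAID6 under a replica-3 volume, usable space is (10/12) x (1/3), i.e. about 27.8% of raw capacity, versus 1/3, i.e. about 33.3%, with plain JBOD bricks - on top of the parity work RAID6 adds to every write.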
Dear Shwetha,
thank you very much for your immediate response.
It turns out we have rsync 3.1.2, so this should be fine.
Actually, geo-rep is in xsync mode, but just due to
the fact that I deleted the geo-replication and used the reset-sync-time
option.
If you are looking to still resync, y