Fuse or not, you just can't do anything with the bricks directly.
Back up and restore the mount point (be it NFS or fuse) and everything will be
fine; gluster will take care of re-distributing the files to the correct brick.
There's no point in backing up the bricks themselves, by the way; just keep the
volume config somewhere safe.
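
A minimal sketch of that approach, assuming a volume named gv0 mounted at
/mnt/gv0 and a backup target under /backup (all names here are placeholders):

    # Save the volume definition so the volume can be recreated later
    gluster volume info gv0 > /backup/gv0-volume-info.txt

    # Back up file data through the client mount, never through the bricks
    rsync -aAXH --delete /mnt/gv0/ /backup/gv0-data/

    # Restore goes the other way, again through a client mount
    rsync -aAXH /backup/gv0-data/ /mnt/gv0/

The point of going through the mount is that gluster itself places the files on
the right bricks during the restore.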
A search reveals little definitive in the way of best practice for backing up
and restoring glusterfs volumes.
I'm particularly interested in best practice for replicated volumes. Is there
any recommended way that is both efficient and proven?
It seems that geo-replication, snapshotting and glu
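
In case it helps the discussion, a rough sketch of the two built-in options
mentioned above, assuming a volume called gv0 and a remote slave volume
gv0-backup on backuphost (all names are placeholders, and both features have
their usual prerequisites: thinly provisioned LVM bricks for snapshots,
passwordless SSH for geo-replication):

    # Point-in-time snapshot of the volume
    gluster snapshot create gv0-snap gv0
    gluster snapshot list gv0

    # Asynchronous off-site copy via geo-replication
    gluster volume geo-replication gv0 backuphost::gv0-backup create push-pem
    gluster volume geo-replication gv0 backuphost::gv0-backup start
    gluster volume geo-replication gv0 backuphost::gv0-backup status

Whether either of these counts as "proven" for backup purposes is exactly the
question, so treat this only as a starting point.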
It would be really appreciated if someone could respond to this.
On Thu, Dec 1, 2016 at 6:31 PM, ABHISHEK PALIWAL wrote:
> Is there anyone who can reply to this query?
>
> On Thu, Dec 1, 2016 at 7:58 AM, ABHISHEK PALIWAL wrote:
>
>> Please reply, I am waiting for your response.
>> On Nov 30, 2016 2:
Hi,
I encountered this when I ran an FIO random-write job on a fuse-mounted
gluster volume. After the assertion happens, the client log fills
with "pending frames" messages and FIO just shows zero IO in its progress
status. When I leave the test to run overnight, the client log file
fills up with those messages.
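
A job file along these lines reproduces the workload described above (a sketch
only; the mount path and sizes are assumptions, not the poster's actual
settings):

    ; randwrite.fio - random writes against the fuse mount, run as: fio randwrite.fio
    [global]
    directory=/mnt/glustervol
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    size=1g
    iodepth=16
    runtime=600
    time_based

    [randwrite-job]
    numjobs=4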
Hi all,
I upgraded the Gluster servers and clients to 3.8.5 but I am still getting "file
changed as we read it" warnings when using tar (GNU tar). The upgrade to 3.8.5
was performed following the "online upgrade procedure".
My current configuration:
- CentOS 7.2, 64-bit
- Gluster 3.8.5 from RP
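
For context, this is roughly how the warning shows up over the fuse mount, plus
the mitigation most often suggested for replicated volumes (the volume name gv0
and the paths are placeholders, and the option change is a suggestion to test,
not a confirmed fix for this setup):

    # tar over the fuse mount reports files as changing mid-read
    cd /mnt/gv0
    tar czf /tmp/backup.tar.gz somedir/
    # tar: somedir/somefile: file changed as we read it

    # Commonly suggested mitigation on replica volumes: read metadata
    # consistently from a single replica
    gluster volume set gv0 cluster.consistent-metadata on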