I too would like to know how to "sync up" a replicated pair of bricks. Right
now there's a slight difference between the two...
Scale-n-defrag.sh didn't do much either. Looking forward to some help :)
13182120616 139057220 12362648984 2% /export
vs.
1318
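On the 3.0.x line, replicate (AFR) self-heal is triggered by lookups from a client mount, so the usual way to nudge a slightly out-of-sync pair back together is to walk the whole tree from the mount point and stat every entry. A minimal sketch, assuming the volume is mounted at /mnt/glusterfs (the path is hypothetical; adjust to your setup):

```shell
# Walk the mounted volume and stat every entry; each lookup gives
# replicate (AFR) self-heal a chance to repair out-of-sync entries.
MOUNT="${1:-/mnt/glusterfs}"   # hypothetical mount point; pass yours as $1
find "$MOUNT" -print0 | xargs -0 stat >/dev/null
```

After the walk completes, the `df` numbers on the two bricks should converge as the missing files are re-created on the lagging replica.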
I have a volume that is distributed and replicated. While deleting a
directory structure on the mounted volume, I also restarted the
GlusterFS daemon on one of the replicated servers. After the "rm -rf"
command completed, it complained that it couldn't delete a directory
because it wasn't empty.
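One likely explanation (an assumption, since the message is cut off): the brick whose daemon was restarted still held entries that the other replica had already deleted, so self-heal makes the directory reappear non-empty through the mount. Comparing the backend bricks directly can confirm this; a sketch, where the server names and the /export/brick path are purely hypothetical:

```shell
# List the suspect directory on each backend brick and diff the results.
# Hostnames and brick paths are hypothetical; substitute your own.
ssh server1 'ls -A /export/brick/some/dir' > /tmp/brick1.list
ssh server2 'ls -A /export/brick/some/dir' > /tmp/brick2.list
diff /tmp/brick1.list /tmp/brick2.list
```

Any names that show up on only one side are leftovers on the brick that missed the deletes while its daemon was down.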
I set up a test install with a client on Debian and an OpenSolaris/ZFS
server exporting the storage. After running it over the weekend I noticed
the following in the gluster.log file:
Version : glusterfs 3.0.5 built on Jul 22 2010 19:53:29
git: v3.0.5
Starting Time: 2010-07-22 20:09:44
Command line : glust
It really depends on which server was hosting which VM.
This is why distribute/replicate is a good idea :) any one server in a
replica pair can fail without data loss or an outage.
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Brad Alexande
If I have the following setup how will files be distributed? What
happens if I lose one box in this scenario?
Server1 - 3 X 146GB
Server2 - 3 X 146GB
Server3 - 3 X 146GB
Server4 - 3 X 146GB
Total = 1752 GB
I have large VMs that are around 600 GB each.
Thanks.
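For reference, distribute places each file whole on one replica pair (it does not split files), so every VM image has to fit on a single pair. A quick sketch of the capacity math, assuming the servers are paired for replica 2, e.g. (Server1, Server2) and (Server3, Server4):

```shell
# Capacity math for a distribute-replicate (replica 2) layout.
# Assumed pairing: (Server1,Server2) and (Server3,Server4).
per_server_gb=$((3 * 146))            # 438 GB per server
raw_total_gb=$((4 * per_server_gb))   # 1752 GB raw, matching the total above
usable_gb=$((raw_total_gb / 2))       # replica 2 halves usable space
pair_capacity_gb=$per_server_gb       # each pair stores one copy of its files
echo "raw=$raw_total_gb usable=$usable_gb per_pair=$pair_capacity_gb"
# raw=1752 usable=876 per_pair=438
```

Under that assumed pairing, a 600 GB image exceeds the 438 GB a single pair can hold, so these VMs would not fit regardless of which pair they landed on.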
___
Thank you, but I get an error.
After changing the code as in your guide:

REPLACE:
ret = dict_set_static_ptr (dict, "trusted.glusterfs.location", priv->hostname);
BY:
ret = dict_set_str (dict, "trusted.glusterfs.location", priv->hostname);

I rebuilt (make and make install, then restarted gluster).
The res
Hi,
(i) a note at the top of the NUFA with single process page
http://www.gluster.com/community/documentation/index.php/NUFA_with_single_process
declares NUFA as deprecated.
Does this mean any NUFA setup is scheduled to be unsupported, or is it just the
NUFA with single process as client and server