Our production GlusterFS 3.3.1 GA setup is a 3x2 distribute-replicate,
with 100TB usable for staff. This is one of 4 identical GlusterFS
clusters we're running.
Very early in the life of our production Gluster rollout, we ran
Netatalk 2.X to share files with Mac OS X clients (due to slow negative
On Tue, Apr 09, 2013 at 09:44:26AM -0400, Whit Blauvelt wrote:
Had some data loss with an older (3.1.4) Gluster share last night. Now
trying to work out the best lessons to learn from it. Obviously it's too
old a version for a bug report to matter. Wondering if anyone recognizes
this
Hi all,
I have a problem with iptables when using GlusterFS.
My GlusterFS version:
--
glusterfs 3.2.7 built on Jun 11 2012 13:22:29
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
On 4/10/13 8:28 AM, Jian Lee wrote:
# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Thu Apr 11 00:09:23 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:1996]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A
David,
How foolish of me!
Thank you very much!
It works now!
On 2013-04-10 20:31, David Coulson wrote:
On 4/10/13 8:28 AM, Jian Lee wrote:
# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Thu Apr 11 00:09:23 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
With help from the guys on IRC, I moved that line to the bottom:
# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Thu Apr 11 00:09:23 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:1996]
-A INPUT -m
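For anyone hitting the same thing later: the usual cause is a terminal REJECT rule sitting above the GlusterFS ACCEPT rules. A minimal sketch of ACCEPT rules for the 3.2 default ports, assuming one brick port per brick starting at 24009 (widen the range to match your brick count), placed above any REJECT:

    -A INPUT -p tcp -m tcp --dport 24007:24008 -j ACCEPT   # glusterd / management
    -A INPUT -p tcp -m tcp --dport 24009:24020 -j ACCEPT   # brick daemons, one port each
    -A INPUT -p tcp -m tcp --dport 38465:38467 -j ACCEPT   # Gluster NFS server
    -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT           # portmapper (TCP)
    -A INPUT -p udp -m udp --dport 111 -j ACCEPT           # portmapper (UDP)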
Hi Dan,
I've come up against this recently whilst trying to delete large numbers of
files from our cluster.
I'm resolving it with the method from
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/1917
With Fabric as a helping hand, it's not too tedious.
Not sure about the level of
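For the archives, a rough sketch of that method on one brick (paths hypothetical; on 3.3+ each file also has a hard link under the brick's .glusterfs directory that must be removed too):

    BRICK=/export/brick1
    BAD=path/to/stale/file
    inum=$(stat -c %i "$BRICK/$BAD")
    find "$BRICK/.glusterfs" -inum "$inum" -delete   # drop the gfid hard link
    rm "$BRICK/$BAD"                                 # drop the bad copy itself

Then stat the file through a client mount so self-heal recreates it from the good replica.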
Hello list,
I've installed GlusterFS via Debian experimental packages, version
3.4.0~qa9realyalpha2-1.
(For the record, the reason I use an alpha release is that I want
this feature:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
)
I've also followed the Quick
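For context, the feature in that post lets QEMU (1.3+, built with GlusterFS support) access images over libgfapi rather than a FUSE mount; roughly, with hypothetical host/volume/image names:

    qemu-img create -f qcow2 gluster://server1/vmvol/disk0.qcow2 20G
    qemu-system-x86_64 -m 2048 \
        -drive file=gluster://server1/vmvol/disk0.qcow2,if=virtio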
This is a great question, and something I've been wondering about too.
Reposting some details from Jeff Darcy's email regarding a similar question
I asked, which may help shed some light on this:
1) The daemons that run in gluster are:
glusterd = management daemon
glusterfsd = per-brick daemon
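An easy, if rough, way to see them all on a storage node (the pattern also matches the glusterfs NFS/self-heal/client processes):

    pgrep -l gluster    # lists glusterd, glusterfsd (one per brick), glusterfs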
Hey,
stopping the glusterd instance does not stop any of the other spawned
daemons. I know this for a fact, as I start and stop glusterd all the
time without it affecting any of the other daemons.
As for stopping the spawned daemons, Craig Carl (I think that's
right), years ago when
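To actually take a node fully down, something like this works (a rough sketch; pkill's substring match catches the remaining glusterfs NFS/self-heal/mount processes):

    /etc/init.d/glusterd stop   # stops only the management daemon
    pkill glusterfsd            # per-brick daemons
    pkill glusterfs             # NFS / self-heal / client processes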
I just want to say that running a VM on almost any FUSE-mounted
GlusterFS volume is going to perform poorly over 1G NICs.
That said, your setup looks fine, except you need to change replica 4
to replica 2. I'm assuming you want both redundancy and speed.
Replicating to all 4 bricks is probably not what
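For illustration (hostnames and brick paths hypothetical), replica 2 over four bricks gives a 2x2 distribute-replicate volume, with consecutive bricks forming the replica pairs:

    gluster volume create vmvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1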
I just discovered yesterday that the systemd configs (in the fedora rpms) do,
indeed, stop the bricks. I think I know how to fix that and will test that and
submit a bug report today and a patch.
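My assumption (a sketch, not the submitted patch) is that it comes down to the unit file's kill mode, so that stopping the service signals only glusterd itself and leaves the spawned brick and NFS/self-heal daemons running:

    # glusterd.service, [Service] section
    KillMode=process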
Patrick Irvine p...@cybersites.ca wrote:
Hey,
stopping the glusterd instance does not stop any
I use Gentoo, and its init scripts do stop all the daemons too. I never
use it, though.
Pat.
On 10/04/2013 10:58 AM, Joe Julian wrote:
I just discovered yesterday that the systemd configs (in the fedora
rpms) do, indeed, stop the bricks. I think I know how to fix that and
will test that and
Sending this again, since I'm not even sure the first one made it to the list,
and it's just happened again, even with the same user (one of the heaviest
users, but I don't think there's anything odd about his usage).
In the last 3 days, we've had 6 such errors (resulting in the logged error:
E
I filed a bug with Gentoo about a year ago related to this.
https://bugs.gentoo.org/show_bug.cgi?id=413417 The takeaway is that the
glusterd init scripts should not kill any glusterfs or glusterfsd
processes, but they have yet to merge the supplied patch.
--
Adam
On Wed, Apr 10, 2013 at 1:00 PM, Patrick
On 04/10/13 03:44, Daniel Mons wrote:
[snip]
Option 1) Delete the file from the bad brick
I would do this. Then trigger a self-heal.
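A sketch of triggering it with the 3.3-era CLI (volume name hypothetical):

    gluster volume heal myvol          # heal everything flagged as pending
    gluster volume heal myvol info     # check what still needs healing

On releases without the heal command, stat'ing the affected file through a client mount kicks off self-heal instead.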
--
Mr. Flibble
King of the Potato People