Re: [Gluster-users] Is glusterd required on clients?

2012-05-22 Thread Amar Tumballi

On 05/22/2012 10:54 AM, Toby Corkindale wrote:
> Hi,
> Just wanted to confirm something..
>
> On Linux clients, using the FUSE method of mounting volumes, do you need
> glusterd to be running?
>
> I don't *think* so, but want to check.


'glusterd' is *not* required on clients. If you have/had issues without 
'glusterd' on a client, please open a bug at http://bugzilla.redhat.com 
(Community - GlusterFS).
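
For reference, a FUSE client needs only the mount itself; the hostname, volume name, and mount point below are placeholders:

```shell
# Mount the volume over FUSE from any server in the pool.
# No glusterd (or any other gluster daemon) needs to run on the client;
# the mount helper fetches the volume file from the server and starts
# the client-side glusterfs process itself.
mount -t glusterfs server1:/myvol /mnt/gluster
```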


Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 'remove-brick' is removing more bytes than are in the brick(?)

2012-05-22 Thread Amar Tumballi


> pbs2ib 8780091379699182236 2994733 in progress

Hi Harry,

Can you please test once again with 'glusterfs-3.3.0qa42' and confirm 
the behavior? This looks like a bug (we suspect some kind of overflow, 
but are not sure yet). Please help us by opening a bug report; in the 
meantime, we will investigate the issue.
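
For what it's worth, the reported figure (8780091379699182236) is within about 5% of 2^63, which fits the overflow theory: a signed 64-bit counter wraps silently once it passes 2^63 - 1, as plain shell arithmetic demonstrates:

```shell
# Shell arithmetic is 64-bit signed, like many internal byte counters;
# incrementing past INT64_MAX wraps to a huge negative value, and a
# value printed as unsigned (or mis-accumulated) can look absurdly large.
max=9223372036854775807        # 2^63 - 1, i.e. INT64_MAX
echo $(( max + 1 ))            # wraps to -9223372036854775808
```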


Regards,
Amar


[Gluster-users] ping time-out

2012-05-22 Thread alfred de sollize
How can I increase the ping timeout beyond 1301 seconds?
I want to make it 2 hours.
regards
Al
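
For anyone searching later: the relevant option is network.ping-timeout, set in seconds per volume (the volume name below is a placeholder):

```shell
# 2 hours = 7200 seconds; "myvol" is a placeholder for your volume name
gluster volume set myvol network.ping-timeout 7200
```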


[Gluster-users] Replication setup and writing erroneously to read-only folder

2012-05-22 Thread Haris Zukanovic

Hi everyone,

I have a replicated setup with 3 Gluster nodes, each holding a brick of 
the same size.
I know that the brick directory should never be written to directly, as 
such changes are not propagated to the other replicas; it should be 
treated as read-only.
On several occasions (manual error, or an application using the wrong 
path) the brick directory has been written to directly, so I now have an 
inconsistency between the brick and the Gluster-mounted replicated 
directory. I read files directly from the brick because it is much, much 
faster.


My question is: what exactly should I do in this situation to bring 
Gluster up to date and resolve the inconsistency?
Is it enough to write the file that went into the brick once more via 
the Gluster-mounted read-write path?



kind regards

--
Haris Zukanovic
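
One approach from that era's documentation (please check it against your version; the mount point and volume name are placeholders): copy the stray file back in through the mount, then trigger self-heal by stat'ing the tree from a client, or, on 3.3 and later, use the heal command:

```shell
# pre-3.3: walking the mount stats every file, which triggers self-heal
find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null

# 3.3 and later: ask gluster to heal the whole volume
gluster volume heal myvol full
```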



Re: [Gluster-users] Is glusterd required on clients?

2012-05-22 Thread Toby Corkindale

On 22/05/12 16:59, Amar Tumballi wrote:
> On 05/22/2012 10:54 AM, Toby Corkindale wrote:
>> Hi,
>> Just wanted to confirm something..
>>
>> On Linux clients, using the FUSE method of mounting volumes, do you need
>> glusterd to be running?
>>
>> I don't *think* so, but want to check.
>
> 'glusterd' is *not* required on clients. If you have/had issues without
> 'glusterd' on a client, please open a bug at http://bugzilla.redhat.com
> (Community - GlusterFS).


No, don't worry, I haven't had any issues.
I just wanted to confirm that this is the correct behaviour -- it wasn't 
immediately clear to me.



Thanks for your help,
Toby


[Gluster-users] [REPOST] Connection failed

2012-05-22 Thread Emmanuel Seyman

[Reposting because I still have the same problem]

Hello, all.

I'm trying to get glusterfs working on two machines (so that I can have
replicated storage on both of them) and I'm stuck on getting glusterd
working.

The two machines are Debian 6.0 (Squeeze) and I'm using the glusterfs
packages from the backports repo (3.2.4-1~bpo60+1).

/etc/glusterfs/glusterd.vol on both machines contains:

  volume management
  type mgmt/glusterd
  option working-directory /etc/glusterd
  option transport-type socket
  option transport.socket.keepalive-time 10
  option transport.socket.keepalive-interval 2
  end-volume

When I start glusterd, I get different results on the two servers:

root@silicium:~# gluster peer status
Number of Peers: 1

Hostname: titane
Uuid: 448e3316-74e3-44aa-a495-5b540e7b8927
State: Peer in Cluster (Connected)

root@titane:~# gluster peer status
Connection failed. Please check if gluster daemon is operational.

The output from 'glusterd --debug' is here:

http://people.parinux.org/~seyman/gluster/glusterd.log
or
http://people.parinux.org/~seyman/gluster/glusterd.log.gz
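
A couple of quick checks worth running (glusterd's default management port is 24007; hostnames are from the setup above):

```shell
# on titane: is glusterd actually listening?
netstat -tlnp | grep 24007

# from silicium: is the port reachable, or is a firewall in the way?
telnet titane 24007
```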

Any comments or suggestions are welcome.

Emmanuel
