Re: [Gluster-users] rebalancing after remove-brick

2011-04-22 Thread HU Zhong

Hi

I succeeded in removing a pair of bricks from my dht+afr system, but my system
is not configured with the command line tool 'gluster'; instead, I wrote the
configuration files and started the glusterfs/glusterfsd processes by hand,
so it may differ from yours. Anyway, here it is for reference.

1. Remove the target replica set from the system. Update the configuration
   file of the original system and restart the glusterfs process.

2. Create a dummy GlusterFS system with the removed replica set and mount it
   on a directory.

3. Copy the data from the dummy GlusterFS system to the original system.
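
A rough sketch of steps 2 and 3 (the volfile and mount point names here are made up):

$ glusterfs -f /home/huz/dht/removed-pair-client.vol /home/huz/dummy-mnt
$ cp -a /home/huz/dummy-mnt/. /home/huz/mnt/
$ sudo umount /home/huz/dummy-mnt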



-Original Message- 
From: Vincent Thomasset

Sent: Friday, April 22, 2011 3:51 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] rebalancing after remove-brick

Hello,

I'm having trouble migrating data from a removed replica set to
another, active one in a distributed-replicated volume.

My test scenario is the following:

- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
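
In CLI terms this is roughly the following (volume and brick names are placeholders):

$ gluster volume create testvol replica 2 nodeA1:/export/brick nodeA2:/export/brick
$ gluster volume start testvol
  (create a bunch of files on the mounted volume)
$ gluster volume add-brick testvol nodeB1:/export/brick nodeB2:/export/brick
$ gluster volume rebalance testvol start
$ gluster volume remove-brick testvol nodeA1:/export/brick nodeA2:/export/brick
$ gluster volume rebalance testvol start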

The docs seem to imply that it is possible to remove bricks
and rebalance afterwards. Is this incorrect? Or should I maybe
use replace-brick instead for this?

Thanks a lot,
Vincent
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] TCP connection increase when reconfiguring, and a question about multi-graph

2011-04-06 Thread HU Zhong
Hi, all.

I set up a dht system and sent a HUP signal to the client to trigger
reconfiguration. But I found that the number of established TCP connections
increased by the number of bricks (the number of glusterfsd processes).

$ ps -ef | grep glusterfs
root  8579 1  0 11:28 ?00:00:00 glusterfsd -f 
/home/huz/dht/server.vol -l /home/huz/dht/server.log -L TRACE
root  8583 1  0 11:28 ?00:00:00 glusterfsd -f 
/home/huz/dht/server2.vol -l /home/huz/dht/server2.log -L TRACE
root  8587 1  0 11:28 ?00:00:00 glusterfsd -f 
/home/huz/dht/server3.vol -l /home/huz/dht/server3.log -L TRACE
root  8595 1  1 11:28 ?00:00:00 glusterfs -f 
/home/huz/dht/client.vol -l /home/huz/dht/client.log -L TRACE /home/huz/mnt

$ sudo netstat -ntp | grep glusterfs
tcp0  0 127.0.0.1:6998  127.0.0.1:1023  ESTABLISHED 
8579/glusterfsd 
tcp0  0 127.0.0.1:1021  127.0.0.1:7000  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:7000  127.0.0.1:1021  ESTABLISHED 
8587/glusterfsd 
tcp0  0 127.0.0.1:1023  127.0.0.1:6998  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:6999  127.0.0.1:1022  ESTABLISHED 
8583/glusterfsd 
tcp0  0 127.0.0.1:1022  127.0.0.1:6999  ESTABLISHED 
8595/glusterfs  

huz@furutuki:~/dht$ sudo kill -s HUP 8595

huz@furutuki:~/dht$ sudo netstat -ntp | grep glusterfs
tcp0  0 127.0.0.1:6998  127.0.0.1:1023  ESTABLISHED 
8579/glusterfsd 
tcp0  0 127.0.0.1:1021  127.0.0.1:7000  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:6999  127.0.0.1:1019  ESTABLISHED 
8583/glusterfsd 
tcp0  0 127.0.0.1:7000  127.0.0.1:1021  ESTABLISHED 
8587/glusterfsd 
tcp0  0 127.0.0.1:1018  127.0.0.1:7000  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:6998  127.0.0.1:1020  ESTABLISHED 
8579/glusterfsd 
tcp0  0 127.0.0.1:1023  127.0.0.1:6998  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:7000  127.0.0.1:1018  ESTABLISHED 
8587/glusterfsd 
tcp0  0 127.0.0.1:1019  127.0.0.1:6999  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:6999  127.0.0.1:1022  ESTABLISHED 
8583/glusterfsd 
tcp0  0 127.0.0.1:1022  127.0.0.1:6999  ESTABLISHED 
8595/glusterfs  
tcp0  0 127.0.0.1:1020  127.0.0.1:6998  ESTABLISHED 
8595/glusterfs 

Looking at the example above, the number of TCP connections increased by 3. I wonder
if this is normal?
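
A quick way to count the client's connections from the netstat output above (8595 is
the pid of the glusterfs client process):

$ sudo netstat -ntp | grep '8595/glusterfs' | wc -l

This prints 3 before the HUP and 6 afterwards.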

Further, I checked the client log and found something like this:
 [2011-04-07 11:28:06.92451] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client0: 
breaking reconnect chain
 [2011-04-07 11:28:06.92596] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client1: 
breaking reconnect chain
 [2011-04-07 11:28:06.92648] T [rpc-clnt.c:405:rpc_clnt_reconnect] 0-client2: 
breaking reconnect chain

 [2011-04-07 11:29:05.101120] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client0: 
breaking reconnect chain
 [2011-04-07 11:29:05.101254] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client1: 
breaking reconnect chain
 [2011-04-07 11:29:05.101307] T [rpc-clnt.c:405:rpc_clnt_reconnect] 1-client2: 
breaking reconnect chain

Apparently, there are two graphs in the system (GlusterFS 3.1.3 outputs the graph id
in the gf_log function).
My understanding is that there should be only one graph, consisting of the xlators
configured in the .vol file.
Because of the HUP signal, GlusterFS set up a new graph and initialized new TCP
connections, which gave the result above. My question is: why do there need to be
two graphs or more?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] dht problems when one server is down

2011-03-16 Thread HU Zhong
hi, 

I started 4 glusterfsd processes to export 4 bricks, and 1 glusterfs process to
mount them on a directory.
If I write a file named "KAKA" into the mountpoint, it is hashed to server 2 and the
write returns OK. Then I kill the process of server 2 and write a file named "KAKA"
into the mountpoint again, to test whether glusterfs can hash the file to one of the
remaining 3 servers. But the write operation gets a "Transport endpoint is not
connected" error on the command line. The client log shows that it again tried to
hash the file "KAKA" to server 2, not to one of the remaining 3 servers. Is this the
expected result?

Actually, I expected glusterfs to write the file "KAKA" to one of the remaining 3
servers, so that I could test how glusterfs handles the duplicate files when the
process of server 2 is restarted.
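
The test looks roughly like this (the mount point and pid are just examples):

$ echo test > /home/huz/mnt/KAKA          (hashed to server 2, returns OK)
$ sudo kill <pid of server 2's glusterfsd>
$ echo test > /home/huz/mnt/KAKA
bash: /home/huz/mnt/KAKA: Transport endpoint is not connected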

Can anyone help me? Thanks in advance!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Does Gluster 3.1 support authorisation control and how to do it

2011-01-10 Thread HU Zhong
Hi

It seems that the node 10.18.14.240 runs both server and client.
If not, please write the server list and the client list here.
As you can see in the log, the nodes other than that one are all accepted by
the server, so you can add both 10.18.14.240 and 127.0.0.1 to the
ip-allow list to see whether it works or not.
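
For example, line 29 of your config would become something like the following
(whether 127.0.0.1 is really needed depends on how the local client/NFS server
connects):

 29: option auth.addr./mnt/gluster1.allow 127.0.0.1,10.18.14.240,10.18.14.248,10.18.14.241,10.18.14.242,10.18.14.243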


On Tue, 2011-01-11 at 01:25 +0800, W.C Lee wrote: 
> Hi, HU
> 
> Thank for your help.
> I tried to use your example(1 server ,1 Client) to test authentication 
> function, it's work.
> 
> But I tried to test it in replication mode (multi-node),FUSE mounting work, 
> but NFS didn't.
> Any node can mount volume via NFS. ><
> 
> And 
> Following is my config.
> 
>  26: volume gluster-new-volume-server
>  27: type protocol/server
>  28: option transport-type tcp
>  29: option auth.addr./mnt/gluster1.allow 
> 10.18.14.240,10.18.14.248,10.18.14.241,10.18.14.242,10.18.14.243
>  30: subvolumes /mnt/gluster1
>  31: end-volume
> 
> 
> After starting volume, log showed below:
> 
> +--+
> [2011-01-11 01:07:54.188695] E [authenticate.c:235:gf_authenticate] auth: no 
> authentication module is interested in accepting remote-client (null)
> [2011-01-11 01:07:54.188716] E [server-handshake.c:545:server_setvolume] 
> gluster-new-volume-server: Cannot authenticate client from 127.0.0.1:1017
> [2011-01-11 01:07:55.264728] I [server-handshake.c:535:server_setvolume] 
> gluster-new-volume-server: accepted client from 10.18.14.241:995
> [2011-01-11 01:07:55.267990] I [server-handshake.c:535:server_setvolume] 
> gluster-new-volume-server: accepted client from 10.18.14.242:1012
> [2011-01-11 01:07:55.272025] I [server-handshake.c:535:server_setvolume] 
> gluster-new-volume-server: accepted client from 10.18.14.243:996
> 
> 
> Do you know is it necessary to set 127.0.0.1 to allow list?
> And it can't use host real ip (10.18.14.240) ?
> 
> But even if I used 127.0.0.1 to replace 10.18.14.240, NFS authentication 
> control still not work. ><
> 
> 
> 
> -Original message-
> From:HU Zhong 
> To:wei.ch...@m2k.com.tw
> Cc:gluster-users 
> Date:Mon, 10 Jan 2011 11:36:00 +0800
> Subject:Re: [Gluster-users] Does Gluster 3.1 support authorisation control 
> and how to do it
> 
> 
> Hi, Cheng
> 
> I think you did the configuration in the wrong place. Instead of
> /etc/glusterd/nfs/nfs-server.vol, you need to modify files
> under /etc/glusterd/vols/.
> 
> As a simple example, consider a one-server-one-client system, both
> server and client are one machine(localhost, ip:192.168.4.112), and
> export directory /home/huz/share for sharing, the client wants to mount
> it on /home/huz/mnt.
> 
> if i modify default
> configuration 
> /etc/glusterd/vols/testvol/testvol.192.168.4.112.home-huz-share.vol
> 
> from
> ..
> 26 volume testvol-server
> 27 type protocol/server
> 28 option transport-type tcp
> 29 option auth.addr./home/huz/share.allow *
> 30 subvolumes /home/huz/share
> 31 end-volume
> 
> to
> ..
> 26 volume testvol-server
> 27 type protocol/server
> 28 option transport-type tcp
> 29 option auth.addr./home/huz/share.reject *
> 30 subvolumes /home/huz/share
> 31 end-volume
> 
> the mount command will fail:
> $sudo mount -o mountproto=tcp -t nfs localhost:/testvol /home/huz/mnt
> mount.nfs: mounting localhost:/testvol failed, reason given by server:
>   No such file or directory
> 
> and the log shows that the authentication error.
> 11-01-10 11:09:58.203600] E
> [client-handshake.c:786:client_setvolume_cbk] testvol-client-0:
> SETVOLUME on remote-host failed: Authentication failed
> 
> change "reject" to "allow", the mount operation will be ok.
> 
> you can configure you own ip rule. As for how to use ip auth and
> usrname/password auth, you can check the attachment. It's a
> documentation file under the directory "doc" of glusterfs src project.
> 
> On Sun, 2011-01-09 at 22:31 +0800, 第二信箱 wrote:
> > Hi, HU:
> > Thanks for your help.
> > 
> > I have the following environment:
> > Gluster 3.1.1
> > Volume Name: gluster-volume
> > Type: Distributed-Replicate
> > Status: Started
> > Number of Bricks: 2 x 2 = 4
> > Transport-type: tcp
> > Bricks:
> > Brick1: gluster1:/mnt/gluster1
> > Brick2: gluster2:/mnt/gluster2
> > Brick3: gluster3:/mnt/gluster3
> > Brick4: gluster4:/mnt/gluster4
> > 
> > 
> > I want to use authenticate module by your suggestion.
> > The way I used below:
> > 1. Stop Volume
> > 2. Edit /etc/glusterd/nfs/nfs-server.vol on Brick1(Gluster1)
> > 3. Modify and Add  From
> >volume nfs-server
> > type nfs/server
> > option nfs.dynamic-volumes on
> > option rpc-auth.addr.gluster-volume.allow *
> > option nfs3.gluster-volume.volume-id 907941d9-6950-425b-
> > b3d5-4e43dd420d9e
> > subvolumes gluster-volume
> > end-volume
> > 
> > to 
> > 
> > volume nfs-server
> > type nfs/server
> > option nfs.

Re: [Gluster-users] Does Gluster 3.1 support authorisation control and how to do it

2011-01-09 Thread HU Zhong
Hi, Cheng

I think you did the configuration in the wrong place. Instead of
/etc/glusterd/nfs/nfs-server.vol, you need to modify files
under /etc/glusterd/vols/.

As a simple example, consider a one-server-one-client system where both the server
and the client are the same machine (localhost, ip: 192.168.4.112) and the directory
/home/huz/share is exported for sharing; the client wants to mount it on
/home/huz/mnt.

If I modify the default configuration file
/etc/glusterd/vols/testvol/testvol.192.168.4.112.home-huz-share.vol

from
..
26 volume testvol-server
27 type protocol/server
28 option transport-type tcp
29 option auth.addr./home/huz/share.allow *
30 subvolumes /home/huz/share
31 end-volume

to
..
26 volume testvol-server
27 type protocol/server
28 option transport-type tcp
29 option auth.addr./home/huz/share.reject *
30 subvolumes /home/huz/share
31 end-volume

the mount command will fail:
$sudo mount -o mountproto=tcp -t nfs localhost:/testvol /home/huz/mnt
mount.nfs: mounting localhost:/testvol failed, reason given by server:
  No such file or directory

and the log shows the authentication error:
[2011-01-10 11:09:58.203600] E
[client-handshake.c:786:client_setvolume_cbk] testvol-client-0:
SETVOLUME on remote-host failed: Authentication failed

change "reject" to "allow", the mount operation will be ok.

You can configure your own IP rule. As for how to use IP auth and
username/password auth, you can check the attachment. It is a
documentation file under the "doc" directory of the glusterfs source project.
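
For example, an IP rule that allows only the machine itself might look like this
(a sketch, reusing the example above; 127.0.0.1 may also be needed if the built-in
NFS server connects over loopback):

29 option auth.addr./home/huz/share.allow 127.0.0.1,192.168.4.112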

On Sun, 2011-01-09 at 22:31 +0800, 第二信箱 wrote:
> Hi, HU:
> Thanks for your help.
> 
> I have the following environment:
> Gluster 3.1.1
> Volume Name: gluster-volume
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/mnt/gluster1
> Brick2: gluster2:/mnt/gluster2
> Brick3: gluster3:/mnt/gluster3
> Brick4: gluster4:/mnt/gluster4
> 
> 
> I want to use authenticate module by your suggestion.
> The way I used below:
> 1. Stop Volume
> 2. Edit /etc/glusterd/nfs/nfs-server.vol on Brick1(Gluster1)
> 3. Modify and Add  From
>volume nfs-server
> type nfs/server
> option nfs.dynamic-volumes on
> option rpc-auth.addr.gluster-volume.allow *
> option nfs3.gluster-volume.volume-id 907941d9-6950-425b-
> b3d5-4e43dd420d9e
> subvolumes gluster-volume
> end-volume
> 
> to 
> 
> volume nfs-server
> type nfs/server
> option nfs.dynamic-volumes on
> option rpc-auth.addr.gluster-volume.allow  10.18.14.1
> option auth.addr.gluster-volume.allow 10.18.14.1
> option nfs3.gluster-volume.volume-id
> 907941d9-6950-425b-b3d5-4e43dd420d9e
> subvolumes gluster-volume
> end-volume
> 
> 4.Start Volume
> 
> --> But I still be able to mount volume from 10.18.14.2 by NFS.
> 
> Anything I missed or be wrong?
> 
> And I find 
> 
> A. After I started volume , nfs-server.vol was initialed to option
> rpc-auth.addr.gluster-volume.allow * .
> B. 4 nodes all have /etc/glusterd/nfs/nfs-server.vol , Should I Edit
> every .vol file on 4 nodes?
> 
> 
> 
> 
> 
> 
> -Original message-
> From:HU Zhong 
> To:wei.ch...@m2k.com.tw
> Cc:gluster-users 
> Date:Fri, 07 Jan 2011 21:17:14 +0800
> Subject:Re: [Gluster-users] Does Gluster 3.1 support authorisation
> control and how to do it
> 
> Hi, Cheng
> 
> There are 2 types of authenticate module that you can config:
> 1. IP address
> 2. login user/password
> 
> please check this site:
> http://www.gluster.com/community/documentation/index.php/Translators/protocol/server
> 
> 
> On Fri, 2011-01-07 at 17:07 +0800, 第二信箱 wrote: 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 


* Authentication is provided by two modules, addr and login. Login-based 
authentication uses a username/password from the client for authentication. Each 
module returns either ACCEPT, REJECT or DONT_CARE. DONT_CARE is returned if the 
input authentication information does not concern that module. 
The theory behind authentication is that "none of the auth modules should 
return REJECT and at least one of them should return ACCEPT".

* Currently, all the authentication-related information is passed unencrypted 
over the network from client to server.


* Options provided in protocol/client:
  * For username/password based authentication:
      option username 
      option password 
    A client can have only one set of username/password.
  * For addr-based authentication:
      No options are required in protocol/client. The client has to bind to a
      privileged port (port < 1024), which means the process in which
      protocol/client is loaded has to run as root.
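
A minimal volfile sketch of the username/password options described above (the brick
name, username and password are made up; the server side uses the auth.login options
from the same document):

# server side
volume brick
  type storage/posix
  option directory /home/huz/share
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.login.brick.allow huz
  option auth.login.huz.password secret
  subvolumes brick
end-volume

# client side
volume client0
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.4.112
  option remote-subvolume brick
  option username huz
  option password secret
end-volume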

---

Re: [Gluster-users] Does Gluster 3.1 support authorisation control and how to do it

2011-01-07 Thread HU Zhong
Hi, Cheng

There are 2 types of authentication module that you can configure:
1. IP address
2. login user/password

please check this site:
http://www.gluster.com/community/documentation/index.php/Translators/protocol/server


On Fri, 2011-01-07 at 17:07 +0800, 第二信箱 wrote: 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] bind-address in 3.1

2011-01-05 Thread HU Zhong
Hi Stefan

Use transport.socket.bind-address or transport.rdma.bind-address
instead of bind-address.
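
For example, in a hand-written server volfile the option would sit under
protocol/server, roughly like this (a sketch; the address is a placeholder and
'brick' is assumed to be defined earlier in the file):

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 192.168.1.10
  subvolumes brick
end-volume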


On Mon, 2011-01-03 at 10:43 +0100, Stefan Becker wrote: 
> Hi together,
> 
> I am trying to bind all gluster daemons to an internal interface. I worked 
> with version 1.x some time ago and it had the option "bind-address". Is this 
> gone or just some docs missing? I cant figure it out. I want ALL daemons to 
> bind to an internal interface, is this possible?
> 
> Greets,
> stefan
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] What does "dict: @data=(nil)" mean?

2010-12-30 Thread HU Zhong
Hi

I think it means that glusterfs cannot find some option in the configuration
file. For example, if you don't specify the option "remote-subvolume" in a
protocol/client section, this warning message would be logged.
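
For example, a protocol/client section like this made-up fragment, which is missing
option remote-subvolume, would trigger that warning on the corresponding dict lookup:

volume client0
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.4.112
end-volume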


On Wed, 2010-12-29 at 12:21 -0800, Freddie Gutierrez wrote: 
> I get lots of these in my logs and I haven't been able to find anything on 
> google...
> 
> W [dict.c:1204:data_to_str] dict: @data=(nil)
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users