[Gluster-users] Nufa deprecated?

2010-07-26 Thread Matt
Hi,

(i) A note at the top of the NUFA-with-single-process page
http://www.gluster.com/community/documentation/index.php/NUFA_with_single_process
declares NUFA as deprecated.
Does this mean every NUFA setup is scheduled to become unsupported, or is it just 
NUFA with a single process, now that the client and server processes have been merged?

(ii) In setting up NUFA/replicate with a single process, I am currently 
starting the glusterfsd daemon by locally mounting my NUFA/replicate volume.  
This is a pretty strange way to start a daemon, but starting the daemon first 
and then trying to mount locally does not work with my single .vol file 
(see the sketch after item (iv) below).

(iii) I am also wondering about my NUFA/replicate/distribute setup as 
described in http://gluster.org/pipermail/gluster-users/2010-July/004988.html
Is this considered an unsupported setup by the GlusterFS developers?

(iv) Would it be possible to re-introduce the unify translator to get a 
replicated NUFA setup with multiple local discs working? (I am quite willing
to not have the files distributed across local bricks.)
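
Regarding (ii), a minimal sketch of the two invocations being contrasted; the 
volfile paths and mount point are hypothetical and this only illustrates the 
3.0.x command-line syntax, not a recommended configuration:

# Single-process case: one .vol file carries both the server-side bricks
# (storage/posix + protocol/server) and the client-side NUFA/replicate graph,
# so mounting it is what starts the combined daemon.
glusterfs -f /etc/glusterfs/nufa-single.vol /mnt/gluster

# Classic split case: start the server daemon first, then mount with a
# separate client volfile.
glusterfsd -f /etc/glusterfs/glusterfsd.vol
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/gluster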

Thanks for the help!

Regards,
Matt
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to know (IP of GlusterServer) store a File in a GlusterFS System?

2010-07-26 Thread galaxmen

Thank you, but something still seems wrong.
After changing the code as in your guide:
(REPLACE:
ret = dict_set_static_ptr (dict, "trusted.glusterfs.location", priv->hostname);
BY:
ret = dict_set_str (dict, "trusted.glusterfs.location", priv->hostname);)

I rebuilt (ran make and make install, then restarted gluster).
The result changed, but it's a strange string, not the hostname:

[r...@test00/]# getfattr -n trusted.glusterfs.location /mnt/glfs/file1
getfattr: Removing leading '/' from absolute path names
# file: mnt/glfs/file1
trusted.glusterfs.location=0sdGVzdDAxAA==

The client is test00 and the server name is test01.
Is anything wrong?
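
One note on the string above, as a hedged aside: getfattr prints values that 
are not plain printable text base64-encoded with a leading "0s" marker, which 
is likely what is happening here (the stored value appears to carry a trailing 
NUL). A quick check, using the value copied from the output above:

# Decode the base64 payload (everything after the "0s" marker); cat -v makes
# the trailing NUL visible as ^@.
echo 'dGVzdDAxAA==' | base64 -d | cat -v
# expected output: test01^@

# Or ask getfattr for a text rendering directly:
getfattr -e text -n trusted.glusterfs.location /mnt/glfs/file1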


Amar Tumballi wrote:

Please apply this patch, and you should be able to get the hostname info:

diff --git a/xlators/storage/posix/src/posix.c b/xlators/storage/posix/src/posix.c
index 0b7ab19..3275542 100644
--- a/xlators/storage/posix/src/posix.c
+++ b/xlators/storage/posix/src/posix.c
@@ -3232,9 +3232,9 @@ posix_getxattr (call_frame_t *frame, xlator_t *this,
 
         if (loc->inode && S_ISREG (loc->inode->st_mode) && name &&
             (strcmp (name, "trusted.glusterfs.location") == 0)) {
-                ret = dict_set_static_ptr (dict,
-                                           "trusted.glusterfs.location",
-                                           priv->hostname);
+                ret = dict_set_str (dict,
+                                    "trusted.glusterfs.location",
+                                    priv->hostname);
                 if (ret < 0) {
                         gf_log (this->name, GF_LOG_WARNING,
                                 "could not set hostname (%s) in dictionary",





On Tue, Jul 13, 2010 at 4:31 AM, Jeff Anderson-Lee
jo...@eecs.berkeley.edu wrote:


On 7/9/2010 2:00 AM, Amar Tumballi wrote:



As of now (in 3.0.x releases) there is no proper solution.. but you can
try

bash# getfattr -n trusted.glusterfs.location $filename

which gives the hostname of the machine which contains it.. (it gives only
one output even if you are using 'replicate')


I tried this and got nothing.  In fact getfattr -d somefilename returned
nothing as well.
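
A hedged aside on why the dump may come back empty (reusing the somefilename 
placeholder from the message above):

# getfattr -d only matches user.* attributes by default; trusted.* ones need an
# explicit match pattern and root privileges:
getfattr -d -m trusted somefilename
# and since the patch above synthesizes trusted.glusterfs.location only when it
# is requested by name, querying it explicitly is the reliable way:
getfattr -n trusted.glusterfs.location somefilename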

Jeff Anderson-Lee

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Simple Distributed Question

2010-07-26 Thread Brad Alexander
If I have the following setup how will files be distributed?  What
happens if I lose one box in this scenario?

 

Server1 - 3 X 146GB

Server2 - 3 X 146GB

Server3 - 3 X 146GB

Server4 - 3 X 146GB

 

Total = 1752 GB

 

 

I have large VMs that are around 600 GB each.
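
A hedged sketch of how this layout is usually expressed in a 3.0-style client 
volfile; the hostnames, brick names and file path are made up, and for brevity 
each server is shown exporting a single brick rather than three:

cat > /etc/glusterfs/distribute-client.vol <<'EOF'
volume server1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick
end-volume

volume server3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume brick
end-volume

volume server4
  type protocol/client
  option transport-type tcp
  option remote-host server4
  option remote-subvolume brick
end-volume

volume distribute0
  type cluster/distribute
  # Each file is hashed, whole, onto exactly one subvolume: capacity is pooled,
  # but a single file never spans bricks (so a 600 GB image needs one brick
  # with at least that much free space), and files living on a server that
  # goes down are unreachable until it comes back.
  subvolumes server1 server2 server3 server4
end-volume
EOF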

 

Thanks.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Simple Distributed Question

2010-07-26 Thread Andy Pace
It really depends on which server was hosting which VM.

This is why distribute/replicate is a good idea :) any one server in a 
replicated pair can fail without data loss or an outage.
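
A minimal fragment of the distribute/replicate layering mentioned above, 
assuming a client volfile that already defines the four hypothetical 
protocol/client volumes (server1..server4) from the sketch under the previous 
message:

cat >> /etc/glusterfs/dist-repl-client.vol <<'EOF'
# Pair the bricks with cluster/replicate, then distribute across the pairs.
# Either member of a pair can be down while its twin keeps serving the data.
volume repl-pair1
  type cluster/replicate
  subvolumes server1 server2
end-volume

volume repl-pair2
  type cluster/replicate
  subvolumes server3 server4
end-volume

volume distribute0
  type cluster/distribute
  subvolumes repl-pair1 repl-pair2
end-volume
EOF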

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Brad Alexander
Sent: Monday, July 26, 2010 10:44 AM
To: Gluster-users@gluster.org
Subject: [Gluster-users] Simple Distributed Question

If I have the following setup how will files be distributed?  What happens if I 
lose one box in this scenario?

 

Server1 - 3 X 146GB

Server2 - 3 X 146GB

Server3 - 3 X 146GB

Server4 - 3 X 146GB

 

Total = 1752 GB

 

 

I have large VMs that are around 600 GB each.

 

Thanks.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Missing files....No such file or directory

2010-07-26 Thread Shain Miley
I set up a test install: a client on Debian mounting a volume exported from an 
OpenSolaris/ZFS server.  After running it over the weekend I noticed 
the following in the glusterfs log file:


Version  : glusterfs 3.0.5 built on Jul 22 2010 19:53:29
git: v3.0.5
Starting Time: 2010-07-22 20:09:44
Command line : glusterfs -f /etc/glusterfs/glusterfs-client.vol 
/glusterfs/client/ -l /var/log/glusterfs.log

PID  : 430
System name  : Linux
Nodename : gluster2
Kernel Release : 2.6.24-8-pve
Hardware Identifier: i686

Given volfile:
+--+
 1: # cat /etc/glusterfs/glusterfs-client.vol
 2: volume test
 3:   type protocol/client
 4:   option transport-type tcp
 5:   option remote-host 172.19.4.191 # IP address of the Server
 6:   option remote-subvolume brick
 7: end-volume
 8:
 9: #volume test2
10: #  type protocol/client
11: #  option transport-type tcp
12: #  option remote-host 172.19.4.124 # IP address of the Server
13: #  option remote-subvolume brick
14: #end-volume
15:
16:
17: volume distribute0
18:   type cluster/distribute
19:   subvolumes test
20: end-volume
21:

+--+
[2010-07-22 20:09:44] N [glusterfsd.c:1409:main] glusterfs: Successfully 
started
[2010-07-22 20:09:44] N [client-protocol.c:6288:client_setvolume_cbk] 
test: Connected to 172.19.4.191:6996, attached to remote volume 'brick'.
[2010-07-22 20:09:44] N [client-protocol.c:6288:client_setvolume_cbk] 
test: Connected to 172.19.4.191:6996, attached to remote volume 'brick'.
[2010-07-22 20:09:44] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.9
[2010-07-22 20:15:50] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:16:03] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:21:13] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:21:23] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:21:33] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:21:45] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:21:55] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:22:06] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:22:20] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:22:33] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-22 20:22:45] E [fuse-bridge.c:2836:fuse_setlk_cbk] 
glusterfs-fuse: SETLK not supported. loading 'features/posix-locks' on 
server side will add SETLK support.
[2010-07-24 01:17:11] W [fuse-bridge.c:493:fuse_entry_cbk] 
glusterfs-fuse: LOOKUP(/Logs/20100721) inode (ptr=0xb59e5718, 
ino=202300, gen=15) found conflict (ptr=0xb6400498, ino=202300, gen=15)
[2010-07-24 01:17:29] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
623309: OPENDIR() /lib/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-24 01:17:47] W [fuse-bridge.c:493:fuse_entry_cbk] 
glusterfs-fuse: LOOKUP(/libmp3/John Denver/The Harbor Lights Concert 
[Disc 2]) inode (ptr=0x8412190, ino=159061, gen=37529) found conflict 
(ptr=0xb5a183f0, ino=159061, gen=37529)
[2010-07-24 01:17:51] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
634501: OPENDIR() /libmp3/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-24 10:25:29] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
655822: OPENDIR() /lib/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-24 10:25:50] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
666990: OPENDIR() /libmp3/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-25 01:17:13] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
679467: OPENDIR() /lib/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-25 01:17:30] W [fuse-bridge.c:862:fuse_fd_cbk] glusterfs-fuse: 
687720: OPENDIR() /libmp3/Ramsey Lewis/Time Flies => -1 (No such file or 
directory)
[2010-07-25 10:25:14] W 
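
The repeated SETLK warnings above point at the server-side volfile; a hedged 
sketch of the layering that message is asking for, keeping the exported volume 
name 'brick' that the client volfile refers to (the export directory and file 
path are hypothetical):

cat > /etc/glusterfs/glusterfsd.vol <<'EOF'
volume posix
  type storage/posix
  option directory /export/brick
end-volume

# Layer features/posix-locks between the backend directory and protocol/server,
# as the fuse_setlk_cbk warning suggests, so SETLK requests can be honoured.
volume brick
  type features/posix-locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
EOF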

[Gluster-users] Inconsistent volume

2010-07-26 Thread Steve Wilson
I have a volume that is distributed and replicated.  While deleting a 
directory structure on the mounted volume, I also restarted the 
GlusterFS daemon on one of the replicated servers.  After the rm -rf 
command completed, it complained that it couldn't delete a directory 
because it wasn't empty.  But from the perspective of the mounted volume 
it appeared empty.  Looking at the individual bricks, though, I could 
see that there were files remaining in this directory.


My question: what is the proper way to correct this problem and bring 
the volume back to a consistent state?  I've tried using the ls -alR 
command to force a self-heal but for some reason this always causes the 
volume to become unresponsive from any client after 10 minutes or so.
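
A hedged sketch of a lighter-weight way to trigger the same lookups, limited to 
the affected subtree (mount point and path are hypothetical):

# stat() every entry under the problem directory through the mount point; on
# replicate, the lookups this generates trigger self-heal file by file.
find /mnt/gluster/path/to/problem/dir -print0 | xargs -0 stat > /dev/null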


Some clients/servers are running version 3.0.4 while the others are 
running 3.0.5.


Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Inconsistent volume

2010-07-26 Thread Andy Pace
I too would like to know how to sync up a replicated pair of bricks. Right 
now I've got a slight difference between the 2...

Scale-n-defrag.sh didn't do much either. Looking forward to some help :)

 13182120616 139057220 12362648984   2% /export
vs.
 13181705324 139057208 12362233500   2% /export

Granted, it's a very small amount (and the total available is slightly 
different), but the amount used should be the same, no?
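
One read-only way to check whether the difference is actual file data or just 
backend housekeeping, assuming hypothetical hostnames and that /export is the 
brick directory on both servers:

# Dump "size path" for every file on each backend brick, then diff the lists.
ssh serverA 'cd /export && find . -type f -printf "%s %p\n" | sort -k2' > /tmp/brickA.lst
ssh serverB 'cd /export && find . -type f -printf "%s %p\n" | sort -k2' > /tmp/brickB.lst
diff /tmp/brickA.lst /tmp/brickB.lst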



-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Wilson
Sent: Monday, July 26, 2010 3:35 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Inconsistent volume

I have a volume that is distributed and replicated.  While deleting a directory 
structure on the mounted volume, I also restarted the GlusterFS daemon on one 
of the replicated servers.  After the rm -rf 
command completed, it complained that it couldn't delete a directory because it 
wasn't empty.  But from the perspective of the mounted volume it appeared 
empty.  Looking at the individual bricks, though, I could see that there were 
files remaining in this directory.

My question: what is the proper way to correct this problem and bring the 
volume back to a consistent state?  I've tried using the ls -alR 
command to force a self-heal but for some reason this always causes the volume 
to become unresponsive from any client after 10 minutes or so.

Some clients/servers are running version 3.0.4 while the others are running 
3.0.5.

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager Markey Center for Structural 
Biology Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users