Re: [Gluster-users] Disabling NFS

2011-03-31 Thread Shehjar Tikoo

Mike Hanby wrote:

Strange, I do have the fuse and gluster-fuse / gluster-core packages
installed on the client.

I can mount the volume using the gluster native client:

nas-srv-01:/users   /users   glusterfs   defaults,_netdev   0 0

Maybe I just need to figure out how to configure the built-in Gluster NFS
to export the way I want.


See http://gluster.com/community/documentation/index.php/Gluster_3.1_NFS_Guide
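
For what it's worth, a client mount of the built-in Gluster NFS usually
looks something like this (a sketch; Gluster NFS speaks NFSv3 over TCP
only, and nolock is commonly needed with 3.1.x):

 bash# mount -t nfs -o vers=3,mountproto=tcp,nolock nas-srv-01:/users /users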



I'm trying to ensure that, from the Gluster servers, I have fine-grained
control over which IP addresses can mount specific parts of the volume,
similar to what can be done via /etc/exports.


See rpc-auth.addr option in the NFS options section in the 3.1.3 release notes:

http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/Gluster_FS_Release_Notes_3.1.3.pdf
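
With the 3.1.x CLI these restrictions are set per volume; a sketch,
assuming the nfs.rpc-auth-allow/nfs.rpc-auth-reject keys map to the
rpc-auth.addr option described in those notes (volume name and addresses
illustrative):

 bash# gluster volume set users nfs.rpc-auth-allow 192.168.1.*
 bash# gluster volume set users nfs.rpc-auth-reject 192.168.2.*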

-Shehjar



Re: [Gluster-users] strange error hangs hangs any access to gluster mount

2011-03-31 Thread Amar Tumballi
Hi James,

To fix this, you can go to the backend of *any one brick per replica pair*
and run the command below on the directories where the layout has issues:

bash# setfattr -x trusted.glusterfs.dht <directory>

[ 'pair backend' means one brick out of each replica set ]

and then from the client machine (i.e., where you have the mount point),
run the commands below:

 bash# echo 3 > /proc/sys/vm/drop_caches
 bash# stat <directory>   # through the mount point

In this step, the layout will get fixed again automatically, which should
solve this issue.
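
If you want to double-check a directory's layout before and after, the raw
range assignments can be read off each brick with the standard attr tools
(a sketch; run against the brick path, not the mount point):

 bash# getfattr -n trusted.glusterfs.dht -e hex <brick-path>/<directory>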

Regards,
Amar


On Tue, Mar 29, 2011 at 12:45 AM, Burnash, James jburn...@knight.com wrote:

 Thanks Jeff. That at least gives me a shot at figuring out some similar
 problems.

 It's possible that in the course of bringing up the mirrors initially I
 futzed something up. I'll have to check the read-write servers as well.

 James Burnash, Unix Engineering

 -Original Message-
 From: Jeff Darcy [mailto:jda...@redhat.com]
 Sent: Monday, March 28, 2011 3:09 PM
 To: Burnash, James
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] strange error hangs hangs any access to
 gluster mount

 On 03/28/2011 02:29 PM, Burnash, James wrote:
  Sorry - paste went awry.
 
  Updated here:
 
  http://pastebin.com/M74LAYej

 OK, that definitely shows a problem.  Here's the whole map of which
 nodes are claiming which ranges:

 0000 0ccb: g07 on gfs17/gfs18
 0ccc 1997: g08 on gfs17/gfs18
 1998 2663: g09 on gfs17/gfs18
 2664 332f: g10 on gfs17/gfs18
 3330 3ffb: g01 on gfs17/gfs18
 3ffc 4cc7: g02 on gfs17/gfs18
 4cc8 5993: g01 on gfs14/gfs14
 5994 665f: g02 on gfs14/gfs14
 6660 732b: g03 on gfs14/gfs14
 732c 7ff7: g04 on gfs14/gfs14
 7ff8 8cc3: g05 on gfs14/gfs14
 8cc4 998f: g06 on gfs14/gfs14
 9990 a65b: g07 on gfs14/gfs14
 a65c b327: g08 on gfs14/gfs14
 b328 b32e: g09 on gfs14/gfs14
 b32f bff3: g09 on gfs14/gfs14
   *** AND g04 on gfs17/18
 bff4 ccbf: g10 on gfs14/gfs14
   *** AND g04 on gfs17/18
 ccc0 ccc7: g03 on gfs17/gfs18
   *** AND g04 on gfs17/18
 ccc8 d98b: g03 on gfs17/gfs18
 d98c e657: *** GAP ***
 e658 f323: g05 on gfs17/gfs18
 f324 ffff: g06 on gfs17/gfs18

 I know this all seems like numerology, but bear with me.  Note that all
 of the problems seem to involve g04 on gfs17/gfs18 claiming the wrong
 range, and that the range it's claiming is almost exactly twice the size
 of all the other ranges.  In fact, it's the range it would have been
 assigned if there had been ten nodes instead of twenty.  For example, if
 that filesystem had been restored to an earlier state on gfs17/gfs18,
 and then self-healed in the wrong direction (self-mangled?) you would
 get exactly this set of symptoms.  I'm not saying that's what happened;
 it's just a way to illustrate what these values mean and the
 consequences of their being out of sync with each other.
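
 To make the arithmetic concrete (treating the four-digit values above as
 a 16-bit hash space; a quick shell sketch):

  bash# printf '%x\n' $(( 0x10000 / 20 ))         # per-member width with 20 members
  ccc
  bash# printf '%x\n' $(( 0x10000 / 10 ))         # per-member width with 10 members
  1999
  bash# printf '%x\n' $(( 0xccc7 - 0xb32f + 1 ))  # width of g04's stray claim
  1999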

 So, why only one client?  Since you're reporting values on the servers,
 I'd guess it's because only that client has remounted.  The others are
 probably still operating from cached (and apparently correct) layout
 information.  This is a very precarious state, I'd have to say.  You
 *might* be able to fix this by fixing the xattr values on that one
 filesystem, but I really can't recommend trying that without some input
 from Gluster themselves.




Re: [Gluster-users] Any Update: Req help for configuring glusterfs 3.1.2

2011-03-31 Thread Deadpan110
*sorry - twice today I have forgotten to include the mailing list in the reply*

Hiya,

Using the 'gluster' command in gluster 3.1.x is the preferred method
to set up volumes so there should be no need to alter config files
manually (I am unsure of the state of custom translators and would not
know if you need them for your situation).

But just to recap...


Distributed Volumes:
--------------------

# gluster volume create test-volume transport tcp server1:/exp1
server2:/exp2 server3:/exp3 server4:/exp4

This method aggregates the storage allocated to every brick into one
large file system. There is no failover - so if you reboot a server,
the filesystem will be unavailable until the server is back online (and
if you lose the server and its bricks, some files will be
unrecoverable).

Files are written to the storage so as to attempt to make each brick
use approximately the same space.

(file1 -> server1 ... file2 -> server2 ... etc)


Distributed Replicated Volumes:
-------------------------------

# gluster volume create test-volume replica 2 transport tcp
server1:/exp1 server2:/exp2

The above example is what could be considered similar to a RAID
mirror. Using 'replica 2' means that files will be written to 2 bricks;
if there are only 2 servers, each will contain all the files.
If one server becomes unavailable, files can still be accessed (if you
lose a server, all data is still available).

The key point to note here is the 'replica 2' statement.
You can think of it as: (replica count - 1) = the number of bricks that
can go down before the file system becomes unavailable.

In the case of 10 servers, the files will be distributed across the
bricks with each file stored on a maximum of 2 bricks which gives you
the ability to safely reboot a node (one at a time).


Distributed Striped Volumes:
-

This is great for storing very large files as they can be striped and
replicated across bricks - I will not go into detail on this, but the
setup is similar to 'Distributed Replicated Volumes' above.
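
For reference, the create command for a striped (and distributed) volume
looks like this (a sketch; the stripe count and bricks are illustrative):

# gluster volume create test-volume stripe 2 transport tcp server1:/exp1
server2:/exp2 server3:/exp3 server4:/exp4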


Suggestion:
-----------

For 10 servers with varying amounts of space available on each, I would
create equally sized partitions on every server for the bricks to reside on.

This may involve a lot of work if you have:

server1:/data(120GB)
server2:/data(120GB)
server3:/home/g1(2TB)
server4:/home/g2(2TB)

But for example, if each server contained a brick of 120GB and you
used 'replica 2' - you would then have a total of 10 bricks with each
file stored on 2 bricks, giving you roughly 600GB of usable storage out
of 1.2TB raw (half the capacity goes to the replication).

(Not sure if my figures are correct, but in essence - you would get
more storage and a degree of failover)
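
A back-of-the-envelope check of those figures (usable capacity = number
of bricks x brick size / replica count):

# echo "$(( 10 * 120 / 2 )) GB usable"
600 GB usable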

It is also recommended that the number of bricks should be a multiple
of the replica count.

It is possible to use multiple bricks on a single server - for example,
20 bricks across 10 nodes using replica 4. It should be noted, though,
that while a single server containing 2 bricks would be safe to reboot,
rebooting a second server takes down an additional 2 bricks and could
make the filesystem unavailable until one of the servers is back online,
depending on where the bricks sit in the replication (see the sketch
below).
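
Brick order matters for that: replica sets are formed from consecutive
bricks in the create command, so listing every server's first brick before
any of the second bricks spreads each server's pair across different sets.
A sketch with hypothetical brick paths:

# gluster volume create big-volume replica 4 transport tcp \
server1:/b1 server2:/b1 server3:/b1 server4:/b1 server5:/b1 \
server6:/b1 server7:/b1 server8:/b1 server9:/b1 server10:/b1 \
server1:/b2 server2:/b2 server3:/b2 server4:/b2 server5:/b2 \
server6:/b2 server7:/b2 server8:/b2 server9:/b2 server10:/b2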


I hope that helps

On 28 March 2011 22:37, s.varadha rajan rajanvara...@gmail.com wrote:
 Hi Team,

 Can anybody help me with my query below?

 Regards,
 Varadharajan.S

 -- Forwarded message --
 From: s.varadha rajan rajanvara...@gmail.com
 Date: Wed, Mar 23, 2011 at 6:27 PM
 Subject: Req help for configuring glusterfs 3.1.2
 To: gluster-de...@nongnu.org


 Hi,

 I would like to implement glusterfs 3.1 at my company. We have around 10
 servers, all running different applications such as web servers (Apache,
 Tomcat), VMware, DNS, etc. The servers have different disk capacities,
 such as 1 TB and 2 TB, and all of them run Ubuntu 10.04.

 My requirements:

 1. Connect all the servers through glusterfs 3.1.x.
 2. If I configure replication, I don't get the disk space, so I would
 need to configure something like striping - but glusterfs doesn't provide
 failover for that.
 3. For example, with server1:/data (120GB), server2:/data (120GB),
 server3:/home/g1 (2TB), server4:/home/g2 (2TB), ... I want to connect all
 the servers so that I get one big storage space for everything. If I go
 for striping or distributed and one server fails, I can't access the
 volume and get the error "Transport endpoint not connected".
 4. Would the unify translator solve my requirement? I think there is no
 support for a unify config under 3.1.2.

 Please let me know a solution and configuration ideas for this. I have
 been searching Google for the past 10 days with no proper result.

 Regards,
 Varadharajan.S


[Gluster-users] Error rpmbuild Glusterfs 3.1.3

2011-03-31 Thread Jürgen Winkler

Hi,

I am running into trouble when I try to build RPMs from the glusterfs
3.1.3 tgz on my SLES servers (SLES 10.1 & SLES 11.1).

Everything runs fine, I guess, until it tries to build the RPMs.

Then I always run into this error:

RPM build errors:
File not found: 
/var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd
File not found by glob: 
/var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/python/syncdaemon/*


Are there missing dependencies or something?

Thx




Re: [Gluster-users] Error rpmbuild Glusterfs 3.1.3

2011-03-31 Thread Lakshmipathi.G
Hi,
Please apply the patch (by Joe) from
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2279 and rebuild
again.
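
If it helps, the patch-and-rebuild cycle might look roughly like this (a
sketch; the patch file name is hypothetical):

tar xzf glusterfs-3.1.3.tar.gz
cd glusterfs-3.1.3
patch -p1 < bug-2279.patch    # the patch attached to the bug above
cd .. && tar czf glusterfs-3.1.3.tar.gz glusterfs-3.1.3
rpmbuild -ta glusterfs-3.1.3.tar.gz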

-- 

Cheers,
Lakshmipathi.G
FOSS Programmer.





Re: [Gluster-users] strange error hangs hangs any access to gluster mount

2011-03-31 Thread Burnash, James
Amar,

Thank you so much! I have a big meeting today with the customer, and
having this solved will go a long way towards making them happier.

James

[Gluster-users] Trying to get gluster up and running. Features/locks

2011-03-31 Thread Hreinn Ágústsson
I always get this error.

Running Linux 2.6.32-28-server, Ubuntu Server 10.10 x64


[2011-03-31 12:22:58.441625] W [posix.c:1883:init] 0-locks: Volume is dangling. 
Please check the volume file.
Given volfile:
+------------------------------------------------------------------------------+
  1: volume brick
  2: type storage/posix
  3: option directory /glustermnt/sdb1/a1
  4: end-volume
  5:
  6: volume locks
  7:   type features/posix-locks
  8:   subvolumes brick
  9: end-volume
 10:
 11:
 12: volume server
 13: type protocol/server
 14: option transport-type tcp/server
 15: option transport.socket.listen-port 6996
 16: option transport.socket.bind-address 127.0.0.1
 17: option auth.addr.brick.allow *
 18: subvolumes brick
 19: end-volume
[2011-03-31 12:22:58.480571] I [server-handshake.c:535:server_setvolume] 
0-server: accepted client from 127.0.0.1:1023
[2011-03-31 12:22:58.504177] C [posix.c:3929:posix_entrylk] 0-brick: 
features/locks translator is not loaded. You need to use it for proper 
functioning of GlusterFS

I am unable to find any references to this problem online...

Best regards,
Hreinn


Re: [Gluster-users] Trying to get gluster up and running. Features/locks

2011-03-31 Thread Amar Tumballi
2011/3/31 Hreinn Ágústsson hre...@nordicphotos.com

Looking at the logs, you are using one of 3.1.3, 3.2.0qaNN, or the git
master branch code. In all of these, we recommend using volumes created by
glusterd through the gluster CLI. Please refer to our documentation on how
to get started with the GlusterFS 3.1.x releases.

Regards,
Amar
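
Incidentally, the 'Volume is dangling' warning together with the
'features/locks translator is not loaded' error suggests the server volume
in that volfile exports the bare brick instead of the locks volume, so
locks never enters the graph. For a hand-written volfile, a minimal fix
might look like this (a sketch, untested; clients would then also have to
reference the exported 'locks' name):

volume server
  type protocol/server
  option transport-type tcp/server
  option transport.socket.listen-port 6996
  option transport.socket.bind-address 127.0.0.1
  option auth.addr.locks.allow *
  # was 'subvolumes brick', which left the locks volume dangling
  subvolumes locks
end-volume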


Re: [Gluster-users] Any Update: Req help for configuring

2011-03-31 Thread Hand, John (CONTR)
Hello,

With the Distributed Volumes configuration, is it possible to specify
that files be written locally first (like NUFA in 3.0)?

Thanks,
-john








[Gluster-users] Gluster CLI, no volume present although a volume is working fine.

2011-03-31 Thread Hreinn Agustsson
Hi all,
I'm having some problems connecting with the gluster CLI.

This is in my server.vol file:

volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type tcp
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume

And I can connect to it with:

root@steve:~# gluster volume info --remote-host=127.0.0.1

(I can see it making contact if I run glusterfsd --debug, so it's
connecting fine.)

But it always responds with:

No volumes present

This is what I can see on the server:

[2011-03-31 21:53:06.706491] I
[glusterd-handler.c:775:glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req

But I have a mounted volume, and am copying files to it and it's working
fine.

Below are my client.vol & server.vol files if interested.

I'm running glusterfs on a single server for now.


-- 
-Hr1


[Gluster-users] Trying to evaluate gluster 3.1

2011-03-31 Thread Bill Gerrard
I'm setting up gluster for the very first time as an evaluation on
debian/squeeze.  Using the 3.1.3 debian package and following the
online 3.1 documentation I'm able to get the volume created and
started; however, I'm unable to connect from the client.

After mounting on the client I get "cannot access /mnt/glusterfs:
Transport endpoint is not connected" when trying to do an ls.  I
believe the issue is listed in the log file, but I do not know how to
solve it, as I've followed the instructions step by step.

transcript below:

testsan1:/# gluster volume create test-volume replica 2 transport tcp
testsan1:/export/test-volume testsan2:/export/test-volume
Creation of volume test-volume has been successful. Please start the
volume to access data.

testsan1:/# gluster volume start test-volume
Starting volume test-volume has been successful

testsan1:/# gluster volume info test-volume

Volume Name: test-volume
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: testsan1:/export/test-volume
Brick2: testsan2:/export/test-volume

testsan1:/# cat /var/log/glusterfs/bricks/export-test-volume.log

[2011-03-31 16:08:28.869084] W [graph.c:274:gf_add_cmdline_options]
0-test-volume-server: adding option 'listen-port' for volume
'test-volume-server' with value '24009'
[2011-03-31 16:08:28.871483] W
[rpc-transport.c:444:validate_volume_options]
0-tcp.test-volume-server: option 'listen-port' is deprecated,
preferred is 'transport.socket.listen-port', continuing with
correction
[2011-03-31 16:08:28.871784] C [posix.c:4371:init]
0-test-volume-posix: Extended attribute not supported, exiting.
[2011-03-31 16:08:28.871806] E [xlator.c:784:xlator_init]
0-test-volume-posix: Initialization of volume 'test-volume-posix'
failed, review your volfile again
[2011-03-31 16:08:28.871818] E [graph.c:331:glusterfs_graph_init]
0-test-volume-posix: initializing translator failed
[2011-03-31 16:08:28.872146] I [glusterfsd.c:700:cleanup_and_exit]
0-glusterfsd: shutting down

on client:

mkdir -p /mnt/glusterfs
sudo mount -t glusterfs testsan1:/test-volume /mnt/glusterfs

$ ls -l /mnt/glusterfs
ls: cannot access /mnt/glusterfs: Transport endpoint is not connected


Re: [Gluster-users] Trying to evaluate gluster 3.1

2011-03-31 Thread Mohit Anchlia
Which file system are you using?

0-test-volume-posix: Extended attribute not supported, exiting.

You need a filesystem that supports extended attributes, such as
ext3/ext4 or xfs.
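
A quick way to verify xattr support on the backend is to set and read a
trusted attribute by hand (a sketch; run as root on the brick directory,
second command shows the expected output):

setfattr -n trusted.test -v works /export/test-volume
getfattr -n trusted.test /export/test-volume
# file: export/test-volume
trusted.test="works"
setfattr -x trusted.test /export/test-volume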



Re: [Gluster-users] Trying to evaluate gluster 3.1

2011-03-31 Thread Bill Gerrard
The filesystem is ext3.

An additional piece of information: we are using OpenVZ virtual hosts
for the evaluation. Perhaps gluster does not work on a virtual machine?



[Gluster-users] where are files physically stored in recent version (3.1.x)

2011-03-31 Thread Thai. Ngo Bao
Hello list,


I have been developing a MapReduce application running on GlusterFS, so
files' extended attributes are pretty critical for the application. I am
curious to know what the official way is of finding out where files are
physically stored.

Should you have any suggestions or experience you want to share, please let me 
know.

Thanks,
Thai Ngo
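
As far as I know there is no official interface for this in 3.1.x; one
low-tech approach is to look for the file at the same relative path on
each brick, since DHT places a whole file on a single brick (or replica
set). A sketch, assuming SSH access and hypothetical brick paths:

for h in server1 server2 server3 server4; do
    ssh $h "test -f /export/brick/path/to/file && echo $h"
done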