Re: [Gluster-users] permission denied and no such file for real files

2011-01-06 Thread Thai. Ngo Bao
Dear All,

I am having a problem, and I think it is similar to Lana's case: glusterfs 
prevents clients from creating/deleting files and folders. The problem appears 
quite regularly. I have attached our client's log file.

Please also note that I'm running glusterfs 3.1.1 with 4 bricks, a 
distributed volume, tcp transport, and FUSE clients. The servers and clients are 
running CentOS 5.4.

If you have any suggestions, please let me know. Thanks.

~Thai

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Lana Deere
Sent: Tuesday, December 21, 2010 6:27 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] permission denied and no such file for real files

I'm running glusterfs 3.1.1, 4 volumes distributed, tcp transport,
fuse clients.  The servers are running CentOS 5.5, the clients CentOS 5.4.

The symptom that reproduces most frequently goes like this.  There are
a bunch of jobs (several hundred) running on various client nodes (~50),
and for no obvious reason one or several of them will fail to open a
file (errno 13, permission denied).  Subsequently it will not be possible
to delete that directory because it is not empty.  Attempts to delete the
files which make it non-empty fail with "no such file or directory".

I attached the program and driver script which I use to reproduce this,
but not the log files (I ran them with trace, and even compressed they
are 150M).  It is not perfect; it reproduces the problem in maybe
one-third of the runs.
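For readers without the attachment, here is a hypothetical sketch of the kind of driver described (the real program is not reproduced here; the path, file count, and names are made up for illustration):

```shell
#!/bin/sh
# Hypothetical sketch of a stress driver like the one described above:
# create many oddly named files, re-open each for read, then remove the
# tree, reporting any failure along the way.
DIR=/tmp/attack-demo
mkdir -p "$DIR/051"
i=0
while [ "$i" -lt 10 ]; do
    f="$DIR/051/$i-this!\"\":is:!\"\"the\"\"filename\"\"and\"\"it\"\"is\"\"very\"\"strange"
    : > "$f" || echo "create failed: $f"        # errno 13 would surface here
    i=$((i+1))
done
for f in "$DIR"/051/*; do
    cat "$f" > /dev/null || echo "reopen failed: $f"   # errno 2 would surface here
done
rm -r "$DIR" || echo "rm failed: $DIR"          # "Directory not empty" would surface here
```

On a healthy local filesystem this prints nothing and removes the tree; on the affected gluster mount the reported failures match the output below.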

Here's a typical output:


$ ATTACK-QRSH
Quantity of jobs: 1000
Mon Dec 20 14:59:52 EST 2010: rm .
Mon Dec 20 14:59:52 EST 2010: create data .
Mon Dec 20 14:59:54 EST 2010: wait for creates.
Couldn't create
data/051/307-this!"":is:!""the""filename""and""it""is""very""strange.tmp:
errno 13 (Permission denied)
Couldn't create
data/051/563-this!"":is:!""the""filename""and""it""is""very""strange.tmp:
errno 13 (Permission denied)
Mon Dec 20 15:04:45 EST 2010: check data .
Mon Dec 20 15:04:47 EST 2010: wait for checks.
Unable to reopen for read
data/051/563-this!"":is:!""the""filename""and""it""is""very""strange,
errno 2 (No such file or directory)
Unable to reopen for read
data/051/307-this!"":is:!""the""filename""and""it""is""very""strange,
errno 2 (No such file or directory)
Mon Dec 20 15:08:53 EST 2010: rm again .
rm: cannot remove directory `data/051': Directory not empty
Mon Dec 20 15:09:16 EST 2010: done
$ cd data/051
$ ls
179-this!"":is:!""the""filename""and""it""is""very""strange
435-this!"":is:!""the""filename""and""it""is""very""strange
$ rm *
rm: cannot remove
`179-this!"":is:!""the""filename""and""it""is""very""strange': No such
file or directory
rm: cannot remove
`435-this!"":is:!""the""filename""and""it""is""very""strange': No such
file or directory
$ cd ..
$ rm -r 051
rm: cannot remove
`051/179-this!"":is:!""the""filename""and""it""is""very""strange': No
such file or directory
rm: cannot remove
`051/435-this!"":is:!""the""filename""and""it""is""very""strange': No
such file or directory

Note that the files which won't delete are in the same directory that
had the permission error earlier.

.. Lana (lana.de...@gmail.com)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs debian package

2011-01-06 Thread Bernard Li
Hi all:

On Thu, Jan 6, 2011 at 2:09 PM, Fabricio Cannini wrote:

> I second Piotr's suggestion.
> May I also suggest two things to the gluster devs:
>
> - To follow Debian's way of separating packages (server, client, libraries, and
> common data packages). It makes automated installation much easier and cleaner.
>
> - To create a proper Debian repository of the "latest and greatest" release of
> gluster at gluster.org. Again, it would make our lives as sysadmins much easier
> to just set up a repo in '/etc/apt/sources.list' and let $management_system
> take care of the rest.

It looks like GlusterFS 3.1.1 is already in the Debian repository
under "experimental":

http://packages.debian.org/search?suite=experimental&searchon=names&keywords=glusterfs

So perhaps someone needs to take the patches from Debian and file a
bug at bugs.gluster.com so that the Gluster developers can include
them in a future release.

Just my $0.02.

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS setup question

2011-01-06 Thread Shain Miley

Ok,
Well I setup my zfs filesystems like this:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/audio
zfs create pool1/glusterfs/video
zfs create pool1/glusterfs/documents


It all went well; I restarted gluster and could read the files 
fine. However, I see these errors when trying to write (touch a file):


[2011-01-06 19:11:23] W [posix.c:331:posix_fstat_with_gen] posix1: 
Access to fd 13 (on dev 47775760) is crossing device (47775759)
[2011-01-06 19:11:23] E [posix.c:2267:posix_create] posix1: fstat on 13 
failed: Cross-device link
[2011-01-06 19:15:58] W [posix.c:331:posix_fstat_with_gen] posix1: 
Access to fd 13 (on dev 47775761) is crossing device (47775759)
[2011-01-06 19:15:58] E [posix.c:2267:posix_create] posix1: fstat on 13 
failed: Cross-device link



This makes me think I cannot put everything under pool1/glusterfs like 
that.
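A plausible explanation (my assumption, not stated in the thread) is that gluster's posix translator expects the whole export to live on a single device, while each child ZFS dataset gets its own st_dev, hence the "Cross-device link" fstat errors. A quick sketch to check whether any subdirectory of a brick crosses devices (the demo path is hypothetical; point BRICK at the real export, e.g. /pool1/glusterfs):

```shell
#!/bin/sh
# Hedged sketch: compare the device number (st_dev) of a brick root
# with each of its immediate subdirectories.
BRICK=${BRICK:-/tmp/brick-demo}
mkdir -p "$BRICK/audio" "$BRICK/video"   # demo dirs only; harmless if they exist
root_dev=$(stat -c %d "$BRICK")
for d in "$BRICK"/*/; do
    dev=$(stat -c %d "$d")
    if [ "$dev" != "$root_dev" ]; then
        echo "crosses device: $d (st_dev $dev vs $root_dev)"
    fi
done
echo "brick root st_dev: $root_dev"
```

Any "crosses device" line means that subtree is a separate filesystem from the brick root, which matches the dev numbers in the log above (47775760/47775761 vs 47775759).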


So I thought I would do something like this:

zfs create pool1/glusterfs01/audio
zfs create pool1/glusterfs02/video
zfs create pool1/glusterfs03/documents

but the problem with that is that I then cannot just share out 
pool1/glusterfs in the .vol file as I wanted to.


Do I have any choice...other than a longer .vol file...or a single 
gluster zfs filesystem named pool1/glusterfs?


At this point the whole cluster is down...so I have to make a choice 
shortly.


Thanks in advance,

Shain



On 01/06/2011 05:12 PM, Jacob Shucart wrote:

Shain,

That's correct.  There really is no downside to doing separate ZFS
filesystems unless you consider the process of creating them or managing
them a downside.  ZFS is pretty easy to administer, so my overall
recommendation would be scenario #2.

-Jacob

-Original Message-
From: Shain Miley [mailto:smi...@npr.org]
Sent: Thursday, January 06, 2011 2:10 PM
To: Jacob Shucart
Cc: 'Gluster General Discussion List'
Subject: Re: [Gluster-users] ZFS setup question

Jacob,
Thanks for the input.  I did consider that, along with having the ability
to set different properties on each (compression, dedup, etc.), none of
which I plan on using right now...however I would at least have the
option in the future.

The only other thing I was able to come up with was this:

If one of the shares did get out of sync, for example (or if you simply
wanted to know the size of the shares, for that matter)...it might be easier
to tell which one using 'zfs list' or something like that...rather than
having to do a 'du' on a several-TB folder.


Shain




On 01/06/2011 04:58 PM, Jacob Shucart wrote:

Shain,

If you are planning on taking snapshots of the underlying filesystems then
#2 would be better.  If you are not planning on taking snapshots then #1
and #2 are really equal, and so I would say that #1 is fine because there
are fewer filesystems to manage.  I hope this clarifies things.  Since ZFS
snapshots are done at the filesystem level, if you wanted to take a
snapshot of just music then you could not do that unless music was on its
own ZFS filesystem.

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Shain Miley
Sent: Thursday, January 06, 2011 1:47 PM
To: Gluster General Discussion List
Subject: [Gluster-users] ZFS setup question

Hello,
I am in the process of setting up my Gluster shares and am looking at
the following two setup options; I am wondering if anyone can speak
to the pros/cons of either:

1) Create one large zfs filesystem for gluster.

eg:

zfs create pool1/glusterfs

and then create several folders with 'mkdir' inside '/pool1/glusterfs'
(music,videos,documents).

2) Create 1 zfs filesystem per share.

eg:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/music
zfs create pool1/glusterfs/videos
zfs create pool1/glusterfs/documents


I would then share /pool1/glusterfs out with gluster (I do not want to
have an overly complicated .vol file with each share having its
own gluster volume).


Any thoughts would be great.

Thanks,

Shain



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users







[Gluster-users] Instability with large setup and infiniband

2011-01-06 Thread Fabricio Cannini
Hi all

I've set up glusterfs, version 3.0.5 (Debian squeeze amd64 stock packages), 
like this, with each node being both a server and a client:

Client config
http://pastebin.com/6d4BjQwd

Server config
http://pastebin.com/4ZmX9ir1


Configuration of each node:
2x Intel Xeon 5420 2.5GHz, 16GB DDR2 ECC, 1 SATA2 HD of 750GB, of which 
~600GB is a partition ( /glstfs ) dedicated to gluster. Each node 
also has 1 Mellanox MT25204 [InfiniHost III Lx] InfiniBand DDR HCA used by 
gluster through the 'verbs' interface.

This cluster of 22 nodes is used for scientific computing, and glusterfs is 
used to create a scratch area for I/O intensive apps.

And this is one of the problems: *one* I/O-intensive job can bring the whole 
volume to its knees, with "Transport endpoint not connected" errors and so 
on, to the point of complete uselessness, especially if the job is running in 
parallel ( through MPI ) on more than one node.

The other problem is that gluster has been somewhat unstable, even without 
I/O-intensive jobs. Out of the blue, a simple 'ls -la /scratch' is answered 
with a "Transport endpoint not connected" error. But when this happens, 
restarting all servers brings things back to a working state.

If anybody here using glusterfs with InfiniBand has been through this ( or 
something like it ) and could share your experiences, please, please, please 
do.

TIA,
Fabricio.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-06 Thread Max Ivanov
You can try the following (untested):

1. mount glusterfs volume to /mnt/g
2. mount --bind /mnt/g /mnt/g_ro
3. mount -o remount,ro /mnt/g_ro
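The steps above can be consolidated as follows (still untested, needs root; on some kernels the remount must repeat the bind flag):

```
mkdir -p /mnt/g_ro
mount --bind /mnt/g /mnt/g_ro
mount -o remount,ro,bind /mnt/g_ro    # plain "remount,ro" may suffice on newer kernels
# sanity check: a write through the read-only alias should fail
touch /mnt/g_ro/probe 2>/dev/null && echo "still writable" || echo "read-only OK"
```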
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS setup question

2011-01-06 Thread Jacob Shucart
Shain,

That's correct.  There really is no downside to doing separate ZFS
filesystems unless you consider the process of creating them or managing
them a downside.  ZFS is pretty easy to administer, so my overall
recommendation would be scenario #2.

-Jacob

-Original Message-
From: Shain Miley [mailto:smi...@npr.org] 
Sent: Thursday, January 06, 2011 2:10 PM
To: Jacob Shucart
Cc: 'Gluster General Discussion List'
Subject: Re: [Gluster-users] ZFS setup question

Jacob,
Thanks for the input.  I did consider that, along with having the ability 
to set different properties on each (compression, dedup, etc.), none of 
which I plan on using right now...however I would at least have the 
option in the future.

The only other thing I was able to come up with was this:

If one of the shares did get out of sync, for example (or if you simply 
wanted to know the size of the shares, for that matter)...it might be easier 
to tell which one using 'zfs list' or something like that...rather than 
having to do a 'du' on a several-TB folder.


Shain




On 01/06/2011 04:58 PM, Jacob Shucart wrote:
> Shain,
>
> If you are planning on taking snapshots of the underlying filesystems then
> #2 would be better.  If you are not planning on taking snapshots then #1
> and #2 are really equal, and so I would say that #1 is fine because there
> are fewer filesystems to manage.  I hope this clarifies things.  Since ZFS
> snapshots are done at the filesystem level, if you wanted to take a
> snapshot of just music then you could not do that unless music was on its
> own ZFS filesystem.
>
> -Jacob
>
> -Original Message-
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Shain Miley
> Sent: Thursday, January 06, 2011 1:47 PM
> To: Gluster General Discussion List
> Subject: [Gluster-users] ZFS setup question
>
> Hello,
> I am in the process of setting up my Gluster shares and am looking at
> the following two setup options; I am wondering if anyone can speak
> to the pros/cons of either:
>
> 1) Create one large zfs filesystem for gluster.
>
> eg:
>
> zfs create pool1/glusterfs
>
> and then create several folders with 'mkdir' inside '/pool1/glusterfs'
> (music,videos,documents).
>
> 2) Create 1 zfs filesystem per share.
>
> eg:
>
> zfs create pool1/glusterfs
> zfs create pool1/glusterfs/music
> zfs create pool1/glusterfs/videos
> zfs create pool1/glusterfs/documents
>
>
> I would then share /pool1/glusterfs out with gluster (I do not want to
> have an overly complicated .vol file with each share having its
> own gluster volume).
>
>
> Any thoughts would be great.
>
> Thanks,
>
> Shain
>
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Tar: file changed as we read it

2011-01-06 Thread Max Ivanov
Hi everybody!
I am doing some glusterFS tests and found strange behaviour: tar
constantly reports "file changed as we read it" on lots of files (but
not all); details follow.

How to reproduce.
1. Set up a 2-node replicated cluster following the official 3.1 docs
(glusterFS version is 3.1.1)
2. sync time with ntpdate on both servers
3. mount volume over NFS
4. run "ls -laR /mount/point" to be sure that things are synchronized
5. tar cf /dev/null /mount/point
6. check console!

If I do the same on a FUSE mount, everything goes fine.
Not sure, but it seems those warnings occur more rarely if the NFS
client is on a third host rather than on one of the gluster servers.
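For context, tar's warning comes from comparing a file's status before and after it reads the file. A rough, self-contained illustration of that check (the connection to NFS attribute caching is my guess, not something established in this thread):

```shell
#!/bin/sh
# Illustration of the check tar performs: stat a file before and after
# reading it, and warn if the metadata (mtime, size) differs. Over NFS,
# stale attribute caches can make the second stat differ even though
# nothing touched the file.
f=/tmp/tar-demo-file
echo "hello" > "$f"
before=$(stat -c '%Y %s' "$f")
cat "$f" > /dev/null
after=$(stat -c '%Y %s' "$f")
if [ "$before" = "$after" ]; then
    echo "unchanged: $f"      # prints this on a local filesystem
else
    echo "file changed as we read it: $f"
fi
```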
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS setup question

2011-01-06 Thread Shain Miley

Jacob,
Thanks for the input.  I did consider that, along with having the ability 
to set different properties on each (compression, dedup, etc.), none of 
which I plan on using right now...however I would at least have the 
option in the future.


The only other thing I was able to come up with was this:

If one of the shares did get out of sync, for example (or if you simply 
wanted to know the size of the shares, for that matter)...it might be easier 
to tell which one using 'zfs list' or something like that...rather than 
having to do a 'du' on a several-TB folder.
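With per-share datasets that comparison is a one-liner (sketch; dataset names borrowed from the layout discussed in this thread):

```
# Per-share usage at a glance, versus running du over a multi-TB directory:
zfs list -r -o name,used,avail pool1/glusterfs
```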



Shain




On 01/06/2011 04:58 PM, Jacob Shucart wrote:

Shain,

If you are planning on taking snapshots of the underlying filesystems then
#2 would be better.  If you are not planning on taking snapshots then #1
and #2 are really equal, and so I would say that #1 is fine because there
are fewer filesystems to manage.  I hope this clarifies things.  Since ZFS
snapshots are done at the filesystem level, if you wanted to take a
snapshot of just music then you could not do that unless music was on its
own ZFS filesystem.

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Shain Miley
Sent: Thursday, January 06, 2011 1:47 PM
To: Gluster General Discussion List
Subject: [Gluster-users] ZFS setup question

Hello,
I am in the process of setting up my Gluster shares and am looking at
the following two setup options; I am wondering if anyone can speak
to the pros/cons of either:

1) Create one large zfs filesystem for gluster.

eg:

zfs create pool1/glusterfs

and then create several folders with 'mkdir' inside '/pool1/glusterfs'
(music,videos,documents).

2) Create 1 zfs filesystem per share.

eg:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/music
zfs create pool1/glusterfs/videos
zfs create pool1/glusterfs/documents


I would then share /pool1/glusterfs out with gluster (I do not want to
have an overly complicated .vol file with each share having its
own gluster volume).


Any thoughts would be great.

Thanks,

Shain



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





Re: [Gluster-users] glusterfs debian package

2011-01-06 Thread Fabricio Cannini
On Thursday, 6 January 2011, at 17:24:02, Piotr Kandziora wrote:
> Hello,
> 
> Quick question for gluster developers: Could you add to debian package
> automatic creating rc scripts in postinst action? Currently this is
> not supported and user has to manually execute update-rc.d command.
> This would be helpful in large cluster installations...

Hi all.

I second Piotr's suggestion.
May I also suggest two things to the gluster devs:

- To follow Debian's way of separating packages ( server, client, libraries, and 
common data packages). It makes automated installation much easier and cleaner.

- To create a proper Debian repository of the "latest and greatest" release of 
gluster at gluster.org. Again, it would make our lives as sysadmins much easier 
to just set up a repo in '/etc/apt/sources.list' and let $management_system 
take care of the rest.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS setup question

2011-01-06 Thread Jacob Shucart
Shain,

If you are planning on taking snapshots of the underlying filesystems then
#2 would be better.  If you are not planning on taking snapshots then #1
and #2 are really equal, and so I would say that #1 is fine because there
are fewer filesystems to manage.  I hope this clarifies things.  Since ZFS
snapshots are done at the filesystem level, if you wanted to take a
snapshot of just music then you could not do that unless music was on its
own ZFS filesystem.
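To make that concrete (command sketch; dataset names follow scenario #2 as quoted below):

```
# With scenario #2, a single share can be snapshotted on its own:
zfs snapshot pool1/glusterfs/music@nightly
zfs list -t snapshot -r pool1/glusterfs
# With scenario #1, the smallest unit you can snapshot is all of pool1/glusterfs.
```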

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Shain Miley
Sent: Thursday, January 06, 2011 1:47 PM
To: Gluster General Discussion List
Subject: [Gluster-users] ZFS setup question

Hello,
I am in the process of setting up my Gluster shares and am looking at 
the following two setup options; I am wondering if anyone can speak 
to the pros/cons of either:

1) Create one large zfs filesystem for gluster.

eg:

zfs create pool1/glusterfs

and then create several folders with 'mkdir' inside '/pool1/glusterfs' 
(music,videos,documents).

2) Create 1 zfs filesystem per share.

eg:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/music
zfs create pool1/glusterfs/videos
zfs create pool1/glusterfs/documents


I would then share /pool1/glusterfs out with gluster (I do not want to 
have an overly complicated .vol file with each share having its 
own gluster volume).


Any thoughts would be great.

Thanks,

Shain



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] ZFS setup question

2011-01-06 Thread Shain Miley

Hello,
I am in the process of setting up my Gluster shares and am looking at 
the following two setup options; I am wondering if anyone can speak 
to the pros/cons of either:


1) Create one large zfs filesystem for gluster.

eg:

zfs create pool1/glusterfs

and then create several folders with 'mkdir' inside '/pool1/glusterfs' 
(music,videos,documents).


2) Create 1 zfs filesystem per share.

eg:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/music
zfs create pool1/glusterfs/videos
zfs create pool1/glusterfs/documents


I would then share /pool1/glusterfs out with gluster (I do not want to 
have an overly complicated .vol file with each share having its 
own gluster volume).



Any thoughts would be great.

Thanks,

Shain



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] glusterfs debian package

2011-01-06 Thread Piotr Kandziora
Hello,

Quick question for the gluster developers: could you make the Debian package
create rc scripts automatically in the postinst action? Currently this is
not supported, and the user has to execute the update-rc.d command manually.
This would be helpful in large cluster installations...
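A minimal sketch of what such a postinst fragment could look like ('glusterd' as the init script name is an assumption; the real package may use a different name):

```
#!/bin/sh
# Hypothetical debian/postinst fragment -- a sketch, not the actual package code.
set -e
case "$1" in
    configure)
        # register and start the (assumed) init script
        update-rc.d glusterd defaults >/dev/null
        invoke-rc.d glusterd start || true
        ;;
esac
```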

Cheers
Piotr Kandziora
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-06 Thread Burnash, James
Hi Jacob.

It turns out that it does in fact work; the problem is that the mount command 
does not reflect this. E.g.:

glusterfs#-pfs1:/pfs-ro1 on /pfs/pfs-ro1 type fuse 
(rw,allow_other,default_permissions,max_read=131072)

That's not fatal, but it is misleading.

Thanks for responding.

James Burnash, Unix Engineering


-Original Message-
From: Jacob Shucart [mailto:ja...@gluster.com] 
Sent: Thursday, January 06, 2011 12:44 PM
To: Burnash, James; gluster-users@gluster.org
Subject: RE: [Gluster-users] Read-only volumes - timeframe for features/filter?

James,

What OS is this with?  I ran a test with 3.1.1:

[r...@jacobgfs31-s1 /]# mount -t glusterfs -o ro localhost:/test /tmpo
[r...@jacobgfs31-s1 /]# cd /tmpo
[r...@jacobgfs31-s1 tmpo]# touch file2
touch: cannot touch `file2': Read-only file system

Is this what you are looking for?

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Thursday, January 06, 2011 8:43 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?

Gluster Devs - I know you are busy, but could you possibly give me some
guidance on this?

That said - how do members of the Gluster community handle read-only
Gluster volumes - or is nobody else doing this?

Thanks,

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Wednesday, January 05, 2011 4:02 PM
To: 'Ian Rogers'; Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?

Was this read-only option (-O ro) added to gluster 3.1.1? I've tried a
couple of invocations:

mount -t glusterfs -O ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1
and
mount -t glusterfs -o ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1

Neither appears to have any effect.

Thanks,

James Burnash, Unix Engineering


-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ian Rogers
Sent: Thursday, March 11, 2010 4:28 PM
To: Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?


Excellent, although it must be possible to specify this in /etc/fstab or
"by hand" in the volume specification.

Thanks,

Ian

On 11/03/2010 07:15, Amar Tumballi wrote:
> We are in the process of providing '-o ro' option for 'mount -t 
> glusterfs' (internally using filter). It should be included in 
> codebase soon.
>
> I will reply here in the thread about the availability.
>
> -Amar
>
> On Thu, Mar 11, 2010 at 12:21 AM, Ian Rogers 
> mailto:ian.rog...@contactclean.com>>
wrote:
>
>
> Gluster Devs,
>
> I'd like be able to mount some volumes readonly and
> features/filter -
>
http://www.gluster.com/community/documentation/index.php/Translators/features/filter
> - should be perfect.
>
> But I've compiled 3.0.2 from source and features/filter is
> installed in a "testing" subdirectory...
>
> Is there a time-frame for promoting this xlater to the main set,
> or is there a better way of doing this?
>
> Cheers,  Ian
>
> -- 
> www.ContactClean.com 
> Making changing email address as easy as clicking a mouse.
> Helping you keep in touch.
>
>
>


DISCLAIMER: 
This e-mail, and any attachments thereto, is intended only for use by the
addressee(s) named herein and may contain legally privileged and/or
confidential information. If you are not the intended recipient of this
e-mail, you are hereby notified that any dissemination, distribution or
copying of this e-mail, and any attachments thereto, is strictly
prohibited. If you have received this in error, please immediately notify
me and permanently delete the original and any copy of any e-mail and any
printout thereof. E-mail transmission cannot be guaranteed to be secure or
error-free. The sender therefore does not accept liability for any errors
or omissions in the contents of this message which arise as a result of
e-mail transmission. 
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at
its discretion, monitor and review the content of all e-mail
communications. http://www.knight.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-06 Thread Jacob Shucart
James,

What OS is this with?  I ran a test with 3.1.1:

[r...@jacobgfs31-s1 /]# mount -t glusterfs -o ro localhost:/test /tmpo
[r...@jacobgfs31-s1 /]# cd /tmpo
[r...@jacobgfs31-s1 tmpo]# touch file2
touch: cannot touch `file2': Read-only file system

Is this what you are looking for?

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Thursday, January 06, 2011 8:43 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?

Gluster Devs - I know you are busy, but could you possibly give me some
guidance on this?

That said - how do members of the Gluster community handle read-only
Gluster volumes - or is nobody else doing this?

Thanks,

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Wednesday, January 05, 2011 4:02 PM
To: 'Ian Rogers'; Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?

Was this read-only option (-O ro) added to gluster 3.1.1? I've tried a
couple of invocations:

mount -t glusterfs -O ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1
and
mount -t glusterfs -o ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1

Neither appears to have any effect.

Thanks,

James Burnash, Unix Engineering


-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ian Rogers
Sent: Thursday, March 11, 2010 4:28 PM
To: Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for
features/filter?


Excellent, although it must be possible to specify this in /etc/fstab or
"by hand" in the volume specification.

Thanks,

Ian

On 11/03/2010 07:15, Amar Tumballi wrote:
> We are in the process of providing '-o ro' option for 'mount -t 
> glusterfs' (internally using filter). It should be included in 
> codebase soon.
>
> I will reply here in the thread about the availability.
>
> -Amar
>
> On Thu, Mar 11, 2010 at 12:21 AM, Ian Rogers 
> mailto:ian.rog...@contactclean.com>>
wrote:
>
>
> Gluster Devs,
>
> I'd like be able to mount some volumes readonly and
> features/filter -
>
http://www.gluster.com/community/documentation/index.php/Translators/features/filter
> - should be perfect.
>
> But I've compiled 3.0.2 from source and features/filter is
> installed in a "testing" subdirectory...
>
> Is there a time-frame for promoting this xlater to the main set,
> or is there a better way of doing this?
>
> Cheers,  Ian
>
> -- 
> www.ContactClean.com 
> Making changing email address as easy as clicking a mouse.
> Helping you keep in touch.
>
>
>


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-06 Thread Burnash, James
Gluster Devs - I know you are busy, but could you possibly give me some 
guidance on this?

That said - how do members of the Gluster community handle read-only Gluster 
volumes - or is nobody else doing this?

Thanks,

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Wednesday, January 05, 2011 4:02 PM
To: 'Ian Rogers'; Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

Was this read-only option (-O ro) added to gluster 3.1.1? I've tried a couple 
of invocations:

mount -t glusterfs -O ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1
and
mount -t glusterfs -o ro ${GLUSTER_RO}:/pfs-ro1 /pfs/pfs-ro1

Neither appears to have any effect.

Thanks,

James Burnash, Unix Engineering


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ian Rogers
Sent: Thursday, March 11, 2010 4:28 PM
To: Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Read-only volumes - timeframe for features/filter?


Excellent, although it must be possible to specify this in /etc/fstab or "by 
hand" in the volume specification.

Thanks,

Ian

On 11/03/2010 07:15, Amar Tumballi wrote:
> We are in the process of providing '-o ro' option for 'mount -t 
> glusterfs' (internally using filter). It should be included in 
> codebase soon.
>
> I will reply here in the thread about the availability.
>
> -Amar
>
> On Thu, Mar 11, 2010 at 12:21 AM, Ian Rogers 
> ian.rog...@contactclean.com wrote:
>
>
> Gluster Devs,
>
> I'd like be able to mount some volumes readonly and
> features/filter -
> 
> http://www.gluster.com/community/documentation/index.php/Translators/features/filter
> - should be perfect.
>
> But I've compiled 3.0.2 from source and features/filter is
> installed in a "testing" subdirectory...
>
> Is there a time-frame for promoting this xlator to the main set,
> or is there a better way of doing this?
>
> Cheers,  Ian
>
> -- 
> www.ContactClean.com 
> Making changing email address as easy as clicking a mouse.
> Helping you keep in touch.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>


DISCLAIMER: 
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission. 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Errors in logs and partition crash

2011-01-06 Thread Samuel Hassine
Hi there,

I have a problem here, in all my gluster clients logs files, I have since
one or two days the following errors:

[2011-01-06 11:52:33.396075] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319109: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.455547] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319223: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.499089] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319254: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.841787] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319756: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:34.38679] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319819: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)

Here the same: http://pastebin.com/kYzcD3qq

And after a few hours like this, gluster partition freezes and crashes.

Here the config:

r...@on-001:~# gluster volume info
Volume Name: dns
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: on-001.olympe-network.com:/store1
Brick2: on-002.olympe-network.com:/store1
Brick3: on-003.olympe-network.com:/store1
Brick4: on-004.olympe-network.com:/store1
Options Reconfigured:
performance.cache-refresh-timeout: 0
performance.cache-size: 6144MB

Does anyone know what this means?

Regards.
Samuel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1 client

2011-01-06 Thread Dan Bretherton

Hello All,
I have seen what appears to be the same problem when a user tried to 
change the ownership of some files.  The client log file entries were 
like this:


[2011-01-05 17:10:02.93] W [fuse-bridge.c:648:fuse_setattr_cbk] 
glusterfs-fuse: 16210806: SETATTR() 
/users/rle/INTERIM/INTERIM_2003_VOR850.nc => -1 (Invalid argument)


The error reported on the command line was "permission denied" I am 
told, not "invalid argument" as with the file deletion problem.  
Unfortunately I didn't get a chance to set the log level to TRACE because 
the user in question went to an NFS client to change the ownership 
before telling me there had been a problem.  However this might make it 
easier to reproduce the problem, now that we know that it isn't 
restricted to file deletion.


-Dan.

-

On 01/06/2011 09:07 AM, Thai. Ngo Bao wrote:

Dear All,

I'd like to confirm that I am having the same problem. I am using glusterfs 
3.1.1.

I did set volume diagnostics.client-log-level to TRACE, please see the attached 
file for further information.

Thanks for your support.

~Thai
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Vijay Bellur
Sent: Wednesday, January 05, 2011 1:30 AM
To: Dan Bretherton
Cc: gluster-users
Subject: Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1 
client

On Tuesday 04 January 2011 11:21 PM, Dan Bretherton wrote:

> Hello Vijay,
> There is nothing except routine "accepted client" messages in the
> brick log files on the servers, and I can't see anything relevant in
> the /var/log/messages files.  On the client the only messages relating
> to this problem were included in my original mailing list message. Are
> there some other log files I should be looking at?  I don't know how
> to recreate this problem I'm afraid - it's just a case of waiting
> until it crops up again.  When it does I will try to persuade the
> users to leave the files in place while I investigate further.  When
> the time comes what should I look for?


When it happens again, please set the glusterfs client log level to
TRACE through

# gluster volume set <volname> diagnostics.client-log-level TRACE
perform the delete operation on such files and send across the client
log file to us.

You can revert the log level to INFO after you are done with this.

Thanks,
Vijay
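
The cycle Vijay describes might look like the following on an admin node; "myvol" is a placeholder volume name, and the client log path depends on the mount point:

```
# Raise client-side verbosity before reproducing the failure
gluster volume set myvol diagnostics.client-log-level TRACE

# ...reproduce the failing delete on a client, then collect the
# client log, e.g. under /var/log/glusterfs/ ...

# Restore the default level once the log has been captured
gluster volume set myvol diagnostics.client-log-level INFO
```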


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1 client

2011-01-06 Thread Thai. Ngo Bao
Dear All,

I'd like to confirm that I am having the same problem. I am using glusterfs 
3.1.1. 

I did set volume diagnostics.client-log-level to TRACE, please see the attached 
file for further information.

Thanks for your support.

~Thai
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Vijay Bellur
Sent: Wednesday, January 05, 2011 1:30 AM
To: Dan Bretherton
Cc: gluster-users
Subject: Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1 
client

On Tuesday 04 January 2011 11:21 PM, Dan Bretherton wrote:
> Hello Vijay,
> There is nothing except routine "accepted client" messages in the 
> brick log files on the servers, and I can't see anything relevant in 
> the /var/log/messages files.  On the client the only messages relating 
> to this problem were included in my original mailing list message. Are 
> there some other log files I should be looking at?  I don't know how 
> to recreate this problem I'm afraid - it's just a case of waiting 
> until it crops up again.  When it does I will try to persuade the 
> users to leave the files in place while I investigate further.  When 
> the time comes what should I look for?
>
When it happens again, please set the glusterfs client log level to 
TRACE through

# gluster volume set <volname> diagnostics.client-log-level TRACE
perform the delete operation on such files and send across the client 
log file to us.

You can revert the log level to INFO after you are done with this.

Thanks,
Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users