Re: [Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS

2010-03-18 Thread hgichon

I installed the librpc2-5 package on my Ubuntu 9.10 machine.
Now I can use NFS without unfsd/knfsd!
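For anyone trying the same thing, a minimal sketch of what the mount looks like from a client, assuming NFSv3; the export name (NAS, the volgen name) and the mount point are my assumptions, not confirmed against the alpha release notes:

# kernel NFS client mount against the Gluster NFS translator -- no unfsd/knfsd needed
mount -t nfs -o vers=3,nolock 192.168.1.127:/NAS /mnt/nas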

- kpkim

hgichon wrote:

[...]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS

2010-03-18 Thread hgichon

Wow, good news! Thanks.

I installed from source, but the mount failed.

Is my config wrong?

- kpkim

r...@ccc1:/usr/local/etc/glusterfs# mount -t glusterfs 
/usr/local/etc/glusterfs/nfs.vol /ABCD -o loglevel=DEBUG
Volume 'nfs-server', line 60: type 'nfs/server' is not valid or not found on 
this machine
error in parsing volume file /usr/local/etc/glusterfs/nfs.vol
exiting
r...@ccc1:/usr/local/etc/glusterfs#

[pid  4270] open("/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so", 
O_RDONLY) = 7
[pid  4270] read(7, 
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\337\0\0\0\0\0\0"..., 832) = 
832
[pid  4270] fstat(7, {st_mode=S_IFREG|0755, st_size=781248, ...}) = 0
[pid  4270] mmap(NULL, 2337392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 
7, 0) = 0x7f4be41bc000
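The strace shows the xlator file itself opens and maps fine, so a failing dlopen() is more likely an unresolved library dependency (consistent with the librpc2-5 fix kpkim reports elsewhere in this thread). A quick check, assuming the alpha installed its xlators under /usr/local:

# list any shared libraries the NFS xlator needs but cannot resolve
ldd /usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so | grep 'not found'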

my config
-
r...@ccc1:/usr/local/etc/glusterfs# cat glusterfsd.vol
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export 
192.168.1.128:/export --nfs --cifs

volume posix1
  type storage/posix
  option directory /export
end-volume

volume locks1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume
-
r...@ccc1:/usr/local/etc/glusterfs# cat nfs.vol
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export 
192.168.1.128:/export --nfs --cifs

volume 192.168.1.128-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.128
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume 192.168.1.127-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.127
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume

volume distribute
type cluster/distribute
subvolumes 192.168.1.127-1 192.168.1.128-1
end-volume

#volume writebehind
#type performance/write-behind
#option cache-size 4MB
#subvolumes distribute
#end-volume

#volume readahead
#type performance/read-ahead
#option page-count 4
#subvolumes writebehind
#end-volume

volume iocache
type performance/io-cache
option cache-size 128MB
option cache-timeout 1
subvolumes distribute
end-volume

#volume quickread
#type performance/quick-read
#option cache-timeout 1
#option max-file-size 64kB
#subvolumes iocache
#end-volume

#volume statprefetch
#type performance/stat-prefetch
#subvolumes quickread
#end-volume

volume nfs-server
type nfs/server
subvolumes iocache
option rpc-auth.addr.allow *
end-volume
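For reference, the nfs/server translator runs inside a glusterfs server process rather than under a FUSE mount. A hedged sketch of starting it directly (the log path is an assumption; see the release notes for the documented invocation):

# start the NFS server process from the volfile
glusterfs -f /usr/local/etc/glusterfs/nfs.vol -l /usr/local/var/log/glusterfs/nfs.log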



Tejas N. Bhise wrote:

[...]


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] poor dd performance w/ gluster 3.0.3

2010-03-18 Thread Vikas Gorur

On Mar 18, 2010, at 4:43 PM, Jeremy Enos wrote:
> 
> Run finished: Thu Mar 18 17:58:57 2010
> [je...@ac31 IOR-2.10.1]$ dd conv=fsync if=/dev/zero of=/scratch/jenos/bigfile 
> bs=1024 count=100
> ^C489835+0 records in
> 489835+0 records out
> 501591040 bytes (502 MB) copied, 421.996 s, 1.2 MB/s


Were you using the same block size for 3.0? Have you changed the backend
filesystem type?
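With bs=1024, dd issues 1 KiB writes, so per-request overhead dominates. A hedged comparison run with a larger block size (same target path, roughly the same total size) would show whether throughput recovers:

dd conv=fsync if=/dev/zero of=/scratch/jenos/bigfile bs=1M count=512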

--
Vikas Gorur
Engineer - Gluster, Inc.
+1 (408) 770 1894
--







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] poor dd performance w/ gluster 3.0.3

2010-03-18 Thread Jeremy Enos

What worked at 100 MB/sec with gluster 3.0 is now 1.2 MB/sec with 3.0.3.
I have two 5-host stripes, replicated. (I'm aware that this config is
unsupported.) I just updated from 3.0 to 3.0.3.
While my IOR tests across 8 clients come out fine (slightly better than
with 3.0), dd performance has gone to crap. This is fine, I guess, as long
as applications still perform well, but I'm curious why dd is affected
this way (it used to be ~100 MB/sec with 3.0 and is now 1.2 MB/sec with
3.0.3).


Jeremy

[je...@ac31 IOR-2.10.1]$ ./runior.sh -t 1m -b 1g
IOR-2.10.1: MPI Coordinated Test of Parallel I/O

Run began: Thu Mar 18 17:58:02 2010
Command line used: ./IOR -a POSIX -q -wr -m -C -F -e -N 8 -t 1m -b 1g -o 
/scratch/jenos/iortest

Machine: Linux ac31.ncsa.uiuc.edu

Summary:
api= POSIX
test filename  = /scratch/jenos/iortest
access = file-per-process
ordering   = sequential offsets
clients= 8 (1 per node)
repetitions= 1
xfersize   = 1 MiB
blocksize  = 1 GiB
aggregate filesize = 8 GiB

access  bw(MiB/s)  block(KiB)  xfer(KiB)  open(s)   wr/rd(s)  close(s)  iter
------  ---------  ----------  ---------  --------  --------  --------  ----
write   180.90     1048576     1024.00    0.072115  45.25     20.07     0
read    980.56     1048576     1024.00    0.032333  8.34      3.56      0


Max Write: 180.90 MiB/sec (189.68 MB/sec)
Max Read:  980.56 MiB/sec (1028.19 MB/sec)

Run finished: Thu Mar 18 17:58:57 2010
[je...@ac31 IOR-2.10.1]$ dd conv=fsync if=/dev/zero 
of=/scratch/jenos/bigfile bs=1024 count=100

^C489835+0 records in
489835+0 records out
501591040 bytes (502 MB) copied, 421.996 s, 1.2 MB/s

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Storage Platform 3.0.3 - Cannot Login

2010-03-18 Thread Roberto Lucignani
It happened to me too, but I found that the problem was my DNS.

I had named the first gluster server node01 and set the domain to my network
domain; once I created the matching entry in my DNS, the login process had no
problems.

It seems that the login process checks the node's FQDN; if it can't be
verified, the process fails.
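A hedged way to verify this before touching DNS (node01.example.com and the address are placeholders for your own node):

hostname --fqdn                # should print the node's fully qualified name
host node01.example.com        # forward lookup must succeed
host 192.168.10.11             # reverse lookup should return the same name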

 

I hope this helps.

 

Regards

R. Lucignani 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Quickread Translator Memory Leak?

2010-03-18 Thread Benjamin Long
On Thursday 18 March 2010 03:03:03 pm Vijay Bellur wrote:
> Benjamin Long wrote:
> > Has anyone else noticed a memory leak when using the Quickread
> > translator?
> 
> Quickread translator does unlimited caching as of now. This is not a
> memory leak but it has the same effect in exhausting available memory.
> We are going to improve this behavior through enhancement bug 723.
> 
> > My workstations are having a problem as well. After running for a few
> > days (as long as a week) the users start having their sessions killed.
> > They are returned to a login prompt, and can login again. Glusterfs is
> > still running at this point, but I think thats because all the users apps
> > were first on the kill list for an oom condition. The backup server runs
> > nothing but glusterfs and rsync.
> 
> Do you have details of GlusterFS's memory usage (Resident Memory and
> percentage of memory used) at the instant when the oom condition was
> observed?
> 
> 
> Regards,
> Vijay
> 

Yep. It's a VM with 1 GB of RAM. It runs nothing but gluster, rsync, and ssh. I
saw glusterfs using 97% of the RAM just before it died. All the swap was used
up too.

Here's the output of top about 10 min before that:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 2868 root  20   0 55664 2356  648 S   14  0.2 0:23.28 rsync
 2239 root  20   0  933m 752m 1300 R    5 74.9 0:12.06 glusterfs

I can turn quickread back on and test some more if it will be helpful.
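For capturing the numbers Vijay asked for, a minimal sketch (the log path is arbitrary):

# snapshot resident size and %mem of glusterfs once a minute
while true; do ps -o pid,rss,pmem,comm -C glusterfs >> /root/glusterfs-mem.log; sleep 60; done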

-- 
Benjamin Long
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Quickread Translator Memory Leak?

2010-03-18 Thread Vijay Bellur

Benjamin Long wrote:
> Has anyone else noticed a memory leak when using the Quickread translator?

The Quickread translator does unlimited caching as of now. This is not a
memory leak, but it has the same effect of exhausting available memory.
We are going to improve this behavior through enhancement bug 723.

> My workstations are having a problem as well. [...]

Do you have details of GlusterFS's memory usage (resident memory and
percentage of memory used) at the instant the OOM condition was observed?



Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Quickread Translator Memory Leak?

2010-03-18 Thread Benjamin Long
Has anyone else noticed a memory leak when using the Quickread translator? I
created a backup server that rsyncs gluster mounts to a Coraid SAN device.
Originally I simply copied the vol files from my production workstations to
this server. However, I quickly found that while rsync was running, the
memory usage of the glusterfs process climbed out of control. The server
would exhaust its RAM, swap everything out, and then glusterfs would get
killed in the resulting OOM condition. Commenting out the Quickread translator:

#volume quickread
#type performance/quick-read
#option cache-timeout 1
#option max-file-size 64kB
#subvolumes iocache
#end-volume

and skipping it for the mount fixed the problem.

My workstations are having a problem as well. After running for a few days (as
long as a week), users start having their sessions killed. They are returned
to a login prompt and can log in again. Glusterfs is still running at that
point, but I think that's because all the users' apps came first on the kill
list in the OOM condition. The backup server runs nothing but glusterfs and
rsync.

-- 
Benjamin Long
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Random "No such file or directory" error in gluster client logs - FIXED

2010-03-18 Thread Benjamin Long
Are we sure it wasn't a problem with the file system that simply reformatting 
fixed? Maybe reformatting with ext4 again would have fixed it too.

-- 
Benjamin Long

On Thursday 18 March 2010 02:16:46 pm phil cryer wrote:
> > The solution was quite simple. It turned out that it was because the
> > server's data drive was formatted in ext4. Switched it to ext3 and the
> > problems went away!
> 
> Is this a known issue with Gluster 3.0.3?  I've setup our cluster with
> ext4 on Debian, but have not had any issues like this yet (but we're
> not running live yet).  Is this something to be concerned about?
> Should we change everything back to ext3?
> 
> P
> 
> On Thu, Mar 18, 2010 at 8:48 AM, Lee Simpson  wrote:
> > [...]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Random "No such file or directory" error in gluster client logs - FIXED

2010-03-18 Thread phil cryer
> The solution was quite simple. It turned out that it was because the server's 
> data drive was formatted in ext4. Switched it to ext3 and the problems went 
> away!

Is this a known issue with Gluster 3.0.3?  I've set up our cluster with
ext4 on Debian, but have not had any issues like this yet (we're not
running live yet).  Is this something to be concerned about?
Should we change everything back to ext3?

P

On Thu, Mar 18, 2010 at 8:48 AM, Lee Simpson  wrote:
> [...]



-- 
http://philcryer.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS

2010-03-18 Thread Tejas N. Bhise
Dear Community Users,

Gluster is happy to announce the ALPHA release of the native NFS server.
The native NFS server is implemented as an NFS translator and hence
integrates very well with the NFS protocol on one side and the GlusterFS
protocol on the other.

This is an important step in our strategy to extend the benefits of
Gluster to other operating systems that can benefit from a better
NFS-based data service, while enjoying all the backend smarts that
Gluster provides.

The new NFS server also strongly supports our efforts toward becoming a
storage platform of choice for virtualization.

The release notes for the NFS ALPHA release are available at:

http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-alpha/GlusterFS_NFS_Alpha_Release_Notes.pdf

The release notes describe where RPMs and source code can be obtained
and where bugs found in this ALPHA release can be filed. Some examples
of usage are also provided.

Please be aware that this is an ALPHA release and should in no way be
used in production. Gluster is not responsible for any loss of data
or service resulting from the use of this ALPHA NFS release.

Feel free to send feedback, comments and questions to: nfs-al...@gluster.com

Regards,
Tejas Bhise.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Random "No such file or directory" error in gluster client logs - FIXED

2010-03-18 Thread Lee Simpson
Hello,

Just thought I'd share the experience I had with a gluster client error and the
solution I found after much searching and chatting with some IRC guys.

I'm running a simple 2-server setup with multiple clients using
cluster/replicate. Randomly, newly created files produced the following error
in the gluster client logs when accessed:

"W [fuse-bridge.c:858:fuse_fd_cbk] glusterfs-fuse: 59480: OPEN()
/data/randomfile-here => -1 (No such file or directory)"

These files are created by apache or other scripts (such as awstats on a
cron). Apache is then unable to read the file, and the above message appears
in the gluster logs every time you try. If I SSH into the apache server and
cat the file, it displays fine, and then apache starts reading it fine.

I upgraded the client and server to 3.0.3 and tried reducing my configs to the
bare minimum without any performance volumes, but the problem persisted...


SOLUTION

The solution was quite simple. It turned out that it was because the server's 
data drive was formatted in ext4. Switched it to ext3 and the problems went 
away!


Hope that helps someone else who finds this.


- Lee







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2010-03-18 Thread Ian Rogers



On 18/03/2010 09:52, Amar Tumballi wrote:
>> Excellent, although it must be possible to specify this in
>> /etc/fstab or "by hand" in the volume specification.
>
> You can add it in a volume with 'type testing/features/filter' and add
> a line "option read-only yes".

Thanks, that's obvious now that you say it. But I can't use a "testing"
filter in a production environment. I'll look forward to the progress of
your QA so features/filter can be "promoted" out of testing.


Thanks

Ian
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2010-03-18 Thread Amar Tumballi
>
>
> Excellent, although it must be possible to specify this in /etc/fstab or
> "by hand" in the volume specification.
>
>
You can add it in a volume with 'type testing/features/filter' and add a line
"option read-only yes".

A dedicated set of patches providing an '-o ro' option for the mount command
(i.e., /etc/fstab) has also been sent for review. You can review/test them by
pulling from:

http://patches.gluster.com/patch/2938/
http://patches.gluster.com/patch/2939/
http://patches.gluster.com/patch/2940/

We haven't yet finished our internal QA on it. We plan to make it
available in stable releases within the next two months.

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] rsync causing gluster to crash

2010-03-18 Thread Jan Pisacka
Isn't the /mnt/control directory also exported via NFS? Do you have any
".nfsX..." files there? The second command, which you used successfully,
doesn't attempt to rsync such files, since a bare * glob doesn't match
dotfiles.
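A hedged way to check, plus a workaround if those files turn out to be the
trigger:

# look for NFS silly-rename files under the source tree
find /mnt/control -name '.nfs*'
# skip them during the copy
rsync -rav -X -i --exclude='.nfs*' /mnt/control/ /mnt/backups/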

Regards,
Jan Pisacka
Inst. of Plasma Physics AS CR


On 18.3.2010 01:13, Joe Grace wrote:
> Hello,
>
>  
>
> I am trying to test glusterfs for use as a backup/storage system.
>
>  
>
> After installation everything seems to run fine. That is, until I run a
> particular rsync command against a glusterfs mounted directory.
>
>  
>
> '/mnt/backups/' being the gluster mounted directory and '/mnt/control'
> containing various image files and directories.
>
>  
>
> This command will crash gluster:
>
> backup0:/mnt# rsync -rav -X -i /mnt/control/ /mnt/backups/
>
>  
>
> with this error:
>
> [2010-03-17 19:07:22] W [fuse-bridge.c:722:fuse_attr_cbk]
> glusterfs-fuse: 16173: LOOKUP() / => -1 (Stale NFS file handle)
>
>  
>
> Yet this command does not:
>
> backup0:/mnt/control# rsync -rav -X -i * /mnt/backups/
>
>  
>
> I need to figure out what is causing this as I can't have normal
> operations (and rsync will be used) causing these kinds of crashes in
> our production environment.
>
>  
>
> I can post any logs/configs necessary. Any help is appreciated. 
>
>  
>
>  
>
> --
>
> Joe grace
>
>  
>
>
>   
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>   

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users