Re: [Gluster-users] intended behavior filter translator

2009-06-22 Thread cnr

Hi,

Ate Poorthuis writes:

Hi all,

Can someone enlighten me about the intended behavior of the filter 
translator? From the documentation, I thought it would behave the same 
as NFS mapping/squashing. However, this is not what I see in my setup.


Let's say I map everything to UID 1500, using either the fixed-uid or 
the translate-uid and -gid options. Now, on the client side, every file 
and directory appears to be owned by 1500. If I try to create new files 
or directories as uid 1001, this fails because of a lack of permission.
If I chmod 777 a directory, then user 1001 can create new 
files/directories but cannot change them afterwards, as they appear to be 
owned by 1500. On the server side, those files are owned by 1001. This 
is exactly the opposite of NFS, where mapping everything to 1500 means 
that every file created by 1001 is owned by uid 1500, but 1001 can still 
change these files since his uid is mapped to 1500.


Am I doing something wrong, or is this the intended behavior? I have tried 
loading the filter translator on both the client and the server side; 
both give the same result. The end goal is for every user on the network 
to be able to read and write each other's files. I thought uid mapping 
would be the best way to do this.
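For reference, a minimal sketch of how the filter translator would sit in a server volfile (the volume names here are made up, and the option names follow the 2.0-era filter docs, so treat the exact syntax as an assumption):

```
# hypothetical server-side fragment: filter stacked above the storage volume
volume posix
  type storage/posix
  option directory /data
end-volume

volume filter
  type features/filter
  option fixed-uid 1500      # present every file as owned by uid 1500
  subvolumes posix
end-volume
```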




I can confirm the described behaviour of the filter translator. Is there any
workaround to map a client-side uid & gid to a server-side uid & gid? I'm
using glusterfs 2.0.1 and the debian-stable fuse module.

regards, konrad szeromski



___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster (2.0.1 -> git) with fuse 2.8 crashes NFS

2009-06-22 Thread Harshavardhana
Hi Justice,

 Can you get a backtrace from the segfault through gdb?
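In case it helps, a rough sketch of how that is usually captured (the core-file path below is a placeholder, and you need core dumps enabled before reproducing the crash):

```sh
# allow the crashing process to leave a core file, then reproduce the segfault
ulimit -c unlimited
# load the core into gdb and print a full backtrace of every thread frame
gdb /usr/local/sbin/glusterfsd /path/to/core -batch -ex "bt full"
```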

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/


On Sat, Jun 20, 2009 at 10:47 PM,  wrote:

> Sure, the kernel version is 2.6.29 and the fuse release is the just
> released 2.8.0-pre3 (although I can use pre2 if needed).
>
> Justice London
> jlon...@lawinfo.com
>
> > Hi Justice,
> >
> >  There are certain modifications required in fuse-extra.c to make
> > glusterfs work properly with the fuse 2.8.0 release. The glusterfs 2.0.1
> > release is not tested against the 2.8.0 fuse release and certainly will
> > not work without those modifications.  May I know the kernel version you
> > are trying to use, and which version of fuse is in use? The pre1 or pre2
> > release?
> >
> > Regards
> > --
> > Harshavardhana
> > Z Research Inc http://www.zresearch.com/
> >
> >
> > On Fri, Jun 19, 2009 at 11:14 PM, Justice London
> > wrote:
> >
> >>  No matter what I do, I cannot seem to keep gluster stable when doing
> >> any sort of writes to the mount when using gluster in combination with
> >> fuse 2.8.0-preX and NFS. I tried both unfs3 and the standard kernel NFS
> >> server, and no matter what, any sort of data transaction seems to crash
> >> gluster immediately. The error log is as follows:
> >>
> >>
> >>
> >> pending frames:
> >>
> >>
> >>
> >> patchset: git://git.sv.gnu.org/gluster.git
> >>
> >> signal received: 11
> >>
> >> configuration details:argp 1
> >>
> >> backtrace 1
> >>
> >> bdb->cursor->get 1
> >>
> >> db.h 1
> >>
> >> dlfcn 1
> >>
> >> fdatasync 1
> >>
> >> libpthread 1
> >>
> >> llistxattr 1
> >>
> >> setfsid 1
> >>
> >> spinlock 1
> >>
> >> epoll.h 1
> >>
> >> xattr.h 1
> >>
> >> st_atim.tv_nsec 1
> >>
> >> package-string: glusterfs 2.0.0git
> >>
> >> [0xf57fe400]
> >>
> >> /usr/local/lib/libglusterfs.so.0(default_fxattrop+0xc0)[0xb7f4d530]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(server_fxattrop+0x175)[0xb7565af5]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(protocol_server_interpret+0xbb)[0xb755beeb]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(protocol_server_pollin+0x9c)[0xb755c19c]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(notify+0x7f)[0xb755c21f]
> >>
> >> /usr/local/lib/libglusterfs.so.0(xlator_notify+0x3f)[0xb7f4937f]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/transport/socket.so(socket_event_poll_in+0x3d)[0xb4d528dd]
> >>
> >>
> >>
> /usr/local/lib/glusterfs/2.0.0git/transport/socket.so(socket_event_handler+0xab)[0xb4d5299b]
> >>
> >> /usr/local/lib/libglusterfs.so.0[0xb7f6321a]
> >>
> >> /usr/local/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7f62001]
> >>
> >> /usr/local/sbin/glusterfsd(main+0xb3b)[0x804b81b]
> >>
> >> /lib/libc.so.6(__libc_start_main+0xe5)[0xb7df3455]
> >>
> >> /usr/local/sbin/glusterfsd[0x8049db1]
> >>
> >>
> >>
> >> Any ideas on whether there is a solution, or will one be upcoming in
> >> either gluster or fuse?  Other than with NFS, the git version of gluster
> >> seems to be really, really fast with fuse 2.8.
> >>
> >>
> >>
> >> Justice London
> >> jlon...@lawinfo.com
> >>
> >>
> >>
> >>
> >>
> >
>
>
>
>




[Gluster-users] Dubious write performance on simple "nfs" setup

2009-06-22 Thread Peter Gervai
Hello,

Simple setup: one server, one client. The client is running a 2.6.26
(debian) kernel but with the gluster-provided fuse module. (Without it,
performance is non-existent.)

# server config
volume stor
  type storage/posix
  option directory /srv/glusterfs/
end-volume

volume locks
  type features/posix-locks
  option mandatory-locks on
  subvolumes stor
end-volume

volume readahead
  type performance/read-ahead
  option page-count 2 # 4   # def 2
  option force-atime-update off # def off
  subvolumes locks
end-volume

volume cache
  type performance/io-cache
  option cache-size 128MB   # default 32MB
  option page-size 512KB# def 128KB
  option cache-timeout 2# def 1
  subvolumes readahead
end-volume

volume threads
  type performance/io-threads
  option thread-count 16# default 16
  subvolumes cache
end-volume

-
# gluster client config
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host 1.1.1.1
  option remote-subvolume brick
end-volume

volume cache
  type performance/io-cache
  option cache-size 1GB
  option page-size 128KB
  subvolumes remote
end-volume

volume threads
  type performance/io-threads
#   option thread-count 16# default 16
  subvolumes cache
end-volume

volume writebehind
  type performance/write-behind
  option flush-behind on    # default is 'off'; let's live dangerously
  subvolumes threads
end-volume
---

bonnie++ shows what this looks like in practice:

Version 1.03d   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
gluster-shlevin  4G 66262  90  3437   0  3778   0 35682  49 55111   3 383.6   0
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16  1165   2  3695   2  1621   2  1109   1  3787   2  1583   2

Basically, sequential block output is extremely slow, and tiobench
helped narrow it down to block sizes below 64k: below 64k, performance is
around 2-3MB/s; above it, the normal 60-65MB/s.

Dropping writebehind gave a performance "boost": the 2-3MB/s went up to
10-15MB/s, while, of course, putc performance went down to around
30MB/s.

However, I cannot seem to raise block performance below 64k (especially
around 4k) above 2-3MB/s (or 9-10MB/s without write-behind); it basically
doesn't change if I remove other translators.

CPU and network load seem to be low on both sides.

Local fuse test gives 70+MB/s for any IO.
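The block-size effect above is easy to reproduce with plain dd; the target path below is a stand-in, so point it at a file on the glusterfs mount:

```shell
# compare sequential write speed at small vs large block sizes;
# /tmp/gluster-test is a placeholder for a file on the glusterfs mount
target=/tmp/gluster-test
# 10 MiB in 4 KiB blocks, fsynced at the end; last line is the throughput
dd if=/dev/zero of="$target" bs=4k count=2560 conv=fsync 2>&1 | tail -n 1
# the same 10 MiB in 512 KiB blocks for comparison
dd if=/dev/zero of="$target" bs=512k count=20 conv=fsync 2>&1 | tail -n 1
rm -f "$target"
```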

Ideas? (Maybe more fuse tweaks? kernel variables?)

Thanks,
Peter



[Gluster-users] mirroring problems with replicate Please Help!

2009-06-22 Thread Phillip Walsh




Hello,

I'm having problems with a 2-server setup using replicate to create a
two-system mirror for a small HA setup. It seems like a locking issue or
something similar. The configuration below was based on a tutorial and
seemed solid when testing; however, some file operations, such as phpBB3
caching and SVN, are causing file corruption. I am pretty new to
glusterfs, so please let me know if there is something I can change in
my configuration to fix this. Thank you!

Configuration:

2 x server running:
# file: /etc/glusterfs/glusterfs-server.vol
volume posix
  type storage/posix
  option directory /data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

2 servers and all clients running:
# file: /etc/glusterfs/glusterfs-client.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.100.63
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.100.64
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 64MB
  subvolumes writebehind
end-volume
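One debugging step worth trying (an assumption on my part, not a confirmed fix): mount with the caching translators removed, so the client stack ends at replicate, and see whether the phpBB3/SVN corruption disappears. With remote1 and remote2 defined as above, the rest of the client volfile would then simply be:

```
# hypothetical stripped-down client stack: no write-behind, no io-cache
volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
```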



-- 
Kind Regards,

Phillip Walsh




