Re: [Gluster-devel] Has something broken with the -s option in the git builds?

2009-03-06 Thread Amar S. Tumballi
Yes, with the latest git, you are required to keep all the fetchable files
identical on all servers. This is because more than one machine may be serving
volume files, and if they differ (say one uses replicate, another stripe),
there is a high chance that we will mess up the backend.

Hence the checksum protection for files fetched from the server now. Either
keep all the files identical, or keep the files on only one server.
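
(For illustration, a quick way to confirm the fetchable volume files really are
identical everywhere is to compare their checksums; the hostnames and spec-file
path below are only placeholders for your own setup:)

 # all sums must match, otherwise mounts done with '-s' will fail the checksum check
 for host in server1 server2; do
     ssh $host md5sum /etc/glusterfs/glusterfs-client.vol
 done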

Regards,
Amar

On Fri, Mar 6, 2009 at 10:37 AM, Paul Rawson plr...@gmail.com wrote:

 I've been using glusterfs -s server /glusterfs and just recently it's
 stopped working. The following is in the logs:

 2009-03-06 10:34:10 E [client-protocol.c:5912:client_setvolume_cbk]
 client101-0: SETVOLUME on remote-host failed: volume-file checksum varies
 from earlier access
 2009-03-06 10:34:10 C [fuse-bridge.c:2662:notify] fuse: remote volume file
 changed, try re-mounting
 2009-03-06 10:34:10 W [glusterfsd.c:811:cleanup_and_exit] glusterfs:
 shutting down
 2009-03-06 10:34:10 W [fuse-bridge.c:2903:fini] fuse: unmounting
 '/glusterfs'

 glusterfs -f configfile still works, though.

 Thanks,
 -Paul
 --
 Got r00t?

 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Distribute problem on rc2 release

2009-02-26 Thread Amar S. Tumballi
2009/2/26 ad...@hostyle.it



 Configuration, hardware and OS are the same as with rc1, which worked without
 problems!


 volume wb
  type performance/write-behind
  option aggregate-size 2MB #1048576
  option window-size 2MB #1048576
  option flush-behind on # default is 'off'
  subvolumes iot
 end-volume


Can you change the 'option aggregate-size 2MB' to 'option block-size 1MB',
and try again?
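
(For reference, a sketch of the adjusted volume, assuming the rest of your
write-behind definition stays as posted:)

 volume wb
  type performance/write-behind
  option block-size 1MB
  option window-size 2MB
  option flush-behind on # default is 'off'
  subvolumes iot
 end-volume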
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: big problem with unify in rc2

2009-02-25 Thread Amar S. Tumballi
Avati should have a better idea about this. But if you need this to work
right away without harming production, you can revert back to rc1, or add
'option lookup-unhashed yes' to the dht volume.
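
(A sketch of where that option goes; the volume and subvolume names are
placeholders, and the type name assumes the dht translator as shipped in the
2.0 release candidates:)

 volume dht0
  type cluster/dht
  option lookup-unhashed yes # look up files on all subvolumes, not only the hashed one
  subvolumes brick1 brick2 brick3
 end-volume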

On Feb 25, 2009 2:12 PM, Dan Parsons dpars...@nyip.net wrote:

Yes, all the filenames that don't work are exactly 16 characters long. Does
this mean there's an easy fix? :)

Dan

On Wed, Feb 25, 2009 at 2:07 PM, Amar Tumballi (bulde) a...@gluster.com
wrote:   Dan, can you...
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Namespace cache size ratio

2008-11-25 Thread Amar S. Tumballi
Hi Daniel,

2008/11/25 Daniel van Ham Colchete [EMAIL PROTECTED]


 Yesterday I started the tests and what I'm learning is really making it
 worthwhile. I will send all the results later, but for now the biggest find is
 that the XFS filesystem is *the* best for maildir storage in comparison
 to Ext3 in my case. I'll put all the results in the wiki in a few days (I
 can do it, right?).


That's nice to know. You are certainly welcome to contribute to the wiki.



 I have a question here. It's easy to test, but I would like to hear from a
 more experienced person. Say I use XFS for everything and I have
 4 storage servers with 2TB each (after RAID10), and I do Unify+AFR using GlusterFS
 version 1.3.12. That will give me 4TB of usable space. I'll store a lot of
 files, averaging 14KB per file. What partition size should I use for
 the namespace cache (I'll do AFR on it too)? Is 40GB (1%) enough? More?
 Less?


You can keep the namespace as just another directory inside the 2TB partition
itself. Or, if you choose to keep the namespace separate, then as it's XFS, 40GB
should be good enough. I would have recommended more if it were ext3 (as
its default block size is 4K, so each inode consumes 4K of disk, whereas
that is not the case with reiserfs and xfs).
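
(A minimal sketch of the first suggestion, with the namespace kept as a sibling
directory on the same 2TB partition; the paths and volume names are
placeholders:)

 volume brick
  type storage/posix
  option directory /data/export      # data lives here
 end-volume

 volume brick-ns
  type storage/posix
  option directory /data/export-ns   # namespace directory on the same partition
 end-volume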

Regards,
Amar

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] What IS DHT?

2008-11-24 Thread Amar S. Tumballi
2008/11/24 Anand Avati [EMAIL PROTECTED]



  I can't seem to find (searching the mailing list and wiki) what DHT is. I
  come up with some references to it in the mailing list, but no
  documentation on what it is or does, really.


  That is because the developers are lagging behind on completing the
  documentation :-) We will put up the details very soon.


I did try to keep up with the documentation, but I haven't yet linked it to the main
page.

http://gluster.org/docs/index.php/GlusterFS_Translators_v1.4#DHT_.28Distributed_Hash_Table.29_Translator



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.4.0 final release date?

2008-11-12 Thread Amar S. Tumballi
We are waiting to finish the documentation before the stable release now. The
code freeze is done; only the QA/bug-fix cycle is going on.

Pre-releases are being made:
http://ftp.zresearch.com/pub/gluster/glusterfs/1.4-pre/

Regards,

2008/11/11 Dan Parsons [EMAIL PROTECTED]

 Any expected release date for 1.4.0 final?


 Dan Parsons




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Mail cluster storage benchmark

2008-11-11 Thread Amar S. Tumballi
I think so; he would have meant the migration plans.

2008/11/10 Basavanagowda Kanur [EMAIL PROTECTED]

 Daniel,
   Did you mean migrating an existing storage setup from unify-AFR-posix to
 DHT-AFR-BDB, or a fresh setup?

 --
 gowda


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Fuse on newer kernels

2008-09-22 Thread Amar S. Tumballi
Hi Reinis,
 Yes! The latest fuse doesn't come with kernel modules, IMO. Miklos was talking
about removing the kernel module from the fuse package, and he will keep the kernel
module updated with every new kernel release.

Regards,


2008/9/9 Reinis Rozitis [EMAIL PROTECTED]

 Actually, never mind the first question - fuse-2.8.0-pre1 somehow doesn't have
 the kernel module, or I have no idea how to build it (at least I couldn't see
 the option in ./configure).


 rr


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS drops mount point

2008-09-12 Thread Amar S. Tumballi
Also which version of GlusterFS?

[EMAIL PROTECTED]

 Maybe a configuration issue... let's start with the config. What does your config
 look like on the client and server?

 Marcus Herou wrote:

 Lots of these on server
 2008-09-12 20:48:14 E [protocol.c:271:gf_block_unserialize_transport]
 server: EOF from peer (192.168.10.4:1007)
 ...
 2008-09-12 20:50:12 E [server-protocol.c:4153:server_closedir] server: not
 getting enough data, returning EINVAL
 ...
 2008-09-12 20:50:12 E [server-protocol.c:4148:server_closedir] server:
 unresolved fd 6
 ...
 2008-09-12 20:51:47 E [protocol.c:271:gf_block_unserialize_transport]
 server: EOF from peer (192.168.10.10:1015)
 ...

 And lots of these on client

 2008-09-12 19:54:45 E [afr.c:2201:afr_open] home-namespace: self heal
 failed, returning EIO
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3954: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3956: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3958: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3987: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3989: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3991: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:45 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3993: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:54 C [client-protocol.c:212:call_bail] home3: bailing
 transport
 2008-09-12 19:54:54 E [client-protocol.c:4827:client_protocol_cleanup]
 home3: forced unwinding frame type(2) op(5) [EMAIL PROTECTED]
 2008-09-12 19:54:54 E [client-protocol.c:4239:client_lock_cbk] home3: no
 proper reply from server, returning ENOTCONN
 2008-09-12 19:54:54 E [afr.c:1933:afr_selfheal_lock_cbk] home-afr-3:
 (path=/rsyncer/.ssh/authorized_keys2 child=home3) op_ret=-1 op_errno=107
 2008-09-12 19:54:54 E [afr.c:2201:afr_open] home-afr-3: self heal failed,
 returning EIO
 2008-09-12 19:54:54 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3970: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:54 E [client-protocol.c:4827:client_protocol_cleanup]
 home3: forced unwinding frame type(2) op(5) [EMAIL PROTECTED]
 2008-09-12 19:54:54 E [client-protocol.c:4239:client_lock_cbk] home3: no
 proper reply from server, returning ENOTCONN
 2008-09-12 19:54:54 E [afr.c:1933:afr_selfheal_lock_cbk] home-afr-3:
 (path=/rsyncer/.ssh/authorized_keys2 child=home3) op_ret=-1 op_errno=107
 2008-09-12 19:54:54 E [afr.c:2201:afr_open] home-afr-3: self heal failed,
 returning EIO
 2008-09-12 19:54:54 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3971: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:54 E [client-protocol.c:4827:client_protocol_cleanup]
 home3: forced unwinding frame type(2) op(5) [EMAIL PROTECTED]
 2008-09-12 19:54:54 E [client-protocol.c:4239:client_lock_cbk] home3: no
 proper reply from server, returning ENOTCONN
 2008-09-12 19:54:54 E [afr.c:1933:afr_selfheal_lock_cbk] home-afr-3:
 (path=/rsyncer/.ssh/authorized_keys2 child=home3) op_ret=-1 op_errno=107
 2008-09-12 19:54:54 E [afr.c:2201:afr_open] home-afr-3: self heal failed,
 returning EIO
 2008-09-12 19:54:54 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3972: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:54 E [client-protocol.c:4827:client_protocol_cleanup]
 home3: forced unwinding frame type(2) op(5) [EMAIL PROTECTED]
 2008-09-12 19:54:54 E [client-protocol.c:4239:client_lock_cbk] home3: no
 proper reply from server, returning ENOTCONN
 2008-09-12 19:54:54 E [afr.c:1933:afr_selfheal_lock_cbk] home-afr-3:
 (path=/rsyncer/.ssh/authorized_keys2 child=home3) op_ret=-1 op_errno=107
 2008-09-12 19:54:54 E [afr.c:2201:afr_open] home-afr-3: self heal failed,
 returning EIO
 2008-09-12 19:54:54 E [fuse-bridge.c:715:fuse_fd_cbk] glusterfs-fuse:
 3974: (12) /rsyncer/.ssh/authorized_keys2 = -1 (5)
 2008-09-12 19:54:54 E [client-protocol.c:4827:client_protocol_cleanup]
 home3: forced unwinding frame type(2) op(5) [EMAIL PROTECTED]
 2008-09-12 19:54:54 E [client-protocol.c:4239:client_lock_cbk] home3: no
 proper reply from server, returning ENOTCONN
 2008-09-12 19:54:54 E [afr.c:1933:afr_selfheal_lock_cbk] home-afr-3:
 (path=/rsyncer/.ssh/authorized_keys2 child=home3) op_ret=-1 op_errno=107
 2008-09-12 19:54:54 E [afr.c:2201:afr_open] home-afr-3: 

Re: [Gluster-devel] UID mapping feature

2008-08-27 Thread Amar S. Tumballi
This was a long-pending task, and it is now complete. The uid/gid mapping has
an upper limit of 32 mappings per translator instance. It also provides a
'root-squashing' option like NFS. I hope a few people like it.

Code committed to glusterfs--mainline--3.0--patch-323

I have tested a few straightforward cases. There may be glitches in
some corner cases. Let me know how it works for you.

Regards,
Amar

2007/7/3 Anand Avati [EMAIL PROTECTED]

 Bruce,
  Yes, your idea makes sense. Basically, the fixed-id translator came in as a
 hack a while back. Another idea would be to make fixed-id look up PAM and
 work like the NFS uidmapper.

 thanks,
 avati
 2007/6/29, Bruce [EMAIL PROTECTED]:


 Howdy all,

 I would like to request a feature which may be quite easy with the
 existing fixed-id translator.

 The idea here is that we want all our
 servers to use glusterfs (or a system like it) to store files; however, at
 the moment each server has a unique set of users, and on some of them
 control panels have set up additional users, which makes migration
 difficult.

 Basically, the idea is to allow for UID mapping. For example, on all
 servers, users are currently set up with user IDs between 100 and 199
 (as an example); the translator would convert these to 200-299
 for server1, 300-399 for server2, and so on. Hence all the servers
 would have unique user IDs on the storage servers, allowing quotas to be
 set and avoiding security issues.

 The client config file might look something like this:
 volume server1-live
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.45.208
  option remote-subvolume brick
 end-volume

 volume server1
  type features/fixed-id
  option map-uids 100:199 200:299
  option map-gids 100:199 200:299
  subvolumes server1-live
 end-volume

 volume server2-live
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.45.208
  option remote-subvolume brick
 end-volume

 volume server2
  type features/fixed-id
  option map-uids 100:199 300:399
  option map-gids 100:199 300:399
  subvolumes server2-live
 end-volume

 and of course a unify on top of this (and maybe afr's under the fixed-id
 types)


 I am hoping that because this should be an easy add with the fixed-id
 stuff that it will make it into a pre-release.

 Cheers,
 --
 Bruce Parker
 Engineering Team Leader

 DDI  +64 6 757 2881

 WebFarm Limited  I  Level 2, 2 Devon Street East, New Plymouth, New
 Zealand  I  Telephone +64 6 757 2881  I  Facsimile +64 6 757 2883
 ICONZ  I WebFarm I  Freeparking I  2day.com I Worldpay
 Specializing in Domain Name Registration, Web Hosting, Internet 
 E-Commerce Solutions


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Anand V. Avati

 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Upgrade planning

2008-08-27 Thread Amar S. Tumballi
Sorry, Martin, for missing your mail earlier.

Generally, in testing, what I do is just install the new glusterfs (1.4.xx) into
some temporary path (the only problem comes if you are using
mount.glusterfs, as it gets overwritten), and use glusterfs from the temporary
path (and new ports and spec file) to run the tests, whereas the stable glusterfs
keeps running in the standard path. This works fine for me, as I run every instance
from 'glusterfs' directly instead of relying on mount.glusterfs and 'mount -t
glusterfs'.
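
(For example, something along these lines; the prefix, spec file, and mount
point are placeholders:)

 # build and install the 1.4.x tree into its own prefix, away from the stable install
 ./configure --prefix=/opt/glusterfs-1.4
 make && make install

 # run the test instance straight from that prefix, with its own spec file and mount point
 /opt/glusterfs-1.4/sbin/glusterfs -f /etc/glusterfs/test-client.vol /mnt/gluster-test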

Regards,
Amar

Since I haven't gotten any response, perhaps I can word the
 question slightly differently.  Is there an easy way to run
 two independent glusterfs mount binaries which support
 different versions of the glusterfs protocol?  Something
 like having two different binaries, perhaps named:
 glusterfs.1.3.8 and glusterfs.1.4.1? Would I have
 to statically compile them so that I do not have library
 version conflicts?

 Thanks,

 -Martin






 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Upgrade planning

2008-08-27 Thread Amar S. Tumballi


 Since glusterfs is so flexible/configurable, I suspect
 that any substantial deployment will have to deal with
 this at some point.  Anytime a client mounts shares
 from two different servers it is likely to run into
 this problem.  Should glusterfs perhaps develop a
 strategy for administrators to deal with this in a
 simple way?  Whether that be by suggesting a certain
 package management strategy to distributors (allowing
 multiple glusterfs client packages to be installed on
 the same machine), or by making the client binaries
 become backwards compatible?  I fear that without a
 clean (and easy) solution to this serious admins might
 shy away from deploying glusterfs in production
 environments.  Thoughts?

 Martin,
 Yes! We know that is a limitation as of now. We aim to have backward
compatibility from the 1.5.x releases. But currently 1.3 - 1.4 doesn't work
fine (and even 1.4.0qaXX releases may not work with other 1.4.0qa releases,
as there may be some internal API changes).

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Mac OS X client

2008-08-25 Thread Amar S. Tumballi
Hi,
 Sorry for the delay in responding. Actually, there is some development
happening regarding the GlusterFS mount and Finder issue. By the way, which version of
GlusterFS are you using? Let me get back to you after I fix the problems.

Can you try http://gnu.zresearch.com/~amar/glusterfs-1.4.0qa35mac.tar.gz and
let me know if things are fine. That would help me in development.

Regards,
Amar

2008/8/25 [EMAIL PROTECTED]

 Hi

 I have two servers and one client Glusterfs.
 I think I have a problem with the Mac OS X Tiger extended attributes.
 When I try to drag a file into a Samba share (an export of a GlusterFS
 volume), I get a message in the Finder saying I don't have all the necessary
 permissions.
 But if I create a file with the touch command in a terminal, then I can
 drag it into the share without a problem.

 thanks,



 Philippe
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs 1.3.10 and linux kernel 2.6.26.2

2008-08-14 Thread Amar S. Tumballi
Hi Nicolas,
 Thanks for sharing the info. Just a few questions: when using the 2.6.24 kernel, did you
use the fuse module from the kernel tree itself? And when you say slow, is it
file operations (like the output of dd/iozone etc.) or regular metadata ops
(like 'ls', 'find', 'du')?

Regards,
Amar

2008/8/14 nicolas prochazka [EMAIL PROTECTED]

 Hi,
 I'm using glusterfs with ten nodes without problems,
 but this week I updated my servers to 2.6.26.2 (the previous
 kernel was 2.6.24), and now
 glusterfs seems to have a problem: it is very slow.

 Is this issue known ?

 Regards,
 Nicolas Prochazka.


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] latest TLA, 1.4 branch

2008-08-14 Thread Amar S. Tumballi
Brent,
  Thanks for the report. Is this a 'btrfs' backend or an ext3 backend? Also,
there is a new release of btrfs; is that working fine (the 'ls', at least)?

Regards,

2008/8/14 Brent A Nelson [EMAIL PROTECTED]

 With ACL-mounted shares, cp -a gives an I/O error when setting permissions
 on setuid and setgrpid files, although the copy seems to work properly,
 anyway (apart from the known issue with ACL-enabled filesystems); the
 setuid/setgrpid bits, ownerships, and permissions are set correctly. Looking
 further, I see that GlusterFS now appears to mount as nosuid by default, so
 this may be correct behavior, although it seems a bit odd for nosuid (I/O
 error? setting the bits, anyway?).

 When the underlying filesystems are mounted noacl, however, an I/O error on
 setting permissions seems to occur for all files.

 On the bright side, it appears that the previously-reported filesystem hang
 is gone.

 Thanks,

 Brent


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Two patches against glusterfs-1.3.10

2008-08-14 Thread Amar S. Tumballi
Hi Mateusz,
 I just upgraded to gcc-4.3.1 and tried to compile with and without LDADD,
and in both cases glusterfs worked fine (even the translator linking).

 (I just checked the posix and client protocol translators [on the qa34 tarball],
but the behavior should be the same for all translators.)

Regards,
Amar

2008/8/13 Mateusz Korniak [EMAIL PROTECTED]

 On Wednesday 13 of August 2008, Mateusz Korniak wrote:
Thanks for the patches, they are committed to mainline now :-) [In
 both
   branches]
 
  Hi, Amar
 
  Unfortunately link patch solves only build issues.
  When trying to start glusterfs client I get:
 
  2008-08-13 10:04:23 E [xlator.c:120:xlator_set_type] xlator:
  dlopen(/usr/lib/glusterfs/1.3.10/xlator/protocol/client.so):
  /usr/lib/glusterfs/1.3.10/xlator/protocol/client.so: cannot dynamically
  load executable

 I found a solution to that problem, which seems to be:
 change xlator_PROGRAMS to xlator_LTLIBRARIES, so that the
 whole Makefile.am looks like [1], which seems to produce a proper, valid shared
 object [2]. I will test that solution and provide you with another version of
 the link patch.
 Sorry for the inconvenience; best wishes and regards,


 [1]:
 # xlator_PROGRAMS = client.so

 version_type = None
 xlator_LTLIBRARIES = client.la

 xlatordir = $(libdir)/glusterfs/$(PACKAGE_VERSION)/xlator/protocol

 # client_so_SOURCES = client-protocol.c
 # client_so_LDADD = $(top_builddir)/libglusterfs/src/.libs/libglusterfs.so

 client_la_SOURCES = client-protocol.c

 ## client_la_LIBADD = $(top_srcdir)/libglusterfs/src/libglusterfs.la

 client_la_LDFLAGS  = -module -avoid-version

 noinst_HEADERS = client-protocol.h

 AM_CFLAGS = -fPIC -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -Wall -D$(GF_HOST_OS) \
    -I$(top_srcdir)/libglusterfs/src -shared -nostartfiles $(GF_DARWIN_BUNDLE_CFLAGS)

 CLEANFILES = *~

 [2]
 file client.so
 client.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV),
 dynamically linked, stripped

 --
 Mateusz Korniak




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster on MAC OS X

2008-08-12 Thread Amar S. Tumballi
Can you check the client log file on the Mac for the error message?

The 'ls' error is coming because the handshake is not happening. The tail of the
log file should help us narrow down the issue.

Regards,
Amar

2008/8/12 jenish soosainayagam [EMAIL PROTECTED]

 Dear Team,

 I have installed the Gluster server on RHEL. I am trying to connect a Mac
 OS X client to the RHEL server. I am not able to mount the storage brick
 with my OS X client. It gives the error below.

 shake25:Gluster root# ls
 ls: .: Socket is not connected

  There is no detailed installation document
 on the Gluster.org website.

 Please advise me with a step-by-step installation guide for Mac OS X client
 installation and mounting, with the versions of MacFUSE and GlusterFS.

 My config :

 OS X version : OS X 10.5 (Leopard)
 FUSE version : MACFUSE 1.7
 Glusterfs: glusterfs-1.4.0qa30

 As of now Server is working fine. I have connected RHEL client with
 that server.

 Regards,
 Jenish.S



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Two patches against glusterfs-1.3.10

2008-08-11 Thread Amar S. Tumballi
Hi Mateusz,
 Thanks for the patches, they are committed to mainline now :-) [In both
branches]

Regards,
Amar

2008/8/9 Mateusz Korniak [EMAIL PROTECTED]

 On Saturday 09 of August 2008, Amar S. Tumballi wrote:
  Hi Mateusz,
   If the patch is small, can you paste the content in the body of the message? Or
   else you can mail it to me or any developer address. I will verify it and
   commit it.

 Hi Amar,
 here they go (attached).

 Best wishes and regards !

 --
 Mateusz Korniak
 CTO in ANT sp. z o.o. http://www.ant.gliwice.pl/http://kupujemy.pl/
 (+48) 32-339-3188  (+48) 607-913-268   skype: mateusz_korniak
 ICQ: 355957006   gg: 2721683jabber: [EMAIL PROTECTED]
 GPG Key ID: 0x0C63B750FA5129D7




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Booster Translator not working on client.

2008-08-11 Thread Amar S. Tumballi
Hi Rohan,
 I am aware of the issue; I could reproduce it locally. Currently the
idea of booster is to eliminate the whole layered stack, including fuse, and
talk to the server volume directly from the client application. With the booster in
the 1.3.x releases, there are some limitations: it does not perform so well
for small files, and the LD_PRELOAD is supposed to be used with processes
that do mostly large-file operations.

 We came up with a better design for booster, which doesn't need any
translator but will just be an LD_PRELOAD'able shared object. It
should work fine with stripe/afr too.
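
(As a rough illustration of the LD_PRELOAD approach; the shared-object path
below is only a placeholder, not the final name from the new design:)

 # preload the booster object so the application talks to the volume directly,
 # bypassing the fuse mount path
 LD_PRELOAD=/usr/lib/glusterfs/glusterfs-booster.so \
     dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=100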

Regards,
Amar

2008/8/11 Rohan [EMAIL PROTECTED]

 Hi,

 I'm trying to implement booster.

 When I start gluster on the server I get the following error. But it starts, and
 clients without the booster translator work.
 2008-08-12 05:05:27 E [transport.c:74:transport_load] transport: 'option
 transport-type /_' missing in specification


 I also want to implement the booster translator on the client side.

 But I'm getting an error when I include the booster translator on the client side. After
 I mount, any operation on the mount folder hangs. And I get the following error on the
 client.

 [transport.c:74:transport_load] transport: 'option transport-type
 /_' missing in specification


 Following are my server  client vol files.

 My server vol
 http://glusterfs.pastebin.com/d1a16c2cd




 My client vol

 http://glusterfs.pastebin.com/d2a5b2943

 Your help is appreciated.

 Rohan



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Two patches against glusterfs-1.3.10

2008-08-08 Thread Amar S. Tumballi
Hi Mateusz,
 If the patch is small, can you paste the content in the body of the message? Or
else you can mail it to me or any developer address. I will verify it and
commit it.

Thanks and Regards,
Amar

2008/8/8 Mateusz Korniak [EMAIL PROTECTED]

 The first fixes linking issues
 (undefined reference to `_gf_log' and others)
 which occurred after upgrading to gcc 4.3.1 (probably pulling in other upgrades
 of the build suite).

 The second fixes a compile failure when gcc uses -D_FORTIFY_SOURCE=2.

 Regards,

 --
 Mateusz Korniak

 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] chmod -R not working

2008-08-08 Thread Amar S. Tumballi
Oh! Then it should not fail at all. What type of setup do you have? Unify?
Stripe? AFR? 1:1 NFS-like?

-amar

2008/8/7 Shaofeng Yang [EMAIL PROTECTED]

 Thanks. But they are not symlinks; they are real directories. The same
 directory works on certain mounts but doesn't work on other glusterfs
 mounts.




 On Thu, Aug 7, 2008 at 3:51 PM, Amar S. Tumballi [EMAIL PROTECTED]wrote:

 Mostly because the files are symlinks? (chmod on a symlink doesn't work on
 GNU/Linux machines.)

 2008/8/7 Shaofeng Yang [EMAIL PROTECTED]

 We are using the glusterfs 1.3-suske release for our production servers. We
 recently found out that chmod -R is not working for certain glusterfs mounts;
 it gives an error like no such file or directory. But the directory is
 actually there, and chmod works without the -R option.
 any idea about this?


 Thanks

 Shaofeng
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!





-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Casual inquiry

2008-08-08 Thread Amar S. Tumballi
Hi Brandon,
 Replies Inline.



 I was just reviewing the roadmap again after months and thought I
 would ask if there was any kind of idea on a time frame for the
 following 1.4 features

 storage/bdb - distributed BerkeleyDB based storage backend (very
 efficient for small files)


The working code base is in the archive and available in the QA releases, though more
testing is happening on it (BDB comes with *lots* of options).



 binary protocol  - bit level protocol headers (huge improvement in
 performance for small files)


The working code base is in the archive and available in the QA releases; the protocol has
tested well, and there are no outstanding bugs.


 AFR atomic writes - handle total power-loss, split-brains,.. (bullet-proof
 AFR)


The code freeze is done (still in Krishna's tree, not yet merged to mainline; he
is testing preliminary test cases). It should be added to mainline within a
week.



 And I was wondering what this was or did
 hash - hash based clustering with no name-space cache (linear scalability
 even for small files)

 Is hash some internal thing or is this a translator or something?


Hash will be brought in as a new translator; it will avoid the complexity of
the namespace cache. Hence meta operations over GlusterFS will not choke at
the namespace node (even the create rate will be very high). It will reduce the
total number of calls made by the filesystem and hence theoretically solve the
issues with small-file performance.

Status: still in development, final stages (in Avati's tree). Should be
merged to mainline ~Aug 15-20.


Regards,

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] chmod -R not working

2008-08-07 Thread Amar S. Tumballi
Mostly because the files are symlinks? (chmod on a symlink doesn't work on
GNU/Linux machines.)

2008/8/7 Shaofeng Yang [EMAIL PROTECTED]

 We are using the glusterfs 1.3-suske release for our production servers. We
 recently found out that chmod -R is not working for certain glusterfs mounts;
 it gives an error like no such file or directory. But the directory is
 actually there, and chmod works without the -R option.
 any idea about this?


 Thanks

 Shaofeng
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Latest unstable (1.4) branch checkout seems... unstable

2008-08-06 Thread Amar S. Tumballi
Hi Brent,
 Vikas fixed some bugs in the xattr-related path of posix (in patch-272); can
you check now?

Regards,
Amar

2008/7/30 Brent A Nelson [EMAIL PROTECTED]

 FYI, ls works okay in directories that only contain other directories.  If
 the directory contains files, it complains.  After that, the namespace
 glusterfsd processes seem to hang altogether and require a kill -9.

 Thanks,

 Brent


 On Wed, 30 Jul 2008, Brent A Nelson wrote:

  On Wed, 30 Jul 2008, Vikas Gorur wrote:

  Brent,

 Thanks for pin-pointing the patch. I tried to reproduce this with an
 AFR+Unify setup. However, I haven't been able to yet. How easy is it to
 reproduce this? Which operations did you do before it screwed up?


 Right after mounting, I find that df works, but the very first ls -al is
 extremely slow, complains, and gives bad results.  So it's failing right
 away, in my case.

 Thanks,

 Brent



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] debugging glusterfs

2008-08-02 Thread Amar S. Tumballi
When you use gdb, you need to use the -N flag to run glusterfs in the foreground (on any OS).

But can you explain to us why you needed gdb? I would like to help if possible.

Regards,
Amar

2008/8/2 Michael Grant [EMAIL PROTECTED]

 I'd like to help with glusterfs but I can't seem to run it in gdb on
 freebsd, this is what I get:

  # gdb /usr/local/sbin/glusterfs
  (gdb) run -f test.vol /mnt
  Starting program: /usr/local/sbin/glusterfs -f test.vol /mnt
  warning: Unable to get location for thread creation breakpoint: generic
 error
  [New LWP 100278]

  Program exited normally.

 Is there a trick to debugging glusterfs in gdb?

 Michael Grant


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] debugging glusterfs

2008-08-02 Thread Amar S. Tumballi
Michael,
 I mostly know what's wrong with it. I will be making another qa release this
week that should work fine. Or you can try the latest tla too.

Regards,
Amar

2008/8/2 Michael Grant [EMAIL PROTECTED]

 Ahh, sorry, you mean -N to glusterfs.  Duh.

 It is the same:

 (gdb) run -N -f test.vol /mnt
 Starting program: /usr/local/sbin/glusterfs -N -f test.vol /mnt
 warning: Unable to get location for thread creation breakpoint: generic
 error
 [New LWP 100351]
 [New Thread 0x8057000 (LWP 100399)]

 Program exited with code 0377.


 On Sat, Aug 2, 2008 at 11:42 AM, Michael Grant [EMAIL PROTECTED] wrote:
  I don't see a -N flag.  There's a -n flag to gdb but that doesn't
  help.  Are you sure you mean -N?
 
  I wanted to try to locate the source of the segv that is happening
  that I mentioned to you a few days ago.  I cannot get 1.4.0qa32 to run
  on freebsd as a client.
 
  Michael Grant
 
  On Sat, Aug 2, 2008 at 10:53 AM, Amar S. Tumballi [EMAIL PROTECTED]
 wrote:
  When you use gdb, you need to use -N flag (on any OS).
 
  But can you explain us why you needed gdb? I would like to help if
 possible.
 
  Regards,
  Amar
 
  2008/8/2 Michael Grant [EMAIL PROTECTED]
 
  I'd like to help with glusterfs but I can't seem to run it in gdb on
  freebsd, this is what I get:
 
   # gdb /usr/local/sbin/glusterfs
   (gdb) run
   Starting program: /usr/local/sbin/glusterfs -f test.vol /mnt
   warning: Unable to get location for thread creation breakpoint:
 generic
  error
   [New LWP 100278]
 
   Program exited normally.
 
  Is there a trick to debugging glusterfs in gdb?
 
  Michael Grant
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@nongnu.org
  http://lists.nongnu.org/mailman/listinfo/gluster-devel
 
 
 
  --
  Amar Tumballi
  Gluster/GlusterFS Hacker
  [bulde on #gluster/irc.gnu.org]
  http://www.zresearch.com - Commoditizing Super Storage!
 
 




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: AFR between freebsd and debian linux

2008-07-27 Thread Amar S. Tumballi
Hi Michael,
On the FreeBSD side, the client is not tested for the 1.3.x releases. You can try
version 1.4.0qa30 onwards if you want cross-OS GlusterFS with the client
on Mac OS X/BSD (anything other than GNU/Linux). Solaris fuse (i.e., the client
part) is not yet tested even with the 1.4.0qa releases.

Regards,
Amar

2008/7/26 Michael Grant [EMAIL PROTECTED]

 On Sat, Jul 26, 2008 at 3:42 PM, Michael Grant [EMAIL PROTECTED] wrote:
  I'm trying to get Brandon Lamb's example from May 1 2008 working on my
  server.  (
 http://www.mail-archive.com/gluster-devel@nongnu.org/msg04021.html)
 
  I started with just a simple client/server though and I'm having some
 trouble.
 
  On the debian side of things, I created a client and server to mount
  the remote brick1 from the freebsd side.  However, when I mount it,
  this is what I get:
 
  # glusterfs -f test.vol /mnt
  fuse: device not found, try 'modprobe fuse' first
 
  When the debian box boots, it dutifully reports this:
   Loading kernel modules...fuse init (API version 7.7)
 
  I tried the 'modprobe fuse', here's what it says:
 
  # modprobe -V fuse
  module-init-tools version 3.3-pre2
 
  but the glusterfs still fails.  What am I missing?
 
  Michael Grant
 

 I'm a little farther along.  I had to mknod /dev/fuse manually on the
 linux side:

 # cd /dev
 # ./MAKEDEV fuse
 # ls -l /dev/.static/dev/fuse
 crw-rw 1 root root 10, 229 Jul 26 16:37 /dev/.static/dev/fuse
 # mknod /dev/fuse c 10 229

 however, now I'm having problems on the freebsd side:

 I'm trying to mount a simple server like this:

 server.vol:
 volume brick1
type storage/posix
option directory /shared/testdir
 end-volume

 volume server
type protocol/server
option transport-type tcp/server
subvolumes brick1
option auth.ip.brick1.allow 127.0.0.1
 end-volume

 client.vol:
 volume brick
type protocol/client
option transport-type tcp/client
option remote-host localhost
option remote-subvolume brick1
 end-volume

 When I run:

 # glusterfs -f client.vol /mnt

 it exits rather quickly.  Also:

 #  ls /mnt
 ls: /mnt: Bad file descriptor

 which I guess is to be expected.

 If I run glusterfs /mnt in the foreground (-N) with full debug, here is the
 log:

 2008-07-26 16:46:58 D [glusterfs.c:167:get_spec_fp] glusterfs: loading
 spec from test.vol
 2008-07-26 16:46:58 D [spec.y:107:new_section] parser: New node for 'brick'
 2008-07-26 16:46:58 D [xlator.c:115:xlator_set_type] xlator: attempt
 to load file /usr/local/lib/glusterfs/1.3.10/xlator/protocol/client.so
 2008-07-26 16:46:58 D [spec.y:127:section_type] parser:
 Type:brick:protocol/client
 2008-07-26 16:46:58 D [spec.y:141:section_option] parser:
 Option:brick:transport-type:tcp/client
 2008-07-26 16:46:58 D [spec.y:141:section_option] parser:
 Option:brick:remote-host:localhost
 2008-07-26 16:46:58 D [spec.y:141:section_option] parser:
 Option:brick:remote-subvolume:brick1
 2008-07-26 16:46:58 D [spec.y:209:section_end] parser: end:brick
 2008-07-26 16:46:58 D [client-protocol.c:5006:init] brick: defaulting
 transport-timeout to 42
 2008-07-26 16:46:58 D [transport.c:80:transport_load] transport:
 attempt to load file
 /usr/local/lib/glusterfs/1.3.10/transport/tcp/client.so
 2008-07-26 16:46:58 D [client-protocol.c:5033:init] brick: defaulting
 limits.transaction-size to 268435456
 2008-07-26 16:46:58 D [client-protocol.c:5333:notify] brick: got
 GF_EVENT_PARENT_UP, attempting connect on transport
 2008-07-26 16:46:58 D
 [client-protocol.c:4750:client_protocol_reconnect] brick: attempting
 reconnect
 2008-07-26 16:46:58 D [tcp-client.c:77:tcp_connect] brick: socket fd = 4
 2008-07-26 16:46:58 D [tcp-client.c:107:tcp_connect] brick: finalized
 on port `1021'
 2008-07-26 16:46:58 D [tcp-client.c:128:tcp_connect] brick: defaulting
 remote-port to 6996
 2008-07-26 16:46:58 D [common-utils.c:179:gf_resolve_ip] resolver: DNS
 cache not present, freshly probing hostname: localhost
 2008-07-26 16:46:58 D [common-utils.c:204:gf_resolve_ip] resolver:
 returning IP:127.0.0.1[0] for hostname: localhost
 2008-07-26 16:46:58 D [common-utils.c:212:gf_resolve_ip] resolver:
 flushing DNS cache
 2008-07-26 16:46:58 D [tcp-client.c:161:tcp_connect] brick: connect on
 4 in progress (non-blocking)
 2008-07-26 16:46:58 D [tcp-client.c:205:tcp_connect] brick: connection
 on 4 success
 2008-07-26 16:46:58 D [client-protocol.c:5355:notify] brick: got
 GF_EVENT_CHILD_UP
 2008-07-26 16:46:58 D
 [client-protocol.c:5096:client_protocol_handshake_reply] brick: reply
 frame has callid: 424242
 2008-07-26 16:46:58 D
 [client-protocol.c:5130:client_protocol_handshake_reply] brick:
 SETVOLUME on remote-host succeeded
 2008-07-26 16:46:59 D
 [client-protocol.c:4756:client_protocol_reconnect] brick: breaking
 reconnect chain


 on the server side:

 2008-07-26 16:46:58 D [tcp-server.c:145:tcp_server_notify] server:
 Registering socket (5) for new transport object of 127.0.0.1
 2008-07-26 16:46:58 D 

Re: [Gluster-devel] fchmod glitch in 1.4 tla?

2008-07-23 Thread Amar S. Tumballi
Brent,
 When you say 'the error is gone', are you talking about the fchmod error or the
setxattr logs?

A general rule on GNU/Linux systems is that xattrs support only
user.anything, trusted.anything and system.anything as keys, and
with noacl, the system.posix_acl_default and system.posix_acl_access
keys also return the EOPNOTSUP errno. Hence the log in afr_setxattr. (Actually,
whenever this particular errno is returned to an application, it ignores it
and proceeds, so the user won't know about this.) The right fix is to correct the
log entry for afr so that it is not shown if the errno is EOPNOTSUP (which is
already done in other places like unify, fuse, and posix). So this error msg is
harmless.
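
(A quick way to see that rule from a shell on the backend filesystem; the file
path and attribute names are only illustrative:)

 # keys in the user. namespace are accepted
 setfattr -n user.test -v hello /export/somefile
 getfattr -n user.test /export/somefile

 # a key outside user./trusted./system. is rejected with "Operation not supported"
 setfattr -n foo.test -v hello /export/somefile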

I am concerned about the EIO for fchmod; is it reproducible with the noacl flag
set for the backend fs? (Sorry for not testing it at the moment; my test nodes are down
right now.) So I can't have the right fix for it before tomorrow.

Thanks for detailed bug report and finding out the cause for it.

Regards,
Amar

2008/7/23 Brent A Nelson [EMAIL PROTECTED]:

 I think I've found the culprit.  It was due to having mounted with noacl.
 cp -a attempts ACL operations, which fail with noacl mounts, but GlusterFS
 was apparently remembering that error and passing it as the return for
 fchmod.  With the filesystems mounted with acl support, the error is gone.

 Here is the glusterfs log from cp -a /bin/ls /beast when the filesystems
 were mounted with noacl:

 2008-07-23 15:59:19 D [fuse-bridge.c:363:fuse_entry_cbk] glusterfs-fuse:
 34: (op_num=34) / = 1
 2008-07-23 15:59:19 D [fuse-bridge.c:505:fuse_lookup] glusterfs-fuse: 35:
 LOOKUP /ls
 2008-07-23 15:59:19 D [fuse-bridge.c:443:fuse_entry_cbk] glusterfs-fuse:
 35: (op_num=34) /ls = -1 (No such file or directory)
 2008-07-23 15:59:19 D [inode.c:397:__passive_inode] fuse/inode: purging
 inode(0) lru=5/0
 2008-07-23 15:59:19 D [fuse-bridge.c:505:fuse_lookup] glusterfs-fuse: 36:
 LOOKUP /ls
 2008-07-23 15:59:19 D [fuse-bridge.c:443:fuse_entry_cbk] glusterfs-fuse:
 36: (op_num=34) /ls = -1 (No such file or directory)
 2008-07-23 15:59:19 D [inode.c:397:__passive_inode] fuse/inode: purging
 inode(0) lru=5/0
 2008-07-23 15:59:19 D [fuse-bridge.c:1511:fuse_create] glusterfs-fuse: 37:
 CREATE /ls
 2008-07-23 15:59:19 D [fuse-bridge.c:1383:fuse_create_cbk] glusterfs-fuse:
 37: (op_num=27) /ls = 0xb4b01170
 2008-07-23 15:59:19 D [inode.c:569:__create_inode] fuse/inode: create
 inode(30044)
 2008-07-23 15:59:19 D [inode.c:362:__active_inode] fuse/inode: activating
 inode(30044), lru=5/0
 2008-07-23 15:59:19 D [inode.c:397:__passive_inode] fuse/inode: purging
 inode(0) lru=5/0
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 38:
 WRITE (0xb4b01170, size=8192, offset=0)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 38: WRITE = 8192/8192,0/8192
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 39:
 WRITE (0xb4b01170, size=8192, offset=8192)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 39: WRITE = 8192/8192,8192/16384
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 40:
 WRITE (0xb4b01170, size=8192, offset=16384)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 40: WRITE = 8192/8192,16384/24576
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 41:
 WRITE (0xb4b01170, size=8192, offset=24576)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 41: WRITE = 8192/8192,24576/32768
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 42:
 WRITE (0xb4b01170, size=8192, offset=32768)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 42: WRITE = 8192/8192,32768/40960
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 43:
 WRITE (0xb4b01170, size=8192, offset=40960)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 43: WRITE = 8192/8192,40960/49152
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 44:
 WRITE (0xb4b01170, size=8192, offset=49152)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 44: WRITE = 8192/8192,49152/57344
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 45:
 WRITE (0xb4b01170, size=8192, offset=57344)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 45: WRITE = 8192/8192,57344/65536
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 46:
 WRITE (0xb4b01170, size=8192, offset=65536)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 46: WRITE = 8192/8192,65536/73728
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 47:
 WRITE (0xb4b01170, size=8192, offset=73728)
 2008-07-23 15:59:19 D [fuse-bridge.c:1644:fuse_writev_cbk] glusterfs-fuse:
 47: WRITE = 8192/8192,73728/81920
 2008-07-23 15:59:19 D [fuse-bridge.c:1682:fuse_write] glusterfs-fuse: 48:
 WRITE (0xb4b01170, 

Re: [Gluster-devel] GlusterFS File Semamtics

2008-07-23 Thread Amar S. Tumballi
Hi Luke,
 Sorry for the delay in answering your question.

The semantics of GlusterFS are POSIX, not NFS. (The only known glitch
is with regard to some permission setting when POSIX ACLs are used.)

And may I ask why you assume it does not have full POSIX semantics?

Regards,
Amar

2008/7/17 Luke McGregor [EMAIL PROTECTED]:

 Hi there

 I'm just wondering what file semantics GlusterFS implements. I'm assuming
 that it is not the full POSIX semantics that are implemented, as it is
 essentially a network file system. Are the semantics NFS?

 Thanks

 Luke
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Mac OS's gluster client can mount linux's gluster server ?

2008-07-22 Thread Amar S. Tumballi
http://gluster.org/docs/index.php/GlusterFS_on_MAC_OS_X

The current version works fine cross-OS (i.e., Mac server with Linux client,
or Linux server with Mac client).

Regards,
Amar

2008/5/22 Amar S. Tumballi [EMAIL PROTECTED]:

 It's not yet complete for cross-OS use (we have some known compatibility issues
 within MacFUSE itself). But in my tests, if Linux is running the client
 and the Mac is exporting a volume, it works fine.

 Soon I will be working on completing that; I will mail here when the tests are
 complete.

 Regards,
 Amar


 On Wed, May 21, 2008 at 11:49 PM, caven [EMAIL PROTECTED] wrote:

 Hi,

 I am new to this list. If this question has been posted before, my
 apologies.

 I was wondering whether Mac OS's gluster client can mount a Linux gluster
 server, or whether gluster's client and server must be used with the same OS?

 Thanks a lot.



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] fchmod glitch in 1.4 tla?

2008-07-22 Thread Amar S. Tumballi
Brent,
 My tests were OK on the latest branch (though there were changes in posix's chmod:
lchmod was added (which is not present on all platforms), and if lchmod
is not present it should fall back to 'chmod'). But fchmod was not touched.

Are there any hints in the log files? (Mainly the server log file, but surely check the
client log file too.)

2008/7/22 Brent A Nelson [EMAIL PROTECTED]:

 I'm getting I/O errors with cp -a and cp -p in the latest tla of the 1.4
 branch:

 fchmod(4, 0100755)  = -1 EIO (Input/output error)

 Is something awry?

 Thanks,

 Brent


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tla 3.0 missing transport/socket/src/Makefile.in

2008-07-18 Thread Amar S. Tumballi
Hi Mic,
 I figured it out too; I'm waiting for Raghu to come to the office. He had missed
adding 'Makefile.am' while merging the transport changes from his tree.

 Sorry for these build breaks in commits.

Regards,
Amar

2008/7/18 mic [EMAIL PROTECTED]:

 I'm seeing:
 # ./autogen.sh
 configure.ac:109: required file `transport/socket/src/Makefile.in' not
 found

 Thanks!
 -Mic


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] RE: Help needed

2008-07-17 Thread Amar S. Tumballi
Can you make sure these are not symlinks? I have seen that if you issue
open() on a symlink, it tries to open the target, which may not exist, so
it returns ENOENT.
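
(A small illustration of that behaviour, independent of GlusterFS; the paths are
placeholders:)

 # create a symlink whose target does not exist
 ln -s /path/that/does/not/exist /tmp/dangling

 # the link itself is listed fine
 ls -l /tmp/dangling

 # but open() follows the missing target, so reading fails with ENOENT
 head /tmp/dangling   # head: cannot open `/tmp/dangling' for reading: No such file or directory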




 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/fc2080d273fb767df2660b52cd69b67eb18e56f6o'
 for reading: No such file or directory

 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/6f444176af4a238239360800172a6a1a193afbef'
 for reading: No such file or directory

 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/6bd73dbf7db1cf0c6acf56fa18d4f6fe63f950eco'
 for reading: No such file or directory

 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/3fec9cef25a2f43b1b1f0a712375dc23de30fb28o'
 for reading: No such file or directory

 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/0fd0ba2a4d8c52de7c52b09abbfec726c6e4e67bo'
 for reading: No such file or directory

 head: cannot open
 `/gfs-data1/mailbox/0/0/[EMAIL PROTECTED]/mails/52fae85ff4366d7d8492d5154ead0659fd37f48do'
 for reading: No such file or directory





 Why are these No such file or directory errors coming? These files are
 not present in the file system. Why is the find command finding them?



 Your help is really appreciated.



 Thanks,



 Rohan

 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Exporting filesystems 1 per server port

2008-07-17 Thread Amar S. Tumballi
Actually, 'ps ax' shows the glusterfs command line as it was invoked, so you can
use the spec file argument to identify each process.

2008/7/17 John Marshall [EMAIL PROTECTED]:

 David Mayr - Hetzner Online AG wrote:

 [...]
 One of the difficulties I have had when firing up multiple glusterfsd
 servers is in identifying what each is serving (of course, on the server
 side). Is there a simple way to determine this? E.g., to get info like:
pid  port   exporting
1234 7000   /mnt/xyz
2001 7001   /mnt/abc
...



 Try lsof -i


 Hi,

 That is a useful tool but
 1) it does not help to identify the exported filesystem
 2) lsof does not report anything when using the SDP transport

 Perhaps I would extend the desired info to be:
   pid   transport   port   exporting
   1234  tcp 7000   /mnt/xyz
   2001  ib-sdp  7001   /mnt/abc

 Ideally, this would all be wrapped up in a glusterfs tool which
 may or may not do stuff like lsof. I suspect that glusterfs would
 be able to do things more simply because it knows stuff that a
 mishmash of other tools would not.

 John




 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] infiniband in tla 3.0

2008-07-16 Thread Amar S. Tumballi
Hi Mickey,
 The ib-verbs code in the mainline--3.0 branch is written and under review; it
should be there anytime this week.

Regards,
Amar

2008/7/16 mic [EMAIL PROTECTED]:

 When will infiniband support make it back into tla 3.0?
 I have diskless booting working over infiniband to gluster and I'm dying to
 see it work after 2 node failures (ie afr1 goes down, is restored, then afr2
 goes down).

 Thanks!
 -Mickey Mazarick


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] exporting a filesystem using tcp and ib-sdp

2008-07-16 Thread Amar S. Tumballi
Hi John, yes, it can be done.

Your server spec file definition should look something like this:


-
volume posix
..
end-volume

volume posix-locks
...
end-volume

volume server-ib
  type protocol/server
  option transport-type ib-sdp/server
  option listen-port 7011
  option auth.ip.posix-locks.allow *
  subvolumes posix-locks
end-volume

volume server-tcp
 type protocol/server
 option transport-type tcp/server
 option listen-port 7021 # notice the different port number
 option auth.ip.posix-locks.allow *
 subvolumes posix-locks
end-volume
---

With the above spec file on the server side, you need a client spec file
something like this:


volume client
  type protocol/client
  option remote-host whichever.server.exports
  option remote-port 7011
  option transport-type ib-sdp/client
  option remote-subvolume posix-locks
end-volume
---
or

volume client
  type protocol/client
  option remote-host whichever.server.exports
  option remote-port 7021
  option transport-type tcp/client
  option remote-subvolume posix-locks
end-volume
---

(Make sure you use 1.3.10 version for this). [1]
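
Either client spec can then be mounted in the usual way, for example (mount
point and spec-file names here are just assumptions):

 glusterfs -f /etc/glusterfs/client-ib.vol /mnt/glusterfs    # over ib-sdp
 glusterfs -f /etc/glusterfs/client-tcp.vol /mnt/glusterfs   # over tcp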

Regards,
Amar

[1] - A mail will be sent about the release notes and further updates of
1.4.x developments by AB soon to the list.



2008/7/16 John Marshall [EMAIL PROTECTED]:

 Hi,

 I have a machine with an interface on an IB network and
 another on a GigE network. Does glusterfs handle exporting
 the same filesystem over two different transports (tcp and
 ib-sdp) at the same time?

 Thanks,
 John


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Exporting filesystems 1 per server port

2008-07-16 Thread Amar S. Tumballi
Each filesystem?

If you mean splitting up multiple export directories into separate
'glusterfs' processes, yes, that is certainly doable, and it has actually been
tested to give better performance than having a single process export more
than two volumes.

Hope this thread helps you

http://lists.gnu.org/archive/html/gluster-devel/2008-03/msg00042.html
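
As a rough sketch (spec-file names, ports and pid-file paths below are
assumptions), you would run one server process per export:

 glusterfsd -f /etc/glusterfs/xyz-server.vol --pidfile /var/run/glusterfsd-xyz.pid
 glusterfsd -f /etc/glusterfs/abc-server.vol --pidfile /var/run/glusterfsd-abc.pid

with each spec file's protocol/server volume using its own 'option listen-port',
and each client pointing 'option remote-port' at the matching value. Stopping
one process then takes down only that export.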

Regards,
Amar


2008/7/16 John Marshall [EMAIL PROTECTED]:

 Hi,

 Do you have any comments about exporting each filesystem
 using its own server port? This would allow me to bring up and
 down individually exported filesystems without affecting the rest.

 Thanks,
 John


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Re: [Gluster-users] Invisible data with glusterfs on FreeBSD 7.0

2008-07-14 Thread Amar S. Tumballi
Hi All,
 The readdir issue over BSD is solved now with version
http://ftp.zresearch.com/pub/gluster/glusterfs/1.4-qa/glusterfs-1.4.0qa28.tar.gz

 Thanks to Christop and Paul Arch, who provided remote access to me for
testing it.

Regards,
Amar

2008/7/9 Amar S. Tumballi [EMAIL PROTECTED]:

 Hi all,
  Sorry for delay in replying to this post. Over FreeBSD, we haven't got the
 client (fuse) part working yet (never tested rather). We are busy with
 development/testing of 1.4.0 branch code.

  I would like to test/fix this issue over BSD, but sadly I don't have
 access to a BSD machine as of now. Any help in the form of remote access to a
 BSD machine for a week (I hope to get it working sooner than that, but that is
 the maximum duration) would help us get things working smoothly over BSD
 (apart from the fact that I may not be able to fix critical bugs inside BSD
 fuse itself). If anyone is interested in getting the BSD part working and is
 willing to give me at least a remote login to one BSD machine, that would be
 of great help.

  We want the 1.4.0 release to work seamlessly on GNU/Linux, BSD, Solaris,
 and MacOSX (including working smoothly across platforms too).

 Regards,
 Amar

 2008/7/4 christop [EMAIL PROTECTED]:

 Hi,

 i am testing here glusterfs on fresh FreeBSD 7.0 amd64 and having a
 problem. I can read/write directory and files on my volume, but i can
 not see them with a  ls -lah.
 I am using default fuse install from ports with a
 fusefs-kmod-0.3.9.p1.20080208_1 and fusefs-libs-2.7.2_1 installed. I
 compiled gluster-1.3.9 along this guide
 http://jordan.spicylogic.com/blog/?cat=19 , but using ufs not zfs.
 Any ideas how to fix this?



 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rsync failure problem

2008-07-12 Thread Amar S. Tumballi
Any logs from GlusterFS? That would help a lot.

2008/7/12 skimber [EMAIL PROTECTED]:


 Hi Everyone,

 I have just set up glusterfs for the first time using two server machines
 and two clients.

 I'm trying to rsync a large amount of data (approx 1.2m files totalling
 14Gb) from another server using the following command (real names changed!)

 rsync -v --progress --stats --links --safe-links --recursive
 www.domain.com::mydir /mydir

 Rsync connects, gets the file list and then as soon as it starts trying to
 copy files it dies, like this:

 receiving file list ...
 1195208 files to consider
 .bash_history
 rsync: connection unexpectedly closed (24714877 bytes received so far)
 [generator]
 rsync error: error in rsync protocol data stream (code 12) at io.c(453)
 [generator=2.6.9]

 As soon as that happens the following lines are added, twice, to syslog:

 Jul 12 13:23:03 server01 kernel: hdb: dma_intr: status=0x51 { DriveReady
 SeekComplete Error }
 Jul 12 13:23:03 server01 kernel: hdb: dma_intr: error=0x40 {
 UncorrectableError }, LBAsect=345341, sector=345335
 Jul 12 13:23:03 server01 kernel: ide: failed opcode was: unknown
 Jul 12 13:23:03 server01 kernel: end_request: I/O error, dev hdb, sector
 345335

 We were previously looking at a DRBD based solution with which rsync worked
 fine.  Nothing has changed at the remote server, the only difference is we
 are now using Gluster instead of DRBD.

 We are using glusterfs-1.3.9 and fuse-2.7.3glfs10 on Debian Etch

 The server config looks like this:

 volume posix
  type storage/posix
  option directory /data/export
 end-volume

 volume plocks
  type features/posix-locks
  subvolumes posix
 end-volume

 volume brick
  type performance/io-threads
  option thread-count 4
  subvolumes plocks
 end-volume

 volume brick-ns
  type storage/posix
  option directory /data/export-ns
 end-volume

 volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *
  option auth.ip.brick-ns.allow *
  subvolumes brick brick-ns
 end-volume



 And the client config looks like this:


 volume brick1
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host data01  # IP address of the remote brick
  option remote-subvolume brick# name of the remote volume
 end-volume

 volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host data02
  option remote-subvolume brick
 end-volume

 volume brick-ns1
  type protocol/client
  option transport-type tcp/client
  option remote-host data01
  option remote-subvolume brick-ns  # Note the different remote volume name.
 end-volume

 volume brick-ns2
  type protocol/client
  option transport-type tcp/client
  option remote-host data02
  option remote-subvolume brick-ns  # Note the different remote volume name.
 end-volume

 volume afr1
  type cluster/afr
  subvolumes brick1 brick2
 end-volume

 volume afr-ns
  type cluster/afr
  subvolumes brick-ns1 brick-ns2
 end-volume

 volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1
 end-volume


 Any advice or suggestions would be greatly appreciated!

 Many thanks

 Simon
 --
 View this message in context:
 http://www.nabble.com/Rsync-failure-problem-tp18420195p18420195.html
 Sent from the gluster-devel mailing list archive at Nabble.com.



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] name of server provided client config?

2008-07-10 Thread Amar S. Tumballi
No worries: the server looks for 'client.vol.<ip-addr>' first and then falls
back to the regular file given in the option. So you need not worry about
the ip-addr part; it is mostly used when the spec file changes per client
(like a NUFA setup).
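
If you ever do want per-client spec files, the layout would look something like
this (IP addresses are just examples):

 /etc/glusterfs/glusterfs-client.vol            # fallback, served to every other client
 /etc/glusterfs/glusterfs-client.vol.10.0.0.21  # served only to the client at 10.0.0.21
 /etc/glusterfs/glusterfs-client.vol.10.0.0.22  # served only to 10.0.0.22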

Regards,
Amar

2008/7/10 John Marshall [EMAIL PROTECTED]:

 Hi,

 Instead of copying the gluster client vol file to all my clients, I am
 pulling it from the server. But when I do this, the server is looking
 for a file like glusterfs-client.vol.ipaddr . Is this expected? Does
 it mean that I need to do something like put symlinks to the
 glusterfs-client.vol file for all clients ipaddrs?

 My glusterfs-server.vol file is below.

 Thanks,
 John
 -

 ### Export volume brick with the contents of /home/export directory.
 volume cava_test0
  type storage/posix   # POSIX FS translator
  option directory /mnt/cava_test0 # Export this directory
 end-volume

 ### Add network serving capability to above brick.
 volume server
  type protocol/server
  option transport-type tcp/server # For TCP/IP transport
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes cava_test0
  option auth.ip.cava_test0.allow * # Allow access to volume
 end-volume




 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Posix filesystem test suite

2008-06-27 Thread Amar S. Tumballi
Thanks for the link; there was a discussion about it on the fuse mailing list.
I use it for testing my daily builds.
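
If anyone else wants to run it against a GlusterFS mount, the rough recipe is
(paths are placeholders; see the suite's own README for the details):

 tar xzf pjd-fstest-*.tgz && cd pjd-fstest-* && make   # builds the 'fstest' binary
 cd /mnt/glusterfs
 prove -r /path/to/pjd-fstest/tests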

Regards,
Amar

2008/6/27 Alain Baeckeroot [EMAIL PROTECTED]:

 Hi

 http://lwn.net/Articles/276617/ speaks of a Posix filesystem test suite
 available at http://www.ntfs-3g.org/pjd-fstest.html

 I didn't find references to it in mail archives, or in the wiki, hence
 this post.

 Regards
 Alain



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS 1.3.9tla786 unify client crashes while moving dirs

2008-06-26 Thread Amar S. Tumballi
Hi NovA,
 Thanks for these backtraces. I will look into it right away.

Regards,
Amar

2008/6/26 NovA [EMAIL PROTECTED]:

 Hello everybody!

 Since 1.3.9tla784 glusterFS crashes while moving dirs. For example,
 the command mv dir1/ dir2/ leads to crash with the following
 back-trace:
 ---
 Program terminated with signal 11, Segmentation fault.
 #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
 this=0x6109b0, op_ret=0,
op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
 3266  for (index = 0; list[index] != -1; index++)

 (gdb) bt
 #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
 this=0x6109b0, op_ret=0,
op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
 #1  0x2aab330b in client_rename_cbk (frame=0x2aaab0ce9e20,
 args=value optimized out)
at client-protocol.c:3578
 #2  0x2aab2372 in notify (this=0x610620, event=value
 optimized out, data=0x649200)
at client-protocol.c:4937
 #3  0x2ac8cb3d36d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
 #4  0x2ac8cb3d2a79 in poll_iteration (ctx=0x0) at transport.c:312
 #5  0x00402948 in main (argc=-883030544, argv=0x7fffdf905658)
 at glusterfs.c:565
 ---

 Also glusterFS unify crashes if it founds a duplicate of a file
 (generated by previous glusterFS versions). In such a case the client
 log ends up with lines:
 
 2008-06-20 10:49:55 E [unify.c:881:unify_open] bricks:
 /nova/.mc/filepos: entry_count is 3
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c48
 2008-06-20 10:50:25 E [unify.c:335:unify_lookup] bricks: returning
 ESTALE for /nova/.mc/filepos: file count is 3
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c48
 2008-06-20 10:50:25 E [fuse-bridge.c:468:fuse_entry_cbk]
 glusterfs-fuse: 3068: (34) /nova/.mc/filepos = -1 (116)
 2008-06-20 10:50:25 E [unify.c:881:unify_open] bricks:
 /nova/.mc/filepos: entry_count is 3
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c48

 [... skipped client.vol spec ...]

 frame : type(1) op(35)
 frame : type(1) op(35)
 frame : type(1) op(35)
 --

 And the back-trace obtained from the core-file is:
 
 Program terminated with signal 11, Segmentation fault.
 #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
 296 for (pair = inode-ctx-members_list; pair; pair = pair-next)
 {

 (gdb) bt
 #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
 #1  0x2acca31e in unify_rename_unlink_cbk
 (frame=0x2b602017e9c0, cookie=0xd27d10,
this=0x2b602017e9d0, op_ret=value optimized out, op_errno=2) at
 unify.c:3150
 #2  0x2aab146c in client_unlink_cbk (frame=0xd27d10,
 args=value optimized out)
at client-protocol.c:3519
 #3  0x2aab2372 in notify (this=0x60def0, event=value
 optimized out, data=0x630080)
at client-protocol.c:4937
 #4  0x2b601f80f6d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
 #5  0x2b601f80ea79 in poll_iteration (ctx=0x2017eba02b60) at
 transport.c:312
 #6  0x00402948 in main (argc=530695664, argv=0x7fff8b4c9208)
 at glusterfs.c:565
 ---

 Hope this can be fixed soon. :)

 With best regards,
   Andrey


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS 1.3.9tla786 unify client crashes while moving dirs

2008-06-26 Thread Amar S. Tumballi
Hi NovA,
 Fix committed. Thanks for the backtraces.

Regards,
Amar

2008/6/26 Amar S. Tumballi [EMAIL PROTECTED]:

 Hi NovA,
  Thanks for these backtraces. I will look into it right away.

 Regards,
 Amar

 2008/6/26 NovA [EMAIL PROTECTED]:

 Hello everybody!

 Since 1.3.9tla784 glusterFS crashes while moving dirs. For example,
 the command mv dir1/ dir2/ leads to crash with the following
 back-trace:
 ---
 Program terminated with signal 11, Segmentation fault.
 #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
 this=0x6109b0, op_ret=0,
op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
 3266  for (index = 0; list[index] != -1; index++)

 (gdb) bt
 #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
 this=0x6109b0, op_ret=0,
op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
 #1  0x2aab330b in client_rename_cbk (frame=0x2aaab0ce9e20,
 args=value optimized out)
at client-protocol.c:3578
 #2  0x2aab2372 in notify (this=0x610620, event=value
 optimized out, data=0x649200)
at client-protocol.c:4937
 #3  0x2ac8cb3d36d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
 #4  0x2ac8cb3d2a79 in poll_iteration (ctx=0x0) at transport.c:312
 #5  0x00402948 in main (argc=-883030544, argv=0x7fffdf905658)
 at glusterfs.c:565
 ---

 Also glusterFS unify crashes if it founds a duplicate of a file
 (generated by previous glusterFS versions). In such a case the client
 log ends up with lines:
 
 2008-06-20 10:49:55 E [unify.c:881:unify_open] bricks:
 /nova/.mc/filepos: entry_count is 3
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c48
 2008-06-20 10:50:25 E [unify.c:335:unify_lookup] bricks: returning
 ESTALE for /nova/.mc/filepos: file count is 3
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
 /nova/.mc/filepos: found on c48
 2008-06-20 10:50:25 E [fuse-bridge.c:468:fuse_entry_cbk]
 glusterfs-fuse: 3068: (34) /nova/.mc/filepos = -1 (116)
 2008-06-20 10:50:25 E [unify.c:881:unify_open] bricks:
 /nova/.mc/filepos: entry_count is 3
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c33
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c-ns
 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
 /nova/.mc/filepos: found on c48

 [... skipped client.vol spec ...]

 frame : type(1) op(35)
 frame : type(1) op(35)
 frame : type(1) op(35)
 --

 And the back-trace obtained from the core-file is:
 
 Program terminated with signal 11, Segmentation fault.
 #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
 296 for (pair = inode-ctx-members_list; pair; pair = pair-next)
 {

 (gdb) bt
 #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
 #1  0x2acca31e in unify_rename_unlink_cbk
 (frame=0x2b602017e9c0, cookie=0xd27d10,
this=0x2b602017e9d0, op_ret=value optimized out, op_errno=2) at
 unify.c:3150
 #2  0x2aab146c in client_unlink_cbk (frame=0xd27d10,
 args=value optimized out)
at client-protocol.c:3519
 #3  0x2aab2372 in notify (this=0x60def0, event=value
 optimized out, data=0x630080)
at client-protocol.c:4937
 #4  0x2b601f80f6d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
 #5  0x2b601f80ea79 in poll_iteration (ctx=0x2017eba02b60) at
 transport.c:312
 #6  0x00402948 in main (argc=530695664, argv=0x7fff8b4c9208)
 at glusterfs.c:565
 ---

 Hope this can be fixed soon. :)

 With best regards,
   Andrey


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] patch

2008-06-24 Thread Amar S. Tumballi
Thanks Prithu,
 The patch looks fine. Raghu, can you commit it?

Regards,
Amar

2008/6/24 [EMAIL PROTECTED]:

 Hi,
  Here is a patch for moderation.

 PATCH


 -*
 local directory is at
 [EMAIL PROTECTED]/glusterfs--mainline--3.0--patch-198
 * comparing to [EMAIL PROTECTED]/glusterfs--mainline--3.0--patch-198
 M  xlators/performance/write-behind/src/write-behind.c

 * modified files

 --- orig/xlators/performance/write-behind/src/write-behind.c
 +++ mod/xlators/performance/write-behind/src/write-behind.c
 @@ -1316,11 +1316,17 @@
         return -1;
       }
     }
 +  if (conf->window_size == 0) {
 +    gf_log (this->name,
 +            GF_LOG_ERROR,
 +            "WARNING: window-size not specified -- setting it to be equal to the aggregate-size -- please check the spec-file if this is not intended");
 +    conf->window_size = conf->aggregate_size;
 +  }

   if (conf->window_size < conf->aggregate_size) {
     gf_log (this->name,
             GF_LOG_ERROR,
 -           "FATAL: window-size (%d) is less than aggregate-size (%d)", conf->window_size, conf->aggregate_size);
 +           "FATAL: The aggregate-size (%d) cannot be more than the window-size (%d)", conf->aggregate_size, conf->window_size);
     FREE (conf);
     return -1;
   }
 --

 Prithu




 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] various questions AFR + unify + small files

2008-06-24 Thread Amar S. Tumballi


 Aim is 20 nodes (x4 cores) with 1TB disk each, on gigabit switch
   - we are interested in AFR + unify for having 10TB x2 replications
   - maybe NUFA
   - we have tons of small files (10kB) which are our read bottleneck
   - we want performance (currently we easyly exhaust 4 NFS servers)
   - our users won't do concurrent write to the same file, except append
to logfile (we don't care of order as far as everything is logged)

 1/ which version should we use? 1.3.9.tgz or some more recent from tla ?

1.3.9 has some minor problems (like excessive logging in posix_setdents),
and we are about to make 1.3.10 in a week or two with the afr write-order
fix. But I suggest the latest tla of the 2.5 branch as the best bet, as only
bug fixes are happening on that branch.



 2/ i guess we should use all the performance modules

'io-cache' is useful. Since, as you said, all your files are small,
'read-ahead' is not so useful.
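
A minimal io-cache volume on the client would look something like this (the
volume names and cache size are just placeholders to tune for your setup):

 volume ioc
   type performance/io-cache
   option cache-size 64MB
   subvolumes unify
 end-volume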



 3/ docs seems to say we should do (unify over NUFA over AFR) ?
   or is it better to keep it simple and do only (unify over AFR) ?


NUFA is used when each server has a client part too.




 4/ i'm a bit confused about namespaces, especially if we use
   unify + NUFA + AFR. I probably missed a link explaining ns
   and how avoid single point of failure.


http://gluster.org/docs/index.php/Understanding_Unify_Translator#Configuration_needed_to_have_redundant_namespace_bricks
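
In short, you can AFR the namespace bricks themselves so the namespace is not a
single point of failure; a minimal client-side sketch (volume names are
placeholders):

 volume ns-afr
   type cluster/afr
   subvolumes server1-ns server2-ns   # namespace exports from two servers
 end-volume

 volume unify
   type cluster/unify
   option namespace ns-afr
   option scheduler rr
   subvolumes afr1 afr2               # the data bricks/AFR pairs go here
 end-volume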




 Btw, http://www.gluster.org/docs/index.php/GlusterFS_Volume_Specification
 should be better advertised in documentation, as it is so important.
 I'll write my step-by-step-glusterfs-howto-for-newbies and send it if
 we succeed.


Cool! That would be a great help.



 Thanks in advance for answers, and for your great job.
 Best Regards.
 Alain Baeckeroot



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.4.0qa19 - readonly filter translator

2008-06-18 Thread Amar S. Tumballi
Hi Snezhana,
 The reason it is not mounting is:
[CRITICAL]: 'client-wks12' doesn't support Extended attribute: Read-only
file system

Currently, the AFR (and Stripe) translators need extended-attribute support in
all of their subvolumes; if it is missing they will not allow you to mount
(because they may otherwise fail to maintain consistency). Can you explain the
reason for making a read-only filesystem a subvolume of afr, so we can think
of some way of making it possible?
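
(As a side note, you can verify extended-attribute support on a backend export
by setting and reading one by hand; the attribute name and file below are just
examples:)

 touch /wwwroot/xattr-test
 setfattr -n user.test -v working /wwwroot/xattr-test
 getfattr -n user.test /wwwroot/xattr-test
 rm /wwwroot/xattr-test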

Regards,
Amar

On Wed, Jun 18, 2008 at 3:17 AM, Snezhana Bekova [EMAIL PROTECTED] wrote:

 Hi Amar,
 Thanks for your answer, I tested glusterfs-1.4.0tla197. On the client
 glusterfs I can't mount the readonly volume.
 I have posted the client logs on pastebin, when I try to mount the volume:
 http://gluster.pastebin.org/44413

 Here is the client config file:

 volume client-wks12
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.1
   option remote-subvolume brick-readonly
   option transport-timeout 10
 end-volume

 volume client-valhalla
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.2
   option remote-subvolume brick-readonly
   option transport-timeout 10
 end-volume

 volume afr
  type cluster/afr
  subvolumes client-wks12 client-valhalla
 end-volume

 volume client-wks12-webtmp
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.1
  option remote-subvolume brick-webtmp
  option transport-timeout 10
 end-volume

 volume client-valhalla-webtmp
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.2
   option remote-subvolume brick-webtmp
   option transport-timeout 10
 end-volume

 volume afr-webtmp
   type cluster/afr
   subvolumes client-wks12-webtmp client-valhalla-webtmp
 end-volume

 volume client-wks12-wwwroot1
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.1
  option remote-subvolume brick-local1
  option transport-timeout 10
 end-volume

 volume client-valhalla-wwwroot1
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.0.0.2
   option remote-subvolume brick-local1
   option transport-timeout 10
 end-volume

 volume afr-wwwroot1
   type cluster/afr
   subvolumes client-wks12-wwwroot1 client-valhalla-wwwroot1
 end-volume

 Here is the one server config file:

 volume brick-local
   type storage/posix
   option directory /wwwroot
 end-volume

 volume brick-local1
   type storage/posix
   option directory /wwwroot/Advert
 end-volume

 volume brick-readonly

   type features/filter
   subvolumes brick-local
 end-volume

 volume brick-webtmp
   type storage/posix
   option directory /var/webtmp
 end-volume

 volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes brick-local brick-readonly brick-webtmp brick-local1
   option auth.ip.brick-webtmp.allow *
   option auth.ip.brick-local.allow *
   option auth.ip.brick-local1.allow *
   option auth.ip.brick-readonly.allow *
 end-volume


 Thanks,
 Snezhana

 Quote from Amar S. Tumballi [EMAIL PROTECTED]:

  Hi Snezhana,
   The fix for this already went into the source repo. To make testing easy
  for you, I have created a tarball, available here:
 
  http://gnu.zresearch.com/~amar/qa-releases/glusterfs-1.4.0tla197.tar.gzhttp://gnu.zresearch.com/%7Eamar/qa-releases/glusterfs-1.4.0tla197.tar.gz
 
  Let us know how the testing goes.
 
  Regards,
  Amar
 
  2008/6/17 Snezhana Bekova [EMAIL PROTECTED]:
 
  Many Thanks! I'll wait next release.
 
  --
 
  Snezhana
 
  Quote from Amar S. Tumballi [EMAIL PROTECTED]:
 
 
   Well, thanks for the report. I found the bug. will be fixed in next
  commit.
   Should be available in tar.gz format with next release (due in a day
 or
   two).
  
   -amar
  
   On Mon, Jun 16, 2008 at 5:35 PM, Amar S. Tumballi [EMAIL PROTECTED]
 
   wrote:
  
   Yes! any crash is treated as bug. But I would like to see the client
 log
   file too. The log about extended attribute not supported is due to
  having
   filter (which doesn't allow setxattr to succeed). Anyways, it would
 be
  great
   help if you could send the client spec file.
  
   Regards,
   Amar
  
   2008/6/16 Snezhana Bekova [EMAIL PROTECTED]:
  
  
  
Hello,
   I've started testing glusterfs version 1.4.0qa19. There is problem
 with
   readonly filter. When I try to a make write operation on readonly
   brick on a
   glusterfs client (client side afr), the glustrefs server die.
  
   This is from the glusterfs client log messages when mounting
 readonly
   volume:
   2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr:
  [CRITICAL]:
   'client-wks1' doesn't support Extended attribute: Read-only file
 system
   2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr:
 [CRITICAL]:
   'client-wks2' doesn't support Extended attribute: Read-only file
  system
  
And this is from the glusterfs server log when die:
  
Here is a part

Re: [Gluster-devel] Unify behaviour if one of the servers disconnected

2008-06-17 Thread Amar S. Tumballi
Currently, if the server that got disconnected also hosts the namespace export,
then lookups return ENOENT (file not found). Otherwise it behaves as you
described (the whole filesystem stays online, with a few files inaccessible).

Regards,
Amar

On Tue, Jun 17, 2008 at 4:57 AM, NovA [EMAIL PROTECTED] wrote:

 Hello, everybody!

 I'm continuing to stress-test glusterFS 1.3.8+ series. Just upgraded
 to tla781. It seems stable in my setup by now, no lockups yet. ;)
 Great!
 But I still can't reveal the desired feature concerning the subj. So I
 have a concrete question. :) What is the supposed behaviour of the
 unify translator (without AFR), when one of the servers disconnected?
 I assumed, that in this case the glusterFS volume should remain online
 with some files being inaccessible (which are on the disconnected
 server). But now, if I plug the network cable out of a cluster node,
 then ls unify_volume says that it cannot open directory,
 Transport endpoint is not connected. Am I just believe what I
 desire? Is it supposed that the unify volume goes back online only
 after the disconnected server return?

 WBR,
  Andrey


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Unify behaviour if one of the servers disconnected

2008-06-17 Thread Amar S. Tumballi
NovA,
 I just noticed this behavior, which ideally should not be the case; you
will have a fix for it tomorrow.

Regards,
Amar

On Tue, Jun 17, 2008 at 6:21 AM, Amar S. Tumballi [EMAIL PROTECTED]
wrote:

 Currently if the server which got disconnected is having Namespace export
 too.. then the lookups return ENOENT (file not found). Otherwise what you
 described (whole filesystem will be online without few files).

 Regards,
 Amar


 On Tue, Jun 17, 2008 at 4:57 AM, NovA [EMAIL PROTECTED] wrote:

 Hello, everybody!

 I'm continuing to stress-test glusterFS 1.3.8+ series. Just upgraded
 to tla781. It seems stable in my setup by now, no lockups yet. ;)
 Great!
 But I still can't reveal the desired feature concerning the subj. So I
 have a concrete question. :) What is the supposed behaviour of the
 unify translator (without AFR), when one of the servers disconnected?
 I assumed, that in this case the glusterFS volume should remain online
 with some files being inaccessible (which are on the disconnected
 server). But now, if I plug the network cable out of a cluster node,
 then ls unify_volume says that it cannot open directory,
 Transport endpoint is not connected. Am I just believe what I
 desire? Is it supposed that the unify volume goes back online only
 after the disconnected server return?

 WBR,
  Andrey


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: Fwd: [Gluster-devel] glusterfs_1.4.0qa19: small issues

2008-06-17 Thread Amar S. Tumballi
Hi Brent,
 Thanks for reporting it. Strange that it failed to compile on your machine;
it worked fine on mine, hence I committed it. It mostly looks like a
gcc/glibc version issue (I use gcc 4.1.2). Anyway, I have made a commit
to solve that issue. It should be fine now.

Regards,
Amar

On Tue, Jun 17, 2008 at 3:26 PM, Brent A Nelson [EMAIL PROTECTED] wrote:

 On Tue, 17 Jun 2008, Raghavendra G wrote:

  Hi Brent,
 I've committed a fix in glusterfs--mainline--3.0--patch-194. This patch
 aims
 to fix high memory usage during multiple dd s. Can you please check
 whether
 it works?


 I went to test, but I got this when compiling, presumably due to a previous
 patch:

 stripe.c: In function stripe_check_xattr_cbk:
 stripe.c:3396: warning: implicit declaration of function raise
 stripe.c:3396: error: SIGTERM undeclared (first use in this function)
 stripe.c:3396: error: (Each undeclared identifier is reported only once
 stripe.c:3396: error: for each function it appears in.)
 stripe.c: In function init:
 stripe.c:3577: warning: passing argument 2 of gf_string2bytesize from
 incompatible pointer type

 Thanks,

 Brent



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Error messages in namespace vol

2008-06-17 Thread Amar S. Tumballi
These messages were solved in 'mainline--2.5--patch-771'; the latest tla
patchset from the 2.5 branch should be good and has the fix for this issue.

Regards,
Amar

On Thu, May 15, 2008 at 9:23 PM, Anand Avati [EMAIL PROTECTED] wrote:

 Harris,
  can you send the complete log file from the namespace server?

 avati

 On 16/05/2008, Harris Landgarten [EMAIL PROTECTED] wrote:
 
  I have been getting lots of error messages in the server logs of the
  namespace volume of the form:
 
  2008-05-15 20:26:08 E [posix.c:1984:posix_setdents] posix3: Error
 creating
  file /mnt/namespace/test/zoneinfo/posix/Indian/Chagos with mode (0100644)
 
  for every file accessed by a client for the first time. There is no
  apparent effect on operations other than a slow down in performance.
 
  This error was one of many caused by running:
 
  cp -av /usr/share/zoneinfo /mnt/glusterfs/test/
 
  The volume is mounted:
  /dev/sda2 on /mnt type reiserfs (rw,acl,user_xattr)
 
  a stat of the file in question from the server:
  stat /mnt/namespace/test/zoneinfo/posix/Indian/Chagos
File: `/mnt/namespace/test/zoneinfo/posix/Indian/Chagos'
Size: 0   Blocks: 0  IO Block: 131072 regular empty
  file
  Device: 802h/2050d  Inode: 5427373 Links: 1
  Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
  Access: 2008-05-15 20:26:08.0 -0400
  Modify: 2007-09-19 18:59:32.0 -0400
  Change: 2008-05-15 20:26:55.0 -0400
 
  This is a unify only installation running patch-770
 
  Any idea what could be causing this?
 
 
  Harris Landgarten
  LHJ Technology Solutions, Inc.
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@nongnu.org
  http://lists.nongnu.org/mailman/listinfo/gluster-devel
 



 --
 If I traveled to the end of the rainbow
 As Dame Fortune did intend,
 Murphy would be there to tell me
 The pot's at the other end.
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.4.0qa19 - readonly filter translator

2008-06-16 Thread Amar S. Tumballi
Yes! Any crash is treated as a bug. But I would like to see the client log
file too. The log message about extended attributes not being supported is due
to the filter (which doesn't allow setxattr to succeed). Anyway, it would be a
great help if you could send the client spec file.

Regards,
Amar

2008/6/16 Snezhana Bekova [EMAIL PROTECTED]:



  Hello,
 I've started testing glusterfs version 1.4.0qa19. There is problem with
 readonly filter. When I try to a make write operation on readonly brick on a
 glusterfs client (client side afr), the glustrefs server die.

 This is from the glusterfs client log messages when mounting readonly
 volume:
 2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr: [CRITICAL]:
 'client-wks1' doesn't support Extended attribute: Read-only file system
 2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr: [CRITICAL]:
 'client-wks2' doesn't support Extended attribute: Read-only file system

  And this is from the glusterfs server log when die:

  Here is a part of the log from glusterfs server that crashed:
 TLA Repo Revision: glusterfs--mainline--3.0--patch-192
 Time : 2008-06-16 17:55:41
 Signal Number : 11

  /usr/sbin/glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l
 /var/log/glusterfs/glusterfsd.log -L
 WARNING --pidfile /var/run/glusterfsd.pid
 volume server
   type protocol/server
   option auth.ip.brick-readonly.allow *
   option auth.ip.brick-local1.allow *
   option auth.ip.brick-local.allow *
   option auth.ip.brick-webtmp.allow *
   option transport-type tcp
   subvolumes brick-local brick-readonly brick-webtmp brick-local1
 end-volume

  volume brick-webtmp
   type storage/posix
   option directory /var/webtmp
 end-volume

  volume brick-readonly
   type features/filter
   subvolumes brick-local
 end-volume

  volume brick-local1
   type storage/posix
   option directory /wwwroot/Advert
 end-volume

  volume brick-local
   type storage/posix
   option directory /wwwroot
 end-volume

  frame : type(1) op(27)
 2008-06-16 17:55:41 C [common-utils.c:155:gf_print_bytes] : xfer == 27919,
 rcvd == 14515[0xe420]

 /usr/lib/glusterfs/1.4.0qa19/xlator/features/filter.so(filter_create+0x6c)[0xb7f77f9c]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(server_create+0x180)[0xb7584440]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(protocol_server_interpret+0xd6)[0xb7585056]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(protocol_server_pollin+0xb3)[0xb7585263]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(notify+0x51)[0xb7585351]
 /usr/lib/glusterfs/1.4.0qa19/transport/tcp.so[0xb757c249]
 /usr/lib/libglusterfs.so.0[0xb7f6d5c5]
 /usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7f6c431]
 [glusterfs](main+0x795)[0x804a545]
 /lib/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7e02450]
 [glusterfs][0x8049871]
 -
 2008-06-16 18:13:05 W [glusterfs.c:419:glusterfs_cleanup_and_exit]
 glusterfs: shutting down server
 2008-06-16 18:13:05 C [common-utils.c:155:gf_print_bytes] : xfer == 0, rcvd
 == 0

 The underlying file system is ext3 and there is extended attribute support!
 The problem not exist on version 1.3.9.
 Can you tell me what is wrong? Maybe it is a bug?

  Thanks,
 Snezhana
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.4.0qa19 - readonly filter translator

2008-06-16 Thread Amar S. Tumballi
Well, thanks for the report. I found the bug; it will be fixed in the next
commit and should be available in tar.gz format with the next release (due in a
day or two).

-amar

On Mon, Jun 16, 2008 at 5:35 PM, Amar S. Tumballi [EMAIL PROTECTED]
wrote:

 Yes! any crash is treated as bug. But I would like to see the client log
 file too. The log about extended attribute not supported is due to having
 filter (which doesn't allow setxattr to succeed). Anyways, it would be great
 help if you could send the client spec file.

 Regards,
 Amar

 2008/6/16 Snezhana Bekova [EMAIL PROTECTED]:



  Hello,
 I've started testing glusterfs version 1.4.0qa19. There is problem with
 readonly filter. When I try to a make write operation on readonly brick on a
 glusterfs client (client side afr), the glustrefs server die.

 This is from the glusterfs client log messages when mounting readonly
 volume:
 2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr: [CRITICAL]:
 'client-wks1' doesn't support Extended attribute: Read-only file system
 2008-06-16 17:54:17 C [afr.c:6187:afr_check_xattr_cbk] afr: [CRITICAL]:
 'client-wks2' doesn't support Extended attribute: Read-only file system

  And this is from the glusterfs server log when die:

  Here is a part of the log from glusterfs server that crashed:
 TLA Repo Revision: glusterfs--mainline--3.0--patch-192
 Time : 2008-06-16 17:55:41
 Signal Number : 11

  /usr/sbin/glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l
 /var/log/glusterfs/glusterfsd.log -L
 WARNING --pidfile /var/run/glusterfsd.pid
 volume server
   type protocol/server
   option auth.ip.brick-readonly.allow *
   option auth.ip.brick-local1.allow *
   option auth.ip.brick-local.allow *
   option auth.ip.brick-webtmp.allow *
   option transport-type tcp
   subvolumes brick-local brick-readonly brick-webtmp brick-local1
 end-volume

  volume brick-webtmp
   type storage/posix
   option directory /var/webtmp
 end-volume

  volume brick-readonly
   type features/filter
   subvolumes brick-local
 end-volume

  volume brick-local1
   type storage/posix
   option directory /wwwroot/Advert
 end-volume

  volume brick-local
   type storage/posix
   option directory /wwwroot
 end-volume

  frame : type(1) op(27)
 2008-06-16 17:55:41 C [common-utils.c:155:gf_print_bytes] : xfer == 27919,
 rcvd == 14515[0xe420]

 /usr/lib/glusterfs/1.4.0qa19/xlator/features/filter.so(filter_create+0x6c)[0xb7f77f9c]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(server_create+0x180)[0xb7584440]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(protocol_server_interpret+0xd6)[0xb7585056]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(protocol_server_pollin+0xb3)[0xb7585263]

 /usr/lib/glusterfs/1.4.0qa19/xlator/protocol/server.so(notify+0x51)[0xb7585351]
 /usr/lib/glusterfs/1.4.0qa19/transport/tcp.so[0xb757c249]
 /usr/lib/libglusterfs.so.0[0xb7f6d5c5]
 /usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7f6c431]
 [glusterfs](main+0x795)[0x804a545]
 /lib/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7e02450]
 [glusterfs][0x8049871]
 -
 2008-06-16 18:13:05 W [glusterfs.c:419:glusterfs_cleanup_and_exit]
 glusterfs: shutting down server
 2008-06-16 18:13:05 C [common-utils.c:155:gf_print_bytes] : xfer == 0,
 rcvd == 0

 The underlying file system is ext3 and there is extended attribute
 support!
 The problem not exist on version 1.3.9.
 Can you tell me what is wrong? Maybe it is a bug?

  Thanks,
 Snezhana
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] What does it mean (unify bug ?) ?

2008-06-14 Thread Amar S. Tumballi
Hi Anton,
 This bug is fixed in patch-780. It is related to rename.

Regards,
Amar

On Sun, Jun 8, 2008 at 10:19 PM, Антон Халиков [EMAIL PROTECTED] wrote:

 Hello. It's me again.

 Hi Anton, your volume spec looks fine. Please upgrade to 1.3.9


 I've upgraded to 1.3.9. Then I recreated a fresh storage from scratch and
 after a few days of using this storage I got the same error again.

 Example:

 dom0r1:~# head

 /backup/hosting/-databases//databases/mysql/db_iis_enfo_en/iis_informer.MYI
 head: cannot open

 `/backup/hosting/-databases//databases/mysql/db_iis_enfo_en/iis_informer.MYI'
 for reading: Input/output error

 glusterfs.log contains:
 2008-06-09 11:04:49 E [unify.c:873:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 entry_count is 3
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on bbrick2
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on bbrick1
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on afr-ns
 2008-06-09 11:04:49 E [fuse-bridge.c:692:fuse_fd_cbk] glusterfs-fuse:
 61894193: (12)
 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI
 = -1 (5)

 We use rdiff-backup to backup data from our servers. After 2-5 backup
 cycles
 we get the situation.

 Storage systems are placed over ext3fs mounted with attributes
 rw,noatime,user_xattr.

 In the same time I tried to 'head' the file, glusterfsd.log got a lot of
 error messages like:
 2008-06-09 11:04:49 E [posix.c:1984:posix_setdents] backupbrick-ns: Error
 creating file

 /var/lib/gluster/ns//hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_mylinks_cat.MYI
 with mode (0100660)

 You can notice the error message complaints about different file. In one
 second I got 349 lines complaining about different file names in total. All
 these lines were caused by my 'head' call.

 glusterfs 1.3.9 built on Jun  2 2008 20:36:02
 Repository revision: glusterfs--mainline--2.5--patch-770


 Any ideas ?

 --
 Best regards
 Anton Khalikov
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs_1.4.0qa19: small issues

2008-06-13 Thread Amar S. Tumballi
Brent,
 I will have an update for you by EOD (PDT) today. Thanks for all the help.

Regards,
Amar

On Fri, Jun 13, 2008 at 3:31 PM, Brent A Nelson [EMAIL PROTECTED] wrote:

 Well, I went to do the debugging compile and rerun; unfortunately, the
 binaries got stripped during install, so I still didn't end up with a good
 backtrace.  However, I doubt that a backtrace will help, as the client
 probably is just running out of memory to consume and is forced to die. The
 client is indeed enormous (~3GB) before dying.

 So, any luck hunting down a memory leak, presumably somewhere in AFR,
 unify, or client? Is there anything else you'd like me to do to try to
 narrow it down?


 Thanks,

 Brent


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Unify lockup with infinite bailing transport

2008-06-13 Thread Amar S. Tumballi
 Looking forward for any answer,
  Andrey


I will try to reproduce it today. Will have an answer for you soon.

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS AFR not failing over

2008-06-12 Thread Amar S. Tumballi
Yes! That was the flaw in the 1.3.x series's timeout and transport-layer design.
There were a lot of problems we couldn't solve with simple tweaks. Hence we
came up with the 1.4.x series, with non-blocking I/O as an important change;
the timeouts there are more meaningful, and we get more control over them.
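
In the meantime, on 1.3.x the knob to look at is 'option transport-timeout' on
the protocol/client volumes; for example (host, volume names, and the value are
placeholders):

 volume home2
   type protocol/client
   option transport-type tcp/client
   option remote-host server2
   option remote-subvolume brick
   option transport-timeout 10   # seconds; as noted below, bail-out can take up to twice this
 end-volume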

-amar

On Thu, Jun 12, 2008 at 7:49 PM, Raghavendra G [EMAIL PROTECTED]
wrote:

 Hi Gordan,
 Actually you should wait for maximum time of  (transport_timeout * 2 ) to
 actually bail-out and do the cleanup of pending frames. The logic is that
 the timer thread initiates the logic to check whether the call has to be
 bailed out for every transport_timeout seconds. And this logic does the
 cleanup only if there is no frame is sent or received in the last
 transport_timeout seconds.

 regards,
 On Mon, Jun 9, 2008 at 5:41 PM, [EMAIL PROTECTED] wrote:

  No - this is a different problem. If the transport timeout was the
 problem,
  the access should return after  60 seconds, should it not? In the case
 I'm
  seeing, something goes wrong and the only way to recover is to restart
  glusterfsd on the server(s) _AND_ glusterfs on the clients.
 
  It's kind of hard to reproduce, as I only see it happening about once
 every
  week or so.
 
  Gordan
 
 
  On Sat, 7 Jun 2008, Krishna Srinivas wrote:
 
   Gordon,
 
  Is this the case of transport-timeout being high?
 
  Krishna
 
  On Sat, Jun 7, 2008 at 1:04 AM, Gordan Bobic [EMAIL PROTECTED] wrote:
 
  Hi,
 
  I have /home mounted from GlusterFS with AFR, and if one of the servers
  (secondary) goes away, I cannot log in. sshd tries to read ~/.ssh and
  bash
  tries to read ~/.bashrc and this seems to fail - or at least take a
 very
  long time to time out and try the remaining server (which verifiably
  works).
 
  I get this sort of thing in the logs:
 
  E [tcp-client.c:190:tcp_connect] home2: non-blocking connect()
 returned:
  110
  (Connection timed out)
  E [client-protocol.c:4423:client_lookup_cbk] home2: no proper reply
 from
  server, returning ENOTCONN
  C [client-protocol.c:212:call_bail] home2: bailing transport
 
  where home2 is the name of the GlusterFS export on the secondary.
 
  Is this a known issue or have I managed to trip another error case?
 
  Gordan
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@nongnu.org
  http://lists.nongnu.org/mailman/listinfo/gluster-devel
 
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@nongnu.org
  http://lists.nongnu.org/mailman/listinfo/gluster-devel
 



 --
 Raghavendra G

 A centipede was happy quite, until a toad in fun,
 Said, Prey, which leg comes after which?,
 This raised his doubts to such a pitch,
 He fell flat into the ditch,
 Not knowing how to run.
 -Anonymous
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: New Development Update - [1.4.X releases]

2008-06-09 Thread Amar S. Tumballi
Currently our target is to get what we already have in the code base into a
solid state (with added efficiency). There are a lot of pending issues around
consistency and locking in corner cases (for example: afr write order, the
unify rename issue, etc.).
Our main focus with this release is to have a codebase with no complaints
about what it claims to support. We also understand that our logging is
currently not user friendly; we are working to make it user friendly and to
remove excessive logging.

As of now, our plan is to start on new development features as soon as the
basics needed for the points above are complete. We will try to start
development on the new features (hot add/remove, distributed namespace, and
many more options in the management tools) at the beginning of July.

Regards,
Amar

On Mon, Jun 9, 2008 at 9:30 AM, Onyx [EMAIL PROTECTED] wrote:

 Any news on the schedule for the live config change (add/remove volumes
 without having to restart glusterfs)?




 Amar S. Tumballi wrote:

 One more addition to this release is

 * errno compatibility:
   This will enable a process to see the proper errnos in cases of cross OS
 server/client setups.

 Regards,
 Amar

 On Fri, Jun 6, 2008 at 2:24 PM, Amar S. Tumballi [EMAIL PROTECTED]
 wrote:



 Hi,
  As the topic says, I want to give you all a snapshot of whats coming in
 1.4.x series and where is our main focus for the release.

 1.4.x -
 * Performance of GlusterFS (reduced CPU and memory usage, protocol
 enhancements)
 * Non blocking I/O - to give more responsiveness to GlusterFS, remove the
 issues faced due to timeout, freezing etc.
 * Few features to handle different verticals of storage requirements.

 The tarballs will be available from -
 http://ftp.zresearch.com/pub/gluster/glusterfs/1.4-qa/
 Advised to use only latest tarballs in directory, and Report bugs through
 Savannah Bug tracking system only, so its easier for us to track them.

 You can shift to 'glusterfs--mainline--3.0' branch (from which glusterfs
 -1.4.0qa releases are made) if you want to try the latest fixes. Though
 none
 of these are not yet advised for production usage.

 That was a higher level description of what is coming. Here are exact
 module/translator wise description of whats inside the tarball for you.

 * nbio - (non-blocking I/O) This feature comes with significant/drastic
 changes in transport layer. Lot of drastic changes to improve the
 responsiveness of server and to handle more connections. Designed to
 scale
 for higher number of servers/clients.
  - NOTE that this release of QA has only TCP/IP transport layer
 supported,
 work is going on to port it to ib-verbs module.

 * binary protocol - this reduces the amount of header/protocol data
 transfered over wire significantly, also reduces the CPU usage as there
 is
 no binary to ASCII (and vica-versa) involved at protocol layer. The
 difference may not be significant for large file performance, but for
 small
 files and meta operations this will be phenomenal improvement.

 * BDB storage translator - Few people want to use lot and lot of small
 files over large storage volume, but for them the filesystem performance
 for
 small files was a main bottleneck. Number of inodes spent, overall kernel
 overhead in create cycle etc was quite high. With introduction of BDB
 storage at the backend, we tried to solve this issue. This is aimed at
 giving tremendous boost for cases where, millions and millions of small
 files are in a single directory.
 [NOTE: This is not posix complaint as no file attribute fops are not
 supported over these files, however file rename and having multiple
 directory levels, having symlinks is supported].
 GlusterFS BDB options here:
 http://gluster.org/docs/index.php/GlusterFS_Translators_v1.3#Berkeley_DB
 Also refer this link -

 http://www.oracle.com/technology/documentation/berkeley-db/db/gsg/CXX/dbconfig.html,
 so you can tune BDB better. We are still investigating the performance
 numbers for files bigger than page-size, but you can give it a try if
 your
 avg file size is well below 100k mark.


 * libglusterfsclient - A API interface for glusterfs (for file
 create/write, open/write, open/read cases). Which will merge all these
 calls
 in one fop and gives a much better performance by removing the latency
 caused by lot of calls happening in a single file i/o. (Aimed at small
 file
 performance, but users may have to write their own apps using
 libglusterfs
 client.
 Check this tool -
 http://ftp.zresearch.com/pub/gluster/glusterfs/misc/glfs-bm.c
 You need to compile this like ' # gcc -lglusterfsclient -o glfs-bm
 glfs-bm.c'


 * mod_glusterfs: Aimed at solving the problem web hosting companies have.
 We embedded glusterfs into apache-1.3.x, lighttpd-1.4.x, lighttpd-1.5,
 (and
 work going on for apache-2.0). By doing this, we could save the context
 switch overhead which was significant if web servers were using glusterfs
 mountpoint as document root. So, now, the web servers itself can

Re: [Gluster-devel] glusterfs_1.4.0qa19: small issues

2008-06-09 Thread Amar S. Tumballi
Hi Brent,
 We really appreciate you taking the time to test the qa release. Can you look
into the glusterfs log file and send us the logs related to
'cp0/share/linux-sound-base'? That will help to fix the issue right away
(though looking at this msg, I think chmod() after create is failing).

 I really appreciate the focus on metadata performance in the latest
branch, and would be interested in any tips or patches to further boost the
metadata performance (high-speed file creation and lookups).  I do wonder if
BDB would be effective for namespace, as someone asked in a previous post.

Yes! We realized small-file performance and metadata calls (lookup, utimes,
chmod, etc.) are very costly once the filesystem size grows. We are still
thinking about it. Currently BDB does not support any file attributes
(file mode, ownership, etc.), and it does not support hardlinks. Hence I am
not sure how good it would be to use BDB for the namespace. Surely more discussion
on this will help us too to get some ideas.

Regards,
Amar

On Mon, Jun 9, 2008 at 2:30 PM, Brent A Nelson [EMAIL PROTECTED] wrote:

 I've just started testing the new branch.  So far, it's working and stable,
 but I'm testing it with rsync, local-to-GlusterFS and
 GlusterFS-to-GlusterFS, which is failing to set the destination mtimes to
 match the origin.

 Also, I just tried a cp -a, which gives lots of complaints:
 cp -a /usr cp0
 cp: setting permissions for `cp0/share/linux-sound-base': No such file or
 directory
 cp: setting permissions for `cp0/share/bug/libmagic1': No such file or
 directory
 cp: setting permissions for `cp0/share/bug/apt': No such file or directory
 cp: setting permissions for `cp0/share/bug/grub': No such file or directory

 But, when I check, those directories do exist.

 My test setup is a 4-node, 4-exports-per-node unified AFR, with AFRed
 namespace.  Each export is its own glusterfsd process.  No performance
 translators on the client, with io-threads (8) and posix-locks on the
 servers.  The client is using read-subvolume, where appropriate.  I'm just
 using standard FUSE 2.7.2 at the moment.

 Thanks,

 Brent


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs_1.4.0qa19: small issues

2008-06-09 Thread Amar S. Tumballi
Yes, for mtime, can you try changing the mtime of a single file and send the
logs? (If you find anything interesting in the server-side logs for the same file,
that will also help.)

Regards,
Amar

On Mon, Jun 9, 2008 at 3:38 PM, Brent A Nelson [EMAIL PROTECTED] wrote:

 Note that there were also log lines such as this mixed in:

 2008-06-09 17:14:20 E [unify.c:3182:unify_removexattr_cbk] mirrors:
 child(mirror2): file() errno(No such file or directory)

 (with others than just mirror2 also being reported).

 Thanks,

 Brent


 On Mon, 9 Jun 2008, Brent A Nelson wrote:

  On Mon, 9 Jun 2008, Amar S. Tumballi wrote:

  Hi Brent,
 We really appreciate you taking the time to test the qa release. Can you look
 into the glusterfs log file and send us the logs related to
 'cp0/share/linux-sound-base'? That will help to fix the issue right away
 (though looking at this msg, I think chmod() after create is failing).


 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base/OSS-module-list child=share4-1) op_ret=-1
 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror2:
 (path=/cp0/share/linux-sound-base child=share2-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror1:
 (path=/cp0/share/linux-sound-base child=share1-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror0:
 (path=/cp0/share/linux-sound-base child=share0-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror3:
 (path=/cp0/share/linux-sound-base child=share3-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror2:
 (path=/cp0/share/linux-sound-base child=share2-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror0:
 (path=/cp0/share/linux-sound-base child=share0-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror3:
 (path=/cp0/share/linux-sound-base child=share3-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror1:
 (path=/cp0/share/linux-sound-base child=share1-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [fuse-bridge.c:934:fuse_err_cbk] glusterfs-fuse:
 5085496: (op_num=21) /cp0/share/linux-sound-base = -1 (No such file or
 directory)

 Anything you want me to do to help diagnose the mtime issue?

 Thanks,

 Brent





-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs_1.4.0qa19: small issues

2008-06-09 Thread Amar S. Tumballi
Hi Brent,
 Thanks for this report, we will try to fix it. This report is good enough
to find the issue i believe.

Regards,
Amar

On Mon, Jun 9, 2008 at 4:02 PM, Brent A Nelson [EMAIL PROTECTED] wrote:

 Nothing gets logged for the mtime issue, on the client or any of the
 servers.  Not all of the wrong mtimes are from the time of the copy; many of
 them are from other times today.  Weird.

 I did find in my logs that one of my servers wasn't using user_xattr, but
 fixing that unfortunately didn't fix either issue.

 Thanks,

 Brent


 On Mon, 9 Jun 2008, Amar S. Tumballi wrote:

  Yes, for mtime, can you try changing the mtime of a single file, and send
 logs (if you find anything interesting on serverside logs for the same
 file,
 that will also help.

 Regards,
 Amar

 On Mon, Jun 9, 2008 at 3:38 PM, Brent A Nelson [EMAIL PROTECTED]
 wrote:

  Note that there were also log lines such as this mixed in:

 2008-06-09 17:14:20 E [unify.c:3182:unify_removexattr_cbk] mirrors:
 child(mirror2): file() errno(No such file or directory)

 (with others than just mirror2 also being reported).

 Thanks,

 Brent


 On Mon, 9 Jun 2008, Brent A Nelson wrote:

  On Mon, 9 Jun 2008, Amar S. Tumballi wrote:


  Hi Brent,

 We really appreciate you taking the time to test the qa release. Can you look
 into the glusterfs log file and send us the logs related to
 'cp0/share/linux-sound-base'? That will help to fix the issue right away
 (though looking at this msg, I think chmod() after create is failing).


  2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base/OSS-module-list child=share4-1)
 op_ret=-1
 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1139:afr_setxattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-1) op_ret=-1 op_errno=95
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror2:
 (path=/cp0/share/linux-sound-base child=share2-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror1:
 (path=/cp0/share/linux-sound-base child=share1-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror0:
 (path=/cp0/share/linux-sound-base child=share0-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror3:
 (path=/cp0/share/linux-sound-base child=share3-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror2:
 (path=/cp0/share/linux-sound-base child=share2-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror0:
 (path=/cp0/share/linux-sound-base child=share0-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror3:
 (path=/cp0/share/linux-sound-base child=share3-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror1:
 (path=/cp0/share/linux-sound-base child=share1-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-1) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror4:
 (path=/cp0/share/linux-sound-base child=share4-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror7:
 (path=/cp0/share/linux-sound-base child=share7-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror5:
 (path=/cp0/share/linux-sound-base child=share5-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [afr.c:1289:afr_removexattr_cbk] mirror6:
 (path=/cp0/share/linux-sound-base child=share6-0) op_ret=-1 op_errno=2
 2008-06-09 17:14:20 E [fuse-bridge.c:934:fuse_err_cbk] glusterfs-fuse:
 5085496: (op_num=21) /cp0/share/linux-sound-base = -1 (No such file or
 directory)

 Anything you want me to do to help diagnose the mtime issue?

 Thanks,

 Brent





 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!





-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel

Re: [Gluster-devel] What does it mean (unify bug ?) ?

2008-06-08 Thread Amar S. Tumballi
2008-06-09 11:04:49 E [posix.c:1984:posix_setdents] backupbrick-ns: Error
creating file
/var/lib/gluster/ns//hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_mylinks_cat.MY
with mode (0100660)
This log message is at the wrong error level: the error is EEXIST, and it comes
when unify does self-heal of a directory. Fixed in the tla repo; it will be coming
out with the next release.

We are working on the rsync/rdiff issue of multiple files being created, but are yet
to corner it down. We will have 1.3.10 with this fix this week.



 dom0r1:~# head

 /backup/hosting/-databases//databases/mysql/db_iis_enfo_en/iis_informer.MYI
 head: cannot open

 `/backup/hosting/-databases//databases/mysql/db_iis_enfo_en/iis_informer.MYI'
 for reading: Input/output error

 glusterfs.log contains:
 2008-06-09 11:04:49 E [unify.c:873:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 entry_count is 3
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on bbrick2
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on bbrick1
 2008-06-09 11:04:49 E [unify.c:876:unify_open] backup-unify:

 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI:
 found on afr-ns
 2008-06-09 11:04:49 E [fuse-bridge.c:692:fuse_fd_cbk] glusterfs-fuse:
 61894193: (12)
 /hosting/-databases/***/databases/mysql/db_iis_enfo_en/iis_informer.MYI
 = -1 (5)

 We use rdiff-backup to backup data from our servers. After 2-5 backup
 cycles
 we get the situation.

 Storage systems are placed over ext3fs mounted with attributes
 rw,noatime,user_xattr.

 In the same time I tried to 'head' the file, glusterfsd.log got a lot of
 error messages like:


Thanks for the report, we will try to reproduce it locally.

Regards,
Amar


-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] New Development Update - [1.4.X releases]

2008-06-06 Thread Amar S. Tumballi
Hi,
 As the topic says, I want to give you all a snapshot of what's coming in the
1.4.x series and where our main focus for the release is.

1.4.x -
* Performance of GlusterFS (reduced CPU and memory usage, protocol
enhancements)
* Non blocking I/O - to give more responsiveness to GlusterFS, remove the
issues faced due to timeout, freezing etc.
* Few features to handle different verticals of storage requirements.

The tarballs will be available from -
http://ftp.zresearch.com/pub/gluster/glusterfs/1.4-qa/
Advised to use only latest tarballs in directory, and Report bugs through
Savannah Bug tracking system only, so its easier for us to track them.

You can shift to 'glusterfs--mainline--3.0' branch (from which glusterfs
-1.4.0qa releases are made) if you want to try the latest fixes, though none
of these are advised for production usage yet.

That was a higher-level description of what is coming. Here is an exact
module/translator-wise description of what's inside the tarball for you.

* nbio - (non-blocking I/O) This feature comes with significant, drastic
changes in the transport layer: a lot of changes to improve the
responsiveness of the server and to handle more connections. Designed to scale
to a higher number of servers/clients.
  - NOTE that this QA release supports only the TCP/IP transport layer;
work is going on to port it to the ib-verbs module.

* binary protocol - this reduces the amount of header/protocol data
transferred over the wire significantly, and also reduces CPU usage as there is
no binary-to-ASCII (and vice versa) conversion involved at the protocol layer. The
difference may not be significant for large-file performance, but for small
files and meta operations this will be a phenomenal improvement.

* BDB storage translator - Some people want to keep lots and lots of small files
on a large storage volume, but for them the filesystem performance for small
files was a main bottleneck: the number of inodes spent, the overall kernel overhead
in the create cycle, etc. was quite high. With the introduction of BDB storage at the
backend, we tried to solve this issue. This is aimed at giving a tremendous
boost for cases where millions and millions of small files are in a single
directory.
[NOTE: This is not POSIX compliant, as file attribute fops are not
supported over these files; however, file rename, multiple
directory levels, and symlinks are supported].
GlusterFS BDB options here:
http://gluster.org/docs/index.php/GlusterFS_Translators_v1.3#Berkeley_DB
Also refer this link -
http://www.oracle.com/technology/documentation/berkeley-db/db/gsg/CXX/dbconfig.html,
so you can tune BDB better. We are still investigating the performance
numbers for files bigger than page-size, but you can give it a try if your
avg file size is well below 100k mark.
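
As a rough illustration (the volume name and directory below are placeholders, and
the full set of tunables is on the options page linked above), a BDB-backed export
on the server side could be declared like this:

volume bdb-brick
  type storage/bdb
  option directory /data/bdb-export   # backend directory that holds the BDB databases
end-volume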


* libglusterfsclient - An API interface for glusterfs (for the file create/write,
open/write, and open/read cases). It merges all these calls into one fop
and gives much better performance by removing the latency caused by the many
calls happening in a single file I/O. (Aimed at small-file performance, but
users may have to write their own apps using the libglusterfsclient library.)
Check this tool -
http://ftp.zresearch.com/pub/gluster/glusterfs/misc/glfs-bm.c
You need to compile this like ' # gcc -lglusterfsclient -o glfs-bm
glfs-bm.c'


* mod_glusterfs: Aimed at solving a problem web hosting companies have. We
embedded glusterfs into apache-1.3.x, lighttpd-1.4.x, and lighttpd-1.5 (with
work going on for apache-2.0). By doing this, we could save the context-switch
overhead, which was significant when web servers were using a glusterfs
mountpoint as the document root. So now the web servers themselves can be cluster
filesystem aware, hence they see a much bigger storage system well within their
own address space.
For Apache 1.3 -
http://gluster.org/docs/index.php/Getting_modglusterfs_to_work
For Lighttpd - http://gluster.org/docs/index.php/Mod_glusterfs_for_lighttpd

* improvement to io-cache to handle mod-glusterfs better.

Other significant work going on in parallel to these things:
* work towards proper input validation, and aborting in the cases where any
memory corruption is seen.
* work towards reducing the overhead caused by runtime memory allocations
and frees.
* work on reducing the string-based operations in the code base, which
reduces CPU usage.
* log message improvements (reduce the log volume, make log rotation work
seamlessly).
* strict checks at init() time, so the mount itself won't happen if some
config is not valid (to get the points mentioned in the 'Best Practices' page
into the codebase itself).
* make sure the ports to other OSes work fine.

We expect to get this branch to stability very soon (within a month or so),
so we won't be having complaints about small-file performance and
timeout/hang issues anymore (from the 1.3.x branch). Hence your help in testing
this out for your application and your configuration, and reporting bugs, would
help us get it to stability even faster.

Regards,
GlusterFS Team

PS: Currently 

Re: [Gluster-devel] Files being double created on bricks

2008-06-02 Thread Amar S. Tumballi
Hi Brian,
 I am looking into the issue. Most likely the issue is with rename(), not
create(). Let me get back to you with more information.

Regards,
Amar

On Fri, May 30, 2008 at 8:51 AM, Brian Taber [EMAIL PROTECTED] wrote:

 I am using glusterfs 1.3.9 (glusterfs--mainline--2.5--patch-770)  I am
 using Dovecot for email retrieval.  Dovecot creates files called
 dovecot-uidlist which is a listing of all messages int he mailbox.
 somehow, the file is getting created on more than one brick, which
 causes gluster to go nuts and deny access to the file.  Here is the
 errors in the client log:

 2008-05-30 11:31:43 E [unify.c:873:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: entry_count is 3
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on mail
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on mail2
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on ns-mail
 2008-05-30 11:31:43 E [fuse-bridge.c:692:fuse_fd_cbk] glusterfs-fuse:
 1530174: (12) /domain.com/usermailbox/dovecot-uidlist = -1 (5)

 On the server there is stranger errors:
 2008-05-29 11:57:27 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-uidlist with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-uidlist with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/courierpop3dsizelist
 with mode (0100644)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/maildirsize with mode
 (0100644)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-keywords with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot.index.cache
 with mode (0100600)



 How can this file be created on more than one brick?
 What are all these errors I cam getting on the server?  The files the
 server is yelling about exist.
 Am I doing something wrong?



 My configs looks like this:

 # namespace
 volume ns-mail
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7010
  option remote-subvolume ns-mail
 end-volume

 # first storage vol
 volume mail
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7011
  option remote-subvolume io-threads-mail
 end-volume

 # second storage vol
 volume mail2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7012
  option remote-subvolume io-threads-mail2
 end-volume

 volume brick
  type cluster/unify
  subvolumes mail mail2
  option namespace ns-mail
  option scheduler alu
  option alu.limits.min-free-disk  5%  # Don't create files one a
 volume with less than 5% free diskspace
  option alu.limits.max-open-files 1   # Don't create files on a
 volume with more than 1 files open

  option alu.order
 disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 2GB   # Kick in if the
 discrepancy in disk-usage between volumes is more than 2GB
  option alu.disk-usage.exit-threshold  60MB   # Don't stop writing to
 the least-used volume until the discrepancy is 1988MB
  option alu.open-files-usage.entry-threshold 1024   # Kick in if the
 discrepancy in open files is 1024
  option alu.open-files-usage.exit-threshold 32   # Don't stop until 992
 files have been written the least-used volume
  option alu.stat-refresh.interval 10sec   # Refresh the statistics used
 for decision-making every 10 seconds
 end-volume


 my 3 server configs are:
 1:
 volume ns-mail
  type storage/posix
  option directory /data2/gluster-index/mail
 end-volume

 volume server
  type protocol/server
  subvolumes ns-mail
  option transport-type tcp/server # For TCP/IP transport
  option listen-port 7010
  option auth.ip.ns-mail.allow 192.168.*
 end-volume


 2:
 volume mail
  type storage/posix
  option directory /data/mail
 end-volume

 volume posix-locks-mail
  type features/posix-locks
  option mandatory on
  subvolumes mail
 end-volume

 volume io-threads-mail
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes posix-locks-mail
 end-volume

 volume server
  type protocol/server
  subvolumes posix-locks-mail
  option transport-type tcp/server # For TCP/IP transport
  option listen-port 7011
  option auth.ip.io-threads-mail.allow 192.168.*
  option 

Re: [Gluster-devel] Problem with samba/ctdb (clustered samba)

2008-06-02 Thread Amar S. Tumballi
Hi JR,

 As you said, two glusterfs exports (in different processes) would solve it. But
yes, it's a bit complex to maintain.

Meanwhile, if most of your clients don't use locks, then posix-locks
will act as a dummy layer. It comes into play only when fcntl()/flock()
calls are made. So, I think you can just use the posix-locks translator in the
current glusterfsd process.

Regards,
Amar

On Mon, Jun 2, 2008 at 9:56 AM, jrs [EMAIL PROTECTED] wrote:

 Hi gents,

 I'm trying to serve up gluster to a large set of
 windows clients using samba and it's clustering
 extension, CTDB.

 CTDB requires that samba's secrets.tdb file (among
 others) be available to all nodes in the cluster.
 It turns out that locking is done by each node
 when writing to the file.

 This means that I need some storage in gluster where
 I can apply the posix-locks translator.  Problem is
 I don't want it on for most of my storage.

 Is there a convenient way to just have posix-locks on
 for a given sub-directory of an already defined brick?

 If not, and I have to have a separate brick to apply
 posix-locks to then how do I access this directory?
 Of course, the way I'm running glusterfsd is to
 give a mount point as the last argument.

 How do I handle this?

 Do I just have to run two different glusterfsd processes?

 thanks much,

 JR


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Files being double created on bricks

2008-06-02 Thread Amar S. Tumballi
I replied in the other thread:
 I suspect this is unify's rename() issue. We are working on it. I will post
once we fix this.

Regards,
Amar

On Mon, Jun 2, 2008 at 9:09 AM, Brian Taber [EMAIL PROTECTED] wrote:

 Sorry to post again, I have not heard anything yet and this is a live
 system having these issues...

 I am using glusterfs 1.3.9 (glusterfs--mainline--2.5--patch-770)  I am
 using Dovecot for email retrieval.  Dovecot creates files called
 dovecot-uidlist which is a listing of all messages int he mailbox.
 somehow, the file is getting created on more than one brick, which
 causes gluster to go nuts and deny access to the file.  Here is the
 errors in the client log:

 2008-05-30 11:31:43 E [unify.c:873:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: entry_count is 3
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on mail
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on mail2
 2008-05-30 11:31:43 E [unify.c:876:unify_open] brick:
 /domain.com/usermailbox/dovecot-uidlist: found on ns-mail
 2008-05-30 11:31:43 E [fuse-bridge.c:692:fuse_fd_cbk] glusterfs-fuse:
 1530174: (12) /domain.com/usermailbox/dovecot-uidlist = -1 (5)

 On the server there is stranger errors:
 2008-05-29 11:57:27 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-uidlist with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-uidlist with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/courierpop3dsizelist
 with mode (0100644)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/maildirsize with mode
 (0100644)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot-keywords with
 mode (0100600)
 2008-05-29 13:30:23 E [posix.c:1984:posix_setdents] ns-mail: Error
 creating file
 /data2/gluster-index/mail/domain.com/usermailbox/dovecot.index.cache
 with mode (0100600)



 How can this file be created on more than one brick?
 What are all these errors I cam getting on the server?  The files the
 server is yelling about exist.
 Am I doing something wrong?



 My configs looks like this:

 # namespace
 volume ns-mail
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7010
  option remote-subvolume ns-mail
 end-volume

 # first storage vol
 volume mail
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7011
  option remote-subvolume io-threads-mail
 end-volume

 # second storage vol
 volume mail2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.200.200
  option remote-port 7012
  option remote-subvolume io-threads-mail2
 end-volume

 volume brick
  type cluster/unify
  subvolumes mail mail2
  option namespace ns-mail
  option scheduler alu
  option alu.limits.min-free-disk  5%  # Don't create files one a
 volume with less than 5% free diskspace
  option alu.limits.max-open-files 1   # Don't create files on a
 volume with more than 1 files open

  option alu.order
 disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 2GB   # Kick in if the
 discrepancy in disk-usage between volumes is more than 2GB
  option alu.disk-usage.exit-threshold  60MB   # Don't stop writing to
 the least-used volume until the discrepancy is 1988MB
  option alu.open-files-usage.entry-threshold 1024   # Kick in if the
 discrepancy in open files is 1024
  option alu.open-files-usage.exit-threshold 32   # Don't stop until 992
 files have been written the least-used volume
  option alu.stat-refresh.interval 10sec   # Refresh the statistics used
 for decision-making every 10 seconds
 end-volume


 my 3 server configs are:
 1:
 volume ns-mail
  type storage/posix
  option directory /data2/gluster-index/mail
 end-volume

 volume server
  type protocol/server
  subvolumes ns-mail
  option transport-type tcp/server # For TCP/IP transport
  option listen-port 7010
  option auth.ip.ns-mail.allow 192.168.*
 end-volume


 2:
 volume mail
  type storage/posix
  option directory /data/mail
 end-volume

 volume posix-locks-mail
  type features/posix-locks
  option mandatory on
  subvolumes mail
 end-volume

 volume io-threads-mail
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes posix-locks-mail
 end-volume

 volume server
  type protocol/server
  subvolumes posix-locks-mail
  option transport-type tcp/server # For TCP/IP transport
  option 

Re: [Gluster-devel] Problem with samba/ctdb (clustered samba)

2008-06-02 Thread Amar S. Tumballi
Also, you can do one thing: you can have two exports in a single process, one
which has posix-locks over it, and another without posix-locks.
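
As a rough sketch (the volume names and paths below are made up, and the auth lines
should be restricted to match your network), a single server process with one plain
export and one lock-enabled export could look like:

volume plain-posix
  type storage/posix
  option directory /data/plain          # export without locking
end-volume

volume locked-posix
  type storage/posix
  option directory /data/locked         # backend for the lock-enabled export
end-volume

volume locked
  type features/posix-locks
  option mandatory on
  subvolumes locked-posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.plain-posix.allow *
  option auth.ip.locked.allow *
  subvolumes plain-posix locked
end-volume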

Regards,
Amar

On Mon, Jun 2, 2008 at 10:05 AM, Amar S. Tumballi [EMAIL PROTECTED]
wrote:

 Hi JR,

  As you said, two glusterfs exports (different process) would solve it. But
 yes, its bit complex to maintain.

 Meanwhile, if most part of your clients don't use locks, then posix-locks
 will act as an dummy layer. It comes into act only when a fnctl()/flock()
 calls are made. So, I think you can just use posix-locks translator in the
 current glusterfsd process.

 Regards,
 Amar


 On Mon, Jun 2, 2008 at 9:56 AM, jrs [EMAIL PROTECTED] wrote:

 Hi gents,

 I'm trying to serve up gluster to a large set of
 windows clients using samba and it's clustering
 extension, CTDB.

 CTDB requires that samba's secrets.tdb file (among
 others) be available to all nodes in the cluster.
 It turns out that locking is done by each node
 when writing to the file.

 This means that I need some storage in gluster where
 I can apply the posix-locks translator.  Problem is
 I don't want it on for most of my storage.

 Is there a convenient way to just have posix-locks on
 for a given sub-directory of an already defined brick?

 If not, and I have to have a separate brick to apply
 posix-locks to then how do I access this directory?
 Of course, the way I'm running glusterfsd is to
 give a mount point as the last argument.

 How do I handle this?

 Do I just have to run two different glusterfsd processes?

 thanks much,

 JR


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




 --
 Amar Tumballi
 Gluster/GlusterFS Hacker
 [bulde on #gluster/irc.gnu.org]
 http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] What does it mean (unify bug ?) ?

2008-06-01 Thread Amar S. Tumballi


 glusterfs 1.3.8pre1 built on Feb 26 2008 18:21:47
 Repository revision: glusterfs--mainline--2.5--patch-676

 Installed from lmello's debian packages on debian etch system.


Can you install from the latest tarballs? The preX versions have a lot of
known issues.

Thanks
Amar

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Locking Problem with Openoffice and Firefox ?

2008-05-30 Thread Amar S. Tumballi

 I don't find any error in my config, maybe someone could help.

I can see the error.



 Here is my Config:

 Server
 --
 volume homes
  type storage/posix
  option directory /home
 end-volume

 volume home_locks
  type features/posix-locks
  option mandatory on
  subvolumes homes
 end-volume

 volume server
  type protocol/server
  subvolumes home_locks
  option transport-type tcp/server # For TCP/IP transport
  option auth.ip.homes.allow 192.168.70.20


change the above line to option auth.ip.home_locks.allow 192.168.70.20


 http://192.168.70.20
 end-volume


 Client
 --
 volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.70.2
  option remote-subvolume homes


Change the above line to option remote-subvolume home_locks



 end-volume


 Thanks for your help (hopefully)

Thanks for hoping ;-)

Regards,
Amar

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] help to check correctness

2008-05-28 Thread Amar S. Tumballi
That result is correct.

GlusterFS keeps the files as-is on the backend (hence /home/cfs0 and /home/cfs1
each hold every alternate file, because you use the rr scheduler), while the
namespace contains an entry for every file and directory created (or, say, it
has the complete unified view of the filesystem in one place).

Of course your glusterfs mount shows all the files created over it.

Hope you got the basic design?
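
A minimal sketch of the unify declaration this corresponds to (the volume names are
illustrative; the point is that cfs2 plays the namespace role while cfs0 and cfs1
hold the actual data):

volume unify0
  type cluster/unify
  option namespace cfs2     # has an entry for every file and directory
  option scheduler rr       # new files alternate between cfs0 and cfs1
  subvolumes cfs0 cfs1
end-volume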

I think, it is working. At least, log file contains no any error
 records and when I try to create files in the mounted folder, the
 server mapped folders receive copy of these files. But I cannot
 understand - should the maped and namespace folders contain all files
 too?

 touch ./glusterfs/file{0,1,2,3,4,5,6,7,8}
 find /home/cfs[0-3] -type f

 /home/cfs0/file7
 /home/cfs0/file1
 /home/cfs0/file5
 /home/cfs0/file3
 /home/cfs1/file6
 /home/cfs1/file2
 /home/cfs1/file4
 /home/cfs1/file0
 /home/cfs1/file8
 /home/cfs2/file6
 /home/cfs2/file2
 /home/cfs2/file4
 /home/cfs2/file7
 /home/cfs2/file0
 /home/cfs2/file1
 /home/cfs2/file5
 /home/cfs2/file8
 /home/cfs2/file3
 /home/glusterfs/file6
 /home/glusterfs/file2
 /home/glusterfs/file4
 /home/glusterfs/file7
 /home/glusterfs/file0
 /home/glusterfs/file1
 /home/glusterfs/file5
 /home/glusterfs/file8
 /home/glusterfs/file3

 Please help me to understand correctness of test result.

 TIA,

 Simon


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] how to stop a gluster server

2008-05-27 Thread Amar S. Tumballi
On Tue, May 27, 2008 at 1:11 PM, Shaofeng Yang [EMAIL PROTECTED] wrote:

 Hi,

 I am wondering how to stop a gluster server.
 Just kill the pid with [gluster]?  If the client and server running on the
 same machine, both will show as [gluster] from ps. How to identify which
 one
 is the client/server?

There are already a few init.d scripts available in the 'extras/init.d' directory
of the source; also, you can use the --pidfile option while running glusterfs so you
can differentiate the client from the server. Soon we will be evaluating a few more
init.d scripts we got from the community, so you will have more examples to look at.
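
For example (the spec-file and pidfile paths here are only illustrative), starting
each process with its own pidfile makes it easy to stop exactly the one you mean:

# server
glusterfsd -f /etc/glusterfs/server.vol --pidfile /var/run/glusterfsd.pid
# client
glusterfs -f /etc/glusterfs/client.vol --pidfile /var/run/glusterfs-client.pid /mnt/glusterfs

# later, stop only the server
kill $(cat /var/run/glusterfsd.pid)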

Regards,

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Adding nodes

2008-05-27 Thread Amar S. Tumballi

 To answer your original question, files within an AFR are healed from one
 node to the other when the file is accessed (actually read) through the AFR
 and one node is found to have more recent data than others.


Just for understanding: AFR self-heal is done when files are
accessed (actually open()'d); it need not even be a read call. If you have
a tool which just does 'open()/close()' on a file, it gets synced.
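
So, as a quick sketch, walking the mountpoint and opening (reading one byte from)
every file is enough to force the sync (the mountpoint path is just an example):

find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null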

Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs 1.3.9 - Could not acquire lock [Errno 38] Function not implemented

2008-05-27 Thread Amar S. Tumballi
Advisory locking and mandatory locking support is present in the posix-locks
translator; you need to load it in the server spec file...

something like below

volume posix
  ...
end-volume

volume plocks
  type features/posix-locks
  subvolumes posix
end-volume

..
..
..
volume server
  type protocol/server
  option auth.ip.plocks.allow *
  subvolumes plocks
end-volume

and from the client connect to 'plocks' volume (using option
remote-subvolume).
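
For example, the client side would carry something like this (the remote-host
value is whatever address your server listens on):

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.20.1.26        # your server's address
  option remote-subvolume plocks       # connect to the lock-enabled volume
end-volume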

Regards,
Amar

PS: I will be updating the wiki with a fully loaded production system's volume
spec file, so we can reduce the basic translator-positioning mails. Sorry
for not doing it till now... I am a bit lazy.



On Tue, May 27, 2008 at 3:24 PM, Mateusz Korniak 
[EMAIL PROTECTED] wrote:

 Hi !
 I am trying to use glusterfs 1.3.9-2 (linux/i686) as general network FS,
 but
 run into problems with bzr which I suspect are file lock related.
 Is it posible to have flocks over glusterfs mount ?
 Do I need glusterfs fuse module for that?
 Do I need use posix-locks translator ?

 http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#posix-locks?

 I think I need to mimic std linux fs behaviour - advisory locks.

 Thanks a lot for that great piece of software, and thanks in advance for
 any
 reply or hint.


 [EMAIL PROTECTED] abbon2]# bzr info
 bzr: ERROR: Could not acquire lock [Errno 38] Function not implemented
 /usr/lib/python2.4/site-packages/bzrlib/lock.py:79: UserWarning: lock on
 open
 file u'/usr/lib/python2.4/site-packages/abbon2/.bzr/checkout/dirstate',
 mode 'rb' at 0xf75047b8 not released
 Exception exceptions.IOError: (38, 'Function not implemented') in bound
 method _fcntl_ReadLock.__del__ of bzrlib.lock._fcntl_ReadLock object at
 0xf74f766c ignored
 [EMAIL PROTECTED] abbon2]# mount
 (...)
 glusterfs on /usr/lib/python2.4/site-packages/abbon2 type fuse
 (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)


 [EMAIL PROTECTED] abbon2]# cat /etc/fstab
 (...)
 /etc/glusterfs/gw_ri_abbon2.vol  /usr/lib/python2.4/site-packages/abbon2
 glusterfs   defaults0   0


 [EMAIL PROTECTED] abbon2]# cat /etc/glusterfs/gw_ri_abbon2.vol
 volume local_gw_ri_abbon2
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host 10.20.1.26 # IP address of the remote brick
  option remote-subvolume gw_ri_abbon2# name of the remote volume
 end-volume

 Regards,
 --
 Mateusz Korniak


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Size of GlusterFS installations? Hints? Alternatives?

2008-05-27 Thread Amar S. Tumballi
Hi Steffen,
 Replies inline.

On Mon, May 26, 2008 at 8:01 AM, Steffen Grunewald 
[EMAIL PROTECTED] wrote:

 I'm looking into GlusterFS to store data on a cluster (amd64, Debian Etch)
 in a distributed way with redundancy (kind of RAID-1, two nodes mirroring
 each other). Each node has a dedicated harddisk which can be used for data
 storage, and I'm still free to decide which FS to choose.

Sure, we advise you to feel comfortable before choosing anything.



 For easier recovery I'd prefer not to split files across nodes (typical
 size ~ a few MB).

Yes, we recommend that for file sizes below a few hundred MBs.



 Would GlusterFS be suited for such a task?

Yes.



 Would it scale to a couple of hundreds of disk pairs?

Yes.



 Which underlying filesystem should I choose?

You have a lot of options: ext3, reiserfs, xfs, or ZFS (if you use Solaris; on
BSD, extended attribute support for ZFS is not yet there, hence you may not
get the RAID-1 functionality of GlusterFS).



 Since I'm using my own homegrown kernel, which modules would I have to
 build - is it mandatory to use the patched version of fuse?

No, it's not mandatory to use the GlusterFS-patched fuse. It's just recommended
because it has higher I/O performance. But yes, if you want to use
GlusterFS, you need the FUSE module.

If you have an Infiniband setup, a few IB modules (mainly ib_uverbs) are required
to get the ''ib-verbs'' transport to work. GlusterFS has native support for
the Infiniband userspace verbs (i.e., RDMA) protocol, using which you can
gain a lot in performance, for both I/O-intensive and low-latency
applications.



 Any suggestions for the stack of translators (and their order therein)?
 How to best organize redundancy? (since I don't have it in hardware, and
 I can get hold of missing files *if* I know their names, this should
 be not too hard?)


Unify (afr1 (disk1a, disk2a.. diskNa), afr2 (disk 1b, disk2b...diskNb)...
afrN(...)) [1]

Hope the above description is understood as a tree graph.
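
In spec-file terms, a minimal sketch of that tree for two mirrored groups might look
like the following (disk1a, disk2a, disk1b, disk2b stand for protocol/client volumes
pointing at the individual nodes, and afr-ns would be a similarly mirrored namespace
volume defined elsewhere):

volume afr1
  type cluster/afr
  subvolumes disk1a disk2a      # one copy on each node of the pair
end-volume

volume afr2
  type cluster/afr
  subvolumes disk1b disk2b
end-volume

volume unify0
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1 afr2
end-volume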

Yes, your file resides on the hardware (ie, on backend filesystem) as is.
So, you will get the file back, even if you take the disk out.


 Any alternatives? (as fas as I know, e.g. Lustre doesn't have redundancy
 features...)


No idea.

Regards,
Amar

[1] disks can be mapped to remote servers too.

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterfs and Xen

2008-05-27 Thread Amar S. Tumballi
Hi Nathan,
 We will soon get back to you on this issue.

Regards,
Amar

On Fri, May 23, 2008 at 8:48 PM, [EMAIL PROTECTED] wrote:


 I am running glusterfs-1.3.9 on Centos 5.1. The gluster setup is simple AFR
 between two boxes and xen is using tap:aio. When I start a domu and watch
 the console it dies at:

 Loading ext3.ko module
 Loading xenblk.ko module
 Registering block device major 202
  xvda:Scanning and configuring dmraid supported devices
 Creating root device.

 When I move the file off gluster it boots normally:

 Loading ext3.ko module
 Loading xenblk.ko module
 Registering block device major 202
  xvda: xvda1 xvda2
 Scanning and configuring dmraid supported devices
 Creating root device.
 Mounting root filesystem.
 kjournald starting.  Commit interval 5 seconds
 EXT3-fs: mounted filesystem with ordered data mode.
 Setting up other filesystems.
 Setting up new root fs
 { and so on to a good boot }

 I had some problmes with mounting loop file on gluster and found the -d
 disable in the list archives. I tried that and now I can mount loop files,
 but I still can't get a domu to book on glusterfs. Any ideas?

 -Nathan


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: Unexpected behaviour when self-healing

2008-05-27 Thread Amar S. Tumballi
Hi Daniel,
 The 1.3.8-pre2 release is very old and there are a lot of known issues. Please use
the tarballs available at the ZResearch FTP site. Debian packages are not yet
updated for the latest release, hence all those who want to try
GlusterFS are advised not to use any of the older 1.3.8 pre-releases.

Please upgrade to 1.3.9 and see if you still get the bug.

Regards,
Amar

On Tue, May 27, 2008 at 10:42 PM, Daniel Wirtz [EMAIL PROTECTED] wrote:

 I noticed the advise posted to the other similar thread by Forcey, so I
 will
 reply here, too for completeness :)


  Did you sync up all your bricks' system time?


 Both servers are synchronized using the ntp package that comes with debian
 etch. Doing a parallel date on both servers shows exactly the same time.
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs segfault - xlator/mount/fuse.so not available

2008-05-25 Thread Amar S. Tumballi


 2008-05-25 15:20:17 E [xlator.c:120:xlator_set_type] xlator:
 dlopen(/usr/lib/glusterfs/1.3.9/xlator/mount/fuse.so):
 /usr/lib/glusterfs/1.3.9/xlator/mount/fuse.so:
 cannot open shared object file: No such file or directory


This is mostly because you don't have the fuse package installed. Please check
the ./configure output.



 I just built gluster; how did I manage to miss part of the install?

 Thanks for help or info,



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster limitation

2008-05-21 Thread Amar S. Tumballi
With current versions of GlusterFS, the limitations are as follows:
* number of inodes - 2^64
* size of the largest file - 2^64 bytes
* size of the filesystem - 2^64 bytes

* max length of a path (it can even be a single filename of that length) - 4096 bytes

* no limit on the number of files in a directory (obviously not more than 2^64).
* no limit on the number of directories created (again, not more than 2^64).

There is no limitation on the content of the path, unless fuse
has some limitation (which I doubt; someone please clarify). So, no worries
about paths in a different language (our team has not tested this, as none of
us has a keyboard for languages other than English :O).

So, mostly, any limitation depends on the backend filesystem
rather than on GlusterFS.

Regards,
Amar

On Tue, May 20, 2008 at 10:08 PM, Ben Mok [EMAIL PROTECTED]
wrote:

 Hello !

 I have some questions for gluster file system limitation. What is the
 maximum file size and filesystem size on gluster ? How many files and
 sub-directory can be created in one directory? How long is the length of
 file and directory name? Do it support multiple language for file and
 directory name?  Do the limitation also depend on the local file system ? I
 am using XFS for each server nodes.  Thank you so much !



 Ben

 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] booster translator error

2008-05-19 Thread Amar S. Tumballi
Nope, it's not LD_PRELOAD'd on the server side; the booster volume is loaded on
the server side. I will write proper docs once I reach the office tomorrow.

-amar

On Mon, May 19, 2008 at 1:04 AM, Daniel Maher
[EMAIL PROTECTED][EMAIL PROTECTED]
wrote:

 On Sat, 17 May 2008 08:45:59 +0200 nicolas prochazka
 [EMAIL PROTECTED] wrote:

  i do not undestand ' booster is effective when loaded on the server
  side ' compare to documentation about booster translator .

 While i certainly cannot speak for Anand, i think he was suggesting
 that you do the LD_PRELOAD on the server instead of on the client -
 that's all. :)

 --
 Daniel Maher dma AT witbe.net


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] can mount several times

2008-05-16 Thread Amar S. Tumballi
Somehow I missed committing this patch.
Thanks, Matthias, for the patch. It was very helpful.



 --- glusterfs-1.3.8.orig/glusterfs-fuse/utils/mount.glusterfs.in
 2008-01-08 12:49:35.0 +0100
 +++ glusterfs-1.3.8/glusterfs-fuse/utils/mount.glusterfs.in
 2008-01-08 13:44:30.0 +0100
 @@ -121,6 +121,12 @@ main ()
 # $2=$(echo $@ | sed -n 's/[^ ]* \([^ ]*\).*/\1/p');
 mount_point=$2;
 +
 +# Simple check to avoid multiple identical mounts
  +if grep -q "glusterfs $mount_point fuse" /etc/mtab; then
  +echo "$0: according to mtab, a glusterfs is already mounted on $mount_point"
 +exit 1
 +fi

 fs_options=$(echo $fs_options,$new_fs_options);

 This is probably not portable at all (tested only on Fedora/RHEL), and
 I'm sure there must be a much more elegant way to fix the problem ;-)

 Matthias


Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Error in logs from catting a deleted file?

2008-05-15 Thread Amar S. Tumballi
On Thu, May 15, 2008 at 12:49 PM, Gordan Bobic [EMAIL PROTECTED] wrote:

 If I do the following:
 (/home being the glusterfs mount point)
 node1$ echo test  /home/test
 node1$ cat /home/test
 test

 node2$ cat /home/test
 test
 node2$ rm /home/test
 node1$ cat /home/test

 I get the following in the error log on node1:
 2008-05-15 20:44:30 E [fuse-bridge.c:459:fuse_entry_cbk] glusterfs-fuse:
 80: (34) /test = -1 (2)


This is because the inode cache in node1's VFS is not flushed after node2
deletes the file. Hence when you cat, it does a lookup for the file (with
the cached VFS inode), gets an ENOENT (file not found) error, and the
inode cache is flushed after this.


 The behaviour of the FS is correct (the file gets deleted and the delete
 propagates to both nodes).

 I'm using AFR with NUFA scheduler.


AFR with NUFA? I am not sure how you managed to get schedulers working
with AFR. Schedulers are meant for scheduling creat(), and are used *only* by
the cluster/unify translator as of now.





 Is this just over-paranoid logging, or is there really something going
 wrong?


It's just paranoid logging. We are working on reducing the number of such logs.

Regards,

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Error in logs from catting a deleted file?

2008-05-15 Thread Amar S. Tumballi

  Schedulers are meant for scheduling creat(), and used *only* by
 cluster/unify translator as of now.


Sorry for the slightly wrong info. Schedulers are meant to schedule the following calls:

* creat()
* mknod ()
* symlink ()

Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Setup recommendations - 2 (io-threads)

2008-05-15 Thread Amar S. Tumballi
Hi Anton,
 Sorry for the delay in response. The ''io-threads'' translator creates separate
threads (as specified by the option), but it doesn't make sense to have
more threads than the number of CPUs available. It works well if the thread
count is the same as the number of CPUs you have in the server.
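
For example (the volume names and the count are only illustrative), on an 8-CPU
server the export chain could simply carry:

volume iot
  type performance/io-threads
  option thread-count 8        # match the number of CPUs in the server
  subvolumes posix-brick
end-volume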

On Tue, May 13, 2008 at 3:49 AM, Anton Khalikov [EMAIL PROTECTED] wrote:

 Hello everyone

 Could anyone give me any advice how to set up io-threads thanslator
 correctly for the following configuration: a server with glusterfs
 storage that works as dom0 (xen). DomU filesystems are actually files
 that have been placed to glusterfs volume. For example, if i have 20
 domUs what would be the best value for io-threads thread-count ? 20 ? or
 40 ? or may be 10 ?

 --
 Best regards
 Anton Khalikov



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] write-behind tuning

2008-05-15 Thread Amar S. Tumballi
On Wed, May 14, 2008 at 12:15 AM, Jordan Mendler [EMAIL PROTECTED] wrote:

 Hi all,

 I am in the process of implementing write-behind and had some questions.

 1) I've been told aggregate-size should be between 0-4MB. What is the
 down-side to making it large? In our case (a backup server) I would think
 the bigger the better since we are doing lots of consecutive/parallel
 rsyncs
 of a combination of tons of small files and some very large files. The only
 down-side I could see is that small transfers are not distributed as evenly
 since large writes will be done to only 1 brick instead of half of the
 write
 to each brick. Perhaps someone can clarify.


Currently we have an upper limit in our protocol translator of transferring at most
4MB of data in one request/reply packet. Hence if you use a larger aggregate-size with
write-behind on the client side (as in most cases), it will fail to send
the bigger packet.




 2) What does flush-behind do? What is the advantage of having it on and
 what
 is the advantage of it off.

This option is provided to increase performance when handling lots of small
files: the close()/flush() can be pushed to the background, hence the client
can process the next request. It's ''off'' by default.



 3) write-behind on the client aggregates small writes into larger ones. Is
 there any purpose to doing it on the server side? If so, how is this
 helpful?

Yes, generally it helps when the writes arrive in very small chunks, since
aggregating them reduces disk head seek time.



 4) should write-behind be done on a brick-by-brick basis on the client, or
 is it fine to do it after the unify? (seems like it would be fine since
 this
 would consolidate small writes before sending it to the scheduler).


Yes, the behaviour will be the same wherever you put it, but it is advisable
to do it after unify, as that reduces the complexity of the spec file.
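
As a concrete illustration, a write-behind volume layered on top of unify
might look like this (a sketch; the volume names are assumptions, and 1MB is
simply a value safely below the 4MB protocol limit mentioned above):

volume wb
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on    # push close()/flush() to the background
  subvolumes unify0
end-volume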


 Hardware wise we currently have 2 16x1TB Hardware RAID6 servers (each is
 8core, 8gb of RAM). Each acts as both a server and a unify client.
 Underlying filesystem is currently XFS on Linux, ~13TB each. Interconnect
 is
 GigE and eventually we will have more external clients, though for now we
 are just using the servers as clients. My current client config is below.
 Any other suggestions are also appreciated.


Spec file looks good.


Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS configuration scripts

2008-05-15 Thread Amar S. Tumballi
Hi Samuel,
  Thanks for sharing the scripts with everyone. They are very helpful.

Regards,
Amar

On Thu, May 1, 2008 at 2:38 PM, Samuel Douglas [EMAIL PROTECTED]
wrote:

 While working on setting up GlusterFS as part of a University project,
 I started working on some Python scripts to dynamically generate
 GlusterFS configurations, and seeing John's question about separate
 spec files for each server, decided I should post what I had done
 here.

 The main 'library' script is available at:

  
  http://cs.waikato.ac.nz/~sjd29/glusterfs-config/glfsconf.py

 Volume definitions are represented as classes in the script which can
 be configured and put together, then output ready to be read into
 GlusterFS as the config file when it starts up. You write a second
 Python script which utilises these classes and functions to actually
 construct your site-specific configuration, either hard coding the
 information or using some config files or whatever -- you write that
 bit.

 Here is the example config I came up with to try it on our test cluster:


  http://www.cs.waikato.ac.nz/~sjd29/glusterfs-config/glusterconf_waikato.py

 Pretty much the result of an afternoon of hacking around. Not very
 well documented but could be a good starting point for anyone looking
 for a dynamic config system.

 -- Samuel


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Improving real world performance by moving files closer to their target workloads

2008-05-15 Thread Amar S. Tumballi
Hi Luke,
 It is good to see your university looking into GlusterFS. A few tips inline.

On Thu, May 15, 2008 at 2:25 PM, Luke McGregor [EMAIL PROTECTED] wrote:

 Hi

 Im Luke McGregor and im working on a project at the university of
 waikato computer science department to make some improvements to
 GLusterFS to improve performance for our specific application.

Understanding the application's I/O pattern will generally help you tune the
filesystem for very good performance; it is worth looking into.


 We are
 implementing a fairly small cluster (90 machines currently) to use for
 large scale computing projects. This machine is being built using
 commodity hardware and connected to a gigabit Ethernet backbone with
 10G uplinks between switches. Each node in the cluster will be
 responsible for both storage and workload processing. This is to be
 achieved with single sata disks in the machines.

You can use a single process for both server and client to save the overhead
of context switching.



 We are currently experimenting with running Gluster over the nodes in
 the cluster to produce a single large filesystem. For my Honors
 research project I've been asked to look into making some improvements
 to Gluster to try to improve performance by moving the files within
 GlusterFS closer to the node which is accessing the file.

You may want to look at the NUFA scheduler. We are thinking about a way to
reduce the spec-file management overhead for NUFA, which may come soon.



 What I was wondering is basically how hard it would be to write code
 to modify the metadata so that when a file is accessed it is then
 moved to the node which it is accessed from and its location is
 updated in the metadata.

No metadata is stored about the location of the file. Also, I am not sure why
you would want to keep moving files :O If a file is moved to another node when
it is accessed, what guarantees that it is not being accessed by two nodes at
a time (hence two copies, which may lead to I/O errors from GlusterFS)? You
will also have a lot of overhead in doing that. You may want to consider using
io-cache, or implementing HSM.
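
For reference, an io-cache volume wrapped around the cluster volume might
look like this (a sketch only, assuming a GlusterFS build that ships the
io-cache translator; the names and cache size are made up):

volume ioc
  type performance/io-cache
  option cache-size 64MB    # amount of file data to cache on the reading node
  subvolumes unify0
end-volume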

-Amar

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NUFA Scheduler

2008-05-15 Thread Amar S. Tumballi
Well, I would say that won't work.

http://www.gluster.org/docs/index.php/AFR_single_process

Check the above link. That setup should work fine, and it can be extended
further to unify with NUFA (with the AFR containing the local posix volume as
the local-volume-name).
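
To sketch the idea (illustrative only; home-afr is assumed to be the AFR
containing the local posix volume, other-afr a second AFR pair defined the
same way, and home-ns a namespace volume; none of these names come from the
wiki page):

volume home-unify
  type cluster/unify
  option namespace home-ns
  option scheduler nufa
  option nufa.local-volume-name home-afr    # the AFR holding the local posix volume
  subvolumes home-afr other-afr
end-volume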

-amar


On Thu, May 15, 2008 at 5:10 PM, Gordan Bobic [EMAIL PROTECTED] wrote:

 OK, I clearly misunderstood something.
 All I am trying to achieve is have the same process handle the client and
 server aspects, so that mounting the AFR-ed FS gets the local server going,
 too. Is that not achievable? Because what I have _seems_ to actually work,
 only I cannot mount the FS from a single server from a remote node, I just
 get end point not connected. But files between the two servers do get
 replicated.

 Attached is a sample config of what I'm talking about. The other server is
 the same only with home1 and home2 volumes reversed.

 Gordan


 Anand Babu Periasamy wrote:

 Hi Gordan,
  Schedulers are only available for the Unify translator. AFR doesn't support
  NUFA.

 Usually for HPC type workload, we do two separate mounts.
 1) Unify (with NUFA scheduler) of all the nodes
 2) Unify (with RR scheduler) of all AFRs. Each AFR mirrors two nodes.

 --
 Anand Babu Periasamy
 GPG Key ID: 0x62E15A31
 Blog [http://ab.freeshell.org]
 The GNU Operating System [http://www.gnu.org]
 Z RESEARCH Inc [http://www.zresearch.com]



 Gordan Bobic wrote:

 Hi,

 I'm trying to use the NUFA scheduler and it works fine between the
 machines that are both clients and servers, but I'm not having much luck
 with getting client-only machines to mount the share.

 I'm guessing that the problem is that I am trying to wrap a type
 cluster/afr brick with option scheduler nufa with a protocol/server, and
 export that. Is there any reason why this shouldn't work? Or do I have to
 set up a separate AFR brick without option scheduler nufa to export with a
 brick of type protocol/server?


 volume home1
type storage/posix
option directory /gluster/home
 end-volume

 volume storage-home
type protocol/server
option transport-type tcp/server
option listen-port 6996
subvolumes home1
option auth.ip.storage-home.allow 192.168.*
 end-volume

 volume home2
type protocol/client
option transport-type tcp/client
option remote-host 192.168.3.1
option remote-port 6996
option remote-subvolume storage-home
 end-volume

 volume home-afr
type cluster/afr
option read-subvolume home1
option scheduler nufa
option nufa.local-volume-name home1
option nufa.limits.min-free-disk 1%
subvolumes home1 home2
 end-volume

 volume home
type protocol/server
option transport-type tcp/server
option listen-port 6997
option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes home-afr
option auth.ip.home.allow 127.0.0.1,192.168.*
 end-volume


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NUFA Scheduler

2008-05-15 Thread Amar S. Tumballi
You do not connect to the server protocol volume; instead you connect to one
of its subvolumes (depending on auth.allow). Hence what I wrote is valid.

On Thu, May 15, 2008 at 6:10 PM, Gordan Bobic [EMAIL PROTECTED] wrote:

 Amar S. Tumballi wrote:

 Well, I would say that won't work.

 http://www.gluster.org/docs/index.php/AFR_single_process

 Check the above link. It should works fine. This can extended further to
 unify with NUFA (afr with the local posix volume as local-volume-name).


 Thanks for that. I'm not 100% sure, but shouldn't this:

 volume home[12]
  type protocol/client
  option transport-type tcp/client
  option remote-host machine0[12]
  option remote-subvolume home2
 end-volume

 instead be:

 volume home[12]
  type protocol/client
  option transport-type tcp/client
  option remote-host machine0[12]
  option remote-subvolume storage-home
 end-volume

 ?


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NUFA Scheduler

2008-05-15 Thread Amar S. Tumballi
On Thu, May 15, 2008 at 6:19 PM, Gordan Bobic [EMAIL PROTECTED] wrote:

 So what is the purpose of the storage-home volume? It doesn't appear to be
 referenced anywhere.


Yes, you are right, it is not needed. I just enhanced the spec file you had
sent first. The same wiki link has been updated with the new spec files.


 And are you saying that it is possible to connect via a protocol/client
 volume to a remote volume of type other than protocol/server?


A protocol/client always connects to a protocol/server, but the 'option
remote-subvolume' names one of the subvolumes of the server protocol volume.
I thought that was noticeable in all the spec file examples given.
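
A minimal pair may make this clearer (names are illustrative only). On the
server side:

volume brick
  type storage/posix
  option directory /export
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *
  subvolumes brick
end-volume

On the client side, remote-subvolume names 'brick' (a subvolume of the
server), not 'server' itself:

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1
  option remote-subvolume brick
end-volume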


Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Scheduling based on network speed / mixing compute and storage nodes

2008-05-14 Thread Amar S. Tumballi
The NUFA scheduler is there to address this need.

On Wed, May 14, 2008 at 12:56 AM, Jordan Mendler [EMAIL PROTECTED] wrote:

 This question is a bit more distant. Is there a way to have the scheduler
 find the fastest-link/nearest storage brick and send files there? I ask
 because we have a grant pending which would allow us to build a ~128 node
 compute-cluster, but storage will also be a very large factor. My thought
 is
 that by having compute-nodes also act as storage nodes we can more
 efficiently utilize hardware (of course using a large # of AFR's for
 safety), and buy more nodes that will be better utilized rather than
 spending an extra ~$200-300/TB on storage servers/enclosures.

 So coming back to network-speed scheduling, would there be a way to have
 each node prefer writing to it's locally hosted gluster brick, to then be
 AFR replicated to its close-by nodes that are on the same switch? Also,
 has
 anyone attempted this kind of combined setup of Gluster across
 compute-nodes?

 Thanks so much,
 Jordan
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Extended Attributes

2008-05-13 Thread Amar S. Tumballi
There should be something else in place of d_off; we can't just remove it
like this, because if we remove it, it won't be returned at all. Other than
this, I am pretty sure I compiled 1.3.8 on a BSD machine (on Paul Arch's
machine). Anyway, thanks for the patch. I am still thinking about how to set
that d_off.

On Tue, May 13, 2008 at 12:56 AM, Harshavardhana [EMAIL PROTECTED]
wrote:

 Hi Amar,

   Actually it was 1.3.8pre6 which successfully compiled on FreeBSD 7.0 not
   1.3.8 release. But when i saw the 1.3.8 release it had alloca.h added
   into common-utils.h and also d_off is not present under FreeBSD dirent
   structure used in posix translator.

   Please find the attached patch which makes 1.3.8 compile fine on FreeBSD
   7.0, the d_off entry has been removed.

   And also by fact all the xlators/schedulers/transport needed
   -D$(GF_HOST_OS) to be added for AM_CFLAGS.

   Let me know if this works out. Sorry about suggesting you linux-compat
   package that actually didn't really help :D

 Regards
 --
 Harshavardhana
 [y4m4 on [EMAIL PROTECTED]
 Samudaya TantraShilpi
 Z Research Inc - http://www.zresearch.com





-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Extended Attributes

2008-05-13 Thread Amar S. Tumballi
I used that on Mac OS X, but I am not sure about other systems. We can use it
for BSD as well.

On Tue, May 13, 2008 at 1:06 AM, Anand Avati [EMAIL PROTECTED] wrote:


 There should be other thing if not d_off, we cant just remove it like
  this. Because, if we remove it. it won't return at all.
  Other than this, I am pretty sure i compiled 1.3.8 on bsd machine (on
  paul arch's machine). Anyways, Thanks for the patch. But Iam thinking how i
  can set that d_off thing.
 

 A portable way of getting the offset from a DIR * is to use telldir() and
 not worry about how the offset is internally stored: some systems store it
 as dirent->d_off, some as dirent->seek_off, etc. In fact, even in the current
 code we should clean up the #ifdef mess and just use telldir() for every
 platform.

 avati




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Connecting via unix socket instead of IP socket

2008-05-13 Thread Amar S. Tumballi
 Don't the subvolume and namespace subvolume have to be
 different?


Yes, they should be different. Thanks for pointing that out.
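
For instance, the namespace could simply be backed by a separate directory
(a sketch; the directory paths are made up):

volume posix
  type storage/posix
  option directory /tmp/exports
end-volume

volume brick-ns
  type storage/posix
  option directory /tmp/exports-ns    # separate from the data directory above
end-volume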



 From the web page:

 volume posix
  type storage/posix
  option directory /tmp/exports
 end-volume

 ...

 volume brick-ns
  type storage/posix
  option directory /tmp/exports
 end-volume


 -Martin



So the same volume config file can be used to start up the server and mount
the FS?
i.e.

# glusterfsd -f /etc/glusterfs/foo.vol

and

/etc/glusterfs/foo.vol /foo glusterfs defaults 0 0
in /etc/fstab?

Or is an additional setup/option required somewhere?
Or is only one of the above actually required? Would in this configuration
mounting the FS get the server process going implicitly, since it's now the
same process?

Actually, if you start the mount point, it implicitly starts the server, so
it is just the one mount command now; nothing extra is needed. I will update
the wiki file.

Regards,
Amar




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Extended Attributes

2008-05-13 Thread Amar S. Tumballi
Hi Paul,
 Yes, these changes were made after pre6; that is the version I tested on
your machines too. I have already committed the changes required to compile on
BSD to mainline-2.5, and Jordan confirmed it works fine now. So you can use
mainline--2.5--patch-770, or wait one or two days more for the 1.3.9 tarball
with all these fixes.

Regards,
Amar

On Tue, May 13, 2008 at 4:20 PM, Paul Arch [EMAIL PROTECTED] wrote:

 Hi Jordan,



  No immediate reason for xfs namespace, just seemed good for resizing and
 specifying heaps of inodes - plus it shouldn't need any fsck'ing, even
 though it is only 10gb in size I don't know how long it would take because
 it has so many directories/files.  I cannot remember trying to create a
 namespace on ZFS, I am sure I did and it worked fine, it just made sense
 in
 my setup to have the namespace on the client, as my setup will always be a
 'one client' / 'multiple server' scenario.



  Sorry I just realised off some other messages, I have been running
 1.3.8pre6 ( or 7), I think this then somehow got renamed to 1.3.8.freebsd2
 after we were compiling it and fixing any issues we came across.  I had
 thought this had been integrated into the main tree already.



  Cheers



  Paul Arch







 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
 Of
 Jordan Mendler
 Sent: Tuesday, 13 May 2008 11:01 AM
 To: Paul Arch
 Cc: gluster-devel@nongnu.org
 Subject: Re: [Gluster-devel] Extended Attributes



 Hi Paul,

 I got unify working across 2 ZFS nodes (12-13TB each), and am able to
 write
 files into it. I can read and cat files too, but for some reason doing an
 'ls' of the directory takes forever and never returns. Have you had any
 issues this like? It's weird because I can so all files when doing an ls
 of
 the namespace, just not when doing so on the fuse mount.

 Also, why use XFS for the namespace? I was thinking to just create a
 separate ZFS directory or zpool for the namespace on one of the storage
 servers. Any reason not to do this?

 Lastly, what version of gluster are you using on FreeBSD?

 I also gave some thought to OpenSolaris and Nexenta, but they don't
 support
 3ware RAID cards so its not an option. It's looking like either figure out
 how to get FreeBSD working flawlessly, or use Linux and give up on
 compression.

 Thanks so much,
 Jordan

 On Mon, May 12, 2008 at 5:37 PM, Paul Arch [EMAIL PROTECTED] wrote:



 snip


 Thanks again.
 
 Jordan
 
 On Mon, May 12, 2008 at 3:38 PM, Amar S. Tumballi [EMAIL PROTECTED]
 wrote:
 

 snip



 Hi Jordan,

  Also FYI we are running Gluster on FreeBSD 6.1, and FreeBSD 7.0RC1 (
 Servers only ).  7.0RC1 has ZFS running on backend store, 6.1 is UFS.

  System has ~ 10 million files over maybe 7Tb, running a simple unify,
 client is Linux with namespace Linux also.

  Generally, I would say things are 99.5% good, system seems to be holding
 together, I believe the only issues I have had related to attempting to
 bring in data on the servers( without gluster ) and then unify them.
  After
 that, anything written to the cluster seems very stable.  In between I did
 do a lot of chopping/changing on namespace so I am sure that didn't help.

  I can't remember specifically if the client worked under freebsd ( I am
 quite sure it ended up working ), but as Amar has suggested AFR and stripe
 won't work, looks like because of the attributes.

  The only real gotcha I got, and this will relate to any unify/cluster
 setup
 I assume, is to make sure the namespace filesystem can support the number
 of
 files you have ( ie FREE INODES )  I got unstuck with this a couple of
 times, hence the reason for chopping/changing namespace.  In the end I
 created a 10gb XFS loopback image under linux - but even now I just
 checked
 and I am nearly out of inodes again ! But at least I can easily resize it.


  Cheers

  Paul Arch



 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ib-verbs

2008-05-12 Thread Amar S. Tumballi
OK, let me update the documentation properly.

-amar

On Mon, May 12, 2008 at 9:52 AM, [EMAIL PROTECTED] wrote:

 On Sun, 11 May 2008, Amar S. Tumballi wrote:

  Try running 'ibv_devinfo' to see if the device is present, and 'modprobe
  ib_mthca', as GlusterFS needs userspace device module. modprobe ib_mthca
  should solve it.
 

 ib_mthca was loaded, but I had to install libmthca to get the userspace
 stuff working. Was driving me crazy, since from root everything worked.

 -Nathan




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ESTALE

2008-05-12 Thread Amar S. Tumballi
Hi,
 Yes, there are two factors here.

1. 'errno' incompatibility. (errno == 45 in BSD is EOPNOTSUPP, but on Linux
EL2NSYNC.)
2. xattr not supported.

The first one is already solved in the mainline--3.0 branch. For the second
one, I will work on a Solaris-like port for BSD inside GlusterFS, but that may
be a few (1-3) weeks away.

Regards,
Amar

On Mon, May 12, 2008 at 10:20 AM, Jordan Mendler [EMAIL PROTECTED] wrote:

 I am not sure if this is a factor or not, but I had trouble with
 FreeBSD/ZFS
 since their port does not yet support extended attributed. While I am not
 sure whether or not this matters for unify, I see a xattr message in your
 logs.

 Cordially,
 Jordan

 On Sun, May 11, 2008 at 7:54 PM, Paul Arch [EMAIL PROTECTED] wrote:

  Hi,
 
 
 
   I have a very simple two server setup with Unify.  The servers are
  running
  FreeBSD ( 1.3.8 ) and the client is SuSE 10.3 ( 1.3.8 )
 
 
 
   The servers ( FreeBSD ) are running ZFS and UFS, the name space exists
 on
  the client which is XFS.
 
 
 
   In most cases, things are running sweet.
 
 
 
   But, I have been getting this which is causing my app. to bomb out  (
 eg
  ):
 
 2008-05-12 01:50:49 E [unify.c:325:unify_lookup] bricks: returning
  ESTALE for /jre/lib/locale/sv(46912587703840) [translator generation (6)
  inode generation (3)]
 
 2008-05-12 01:50:49 E [fuse-bridge.c:459:fuse_entry_cbk]
  glusterfs-fuse:
  46164509: (34) /jre/lib/locale/sv = -1 (116)
 
 
 
   If I actually 'CWD' to that directory from the client and 'ls -al', I
 can
  see the files there but get something like  ( eg ):
 
 2008-05-12 06:08:04 E [fuse-bridge.c:2176:fuse_xattr_cbk]
  glusterfs-fuse: 159472: (20) /java/lib/fontconfig.bfc = -1 (45)
 
  The actual ls command returns something like : Level 2 not Syncronized
 
 
 
  1.   On the namespace server there is a reference to the file there
 
  2.   On Server 1 the file actually exists
 
  3.   On Server 2 the file isn't there
 
 
 
   It could be the case that these files existed in the backend ( server
  side
  ) before the unify was done, I believe in the past if I come across
 files
  like this, if I delete them from the client and let them be re-created
 it
  seems ok.
 
 
 
  Is there anything I can do , or am I better off just blowing away these
  directories and allow the creation of the files again?
 
 
 
  cheers
 
 
 
  --
 
 
 
  Paul Arch
 
 
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@nongnu.org
  http://lists.nongnu.org/mailman/listinfo/gluster-devel
 
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Extended Attributes

2008-05-12 Thread Amar S. Tumballi
Replies Inline.

On Mon, May 12, 2008 at 3:29 PM, Jordan Mendler [EMAIL PROTECTED] wrote:

 Hi all,

 Can someone please give me a run down of when extended attriubutes are
 needed in the underlying filesystem accessed by gluster?

 1) Is it at all needed for unify without AFR and/or stripe?

No, it is not needed for unify.


 2) For AFR and stripe, are they needed for both the namespace and the
 storage filesystems, or only for one of the two?

It is needed by the subvolumes of stripe and AFR; whether those subvolumes
are the namespace or the storage backend does not matter.



 3) In what other scenarios are extended attributes needed?

Currently it is used only by those two translators inside GlusterFS. Linux
systems use it to support POSIX ACLs; if you do not need those, you can go
ahead.



 I am trying to see if there is a way to use gluster with FreeBSD/ZFS given
 that extended attributes are not yet supported. Depending on where the
 xattr
 are actually needed I was thinking to perhaps split off components, since
 I
 only care about ZFS for data storage volume.


Also, I figured out that you need the 'linux-compat' package for Linux
compatibility (this package has all the missing header files). I compiled on
a system with this package installed, hence did not hit any compilation
problems. I noticed that with a bare-minimum installation this package is not
installed.


 Thanks,
 Jordan



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Extended Attributes

2008-05-12 Thread Amar S. Tumballi
On Mon, May 12, 2008 at 6:00 PM, Jordan Mendler [EMAIL PROTECTED] wrote:

 Hi Paul,

 I got unify working across 2 ZFS nodes (12-13TB each), and am able to
 write
 files into it. I can read and cat files too, but for some reason doing an
 'ls' of the directory takes forever and never returns. Have you had any
 issues this like? It's weird because I can so all files when doing an ls
 of
 the namespace, just not when doing so on the fuse mount.


I suspect this is due to your removing 'd_off' in posix.c. My suggestion:
there is a 'linux-compat' package which has the Linux headers; compile
GlusterFS after installing it.

Regards,

-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ib-verbs

2008-05-11 Thread Amar S. Tumballi
Try running 'ibv_devinfo' to see if the device is present, and run 'modprobe
ib_mthca', as GlusterFS needs the userspace device module; 'modprobe ib_mthca'
should solve it.

Regards,
Amar

On Sat, May 10, 2008 at 9:02 PM, [EMAIL PROTECTED] wrote:


 I have setup infiniband before, but running into issues this time:

 2008-05-07 14:51:45 C [ib-verbs.c:1408:ib_verbs_init] transport/ib-verbs:
 IB device list is empty. Check for 'ib_uverbs' module


 Module is loaded:
 [EMAIL PROTECTED] glusterfs]# lsmod |grep ib_uverbs
 ib_uverbs  71793  2 rdma_ucm,ib_ucm

 I see the other box:
 [EMAIL PROTECTED] glusterfs]# ibhosts
 Ca  : 0x0005ad033ff0 ports 2 xen1 HCA-1
 Ca  : 0x0005ad0327e8 ports 2 xen0 HCA-1

 Server is simple:
 [EMAIL PROTECTED] glusterfs]# more server.vol
 volume brick
  type storage/posix
  option directory /md0
 end-volume

 volume server
  type protocol/server
  option transport-type ib-verbs/server
  option auth.ip.brick.allow *
  subvolumes brick
 end-volume

  

 Nathan Stratton
 nathan at robotics.net
 http://www.robotics.net


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.3.8 FreeBSD Compilation issues

2008-05-08 Thread Amar S. Tumballi
Hi,
 Thanks for the report,


 First off, in addition to fusefs-kmod and bison are there any other
 dependencies I need to install to get gluster working?

Currently we have not validated FUSE on BSD, so the fuse translator is not
supported as such. But I have heard that a few users got it working using the
FUSE port on BSD; I am not sure though.




 Now the issues:
 1) alloca.h cannot be found. Some research indicates that a fix is doing
  echo "#include <stdlib.h>" > /usr/include/alloca.h, which seems to fix it.
 Apparently this library is needed in linux but not explicitely needed for
 FreeBSD. Creating the file works for now, but seems too hackish.

I am not sure about that; most likely the system I built on had an alloca.h
file. I will check it out.




 2)  I cannot figure this one out. I am not sure if it is a simple library
 issue or what. When doing the make it breaks as follow:
 
 
 Making all in storage
 Making all in posix
 Making all in src
 if gcc -DHAVE_CONFIG_H -I. -I. -I../../../.. -fPIC
 -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -DGF_BSD_HOST_OS -Wall
 -I../../../../libglusterfs/src -shared -nostartfiles   -g -O2 -MT posix.o
 -MD -MP -MF .deps/posix.Tpo -c -o posix.o posix.c;  then mv -f
 .deps/posix.Tpo .deps/posix.Po; else rm -f .deps/posix.Tpo; exit 1;
 fi
 posix.c: In function 'posix_readdir':
 posix.c:2179: error: 'struct dirent' has no member named 'd_off'
 *** Error code 1


 Does anyone know what this means and how I can fix it?

Sadly, struct dirent members differ on each OS. But when I last compiled on
BSD I did not have this problem. I will check this too and commit a fix. In
the meantime, comment out that line in posix.c and continue the compilation.
Let me know if other things break too.




 Thanks so much,
 Jordan Mendler


Regards,
-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1.3.8 FreeBSD Compilation issues

2008-05-08 Thread Amar S. Tumballi
Yes, I mostly suspect the same. Try adding these two commands:

export CFLAGS=-I/usr/local/include
export LDFLAGS=-L/usr/local/lib

Is there a path issue or something you would suggest trying to get gluster
 to recognize fuse and install the client? Also, if these will be effected by
 the patches that were committed, is there a particular snapshot that you
 think would both include this and likely be stable for production use?

Without testing FUSE on BSD it is hard to give any guarantee about production
use, but other things (like the server-side export) should be stable enough to
use in production.



 Thanks so much,
 Jordan



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel

