Jesse Caldwell wrote:
hi all,
i just built glusterfs-nfs_beta_rc10 on freebsd 8.1. i configured
glusterfs as follows:
./configure --disable-fuse-client --prefix=/usr/local/glusterfs
The volume file looks fine. We've never tried anything with the beta
branch on fbsd. Let me see if I can get it set up for a few build tests
at least. In the meantime, please send me the complete log file from the
glusterfsd that runs nfs/server. Use the TRACE log level by setting
the following command-line options:
-L TRACE -l /tmp/nfs-fail.log
Mail me the nfs-fail.log.
Thanks
i also ran this on the source tree before building:
for file in $(find . -type f -exec grep -l EBADFD {} \;); do
sed -i -e 's/EBADFD/EBADF/g' ${file};
done
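(an aside on portability: BSD sed's -i flag takes a mandatory backup-suffix
argument, so the "-i -e" above quietly uses "-e" as the suffix and leaves
"file-e" backup files scattered through the tree. a sketch of a variant that
behaves the same under both GNU and BSD sed, demonstrated in a scratch
directory on a made-up sample file rather than the real source tree:)

```shell
# same EBADFD -> EBADF rewrite, portable across GNU and BSD sed:
# give -i a real suffix (works with both implementations), then delete
# the backup. demo runs in its own scratch directory.
dir=$(mktemp -d)
printf 'return EBADFD;\n' > "$dir/demo.c"      # made-up sample file
for file in $(find "$dir" -type f -exec grep -l EBADFD {} \;); do
    sed -i.bak -e 's/EBADFD/EBADF/g' "$file" && rm -f "${file}.bak"
done
cat "$dir/demo.c"
```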
i used glusterfs-volgen to create some config files:
glusterfs-volgen -n export --raid 1 --nfs 10.0.0.10:/pool 10.0.0.20:/pool
glusterfsd will start up with 10.0.0.10-export-export.vol or
10.0.0.20-export-export.vol without any complaints. when i try to start
the nfs server, i get:
nfs1:~ $ sudo /usr/local/glusterfs/sbin/glusterfsd -f ./export-tcp.vol
Volume 'nfsxlator', line 31: type 'nfs/server' is not valid or not found on this machine
error in parsing volume file ./export-tcp.vol
exiting
the module is present, though, and truss shows that glusterfsd is finding
and opening it:
open("/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so",O_RDONLY,0106) = 7 (0x7)
nfs/server.so doesn't seem to be tragically mangled:
nfs1:~ $ ldd
/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so
/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so:
libglrpcsvc.so.0 => /usr/local/glusterfs/lib/libglrpcsvc.so.0
(0x800c00000)
libglusterfs.so.0 => /usr/local/glusterfs/lib/libglusterfs.so.0
(0x800d17000)
libthr.so.3 => /lib/libthr.so.3 (0x800e6a000)
libc.so.7 => /lib/libc.so.7 (0x800647000)
is this a freebsd-ism, or did i screw up something obvious? the config
file i am using is nothing special, but here it is:
nfs1:~ $ grep -v '^#' export-tcp.vol
volume 10.0.0.20-1
type protocol/client
option transport-type tcp
option remote-host 10.0.0.20
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume 10.0.0.10-1
type protocol/client
option transport-type tcp
option remote-host 10.0.0.10
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume mirror-0
type cluster/replicate
subvolumes 10.0.0.10-1 10.0.0.20-1
end-volume
volume nfsxlator
type nfs/server
subvolumes mirror-0
option rpc-auth.addr.mirror-0.allow *
end-volume
thanks,
jesse
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users