Re: [Gluster-users] NFS expose subfolders only

2011-11-16 Thread Shehjar Tikoo
Like I said earlier, all folders are exported by default; they just don't show up
in the showmount -e output unless explicitly set using the nfs.export-dir option.
This means that if you know a folder exists on the Gluster volume, you can mount
it directly, as I had shown in the previous email.
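
For example (the folder name below is only illustrative - any folder that actually
exists on the volume works the same way), a sub-folder that does not appear in the
showmount -e output can still be mounted directly:

mount -o vers=3 bkf3:/bkfarm/some-folder /mnt/test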


-Shehjar


Thai. Ngo Bao wrote:

Hi,

Is it possible to expose multiple sub-folders using nfs.export-dir? I do
not have access to my test environment right now, but I guess something
along these lines should work: gluster volume set volume_name
nfs.export-dir /sub-folder1 /sub-folder2 ... ?

Any insight into this is much appreciated, and I will definitely figure
out what should be done tomorrow for the case.
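
If the option accepts a comma-separated list - I have not been able to verify
that on our release - then perhaps something like this would work:

gluster volume set bkfarm nfs.export-dir "/sub-folder1,/sub-folder2"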

Thanks,
~Thai

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] On Behalf Of Thai. Ngo Bao [tha...@vng.com.vn]
Sent: Monday, November 14, 2011 6:02 PM
To: Shehjar Tikoo
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] NFS expose subfolders only

Bingo. It works. Shehjar, thanks for your hint.

~Thai


-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com]
Sent: Monday, November 14, 2011 5:55 PM
To: Thai. Ngo Bao
Cc: Anush Shetty; gluster-users@gluster.org
Subject: Re: [Gluster-users] NFS expose subfolders only

Directory exports are enabled by default. You just need to mount using 
/bkfarm/00 as the export dir, not /00.
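
That is, with the export shown in your showmount output, the mount command becomes:

mount -o vers=3 bkf3:/bkfarm/00 /bkfarm/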


-Shehjar

Thai. Ngo Bao wrote:

Anush, thanks for the quick reply.



Below is the output of showmount at server side:



[root@GS_BackupFarm_Cluster01 ~]# showmount -e localhost

Export list for localhost:

/bkfarm/00 *



Output from netstat:



[root@GS_BackupFarm_Cluster01 ~]# netstat -vtlpn

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address      Foreign Address    State       PID/Program name
tcp        0      0 0.0.0.0:38465      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:38466      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:38467      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:805        0.0.0.0:*          LISTEN      4015/rpc.statd
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN      3934/portmap
tcp        0      0 :::24007           :::*               LISTEN      22996/glusterd
tcp        0      0 :::24009           :::*               LISTEN      8653/glusterfsd





-client side ---

[root@GSO_DB_Local1 ~]# showmount -e localhost

mount clntudp_create: RPC: Program not registered

[root@GSO_DB_Local1 ~]# mount -o vers=3 bkf3:/00 /bkfarm/

mount: bkf3:/00 failed, reason given by server: No such file or
directory



Any ideas?



Thanks,

~Thai



*From:* Anush Shetty [mailto:an...@gluster.com]
*Sent:* Monday, November 14, 2011 5:40 PM
*To:* Thai. Ngo Bao; gluster-users@gluster.org
*Subject:* RE: NFS expose subfolders only



Hi,

Please make sure that nfs-kernel-server isn't running.
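
One quick way to check, depending on the distribution (these service names are the usual ones; adjust as needed):

service nfs status                 # CentOS/RHEL
service nfs-kernel-server status   # Debian/Ubuntu

The kernel NFS server should be stopped before the Gluster NFS server is used.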

Can you paste your showmount -e output?

The right way to mount Gluster NFS is, mount -o vers=3 bkf3:/00
/bkfarm/

- Anush




*From:* Thai. Ngo Bao [tha...@vng.com.vn]
*Sent:* 14 November 2011 16:07:05
*To:* Anush Shetty; gluster-users@gluster.org
*Subject:* RE: NFS expose subfolders only

Hi,



I have tried the trick several times but had no success so far.



Below is the info of my testing environment



[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-dir /00


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-volumes off


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume info



Volume Name: bkfarm

Type: Distributed-Replicate

Status: Started

Number of Bricks: 4 x 2 = 8

Transport-type: tcp

Bricks:

Brick1: bkf1:/sfarm

Brick2: bkf2:/sfarm

Brick3: bkf3:/sfarm

Brick4: bkf4:/sfarm

Brick5: bkf5:/sfarm

Brick6: bkf6:/sfarm

Brick7: bkf7:/sfarm

Brick8: bkf8:/sfarm

Options Reconfigured:

nfs.disable: Off

nfs.export-dir: /00






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS expose subfolders only

2011-11-14 Thread Shehjar Tikoo
Directory exports are enabled by default. You just need to mount using 
/bkfarm/00 as the export dir, not /00.


-Shehjar

Thai. Ngo Bao wrote:

Anush, thanks for the quick reply.

 


Below is the output of showmount at server side:

 


[root@GS_BackupFarm_Cluster01 ~]# showmount -e localhost

Export list for localhost:

/bkfarm/00 *

 


Output from netstat:

 


[root@GS_BackupFarm_Cluster01 ~]# netstat -vtlpn

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address      Foreign Address    State       PID/Program name
tcp        0      0 0.0.0.0:38465      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:38466      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:38467      0.0.0.0:*          LISTEN      1531/glusterfs
tcp        0      0 0.0.0.0:805        0.0.0.0:*          LISTEN      4015/rpc.statd
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN      3934/portmap
tcp        0      0 :::24007           :::*               LISTEN      22996/glusterd
tcp        0      0 :::24009           :::*               LISTEN      8653/glusterfsd


 

 


-client side ---

[root@GSO_DB_Local1 ~]# showmount -e localhost

mount clntudp_create: RPC: Program not registered

[root@GSO_DB_Local1 ~]# mount -o vers=3 bkf3:/00 /bkfarm/

mount: bkf3:/00 failed, reason given by server: No such file or directory

 


Any ideas?

 


Thanks,

~Thai

 


*From:* Anush Shetty [mailto:an...@gluster.com]
*Sent:* Monday, November 14, 2011 5:40 PM
*To:* Thai. Ngo Bao; gluster-users@gluster.org
*Subject:* RE: NFS expose subfolders only

 


Hi,

Please make sure that nfs-kernel-server isn't running.

Can you paste your showmount -e output?

The right way to mount Gluster NFS is,
mount -o vers=3 bkf3:/00 /bkfarm/

-
Anush



*From:* Thai. Ngo Bao [tha...@vng.com.vn]
*Sent:* 14 November 2011 16:07:05
*To:* Anush Shetty; gluster-users@gluster.org
*Subject:* RE: NFS expose subfolders only

Hi,

 


I have tried the trick several times but had no success so far.

 


Below is the info of my testing environment

 

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-dir /00


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-volumes off


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume info

 


Volume Name: bkfarm

Type: Distributed-Replicate

Status: Started

Number of Bricks: 4 x 2 = 8

Transport-type: tcp

Bricks:

Brick1: bkf1:/sfarm

Brick2: bkf2:/sfarm

Brick3: bkf3:/sfarm

Brick4: bkf4:/sfarm

Brick5: bkf5:/sfarm

Brick6: bkf6:/sfarm

Brick7: bkf7:/sfarm

Brick8: bkf8:/sfarm

Options Reconfigured:

nfs.disable: Off

nfs.export-dir: /00



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] deleted files make bricks full ?

2011-08-23 Thread Shehjar Tikoo

Is this a 4 brick distributed configuration or is there replication involved?

Tomoaki Sato wrote:

Shehjar,

Where can I see updates on this issue ?
Bugzilla ?

Thanks,
tomo
 
(2011/08/17 15:06), Shehjar Tikoo wrote:

Thanks for providing the exact steps. This is a bug. We're on it.

-Shehjar

Tomoaki Sato wrote:

a simple way to reproduce the issue:
1) NFS mount and create 'foo' and umount.
2) NFS mount and delete 'foo' and umount.
3) repeat 1) and 2) till ENOSPC.

command logs are following:
[root@vhead-010 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.1.5-1
glusterfs-core-3.1.5-1
[root@vhead-010 ~]# cat /etc/issue
CentOS release 5.6 (Final)
Kernel \r on an \m

[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
103212320 192256 103020064 1% /mnt/brick
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# ls /mnt
[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 17.8419 seconds, 60.2 MB/s
[root@vhead-010 ~]# ls -l /mnt/foo
-rw-r--r-- 1 root root 1073741824 Aug 2 08:14 /mnt/foo
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
103212320 1241864 101970456 2% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
103212320 192256 103020064 1% /mnt/brick
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# rm -f /mnt/foo
[root@vhead-010 ~]# ls /mnt
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
103212320 1241864 101970456 2% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
103212320 192256 103020064 1% /mnt/brick
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
103212320 192256 103020064 1% /mnt/brick
[root@vhead-010 ~]# ssh small-1-4-private
[root@localhost ~]# du /mnt/brick
16 /mnt/brick/lost+found
24 /mnt/brick
[root@localhost ~]# ps ax | grep glusterfsd | grep -v grep
7246 ? Ssl 0:03 /opt/glusterfs/3.1.5/sbin/glusterfsd --xlator-option
small-server.listen-port=24009 -s localhost --volfile-id 
small.small-1-4-private
.mnt-brick -p 
/etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid --bri
ck-name /mnt/brick --brick-port 24009 -l 
/var/log/glusterfs/bricks/mnt-brick.log


[root@localhost ~]# ls -l /proc/7246/fd
total 0
lrwx-- 1 root root 64 Aug 2 08:18 0 -> /dev/null
lrwx-- 1 root root 64 Aug 2 08:18 1 -> /dev/null
lrwx-- 1 root root 64 Aug 2 08:18 10 -> socket:[153304]
lrwx-- 1 root root 64 Aug 2 08:18 11 -> socket:[153306]
lrwx-- 1 root root 64 Aug 2 08:18 12 -> socket:[153388]
lrwx-- 1 root root 64 Aug 2 08:18 13 -> /mnt/brick/foo (deleted) 
<

lrwx-- 1 root root 64 Aug 2 08:18 2 -> /dev/null
lr-x-- 1 root root 64 Aug 2 08:18 3 -> eventpoll:[153252]
l-wx-- 1 root root 64 Aug 2 08:18 4 -> 
/var/log/glusterfs/bricks/mnt-brick.

log
lrwx-- 1 root root 64 Aug 2 08:18 5 -> 
/etc/glusterd/vols/small/run/small-1

-4-private-mnt-brick.pid
lrwx-- 1 root root 64 Aug 2 08:18 6 -> socket:[153257]
lrwx-- 1 root root 64 Aug 2 08:18 7 -> socket:[153301]
lrwx-- 1 root root 64 Aug 2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
lrwx-- 1 root root 64 Aug 2 08:18 9 -> socket:[153297]
[root@localhost ~]# exit
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 21.4717 seconds, 50.0 MB/s
[root@vhead-010 ~]# ls -l /mnt/foo
-rw-r--r-- 1 root root 1073741824 Aug 2 08:19 /mnt/foo
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
103212320 2291472 10092

Re: [Gluster-users] deleted files make bricks full ?

2011-08-16 Thread Shehjar Tikoo

Thanks for providing the exact steps. This is a bug. We're on it.
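
(For reference: the space appears to be held by the brick process, which still has
an open file descriptor on the deleted file - see the /proc/<pid>/fd listing further
below. One way to confirm this on a brick, assuming lsof is installed, is:

lsof +L1 | grep /mnt/brick

which lists files that are open but already unlinked.)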

-Shehjar

Tomoaki Sato wrote:

a simple way to reproduce the issue:
1) NFS mount and create 'foo' and umount.
2) NFS mount and delete 'foo' and umount.
3) repeat 1) and 2) till ENOSPC.

command logs are following:
[root@vhead-010 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.1.5-1
glusterfs-core-3.1.5-1
[root@vhead-010 ~]# cat /etc/issue
CentOS release 5.6 (Final)
Kernel \r on an \m

[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
 103212320192256 103020064   1% /mnt/brick
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# ls /mnt
[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 17.8419 seconds, 60.2 MB/s
[root@vhead-010 ~]# ls -l /mnt/foo
-rw-r--r-- 1 root root 1073741824 Aug  2 08:14 /mnt/foo
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
 103212320   1241864 101970456   2% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
 103212320192256 103020064   1% /mnt/brick
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# rm -f /mnt/foo
[root@vhead-010 ~]# ls /mnt
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cde:2cdf:8
 103212320   1241864 101970456   2% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2ceb:2cec:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2cf8:2cf9:8
 103212320192256 103020064   1% /mnt/brick
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/2d05:2d06:8
 103212320192256 103020064   1% /mnt/brick
[root@vhead-010 ~]# ssh small-1-4-private
[root@localhost ~]# du /mnt/brick
16  /mnt/brick/lost+found
24  /mnt/brick
[root@localhost ~]# ps ax | grep glusterfsd | grep -v grep
 7246 ?Ssl0:03 /opt/glusterfs/3.1.5/sbin/glusterfsd 
--xlator-option
small-server.listen-port=24009 -s localhost --volfile-id 
small.small-1-4-private
.mnt-brick -p 
/etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid --bri
ck-name /mnt/brick --brick-port 24009 -l 
/var/log/glusterfs/bricks/mnt-brick.log


[root@localhost ~]# ls -l /proc/7246/fd
total 0
lrwx-- 1 root root 64 Aug  2 08:18 0 -> /dev/null
lrwx-- 1 root root 64 Aug  2 08:18 1 -> /dev/null
lrwx-- 1 root root 64 Aug  2 08:18 10 -> socket:[153304]
lrwx-- 1 root root 64 Aug  2 08:18 11 -> socket:[153306]
lrwx-- 1 root root 64 Aug  2 08:18 12 -> socket:[153388]
lrwx-- 1 root root 64 Aug  2 08:18 13 -> /mnt/brick/foo (deleted) <
lrwx-- 1 root root 64 Aug  2 08:18 2 -> /dev/null
lr-x-- 1 root root 64 Aug  2 08:18 3 -> eventpoll:[153252]
l-wx-- 1 root root 64 Aug  2 08:18 4 -> 
/var/log/glusterfs/bricks/mnt-brick.

log
lrwx-- 1 root root 64 Aug  2 08:18 5 -> 
/etc/glusterd/vols/small/run/small-1

-4-private-mnt-brick.pid
lrwx-- 1 root root 64 Aug  2 08:18 6 -> socket:[153257]
lrwx-- 1 root root 64 Aug  2 08:18 7 -> socket:[153301]
lrwx-- 1 root root 64 Aug  2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
lrwx-- 1 root root 64 Aug  2 08:18 9 -> socket:[153297]
[root@localhost ~]# exit
[root@vhead-010 ~]# mount small:/small /mnt
[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 21.4717 seconds, 50.0 MB/s
[root@vhead-010 ~]# ls -l /mnt/foo
-rw-r--r-- 1 root root 1073741824 Aug  2 08:1

Re: [Gluster-users] "Failed to map FH to vol"

2011-06-14 Thread Shehjar Tikoo
They have no meaning unless you get a "Stale NFS file handle" error on an
*NFS* mount. It generally means that the NFS client is trying to access a
volume/share that does not exist anymore.


I've been thinking of reducing the log level of this message for some time.
Do you think it is enough of a problem to change the log level in the source?


-Shehjar

Daniel Manser wrote:

Dear list,

What do these messages mean? My client can't write and these messages 
are filling up the nfs.log:


  [2011-06-14 13:38:42.826673] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:38:51.826871] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:00.827037] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:09.827248] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:18.827401] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:27.827578] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:36.827783] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:45.829407] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:39:54.828145] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol
  [2011-06-14 13:40:03.828344] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: 
Failed to map FH to vol


Thanks,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS problem

2011-06-10 Thread Shehjar Tikoo
cal/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9]
(-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) 
[0x2ab58145ef8e]
(-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) 
[0x2ab58145eefe]))) 0-poolsave-client-1:
forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 
2011-06-09 17:01:35.784870
[2011-06-09 17:01:35.817918] I 
[client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote

operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 
0-poolsave-client-1: disconnected
[2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 
0-poolsave-replicate-0: All subvolumes

are down. Going offline until atleast one of them comes back up.
[2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 
0-poolsave-client-1: connection

to 10.68.217.86:24011 failed (Connection refused)
[2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 
0-poolsave-replicate-0: no subvolumes up
[2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /uvs00/log:

no child is up
[2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /uvs00/log:

no child is up
[2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /uvs00: no

child is up
[2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 
0-poolsave-client-0: connection

to 10.68.217.85:24014 failed (Connection refused)
[2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 
0-poolsave-replicate-0: /: no child is up




 > Message: 7
 > Date: Thu, 9 Jun 2011 12:56:39 +0530
 > From: Shehjar Tikoo 
 > Subject: Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem
 > To: Jürgen Winkler 
 > Cc: gluster-users@gluster.org
 > Message-ID: <4df075af.3040...@gluster.com>
 > Content-Type: text/plain; charset="us-ascii"; format=flowed
 >
 > This can happen if all your servers were unreachable for a few 
seconds. The
 > situation must have rectified during the restart. We could confirm 
if you

 > change the log level on nfs to DEBUG and send us the log.
 >
 > Thanks
 > -Shehjar
 >
 > Ju"rgen Winkler wrote:
 > > Hi,
 > >
 > > i noticed a strange behavior with NFS and Glusterfs 3.2.0 , 3 of our
 > > Servers are losing the Mount but when you restart the Volume on the
 > > Server it works again without a remount.
 > >
 > > On the server I noticed these entries in the Glusterfs/NFS log-file when
 > > the mount on the Client becomes unavailable:
 > >
 > > [2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat]
 > > 0-ksc-replicate-0: /: no child is up
 > > [2011-06-08 14:37:04.334089] I [afr-inod

Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem

2011-06-09 Thread Shehjar Tikoo
This can happen if all your servers were unreachable for a few seconds. The
situation must have rectified itself during the restart. We could confirm this
if you change the NFS log level to DEBUG and send us the log.


Thanks
-Shehjar

Ju"rgen Winkler wrote:

Hi,

I noticed a strange behavior with NFS and GlusterFS 3.2.0: 3 of our
servers are losing the mount, but when you restart the volume on the
server it works again without a remount.


On the server I noticed these entries in the GlusterFS NFS log file when
the mount on the client becomes unavailable:


[2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.334089] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.344662] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.352666] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.354195] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.360446] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.369331] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.471556] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.480013] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:05.639700] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:05.652535] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.578469] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.588949] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.590395] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.591414] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.591932] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.592596] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.639317] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.652919] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.332435] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.340622] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.349360] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.349550] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.360445] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.369497] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.369752] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.382097] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up



Thx for the help

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How do I diagnose what's going wrong with a Gluster NFS mount?

2011-05-31 Thread Shehjar Tikoo

If you can file a bug, we'll take it from there. Thanks.

Whit Blauvelt wrote:

Hi,

Has anyone even seen this before - an NFS mount through Gluster that gets
the filesystem size wrong and is otherwise garbled and dangerous?

Is there a way within Gluster to fix it, or is the lesson that Gluster's NFS
sometimes can't be relied on? What have the experiences been running an
external NFS daemon with Gluster? Is that fairly straightforward? Might like
to get the advantages of NFS4 anyhow.

Thanks,
Whit


- Forwarded message from Whit Blauvelt  -

Date: Sat, 28 May 2011 22:33:55 -0400
From: Whit Blauvelt 
To: gluster-users@gluster.org
Subject: nfs mount in error, wrong filesystem size shown
User-Agent: Mutt/1.5.21 (2010-09-15)

I've got a couple of servers with several mirrored gluster volumes. Two work
fine from all perspectives. One, the most recently set up, mounts remotely
as glusterfs, but fails badly as nfs. The mount appears to work when
requested, but the filesystem size shown in totally wrong and it is not in
fact accessible. This is with 3.1.4:

So we have for instance on one external system:

192.168.1.242:/std   309637120 138276672 155631808  48% /mnt/std
192.168.1.242:/store
  19380692   2860644  15543300  16% /mnt/store

where the first nfs mount is correct and working, but the second is way off.
That was the same result as when /store was nfs mounted to another system
too. But on that same other system, /store mounts correctly as glusterfs:

vm2:/store536704000  14459648 494981376   3% /mnt/store

with the real size shown, and the filesystem fully accessible.

The erroneous mount is also apparently dangerous. I tried writing a file to
it to see what would happen, and it garbaged the underlying filesystems. So
I did a full reformatting and recreation of the gluster volume before
retrying at that point - and still got the bad nfs mount for it.

The bad nfs mount happens no matter which of the two servers in the gluster
cluster the mount uses, too.

Any ideas what I'm hitting here? For the present purpose, we need to be able
to mount nfs, as we need some Macs to mount it.

Thanks,
Whit

- End forwarded message -

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Editing text files on NFS mounted volumes

2011-04-04 Thread Shehjar Tikoo
Try catting a file from the shell. I'd like to see the error message
printed by a shell tool to see what's going on. Thanks.



Dan Bretherton wrote:

Hello list-
I have a strange problem opening text files on NFS mounted volumes with 
certain text editors.  With nedit and gedit I get the error "Could not 
open file" or similar, and kwrite just show an empty file.  Other 
editors such as emacs, vi, mousepad (XFCE4 desktop) and kedit (simple 
KDE editor) work fine.  The problem is that nedit, gedit and kwrite are 
the ones most people want to use.  None of the editors have any problems 
opening files on native GlusterFS mounts, but I only use the GlusterFS 
client on our compute servers and I don't want to have to install it on 
all the office PCs.  Does anybody know why this problem occurs with NFS 
or how to stop it?


-Dan.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Disabling NFS

2011-03-30 Thread Shehjar Tikoo

Mike Hanby wrote:

Strange, I do have the fuse and gluster-fuse / gluster-core packages
installed on the client.

I can mount the volume using the gluster native client:

nas-srv-01:/users   /users   glusterfs defaults,_netdev   0
0

Maybe I just need to figure out how to configure builtin Gluster NFS to
export like I want.


See http://gluster.com/community/documentation/index.php/Gluster_3.1_NFS_Guide



I'm trying to ensure that from the Gluster servers, I have fine control
over which ip addresses can mount specific parts of the volume, similar
to what can be done via /etc/exports


See rpc-auth.addr option in the NFS options section in the 3.1.3 release notes:

http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/Gluster_FS_Release_Notes_3.1.3.pdf
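
For example (the volume name is taken from your fstab line; the address pattern is only illustrative), access to a volume's NFS export can be restricted with:

gluster volume set users nfs.rpc-auth-allow 192.168.1.*

Note that this applies per volume, so it is coarser-grained than per-directory entries in /etc/exports.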

-Shehjar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Re: gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo


- Original Message -
> From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> To: "Shehjar Tikoo" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, March 29, 2011 4:28:33 PM
> Subject: [Gluster-users] Re: gluster 3.1.3 mount using nfs
> yes, you were right.
> 
> but after i shutdown the nfs server on 150.2.226.21
> 

You have to restart the Gluster NFS server after shutting down the kernel's NFS
server.
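
One way to do that (assuming the stock init script on the server) is to restart glusterd, which respawns the Gluster NFS server process:

/etc/init.d/glusterd restart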

-Shehjar




> i still has error message like below
> 
> test14:/>mount -F nfs 150.2.226.21:/vms /mnt
> nfs mount: get_fh: 150.2.226.21:: RPC: Program not registered
> nfs mount: get_fh: 150.2.226.21:: RPC: Program not registered
> nfs mount: retry: retrying(1) for: /mnt after 5 seconds
> nfs mount: retry: giving up on: /mnt
> 
> can you tell me what's different between the Linux and Unix NFS protocols?
> Because there's no problem with Linux.
> 
> 
> From: Shehjar Tikoo [shehj...@gluster.com]
> Sent: Tuesday, March 29, 2011 6:31 PM
> To: 공용준(yongjoon kong)/Cloud Computing Business Team
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> 
> It looks like you have another nfs server running on 150.2.226.21,
> because there is no mountd as part of the Gluster NFS server. Please
> shut down this mountd process and then try mounting again. If that
> doesn't work, try mounting with the options you had used earlier.
> 
> - Original Message -
> > From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> > To: "Shehjar Tikoo" 
> > Cc: gluster-users@gluster.org
> > Sent: Tuesday, March 29, 2011 2:46:31 PM
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > The message is changed like below
> > test14:/>mount -F nfs 150.2.226.21:/vms /mnt
> > Permission denied
> >
> > And from the 150.2.226.21 says
> >
> > Mar 29 18:17:23 skt-cldpap001 mountd[22080]: refused mount request
> > from 150.2.223.249 for /vms (/): not exported
> >
> > -Original Message-
> > From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> > Sent: Tuesday, March 29, 2011 5:45 PM
> > To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> >
> > OK. See if the following steps help on the HP-UX client:
> >
> > 1. Add the line NFS_TCP=1 in file /etc/rc.config.d/nfsconf
> >
> > 2. /sbin/init.d/nfs.client stop;
> >
> > 3. /sbin/init.d/nfs.client start
> >
> > 4. Now run the mount command, first without any options and if that
> > does not work, try with the options including port=38467.
> >
> >
> >
> >
> > - Original Message -
> > > From: "공용준(yongjoon kong)/Cloud Computing 사업담당"
> > > 
> > > To: "Shehjar Tikoo" 
> > > Cc: gluster-users@gluster.org
> > > Sent: Tuesday, March 29, 2011 1:39:23 PM
> > > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > > I got this result
> > >
> > >
> > > showmount -e 150.2.226.26
> > > export list for 150.2.226.26:
> > > /vbs *
> > > /vms *
> > >
> > > And /vms , /vbs cannot be mounted.
> > >
> > > -Original Message-
> > > From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> > > Sent: Tuesday, March 29, 2011 4:08 PM
> > > To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> > > Cc: gluster-users@gluster.org
> > > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > >
> > > On the HP-UX machine, whats the output of:
> > >
> > > showmount -e 150.2.226.26
> > >
> > >
> > >
> > > - Original Message -
> > > > From: "공용준(yongjoon kong)/Cloud Computing 사업담당"
> > > > 
> > > > To: "gluster-users@gluster.org" 
> > > > Sent: Tuesday, March 29, 2011 5:31:28 AM
> > > > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > > > It doesn't work.
> > > >
> > > > For my guess, it's probably connected with multiple networks.
> > > >
> > > > By gluster bricks is on 192.168.x.x
> > > >
> > > > one of the brick server has public network like 150.2.x.x
> > > >
> > > > And the unix server's ip is 150.2.x.236 and try to mount the
> > > > brick
> > > > via
> > > > nfs through that network
> > > > Line
> > > >
> 

Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo
It looks like you have another nfs server running on 150.2.226.21, because there
is no mountd as part of the Gluster NFS server. Please shut down this mountd
process and then try mounting again. If that doesn't work, try mounting with
the options you had used earlier.

- Original Message -
> From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> To: "Shehjar Tikoo" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, March 29, 2011 2:46:31 PM
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> The message is changed like below
> test14:/>mount -F nfs 150.2.226.21:/vms /mnt
> Permission denied
> 
> And from the 150.2.226.21 says
> 
> Mar 29 18:17:23 skt-cldpap001 mountd[22080]: refused mount request
> from 150.2.223.249 for /vms (/): not exported
> 
> -Original Message-
> From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> Sent: Tuesday, March 29, 2011 5:45 PM
> To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> 
> OK. See if the following steps help on the HP-UX client:
> 
> 1. Add the line NFS_TCP=1 in file /etc/rc.config.d/nfsconf
> 
> 2. /sbin/init.d/nfs.client stop;
> 
> 3. /sbin/init.d/nfs.client start
> 
> 4. Now run the mount command, first without any options and if that
> does not work, try with the options including port=38467.
> 
> 
> 
> 
> - Original Message -
> > From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> > To: "Shehjar Tikoo" 
> > Cc: gluster-users@gluster.org
> > Sent: Tuesday, March 29, 2011 1:39:23 PM
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > I got this result
> >
> >
> > showmount -e 150.2.226.26
> > export list for 150.2.226.26:
> > /vbs *
> > /vms *
> >
> > And /vms , /vbs cannot be mounted.
> >
> > -Original Message-
> > From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> > Sent: Tuesday, March 29, 2011 4:08 PM
> > To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> >
> > On the HP-UX machine, whats the output of:
> >
> > showmount -e 150.2.226.26
> >
> >
> >
> > - Original Message -
> > > From: "공용준(yongjoon kong)/Cloud Computing 사업담당"
> > > 
> > > To: "gluster-users@gluster.org" 
> > > Sent: Tuesday, March 29, 2011 5:31:28 AM
> > > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > > It doesn't work.
> > >
> > > For my guess, it's probably connected with multiple networks.
> > >
> > > By gluster bricks is on 192.168.x.x
> > >
> > > one of the brick server has public network like 150.2.x.x
> > >
> > > And the unix server's ip is 150.2.x.236 and try to mount the brick
> > > via
> > > nfs through that network
> > > Line
> > >
> > > Is it possible configurations?
> > >
> > > -Original Message-
> > > From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> > > Sent: Tuesday, March 29, 2011 1:42 AM
> > > To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> > > Cc: gluster-users@gluster.org
> > > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > >
> > > Try port=38467. Let us know if it works. We'll put it up on the
> > > FAQ.
> > > Thanks.
> > >
> > > - Original Message -
> > > > From: "공용준(yongjoon kong)/Cloud Computing 사업담당"
> > > > 
> > > > To: "gluster-users@gluster.org" 
> > > > Sent: Monday, March 28, 2011 5:54:06 PM
> > > > Subject: [Gluster-users] gluster 3.1.3 mount using nfs
> > > > Hi all,
> > > >
> > > > I setup the gluster filesystem and I want to mount the gluster
> > > > volume
> > > > using nfs in unix system.
> > > >
> > > > My machine is hp-ux (11.23)
> > > > I put command like below but it has error
> > > >
> > > > test14:/>mount -F nfs -o proto=tcp,port=38465,vers=3,llock
> > > > 150.2.226.26:/temp /mnt
> > > > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > > > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > > > nfs mount: retry: retrying(1) for: /mnt after 5 seconds
> > > > nfs mount: retry: giving up on: /mnt
> > > >
> > &g

Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo
OK. See if the following steps help on the HP-UX client:

1. Add the line NFS_TCP=1 in file /etc/rc.config.d/nfsconf

2. /sbin/init.d/nfs.client stop;

3. /sbin/init.d/nfs.client start

4. Now run the mount command, first without any options, and if that does not
work, try with the options including port=38467.
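
For the second attempt in step 4, reusing the command from your earlier mail, that would look something like:

mount -F nfs -o proto=tcp,port=38467,vers=3,llock 150.2.226.26:/temp /mnt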




- Original Message -
> From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> To: "Shehjar Tikoo" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, March 29, 2011 1:39:23 PM
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> I got this result
> 
> 
> showmount -e 150.2.226.26
> export list for 150.2.226.26:
> /vbs *
> /vms *
> 
> And /vms , /vbs cannot be mounted.
> 
> -Original Message-
> From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> Sent: Tuesday, March 29, 2011 4:08 PM
> To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> 
> On the HP-UX machine, whats the output of:
> 
> showmount -e 150.2.226.26
> 
> 
> 
> - Original Message -
> > From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> > To: "gluster-users@gluster.org" 
> > Sent: Tuesday, March 29, 2011 5:31:28 AM
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> > It doesn't work.
> >
> > For my guess, it's probably connected with multiple networks.
> >
> > By gluster bricks is on 192.168.x.x
> >
> > one of the brick server has public network like 150.2.x.x
> >
> > And the unix server's ip is 150.2.x.236 and try to mount the brick
> > via
> > nfs through that network
> > Line
> >
> > Is it possible configurations?
> >
> > -Original Message-
> > From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> > Sent: Tuesday, March 29, 2011 1:42 AM
> > To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> >
> > Try port=38467. Let us know if it works. We'll put it up on the FAQ.
> > Thanks.
> >
> > - Original Message -
> > > From: "공용준(yongjoon kong)/Cloud Computing 사업담당"
> > > 
> > > To: "gluster-users@gluster.org" 
> > > Sent: Monday, March 28, 2011 5:54:06 PM
> > > Subject: [Gluster-users] gluster 3.1.3 mount using nfs
> > > Hi all,
> > >
> > > I setup the gluster filesystem and I want to mount the gluster
> > > volume
> > > using nfs in unix system.
> > >
> > > My machine is hp-ux (11.23)
> > > I put command like below but it has error
> > >
> > > test14:/>mount -F nfs -o proto=tcp,port=38465,vers=3,llock
> > > 150.2.226.26:/temp /mnt
> > > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > > nfs mount: retry: retrying(1) for: /mnt after 5 seconds
> > > nfs mount: retry: giving up on: /mnt
> > >
> > > and it was ok when I tried it in linux machine.
> > >
> > > I attached the volume setting
> > >
> > > Plz help me out.
> > >
> > >
> > > Volume Name: isi
> > > Type: Distributed-Replicate
> > > Status: Started
> > > Number of Bricks: 2 x 2 = 4
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: node003:/data02
> > > Brick2: node004:/data02
> > > Brick3: node005:/data02
> > > Brick4: node006:/data02
> > > Options Reconfigured:
> > > nfs.rpc-auth-allow: 150.2.223.249
> > > nfs.rpc-auth-unix: on
> > >
> > >
> > > Cloud Computing Business Team
> > > Andrew Kong Assistant Manager | andrew.k...@sk.com|
> > > T:+82-2-6400-4328
> > > | M:+82-10-8776-5025
> > > SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si,
> > > Gyeonggi-do,
> > > 463-844, Korea
> > > SK C&C<http://www.skcc.co.kr/> :
> > > About<http://www.skcc.co.kr/user/common/userContentViewer.vw?menuID=KRCA0300>
> > > >>
> > >
> > >
> > >
> > >
> > >
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo
On the HP-UX machine, whats the output of:

showmount -e 150.2.226.26



- Original Message -
> From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> To: "gluster-users@gluster.org" 
> Sent: Tuesday, March 29, 2011 5:31:28 AM
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> It doesn't work.
> 
> For my guess, it's probably connected with multiple networks.
> 
> By gluster bricks is on 192.168.x.x
> 
> one of the brick server has public network like 150.2.x.x
> 
> And the unix server's ip is 150.2.x.236 and try to mount the brick via
> nfs through that network
> Line
> 
> Is it possible configurations?
> 
> -Original Message-
> From: Shehjar Tikoo [mailto:shehj...@gluster.com]
> Sent: Tuesday, March 29, 2011 1:42 AM
> To: 공용준(yongjoon kong)/Cloud Computing 사업담당
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
> 
> Try port=38467. Let us know if it works. We'll put it up on the FAQ.
> Thanks.
> 
> - Original Message -
> > From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> > To: "gluster-users@gluster.org" 
> > Sent: Monday, March 28, 2011 5:54:06 PM
> > Subject: [Gluster-users] gluster 3.1.3 mount using nfs
> > Hi all,
> >
> > I setup the gluster filesystem and I want to mount the gluster
> > volume
> > using nfs in unix system.
> >
> > My machine is hp-ux (11.23)
> > I put command like below but it has error
> >
> > test14:/>mount -F nfs -o proto=tcp,port=38465,vers=3,llock
> > 150.2.226.26:/temp /mnt
> > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> > nfs mount: retry: retrying(1) for: /mnt after 5 seconds
> > nfs mount: retry: giving up on: /mnt
> >
> > and it was ok when I tried it in linux machine.
> >
> > I attached the volume setting
> >
> > Plz help me out.
> >
> >
> > Volume Name: isi
> > Type: Distributed-Replicate
> > Status: Started
> > Number of Bricks: 2 x 2 = 4
> > Transport-type: tcp
> > Bricks:
> > Brick1: node003:/data02
> > Brick2: node004:/data02
> > Brick3: node005:/data02
> > Brick4: node006:/data02
> > Options Reconfigured:
> > nfs.rpc-auth-allow: 150.2.223.249
> > nfs.rpc-auth-unix: on
> >
> >
> > Cloud Computing Business Team
> > Andrew Kong Assistant Manager | andrew.k...@sk.com|
> > T:+82-2-6400-4328
> > | M:+82-10-8776-5025
> > SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si,
> > Gyeonggi-do,
> > 463-844, Korea
> > SK C&C<http://www.skcc.co.kr/> :
> > About<http://www.skcc.co.kr/user/common/userContentViewer.vw?menuID=KRCA0300>
> > >>
> >
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-28 Thread Shehjar Tikoo
Try port=38467. Let us know if it works. We'll put it up on the FAQ. Thanks.

- Original Message -
> From: "공용준(yongjoon kong)/Cloud Computing 사업담당" 
> To: "gluster-users@gluster.org" 
> Sent: Monday, March 28, 2011 5:54:06 PM
> Subject: [Gluster-users] gluster 3.1.3 mount using nfs
> Hi all,
> 
> I setup the gluster filesystem and I want to mount the gluster volume
> using nfs in unix system.
> 
> My machine is hp-ux (11.23)
> I put command like below but it has error
> 
> test14:/>mount -F nfs -o proto=tcp,port=38465,vers=3,llock
> 150.2.226.26:/temp /mnt
> nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
> nfs mount: retry: retrying(1) for: /mnt after 5 seconds
> nfs mount: retry: giving up on: /mnt
> 
> and it was ok when I tried it in linux machine.
> 
> I attached the volume setting
> 
> Plz help me out.
> 
> 
> Volume Name: isi
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: node003:/data02
> Brick2: node004:/data02
> Brick3: node005:/data02
> Brick4: node006:/data02
> Options Reconfigured:
> nfs.rpc-auth-allow: 150.2.223.249
> nfs.rpc-auth-unix: on
> 
> 
> Cloud Computing Business Team
> Andrew Kong Assistant Manager | andrew.k...@sk.com| T:+82-2-6400-4328
> | M:+82-10-8776-5025
> SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si, Gyeonggi-do,
> 463-844, Korea
> SK C&C :
> About
> >>
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Why does glusterfs has nfs stuff on the server

2011-03-23 Thread Shehjar Tikoo

Mohit Anchlia wrote:

Can you please explain what you mean by "All Gluster volumes are
exported through nfs"? I thought gluster just uses fuse on the server
and then the client can decide to use nfs or not.

That is correct. The client can decide to use NFS, but NFS needs to be enabled
on the server side if the user wants to mount using NFS. This is enabled by
default, and that is why you see the NFS-related process started automatically.
If you don't want it, see my earlier reply.


-Shehjar




But what I am seeing is that after installing gluster on the "server"
and then do a ps on gluster process it shows "nfs-server.vol, nfs.log
etc." Why?

Thanks for your response

On Mon, Mar 21, 2011 at 10:48 PM, Shehjar Tikoo  wrote:

All Gluster volumes are exported through nfs by default. To disable nfs on
3.1.3 release, use the nfs.disable command line option. For more info on
this, please see the release notes.

Mohit Anchlia wrote:

When I installed gluster and do a "ps" on the process I see:

/usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p
/etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log"

My question is why did glusterfs use nfs-server.vol, nfs.pid and
nfs.log instead of using some generic name. This is confusing and
makes me think it's using nfs somehow on the server even though that
doesn't look like it.

We use direct attached storage, not NFS. This seems to come with
default installation of gluster. Is this just a mistake in how scripts
were named or is there more to it?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1.3 NFS - cant overwrite certain files..

2011-03-21 Thread Shehjar Tikoo
And how do you know overwriting is failing? Better still, please post the
sequence of commands that resulted in this failure. Thanks


Pranith Kumar. Karampuri wrote:

Could you post the output of "ls -l", from the backends, for the files for which
the write op fails? Knowing the strace output of the failing write operation would
also help.
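
Something like the following would capture the trace (the file path is only an example - use one of the files that actually fails):

strace -f -o /tmp/overwrite.trace sh -c 'echo test > /path/on/nfs/mount/failing-file'

and then attach /tmp/overwrite.trace.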

Pranith.
- Original Message -
From: "paul simpson" 
To: gluster-users@gluster.org
Sent: Tuesday, March 22, 2011 5:18:21 AM
Subject: [Gluster-users] 3.1.3 NFS - cant overwrite certain files..

hi,

i'm running 3.1.3.  i'm finding that certain machines can't overwrite certain
files - getting an "operation not permitted" error.  the files are owned by the same
user.  nothing appears in the gluster nfs log.  config:

g1:/var/log/glusterfs # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


when mounted with FUSE - files can be overwritten.  this is very disruptive
- as it's stalling/breaking complex grid jobs.  has anyone seen
this behaviour at all?   ...any ideas???

regards to all,

paul

ps - this also happened with 3.1.2.  i just upgraded hoping that it would be
fixed.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Why does glusterfs has nfs stuff on the server

2011-03-21 Thread Shehjar Tikoo
All Gluster volumes are exported through nfs by default. To disable nfs on 
3.1.3 release, use the nfs.disable command line option. For more info on 
this, please see the release notes.
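
For example (the volume name is a placeholder), the following turns off the built-in NFS server for a volume:

gluster volume set <volname> nfs.disable on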


Mohit Anchlia wrote:

When I installed gluster and do a "ps" on the process I see:

/usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p
/etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log"

My question is why did glusterfs use nfs-server.vol, nfs.pid and
nfs.log instead of using some generic name. This is confusing and
makes me think it's using nfs somehow on the server even though that
doesn't look like it.

We use direct attached storage, not NFS. This seems to come with
default installation of gluster. Is this just a mistake in how scripts
were named or is there more to it?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster for Oracle Data repository

2011-03-20 Thread Shehjar Tikoo

Are you using the Direct nfs client in Oracle?

공용준 (yongjoon kong)/Cloud Computing wrote:

Hi all,

I’m trying to use Gluster as Oracle Data home via Gluster NFS

I tried it in a small size.
-Made small Gluster using 2 node and using AFR
 -and mount from client using NFS
 -create tablespace and insert data into the Oracle
 -while doing that, I shutdown one of bricks. But it was OK anyway.( I mean 
there’s no error in inserting procedure)

So I’m going to go on a big scale…

Do you guys have any recommendation on that?




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Mac / NFS problems

2011-03-11 Thread Shehjar Tikoo

David Lloyd wrote:

Hello,

We're having issues with Macs writing to our gluster system.
Gluster vol info at end.

On a mac, if I make a file in the shell I get the following message:

smoke:hunter david$ echo hello > test
-bash: test: Operation not permitted



I can help if you can send the nfs.log file from the /etc/glusterd 
directory on the nfs server. Before your mount command, set the log-level 
to trace for nfs server and then run the echo command above. Unmount as 
soon as you see the error above and email me the nfs.log.


-Shehjar





And the file is made but is zero size.

smoke:hunter david$ ls -l test
-rw-r--r--  1 david  realise  0 Mar  3 08:44 test


glusterfs/nfslog logs thus:

[2011-03-03 08:44:10.379188] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---

[2011-03-03 08:44:10.379222] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename : /production/hunter/test

Then try to open the file:

smoke:hunter david$ cat test

and get the following messages in the log:

[2011-03-03 08:51:13.957319] I [afr-common.c:716:afr_lookup_done]
glustervol1-replicate-0: background  meta-data self-heal triggered. path:
/production/hunter/test
[2011-03-03 08:51:13.959466] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
glustervol1-replicate-0: background  meta-data self-heal completed on
/production/hunter/test

If I do the same test on a linux machine (nfs) it's fine.

We get the same issue on all the macs. They are 10.6.6.

Gluster volume is mounted:
/n/auto/gv1 -rw,hard,tcp,rsize=32768,wsize=32768,intr
gus:/glustervol1
Other nfs mounts on mac (from linux servers) are OK

We're using LDAP to authenticate on the macs, the gluster servers aren't
bound into the LDAP domain.

Any ideas?

Thanks
David


g3:/var/log/glusterfs # gluster volume info
Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.stat-prefetch: 1
performance.cache-size: 1gb
performance.write-behind-window-size: 1mb
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on









___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

David Lloyd wrote:

I'm working with Paul on this.

We did take advice on XFS beforehand, and were given the impression that it
would just be a performance issue rather than things not actually working.
We've got quite fast hardware, and are more comfortable with XFS than ext4
from our own experience, so we did our own tests and were happy with XFS
performance.

Likewise, we're aware of the very poor performance of gluster with small
files. We serve a lot of large files, and we've now moved most of the small
files off to a normal nfs server. Again, small files aren't known to break
gluster, are they?



No. Gluster will work fine but the usual caveats about small file 
performance over a network file system apply.


-Shehjar



David

On 21 February 2011 14:42, Fabricio Cannini  wrote:


On Friday, 18 February 2011, at 23:24:10, paul simpson wrote:

hello all,

i have been testing gluster as a central file server for a small

animation

studio/post production company.  my initial experiments were using the

fuse

glusterfs protocol - but that ran extremely slowly for home dirs and
general file sharing.  we have since switched to using NFS over

glusterfs.

 NFS has certainly seemed more responsive re. stat and dir traversal.
however, i'm now being plagued with three different types of errors:

1/ Stale NFS file handle
2/ input/output errors
3/ and a new one:
$ l -l /n/auto/gv1/production/conan/hda/published/OLD/
ls: cannot access /n/auto/gv1/production/conan/hda/published/OLD/shot:
Remote I/O error
total 0
d? ? ? ? ?? shot

...so it's a bit all over the place.  i've tried rebooting both servers and
clients.  these issues are very erratic - they come and go.

some information on my setup: glusterfs 3.1.2

g1:~ # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


that is 4 servers - serving ~30 clients - 95% linux, 5% mac.  all NFS.
 other points:
- i'm automounting using NFS via autofs (with ldap).  ie:
  gus:/glustervol1 on /n/auto/gv1 type nfs
(rw,vers=3,rsize=32768,wsize=32768,intr,sloppy,addr=10.0.0.13)
gus is pointing to rr dns machines (g1,g2,g3,g4).  that all seems to be
working.

- backend files system on g[1-4] is xfs.  ie,

g1:/var/log/glusterfs # xfs_info /mnt/glus1
meta-data=/dev/sdb1              isize=256    agcount=7, agsize=268435200 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1627196928, imaxpct=5
         =                       sunit=256    swidth=2560 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0


- sometimes root can stat/read the file in question while the user cannot!
 i can remount the same NFS share to another mount point - and i can then
see that with the same user.

- sample output of g1 nfs.log file:

[2011-02-18 15:27:07.201433] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/entries
[2011-02-18 15:27:07.201445] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 1414 bytes
[2011-02-18 15:27:07.201455] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 001024b+ : 1
[2011-02-18 15:27:07.205999] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.206032] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/props/tempfile.tmp
[2011-02-18 15:27:07.210799] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.210824] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/log
[2011-02-18 15:27:07.211904] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.211928] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :


/prod_data/xmas/lgl/pic/mr_all_PBR_HIGHNO_DF/035/1920x1080/mr_all_PBR_HIGHN

O_DF.6084.exr [2011-02-18 15:27:07.211940] I
[io-stats.c:343:io_stats_dump_fd]
glustervol1:   Lifetime : 8731secs, 610796usecs
[2011-02-18 15:27:07.211951] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 2321370 bytes
[2011-02-18 15:27:07.211962] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 000512b+ : 1
[2011-02-18 15:27:07.211

Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

paul simpson wrote:

hello all,

i have been testing gluster as a central file server for a small animation
studio/post production company.  my initial experiments were using the fuse
glusterfs protocol - but that ran extremely slowly for home dirs and general
file sharing.  we have since switched to using NFS over glusterfs.  NFS
has certainly seemed more responsive re. stat and dir traversal.  however,
i'm now being plagued with three different types of errors:

1/ Stale NFS file handle
2/ input/output errors
3/ and a new one:
$ l -l /n/auto/gv1/production/conan/hda/published/OLD/
ls: cannot access /n/auto/gv1/production/conan/hda/published/OLD/shot:
Remote I/O error
total 0
d? ? ? ? ?? shot

...so it's a bit all over the place.  i've tried rebooting both servers and
clients.  these issues are very erratic - they come and go.

some information on my setup: glusterfs 3.1.2

g1:~ # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


that is 4 servers - serving ~30 clients - 95% linux, 5% mac.  all NFS.


Mac OS as an NFS client remains untested against Gluster NFS. Do you see 
these errors on Mac or Linux clients?




 other points:
- i'm automounting using NFS via autofs (with ldap).  ie:
  gus:/glustervol1 on /n/auto/gv1 type nfs
(rw,vers=3,rsize=32768,wsize=32768,intr,sloppy,addr=10.0.0.13)
gus is pointing to rr dns machines (g1,g2,g3,g4).  that all seems to be
working.

- backend files system on g[1-4] is xfs.  ie,

g1:/var/log/glusterfs # xfs_info /mnt/glus1
meta-data=/dev/sdb1              isize=256    agcount=7, agsize=268435200 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1627196928, imaxpct=5
         =                       sunit=256    swidth=2560 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0


- sometimes root can stat/read the file in question while the user cannot!
 i can remount the same NFS share to another mount point - and i can then
see that with the same user.


I think that may be occurring because NFS+LDAP requires a slightly 
different authentication scheme as compared to an NFS-only setup. Please try 
the same test without LDAP in the middle.




- sample output of g1 nfs.log file:

[2011-02-18 15:27:07.201433] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/entries
[2011-02-18 15:27:07.201445] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 1414 bytes
[2011-02-18 15:27:07.201455] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 001024b+ : 1
[2011-02-18 15:27:07.205999] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.206032] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/props/tempfile.tmp
[2011-02-18 15:27:07.210799] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.210824] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/log
[2011-02-18 15:27:07.211904] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.211928] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/prod_data/xmas/lgl/pic/mr_all_PBR_HIGHNO_DF/035/1920x1080/mr_all_PBR_HIGHNO_DF.6084.exr
[2011-02-18 15:27:07.211940] I [io-stats.c:343:io_stats_dump_fd]
glustervol1:   Lifetime : 8731secs, 610796usecs
[2011-02-18 15:27:07.211951] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 2321370 bytes
[2011-02-18 15:27:07.211962] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 000512b+ : 1
[2011-02-18 15:27:07.211972] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 002048b+ : 1
[2011-02-18 15:27:07.211983] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 004096b+ : 4
[2011-02-18 15:27:07.212009] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 008192b+ : 4
[2011-02-18 15:27:07.212019] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 016384b+ : 20
[2011-02-18 15:27:07.212030] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 032768b+ : 54
[2011-02-18 15:27:07.228051] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stat

Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

Hi Paul,

Locking is part of the core GlusterFS protocol but the NFS server module 
does not have NLM support yet (NLM is the locking protocol associated with 
NFSv3). On Linux, the workaround is generally to mount with the -o nolock 
option, although I don't see why excluding this option results in stale file 
handle and other errors. Let me go through the complete thread; I'll reply 
elsewhere.


Today, if locking among multiple client machines is a must-have, you'll 
have to use FUSE.
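
On a Linux client the workaround looks like this (a sketch; server and volume names are taken from the setup in this thread, the mount point is a placeholder):

# mount the Gluster NFS export without NLM locking; locks are then local to the client:
mount -t nfs -o vers=3,tcp,nolock gus:/glustervol1 /mnt/gv1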


paul simpson wrote:

so, while your all about - my big question is can/does gluster (with
NFS/fuse client) properly lock files?

ie, a simple test is to checkout a svn tree to a gluster, modify, checkin,
list, alter, revert.  everytime i do this with 3.1.2 i get input/output
errors from my client machine within a few minutes.





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Write performance on XenServer guest

2011-01-27 Thread Shehjar Tikoo

Shehjar Tikoo wrote:

What gluster config are you using?



My bad, I didn't read the mail completely. The first thing you should try is 
to run a streaming IO write perf test using dd or iozone. Let's see how that 
performs over the replicated config. Thanks
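
A simple streaming-write test through the NFS mount could look like this (a sketch; the mount point and size are placeholders):

# write 4 GB sequentially and include the final flush in the timing:
dd if=/dev/zero of=/mnt/gluster-nfs/ddtest.img bs=1M count=4096 conv=fdatasync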









Stefano Baronio wrote:

Hello,
I have recently set up gluster-3.1.2 as NFS Virtual Disk Storage for
XenServer.
I have run a Windows VM on it and tested the disk performance:
Read: 100MB/s
Write: 10MB/s

While with standard NFS, on the same servers, we can achieve:
Read: 115MB/s
Write: 100MB/s

We have two servers with local scsi disks with gluster configured in
replicate mode.
Volume Options:

performance.cache-size 1GB

write-behind-window-size 512MB

performance.stat-prefech 1

Is there any specific option for this configuration? And is such poor write 
performance normal?


Thank you
Stefano Baronio





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Write performance on XenServer guest

2011-01-27 Thread Shehjar Tikoo

What gluster config are you using?

Stefano Baronio wrote:

Hello,
I have recently set up gluster-3.1.2 as NFS Virtual Disk Storage for
XenServer.
I have run a Windows VM on it and tested the disk performance:
Read: 100MB/s
Write: 10MB/s

While with standard NFS, on the same servers, we can achieve:
Read: 115MB/s
Write: 100MB/s

We have two servers with local scsi disks with gluster configured in
replicate mode.
Volume Options:

performance.cache-size 1GB

write-behind-window-size 512MB

performance.stat-prefech 1

Is there any specific option for this configuration? And is such poor write
performance normal?


Thank you
Stefano Baronio





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-10 Thread Shehjar Tikoo


Are any apps on the mount point erroring out with:

"Invalid argument"

or

"Stale NFS file handle"?

Burnash, James wrote:

Hello.

Has anyone seen error messages like this in /var/log/glusterfs/nfs.log:

tail /var/log/glusterfs/nfs.log
[2011-01-10 14:22:55.859066] I 
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] pfs-ro1-replicate-3: 
background  meta-data data self-heal completed on /
[2011-01-10 14:22:55.859084] I 
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] pfs-ro1-replicate-5: 
background  meta-data data self-heal completed on /
[2011-01-10 14:22:57.786088] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.355112] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.415732] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.455029] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:01.800751] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:02.127233] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:07.834044] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:09.478852] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:40:18.558072] E 
[afr-self-heal-metadata.c:524:afr_sh_metadata_fix] pfs-ro1-replicate-5: Unable 
to self-heal permissions/ownership of '/' (possible split-brain). Please fix 
the file on all backend volumes

Mount is done with this command:
mount -v -t nfs -o soft,rsize=16384,wsize=16384 jc1lpfsnfsro:/pfs-ro1 /pfs1

Command line being executed is:

rsync -av  --progress /pfs1/online_archive/2010 .

This is CentOS 5.5 x86_64, GlusterFS 3.1.1. Currently configured:

gluster volume info

Volume Name: pfs-ro1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: jc1letgfs17-pfs1:/export/read-only/g01
Brick2: jc1letgfs18-pfs1:/export/read-only/g01
Brick3: jc1letgfs17-pfs1:/export/read-only/g02
Brick4: jc1letgfs18-pfs1:/export/read-only/g02
Brick5: jc1letgfs17-pfs1:/export/read-only/g03
Brick6: jc1letgfs18-pfs1:/export/read-only/g03
Brick7: jc1letgfs17-pfs1:/export/read-only/g04
Brick8: jc1letgfs18-pfs1:/export/read-only/g04
Brick9: jc1letgfs17-pfs1:/export/read-only/g05
Brick10: jc1letgfs18-pfs1:/export/read-only/g05
Brick11: jc1letgfs17-pfs1:/export/read-only/g06
Brick12: jc1letgfs18-pfs1:/export/read-only/g06
Brick13: jc1letgfs17-pfs1:/export/read-only/g07
Brick14: jc1letgfs18-pfs1:/export/read-only/g07
Brick15: jc1letgfs17-pfs1:/export/read-only/g08
Brick16: jc1letgfs18-pfs1:/export/read-only/g08
Brick17: jc1letgfs17-pfs1:/export/read-only/g09
Brick18: jc1letgfs18-pfs1:/export/read-only/g09
Brick19: jc1letgfs17-pfs1:/export/read-only/g10
Brick20: jc1letgfs18-pfs1:/export/read-only/g10

Thanks.

James Burnash
Unix Engineering.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-12-09 Thread Shehjar Tikoo
"Vikas Gorur"  wrote:

>
>On Dec 8, 2010, at 2:23 PM, Michael Patterson wrote:
>
>> Hi Shehjar,
>>
>> I'm currently running gluster 3.1.1 on Centos 5.5 x64 and I am unable to
>> mount NFS subdirectories from xenserver. Was support for this feature added
>> in gluster 3.1.1? I've also tried using various options defined here, with
>> no luck:
>> http://gluster.org/faq/index.php?sid=2118&lang=en&action=artikel&cat=5&id=55&artlang=en
>>
>> Like Stefano, I'm using the 'option nfs.port 2049' in my
>> /etc/glusterd/nfs/nfs-server.vol configuration. Xenserver can create the SR,
>> but it cannot mount the subdirectory (see below).
>>
>> -
>> # glusterd -V
>> glusterfs 3.1.1 built on Nov 29 2010 10:07:44
>>
>> Mounting SR subdirectory from xenserver fails:
>> ---
>> # mount 10.0.0.1:/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b /mnt
>> mount: 10.0.0.1:/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b failed, reason
>> given by server: No such file or directory
>> ---
>
>Are you sure you don't have the Linux kernel NFS server running on
>10.0.0.1? Make sure you disable it before starting the Gluster volume.
>
>You can verify that the Gluster NFS server is running by running this
>on 10.0.0.1:
>
># showmount -e localhost
>
>The output should show:
>
>/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b  *
>
>--
>Vikas Gorur
>Engineer - Gluster, Inc.
>--

Yes. Support for XenServer subdirectory mounting is not present yet. It is 
still a work in progress.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.1.1 - mount subdir -

2010-11-30 Thread Shehjar Tikoo

Stefano Baronio wrote:

Seems like it is not able to mount an NFS subdirectory yet.
Can someone confirm that?


Not yet.



Thank you
Stefano


2010/11/30 Mike Hanby 


ha, just like magic, 3.1.1 is now available under LATEST on the download
server.

-Original Message-
From: gluster-users-boun...@gluster.org [mailto:
gluster-users-boun...@gluster.org] On Behalf Of Mike Hanby
Sent: Monday, November 29, 2010 1:35 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] GlusterFS 3.1.1 ETA?

Howdy,

Is there any ETA on the 3.1.1 patch that will have the secondary group
membership fix?

Thanks,

Mike

=
Mike Hanby
mha...@uab.edu
UAB School of Engineering
Information Systems Specialist II
IT HPCS / Research Computing


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exposing parts of a volume to specific clients?

2010-11-14 Thread Shehjar Tikoo

Mike Hanby wrote:

So, to accomplish this right now I would need to disable the builtin
NFS server, enable the OS NFS server, mount the Gluster file system
and export as normal via /etc/exports?

Just brainstorming here,


You can still export directories individually but the support is minimal 
as far as restricting the exports to different clients.


More info at:

http://www.gluster.org/faq/index.php?sid=940&lang=en&action=artikel&cat=5&id=55&artlang=en

-Shehjar
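
As a rough sketch of the intended usage once subdirectory mounts are available in the release in use (the paths come from the example below; per-client restriction remains limited, as noted above):

# on the user-facing server, mount only the users directory:
mount -t nfs -o vers=3 nas-01:/gfs/users /mnt/users
# on the yum repository server, mount only the repo directory:
mount -t nfs -o vers=3 nas-01:/gfs/repo /mnt/repo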



Mike

-Original Message- From: Shehjar Tikoo
[mailto:shehj...@gluster.com] Sent: Thursday, November 11, 2010 10:47
PM To: Mike Hanby Cc: gluster-users@gluster.org Subject: Re:
[Gluster-users] Exposing parts of a volume to specific clients?

Mike Hanby wrote:

Howdy,

We have 18TB at our disposal to share via GlusterFS 3.1. My initial
thought was to create a single volume, comprised of 18 x 1TB
bricks. The volume will be used for user storage as well as storage
for applications.

Is there any way to create different exports for the various
clients via NFS and Gluster client?

For example, the server used by our user base should be able to
mount the users directory on the gfs volume (nas-01:/gfs/users),
where as the yum repository server should only be able to mount
nas-01:/gfs/repo

Essentially, I only want to expose what's necessary for each client
machine to perform its role.

If this isn't possible directly within GlusterFS, I'm open to
suggestions to rethink my strategy.



Not possible yet. It's on the to-do list. Thanks



Thanks,

Mike

___ Gluster-users
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___ Gluster-users mailing
list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exposing parts of a volume to specific clients?

2010-11-11 Thread Shehjar Tikoo

Mike Hanby wrote:

Howdy,

We have 18TB at our disposal to share via GlusterFS 3.1. My initial thought was 
to create a single volume, comprised of 18 x 1TB bricks. The volume will be 
used for user storage as well as storage for applications.

Is there any way to create different exports for the various clients via NFS 
and Gluster client?

For example, the server used by our user base should be able to mount the users 
directory on the gfs volume (nas-01:/gfs/users), where as the yum repository 
server should only be able to mount nas-01:/gfs/repo

Essentially, I only want to expose what's necessary for each client machine to 
perform its role.

If this isn't possible directly within GlusterFS, I'm open to suggestions to 
rethink my strategy.



Not possible yet. It's on the to-do list. Thanks



Thanks,

Mike

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS Mounted GlusterFS, secondary groups not working

2010-11-11 Thread Shehjar Tikoo
Hi,

It might be related to a bug filed at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2045

If you please update it there or file a new one, I'll take a look. Thanks.

- Original Message -
> From: "Mike Hanby" 
> To: gluster-users@gluster.org
> Sent: Friday, November 12, 2010 12:00:23 AM
> Subject: [Gluster-users] NFS Mounted GlusterFS, secondary groups not working
> Howdy,
> 
> I have a GlusterFS 3.1 volume being mounted on a client using NFS.
> From the client I created a directory under the mount point and set
> the permissions to root:groupa 750
> 
> My user account is a member of groupa on the client, yet I am unable
> to list the contents of the directory:
> 
> $ ls -l /gfs/dir1
> ls: /gfs/dir1/: Permission denied
> 
> $ ls -ld /gfs/dir1
> rwxr-x--- 9 root groupa 73728 Nov 9 09:44 /gfs/dir1/
> 
> $ groups
> myuser groupa
> 
> I am able to list the directory as the user root. If I change the
> group ownership to my primary group, myuser, then I can successfully
> list the contents of the directory.
> 
> $ sudo chgrp myuser /gfs/dir1
> $ ls -ld /gfs/dir1
> rwxr-x--- 9 root myuser 73728 Nov 9 09:44 /gfs/dir1/
> 
> $ ls -l /gfs/dir1
> drwxr-xr-x 5 root root 73728 Mar 26 2010 testdir1
> drwxr-x--- 4 root root 73728 Apr 8 2010 testdir2
> drwxr-x--- 2 root root 73728 Aug 4 21:23 testdir3
> 
> The volume is being exported using the builtin GlusterFS NFS server.
> The servers and client are all CentOS 5.5 x86_64 boxes.
> 
> Thanks for any suggestions,
> 
> Mike
> 
> =
> Mike Hanby
> mha...@uab.edu
> UAB School of Engineering
> Information Systems Specialist II
> IT HPCS / Research Computing
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-11-11 Thread Shehjar Tikoo

Davide Ferri wrote:

I have the same issue.
Of course it works (both NFSv3 and NFSv4) with the standard NFS
daemon. If you share /media you can mount /media/subdir1 (if it exists)
without any change to /etc/exports.


Thanks. I understand that now. That's not hard to fix. We'll try 
squeezing it into the 3.1.1 release.


-Shehjar



Davide

On Thu, Nov 11, 2010 at 11:47 AM, Shehjar Tikoo  wrote:

Stefano Baronio wrote:

Thank you Shehjar,
   as we are planning to use glusterfs in a production environment, we
prefer to stay with the 3.1 stable version.
As for now the port forwarding seems to work properly, because XenServer,
after the testing step on port 2049, correctly connects to the nfs share
using the right ports.
I just have the problem that I cannot mount an NFS share subdirectory.
 XenServer, when creating the sr, makes a new directory (the sr uuid) just
under the share root and puts its VM files under that directory.
When mounting that subdir, Glusterfs returns the "No such file or
directory" error.

That's because by default Gluster NFS only exports volumes as NFS exports, not
the directories inside those volumes.

How does this work with knfs? Even that will return the same error because
the newly created directory will not exist in /etc/exports.

Thanks
-Shehjar



Thank you
Stefano Baronio

2010/11/11 Shehjar Tikoo <shehj...@gluster.com>

   Yes. That was a limitation on 3.1 release and is already fixed in
   mainline. This support allows you to change the nfs port number that
   Gluster NFS uses by default. It'll be available in 3.1.1 but if
   you'd like to test right away, please use 3.1.1qa5 by checking it
   out from the repository:

   $ git clone git://git.gluster.com/glusterfs.git
   $ cd glusterfs
   $ git checkout -b v3.1.1qa5 3.1.1qa5

   Then build and install.

   To change the nfs port, locate the volume section nfs/server in
   /etc/glusterd/nfs/nfs-server.vol and add the following line:

   option nfs.port 2049

   Note that this option is not yet available in the gluster CLI, so
   you'll have to manually edit this file and restart the gluster nfs
   daemon. Be careful while using that tool, because on a restart of a
   volume using gluster CLI, your edited volume file will get
   over-written with the default version.


   Thanks
   -Shehjar

   Stefano Baronio wrote:

   Hello all,
I'm new to the list and I'm working on glusterfs since a month
   right now.
   I'm posting a request about how to get XenServer working with
   Glusterfs.

   I have a standard setup of both XenServer and Glusterfs.
   I can mount the glusterfs nfs share from the Xen CLI, write in
   it and mount
   it as an ISO library as well.
   I just can't mount it for storage purpose.
   It seems that XenServer is testing the NFS share directly to
   port 2049,
   without checking with portmapper.

   I have tried to make glusterfs listen on port 2049 without any
   success, so I
   have setup a port forwarding on the gluster server.
   Lets say:
   xen01 - 192.168.14.33
   xenfs01 (gluster nfs) - 192.168.14.61

   The iptables settings are:
   iptables -A PREROUTING -d 192.168.14.61 -p tcp -m tcp --dport
   2049 -j DNAT
    --to-destination 192.168.14.61:38467
   iptables -A FORWARD -d 192.168.14.61 -p tcp -m tcp --dport 38467
   -j ACCEPT

   Now XenServer can correctly test the gluster nfs share. It
   creates the
   sr-uuid directory in it, but it can't mount it, with the
   following error:
   FAILED: (errno 32) stdout: '', stderr: 'mount:
   xenfs01:/xenfs/1ca32487-42fe-376e-194c-17f78afc006c failed,
   reason given by
   server: No such file or directory

   Any help appreciated.
   Thank you

   Stefano




 

   ___
   Gluster-users mailing list
   Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
   http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-11-11 Thread Shehjar Tikoo

Stefano Baronio wrote:

Thank you Shehjar,
as we are planning to use glusterfs in a production environment, we 
prefer to stay with the 3.1 stable version.
As for now the port forwarding seems to work properly, because 
XenServer, after the testing step on port 2049, correctly connects to 
the nfs share using the right ports.
I just have the problem that I cannot mount an NFS share subdirectory. 
 XenServer, when creating the sr, makes a new directory (the sr uuid) 
just under the share root and puts its VM files under that directory.
When mounting that subdir, Glusterfs returns the "No such file or 
directory" error.


That's because by default Gluster NFS only exports volumes as NFS exports, 
not the directories inside those volumes.

How does this work with knfs? Even that will return the same error 
because the newly created directory will not exist in /etc/exports.


Thanks
-Shehjar




Thank you
Stefano Baronio

2010/11/11 Shehjar Tikoo <shehj...@gluster.com>

Yes. That was a limitation on 3.1 release and is already fixed in
mainline. This support allows you to change the nfs port number that
Gluster NFS uses by default. It'll be available in 3.1.1 but if
you'd like to test right away, please use 3.1.1qa5 by checking it
out from the repository:

$ git clone git://git.gluster.com/glusterfs.git

$ cd glusterfs
$ git checkout -b v3.1.1qa5 3.1.1qa5

Then build and install.

To change the nfs port, locate the volume section nfs/server in
/etc/glusterd/nfs/nfs-server.vol and add the following line:

option nfs.port 2049

Note that this option is not yet available in the gluster CLI, so
you'll have to manually edit this file and restart the gluster nfs
daemon. Be careful while using that tool, because on a restart of a
volume using gluster CLI, your edited volume file will get
over-written with the default version.


Thanks
-Shehjar

Stefano Baronio wrote:

Hello all,
 I'm new to the list and I'm working on glusterfs since a month
right now.
I'm posting a request about how to get XenServer working with
Glusterfs.

I have a standard setup of both XenServer and Glusterfs.
I can mount the glusterfs nfs share from the Xen CLI, write in
it and mount
it as an ISO library as well.
I just can't mount it for storage purpose.
It seems that XenServer is testing the NFS share directly to
port 2049,
without checking with portmapper.

I have tried to make glusterfs listen on port 2049 without any
success, so I
have setup a port forwarding on the gluster server.
Lets say:
xen01 - 192.168.14.33
xenfs01 (gluster nfs) - 192.168.14.61

The iptables settings are:
iptables -A PREROUTING -d 192.168.14.61 -p tcp -m tcp --dport
2049 -j DNAT
--to-destination 192.168.14.61:38467 
iptables -A FORWARD -d 192.168.14.61 -p tcp -m tcp --dport 38467
-j ACCEPT

Now XenServer can correctly test the gluster nfs share. It
creates the
sr-uuid directory in it, but it can't mount it, with the
following error:
FAILED: (errno 32) stdout: '', stderr: 'mount:
xenfs01:/xenfs/1ca32487-42fe-376e-194c-17f78afc006c failed,
reason given by
server: No such file or directory

Any help appreciated.
Thank you

Stefano





___
Gluster-users mailing list
Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-11-10 Thread Shehjar Tikoo
Yes. That was a limitation on 3.1 release and is already fixed in 
mainline. This support allows you to change the nfs port number that 
Gluster NFS uses by default. It'll be available in 3.1.1 but if you'd 
like to test right away, please use 3.1.1qa5 by checking it out from the 
repository:


$ git clone git://git.gluster.com/glusterfs.git
$ cd glusterfs
$ git checkout -b v3.1.1qa5 3.1.1qa5

Then build and install.
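
A typical build of that checkout would then be (a sketch; prerequisites and flags depend on your distribution):

$ ./autogen.sh
$ ./configure
$ make
$ sudo make install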

To change the nfs port, locate the volume section nfs/server in 
/etc/glusterd/nfs/nfs-server.vol and add the following line:


option nfs.port 2049

Note that this option is not yet available in the gluster CLI, so you'll 
have to manually edit this file and restart the gluster nfs daemon. Be 
careful while using that tool, because on a restart of a volume using 
gluster CLI, your edited volume file will get over-written with the 
default version.
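
The edited section would then look roughly like this (a sketch; the subvolume name is a placeholder and the other options generated by glusterd are omitted):

volume nfs-server
  type nfs/server
  option nfs.port 2049
  subvolumes xenfs
end-volume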



Thanks
-Shehjar

Stefano Baronio wrote:

Hello all,
  I'm new to the list and I'm working on glusterfs since a month right now.
I'm posting a request about how to get XenServer working with Glusterfs.

I have a standard setup of both XenServer and Glusterfs.
I can mount the glusterfs nfs share from the Xen CLI, write in it and mount
it as an ISO library as well.
I just can't mount it for storage purpose.
It seems that XenServer is testing the NFS share directly to port 2049,
without checking with portmapper.

I have tried to make glusterfs listen on port 2049 without any success, so I
have setup a port forwarding on the gluster server.
Lets say:
xen01 - 192.168.14.33
xenfs01 (gluster nfs) - 192.168.14.61

The iptables settings are:
iptables -A PREROUTING -d 192.168.14.61 -p tcp -m tcp --dport 2049 -j DNAT
--to-destination 192.168.14.61:38467
iptables -A FORWARD -d 192.168.14.61 -p tcp -m tcp --dport 38467 -j ACCEPT

Now XenServer can correctly test the gluster nfs share. It creates the
sr-uuid directory in it, but it can't mount it, with the following error:
FAILED: (errno 32) stdout: '', stderr: 'mount:
xenfs01:/xenfs/1ca32487-42fe-376e-194c-17f78afc006c failed, reason given by
server: No such file or directory

Any help appreciated.
Thank you

Stefano





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ACL with GlusterFS 3.1?

2010-11-10 Thread Shehjar Tikoo

Hi

ACLs are not supported  as yet.

Thanks

Mike Hanby wrote:

Howdy,

Are access control lists (ACL, i.e. setfacl / getfacl) supported in
GlusterFS 3.1?

If yes, beyond mounting the bricks with "defaults,acl" what do I need
to do to enable ACL for both NFS and native Gluster clients?

Google isn't returning anything useful on this topic.

Thanks,

Mike

= Mike Hanby mha...@uab.edu UAB
School of Engineering Information Systems Specialist II IT HPCS /
Research Computing


___ Gluster-users mailing
list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] question on NFS mounting

2010-11-09 Thread Shehjar Tikoo


Hi Joe,

Was your permission denied problem solved? How? I'd like to add an entry 
into the NFS FAQ page about this. Thanks


Joe Landman wrote:

Hi Folks

We have a 3.1 cluster set up, and NFS mounting is operational.  We are 
trying to get our heads around the mounting of this cluster.  What we 
found works (for a 6 brick distributed cluster) is using the same 
server:/export  in all the mounts.


My questions are

1) can we use any of the bricks for server?  We tried using another 
brick in the volume, but it doesn't seem to work.


2) if we need to use a single server for the mount, is this a 
performance bottleneck?  That is, does all the traffic have to traverse 
one brick?


3) in previous releases (3.0.x and before) we could mount the file 
system on each server.  We are doing this now with the nfs mount, but my 
concern from point 1 still stands.


We are working on understanding the performance issues with this. 
Hopefully with some benchmarks soon.


Regards,

Joe



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] question on NFS mounting

2010-11-08 Thread Shehjar Tikoo

Mike Hanby wrote:

Each Gluster server that is going to also act as an NFS server (or Samba) has 
to mount the volume using the Gluster client:


There is no mounting taking place to export GlusterFS through NFS. That 
the NFS server is a GlusterFS client is correct, but it is also a 
translator. That means any translation from NFS ops to GlusterFS ops 
happens internally in the Gluster NFS server process, without having to 
go through a mount point.


-Shehjar



For example, on my two GlusterFS servers I have the following:
LABEL=brick01-01        /export/nas-01/brick01  ext4       defaults,_netdev  0 2
LABEL=brick01-02        /export/nas-01/brick02  ext4       defaults,_netdev  0 2
...bricks 3-8
LABEL=brick01-09        /export/nas-01/brick09  ext4       defaults,_netdev  0 2
localhost:/dev-storage  /develop                glusterfs  defaults,_netdev  0 0

LABEL=brick02-01        /export/nas-02/brick01  ext4       defaults,_netdev  0 2
LABEL=brick02-02        /export/nas-02/brick02  ext4       defaults,_netdev  0 2
...bricks 3-8
LABEL=brick02-09        /export/nas-02/brick09  ext4       defaults,_netdev  0 2
localhost:/dev-storage  /develop                glusterfs  defaults,_netdev  0 0

And then "/usr/sbin/showmount -e nas-01" and "/usr/sbin/showmount -e nas-02" both show 
the "/dev-storage *" export.

Also verify your firewalls have the appropriate ports open on all servers. For 
example on CentOS 5 for 2 servers with 18 bricks and 1 volume:

# GlusterFS
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
# GlusterFS 24007, 24008 plus 1 port per brick across all volumes, 18 bricks in 
this case
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24026 
-j ACCEPT
# Gluster port 38465 and NFS for 2 Gluster servers, 1 port per server
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38468 
-j ACCEPT
# End GlusterFS

Mike

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Bernard Li
Sent: Sunday, November 07, 2010 1:10 AM
To: land...@scalableinformatics.com
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] question on NFS mounting

Hi Joe:

On Sun, Nov 7, 2010 at 12:03 AM, Joe Landman
 wrote:


Actually, showmount didn't work.

We get permission denied.  Even after playing with the auth.allowed flag.


That's an indication that the gNFS server is not running.

I would recommend you review the FAQ and some of the recent posts on
the list, as there have been a couple threads discussing numerous
NFS-related issues and their solutions.  I've collected them for you
here for your convenience:

http://www.gluster.org/faq/index.php?sid=679&lang=en&action=show&cat=5
http://gluster.org/pipermail/gluster-users/2010-November/005692.html

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS crashes under load

2010-11-07 Thread Shehjar Tikoo

Thanks. I'll be looking into it. I've filed a bug at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2061

You may add yourself to the CC list for notifications.

It seems the crash is easily reproduced on your setup. Can you please 
post the log from Gluster NFS process in TRACE log level to the bug?
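
One way to capture that, reusing the NFS process command line visible in ps output (a sketch; paths may differ on your installation), is to restart the Gluster NFS daemon at TRACE level:

kill $(cat /etc/glusterd/nfs/run/nfs.pid)
glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid \
  -l /var/log/glusterfs/nfs.log --log-level=TRACE
# reproduce the crash, then attach /var/log/glusterfs/nfs.log to the bug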


Dan Bretherton wrote:

I upgraded to GlusterFS 3.1 a couple of weeks ago and overall I am very
impressed; I think it is a big step forward.  Unfortunately there is one
"feature" that is causing me a big problem - the NFS process crashes
every few hours  when under load.  I have pasted the relevant error
messages from nfs.log at the end of this message.  The rest of the log
file is swamped with these messages incidentally.

[2010-11-06 23:07:04.977055] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available

There are no apparent problems while these errors are being produced so
this issue probably isn't relevant to the crashes.


Correct. That error is misleading and will be removed in 3.1.1

Thanks
-Shehjar


To give an indication of what I mean by "under load", we have a small
HPC cluster that is used for running ocean models.  A typical model run
involves 20 processors, all needing to read simultaneously from the same
input data files at regular intervals during the run.  There are roughly
20 files, each ~1GB in size.  At the same time this is going on several
people, typically, are processing output from previous runs from this
and other (much bigger) clusters, chugging through hundreds of GB and
tens of thousands of files every few hours.  I don't think the
Gluster-NFS crashes are purely load dependant because they seem to occur
at different load levels, which is what leads me to suspect something
subtle related to the cluster's 20-processor model runs.  I would prefer
to use the GlusterFS client on the cluster's compute nodes, but
unfortunately the pre-FUSE Linux kernel has been customised in a way
that has thwarted all my attempts to build a FUSE module that the kernel
will accept (see
http://gluster.org/pipermail/gluster-users/2010-April/004538.html)

The servers that are exporting NFS are all running CentOS 5.5 with
GlusterFS installed from RPMs, and the GlusterFS volumes are distributed
(not repicated).  Two of the servers with GlusterFS bricks are actually
running SuSE Enterprise 10; I don't know if this is relevant.  I used
previous GlusterFS versions with SLES10 without any problems, but as
RPMs are not provided for SuSE I presume it is not an officially
supported distro.  For that reason I am only using the CentOS machines
as NFS servers for the GlusterFS volumes.

I would be very grateful for any suggested solutions or workarounds that
might help to prevent these NFS crashes.

-Dan.
nfs.log extract
--
[2010-11-06 23:07:10.380744] E [fd.c:506:fd_unref_unbind]
(-->/usr/lib64/glusterfs/3.1.0/xlator/debug/io-stats.so(io_stats_fstat_cbk+0x8e)
[0x2b30813e]
(-->/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs_fop_fstat_cbk+0x41)
[0x2b9a6da1]
(-->/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs3svc_readdir_fstat_cbk+0x22d)
[0x2b9b0bdd]))) : Assertion failed: fd->refcount
pending frames:

patchset: v3.1.0
signal received: 11
time of crash: 2010-11-06 23:07:10
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.1.0
/lib64/libc.so.6[0x35746302d0]
/lib64/libpthread.so.0(pthread_spin_lock+0x2)[0x357520b722]
/usr/lib64/libglusterfs.so.0(fd_unref_unbind+0x3d)[0x38f223511d]
/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs3svc_readdir_fstat_cbk+0x22d)[0x2b9b0bdd]
/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs_fop_fstat_cbk+0x41)[0x2b9a6da1]
/usr/lib64/glusterfs/3.1.0/xlator/debug/io-stats.so(io_stats_fstat_cbk+0x8e)[0x2b30813e]
/usr/lib64/libglusterfs.so.0(default_fstat_cbk+0x79)[0x38fa69]
/usr/lib64/glusterfs/3.1.0/xlator/performance/read-ahead.so(ra_attr_cbk+0x79)[0x2aeec459]
/usr/lib64/glusterfs/3.1.0/xlator/performance/write-behind.so(wb_fstat_cbk+0x9f)[0x2ace402f]
/usr/lib64/glusterfs/3.1.0/xlator/cluster/distribute.so(dht_attr_cbk+0xf4)[0x2b521d24]
/usr/lib64/glusterfs/3.1.0/xlator/protocol/client.so(client3_1_fstat_cbk+0x287)[0x2aacd2b7]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)[0x38f1a0f2e2]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x8d)[0x38f1a0f4dd]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x2c)[0x38f1a0a77c]
/usr/lib64/glusterfs/3.1.0/rpc-transport/socket.so(socket_event_poll_in+0x3f)[0x2aaac3eb435f]
/usr/lib64/glusterfs/3.1.0/rpc-transport/socket.so(socket_event_handler+0x168)[0x2aaac3eb44e8]
/usr/lib64/libglusterfs.so.0[0x38f2236ee7]
/usr/sbin/glusterfs(main+0x37d)[0x4046ad]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x357461d994]
/usr/sbin/glusterfs[0x402dc9]
-



___
Gluster-users mailing list
Gluster-u

Re: [Gluster-users] question on NFS mounting

2010-11-07 Thread Shehjar Tikoo

Joe Landman wrote:

On 11/07/2010 02:00 AM, Bernard Li wrote:


I'm not sure about distribute, but with replicate, each brick should
be able to act as the NFS server.  What does `showmount -e` say for
each brick?  And what error message did you get when you tried to
mount it?




With any kind of volume config, NFS starts up by default on all bricks. 
You'll have to ensure that no other NFS servers are running on the bricks 
when Gluster volumes are started.
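
One way to check that on each brick before starting the volume (a sketch; service names vary by distribution):

# make sure nothing else has claimed the NFS program numbers with the portmapper:
rpcinfo -p localhost | grep -E 'nfs|mountd'
# stop the kernel NFS server if it is registered:
service nfs stop        # Debian/Ubuntu: service nfs-kernel-server stop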




Actually, showmount didn't work.

We get permission denied.  Even after playing with the auth.allowed flag.



Please paste the output of rpcinfo -p. It'll help point out 
what's going on.


Thanks
-Shehjar



Cheers,

Bernard





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster crash

2010-11-07 Thread Shehjar Tikoo
Please file a bug. It'd help to have the steps to reproduce and if it is 
easily reproduced, the client log in TRACE log level. Thanks.


Samuel Hassine wrote:

Hi all,

Our service using GlusterFS has been in production for one week and we are
managing huge traffic. Last night, one of the Gluster clients (on a
physical node with a lot of virtual engines) crashed. Can you give me
more information about the log of the crash?

Here is the log: 


pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(CREATE)
frame : type(1) op(CREATE)

patchset: v3.0.6
signal received: 6
time of crash: 2010-11-06 05:38:11
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.6
/lib/libc.so.6[0x7f7644e76f60]
/lib/libc.so.6(gsignal+0x35)[0x7f7644e76ed5]
/lib/libc.so.6(abort+0x183)[0x7f7644e783f3]
/lib/libc.so.6(__assert_fail+0xe9)[0x7f7644e6fdc9]
/lib/libpthread.so.0(pthread_mutex_lock+0x686)[0x7f76451a0b16]
/lib/glusterfs/3.0.6/xlator/performance/io-cache.so(ioc_create_cbk
+0x87)[0x7f7643dcd3f7]
/lib/glusterfs/3.0.6/xlator/performance/read-ahead.so(ra_create_cbk
+0x1a2)[0x7f7643fd9322]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_unwind
+0x126)[0x7f76441f1866]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_wind_cbk
+0x10f)[0x7f76441f25ef]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(client_create_cbk
+0x5aa)[0x7f764443a00a]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(protocol_client_pollin
+0xca)[0x7f76444284ba]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(notify
+0xe0)[0x7f7644437d70]
/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7f76455cd483]
/lib/glusterfs/3.0.6/transport/socket.so(socket_event_handler
+0xe0)[0x7f76433819e0]
/lib/libglusterfs.so.0[0x7f76455e7e0f]
/sbin/glusterfs(main+0x82c)[0x40446c]
/lib/libc.so.6(__libc_start_main+0xe6)[0x7f7644e631a6]
/sbin/glusterfs[0x402a29]

I just want to know "why" Gluster crashed.

Regards.
Sam





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Cannot mount NFS

2010-11-03 Thread Shehjar Tikoo

Please try some of the steps mentioned at:

http://www.gluster.org/faq/index.php?sid=679&lang=en&action=show&cat=5

Thanks
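
Two things that usually resolve the attempts quoted below (a sketch; exact option support varies by release): the CLI needs the "volume" keyword, and the mount should force NFSv3 over TCP.

# volume options are set as:
gluster volume set www <option> <value>
# and the mount, since Gluster NFS speaks NFSv3 only and has no NLM yet:
sudo mount -o vers=3,mountproto=tcp,nolock 192.168.4.91:/www /mnt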

Horacio Sanson wrote:
I have a two server replicated volume that I can mount without problems using 
the native client but I cannot make this work via NFS no matter what.


I read the mailing list and the FAQ, applied all fixes but still no luck.


System:  
  Two Gluster nodes with vanilla Ubuntu 10.10 LTS 64bit.

  One client with vanilla Ubuntu 10.10 Desktop 32bit
  One client with vanilla Ubuntu 10.10 LTS 64bit

Gluster:  Installed using glusterfs_3.1.0-1_amd64.deb

Command used to create volume:  
   sudo gluster volume create www replica 2 transport tcp \

 192.168.4.90:/opt/www  192.168.4.91:/opt/www

Below I present all my attempts to get NFS mounted on the two clients I have 
so everything below multiply by 2:


1. NFS Mount attempt 1: After creating the volume

$ sudo mount -v  -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:37:17 2010
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused

2. NFS Mount attempt 2: Seems UDP is not supported so I added the tcp option:

$ sudo mount -v -o mountproto=tcp -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:38:36 2010
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Unable to receive

mount.nfs: prog 13, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Unable to receive
 - Connection refused
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Unable to receive

3. NFS Mount attempt 3: Google tells me I need to start portmap

- on both Gluster servers and the clients I installed portmap
$ sudo aptitude install portmap # This should be a dependency of Gluster deb
$ sudo service portmap start

- on the client:
$ sudo mount -v -o mountproto=tcp -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:42:07 2010
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: requested NFS version or transport protocol is not supported

4. Me throwing keyboard through the fourth floor window. 

5. NFS Mount attempt 4: Retry 3 but without tcp option, maybe it is not needed 
with portmap started:


$ sudo mount -v -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:45:37 2010
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused

5. Re-read FAQ, there is something about DNS being used for authentication. 
Sounds related to the Connection refused error I am getting:


skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www add.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)

6. Set up hosts file so each Gluster node and client can resolve their 
hostnames locally.


$ NFS Mount attempt 5,6,7,8:  Try all mount options from 1-5 above, including 
4 several times but with the hosts file correctly set up.


7. Bang head against wall, get huge cup of coffee.

8. Re-read mailing list: It seems that gluster NFS may conflict with the kernel 
NFS service:


$ sudo aptitude search nfs-kernel-server
 p   nfs-kernel-server

it is not installed so this could not be the problem

Re: [Gluster-users] cannot nfs mount glusterFS

2010-11-03 Thread Shehjar Tikoo

Rick King wrote:
Craig, Can you let us know the bug number? I ran into this as well. 



Here's the one specific to the nfs problem above.

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1639




~~Rick


- Original Message -
From: "Craig Carl" 
To: "Matt Hodson" 
Cc: gluster-users@gluster.org
Sent: Wednesday, November 3, 2010 12:41:28 PM
Subject: Re: [Gluster-users] cannot nfs mount glusterFS

There is a bug filed; Gluster should throw a warning when you start the volume. Please keep us updated as you test, and let me know if you have any other questions. 





Thanks, 
Craig 

--> 
Craig Carl 




Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.c...@gmail.com 



From: "Matt Hodson"  
To: "Vikas Gorur"  
Cc: gluster-users@gluster.org 
Sent: Wednesday, November 3, 2010 11:34:29 AM 
Subject: Re: [Gluster-users] cannot nfs mount glusterFS 

HA! that was it. dolt! thank you. i was going crazy looking at other 
stuff. 

-matt 



--- 
Matt Hodson 
Scientific Customer Support, Geospiza 
(206) 633-4403, Ext. 111 
http://www.geospiza.com 





On Nov 3, 2010, at 11:26 AM, Vikas Gorur wrote: 

On Nov 3, 2010, at 11:18 AM, Matt Hodson wrote: 

I just installed distributed gluster FS on 2 CentOS 5 boxes. 
install and configuration seemed to go fine. gluterd is running. 
firewalls/iptables are off. however for the life of me i cannot 
nfs mount the main gluster server from either a OSX or a CentOS 5 
box. I use NFS often and have a fair amount of experience with it 
so i've reviewed most of the common pitfalls. 

here's the command that fails from centos: 
$ sudo mount -v -t nfs 172.16.1.76:/gs-test /mnt/gluster/ 
mount: trying 172.16.1.76 prog 100003 vers 3 prot tcp port 2049 
mount: trying 172.16.1.76 prog 100005 vers 3 prot udp port 909 
mount: 172.16.1.76:/gs-test failed, reason given by server: No such 
file or directory 

and the same one from OSX 10.5 
sudo mount -v -t nfs 172.16.1.76:/gs-test /gluster/ 
mount_nfs: can't access /gs-test: No such file or directory 

What's weird is that I can mount actual dirs on the gluster server, 
just not the gluster VOLNAME. In other words, this command works 
fine because it's mounting an actual dir. 
$ sudo mount -v -t nfs 172.16.1.76:/ /mnt/gluster/ 

You have the kernel NFS service running. That is why you can mount 
regular directories on the gluster server. 

When you try to mount Gluster the kernel NFS server is actually 
looking for a directory called /gs-test, which of course does not 
exist. You need to stop the kernel NFS service and stop and start 
the gluster volume. 
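
A minimal sketch of that sequence on the CentOS 5 server from this thread
(init-script names assumed):

$ sudo service nfs stop              # stop the kernel NFS server; leave portmap running
$ sudo gluster volume stop gs-test
$ sudo gluster volume start gs-test  # Gluster NFS now registers itself with portmap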

-- 
Vikas Gorur 
Engineer - Gluster, Inc. 
-- 














___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-11-02 Thread Shehjar Tikoo
Please file a bug. I'll need the logs at TRACE level for the NFS server 
daemon while you run ls on the mount point, as well as the volume files.

Thanks




Bernard Li wrote:

Hi Shehjar:

On Fri, Oct 29, 2010 at 12:07 PM, Bernard Li  wrote:


Thanks, that worked.  I copied /etc/glusterd/nfs/nfs-server.vol to the
other server, started glusterfsd and I could mount the volume via NFS
on a client.


I guess I spoke too soon.  While I could successfully mount and see
the top level directories, nested directories (3rd level) cannot be
seen.

For instance for each brick I have:

/export/share/test/a/b/foo

The resulting NFS mountpoint only shows ./test/a but stops there ("a"
is an empty directory).

Using the gluster client I can see all the nested files just fine.

Is this a known issue?

Thanks,

Bernard


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs NFS dependencies (gentoo)

2010-11-01 Thread Shehjar Tikoo


Yes, you can start rpc.statd, but because we don't yet support NLM, you'll 
need to use the nolock option at mount time.
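
For example, such a client mount might look like this (server and volume
names are placeholders):

$ mount -o vers=3,proto=tcp,nolock server:/volname /mnt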


Jan Pisacka wrote:

Hi,

I am testing glusterfs-3.1.0 on Gentoo Linux, which is our preferred OS.
For correct NFS server functionality, some dependencies are required. I
understand that on supported distributions, all dependencies are already
installed by default. This is probably not the case with Gentoo. First I
found that the GlusterFS NFS server still needs the portmapper:


[2010-10-22 16:05:20.469964] E
[rpcsvc.c:2630:nfs_rpcsvc_program_register_portmap] nfsrpc: Could not
register with portmap
[2010-10-22 16:05:20.470006] E
[rpcsvc.c:2715:nfs_rpcsvc_program_register] nfsrpc: portmap registration
of program failed
[2010-10-22 16:05:20.470038] E
[rpcsvc.c:2728:nfs_rpcsvc_program_register] nfsrpc: Program registration
failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-10-22 16:05:20.470050] E [nfs.c:127:nfs_init_versions] nfs:
Program init failed
[2010-10-22 16:05:20.470061] C [nfs.c:608:notify] nfs: Failed to
initialize protocols
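
For reference, satisfying that portmap dependency on Gentoo might look like
the following (OpenRC service names assumed; <vol> is a placeholder; use
rpcbind where portmap has been replaced):

# /etc/init.d/portmap start
# rc-update add portmap default
# gluster volume stop <vol> && gluster volume start <vol>   # so the NFS server can register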

Are there any further dependencies that need to be satisfied?
Replication and distribution work, but I still have issues when trying
to run the GNOME desktop on clients where /home is glusterfs-nfs mounted.
Conventional NFS works perfectly. The issues seem to be related to the
absent locking:

Oct 26 08:42:24 glc01 kernel: lockd: server gls13 not responding, still
trying

On the server side, I still have log records like this one:

[2010-10-25 11:00:55.823932] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available

Which RPC program should I install? rpc.statd? Anything else? Thanks a lot.

Jan Pisacka
Compass experiment
Institute of Plasma Physics AS CR
Praha, Czech Republic





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-11-01 Thread Shehjar Tikoo

Bernard Li wrote:

Hi Shehjar:

On Thu, Oct 28, 2010 at 12:34 AM, Shehjar Tikoo  wrote:


That's not recommended, but I can see why this is needed. The simplest way to
run the nfs server for the two replicas is to simply copy over the nfs
volume file from the current nfs server. It will work right away. The volume
file below will not.


Thanks, that worked.  I copied /etc/glusterd/nfs/nfs-server.vol to the
other server, started glusterfsd and I could mount the volume via NFS
on a client.


Performance will also drop because now both your replicas are another
network hop away. I guess the ideal situation would be to allow gnfs to run
even when there is already a server running. It's on the ToDo list.


That would be good.  However, would it also be possible for this other
server to join as a non-contributing peer (i.e. it's not sharing its
disk) but act only as the NFS server?  This way I don't need to copy
over the volume file manually and leave it to glusterd to set
everything up.  Would be a nice stop-gap workaround until you guys can
implement the above mentioned feature.



"non-contributing peer".I like the sound of that. ;-)

I think we'll be better off with a quick fix to the port problem. Let me 
see what I can do.


-Shehjar


Cheers,

Bernard


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-28 Thread Shehjar Tikoo

Bernard Li wrote:

Hi Shehjar:

The hosts with exports in the replication pool are already running NFS
servers so I need to setup GlusterFS native NFS server on another
server.  I am using the following /etc/glusterfs/glusterfsd.vol:


That's not recommended, but I can see why this is needed. The simplest way 
to run the nfs server for the two replicas is to simply copy over the 
nfs volume file from the current nfs server. It will work right away. 
The volume file below will not.
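
Concretely, that copy-and-start step might look like this (a sketch; the
volume-file path is the 3.1 layout Bernard mentions in his follow-up, and the
log path is an assumption):

$ scp gluster01:/etc/glusterd/nfs/nfs-server.vol /etc/glusterd/nfs/
$ glusterfs -f /etc/glusterd/nfs/nfs-server.vol -l /var/log/glusterfs/nfs.log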


Performance will also drop because now both your replicas are another 
network hop away. I guess the ideal situation would be to allow gnfs to 
run even when there is already a server running. It's on the ToDo list.


-Shehjar




volume gluster01
type protocol/client
option transport-type tcp
option remote-host gluster01
option remote-port 6996
option remote-subvolume brick
end-volume

volume gluster02
type protocol/client
option transport-type tcp
option remote-host gluster02
option remote-port 6996
option remote-subvolume brick
end-volume

volume repl
type cluster/replicate
subvolumes gluster01 gluster02
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes repl
end-volume

volume dshare
type performance/io-cache
option cache-size 1GB
subvolumes writebehind
end-volume

volume nfs-server
type nfs/server
subvolumes dshare
end-volume

and I start glusterfsd

Does this look about right?  Is the remote-port correct?

Thanks,

Bernard

On Tue, Oct 26, 2010 at 10:14 PM, Shehjar Tikoo  wrote:

Bernard Li wrote:

On Tue, Oct 26, 2010 at 9:15 PM, Shehjar Tikoo 
wrote:


Regarding this pdf, only the portions which show mount commands and the
FAQ
section is applicable to 3.1. In 3.1, NFS gets started by default for a
volume started with the volume start command.

So basically you're saying if I have a 2 server replicated volume on
gluster01 and gluster02, and I did volume start on gluster01, I should
be able to mount the volume via NFS from either of the servers?


Yes. You can test that both servers have exported the same volume by:

$ showmount -e gluster01
$ showmount -e gluster02


Is exporting via glusterfsd still supported?


Yes.

-Shehjar


Cheers,

Bernard




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.1 and NFS problem

2010-10-27 Thread Shehjar Tikoo

Hi

What kind of app/tool was running on the nfs mount point when it hung?

Here are some things we can do to debug further.
Please restart the NFS server daemon using the TRACE log level, remount 
NFS, and restart the tests. That way I'll be able to zero in on the 
operation that results in this bug.


To start the nfs server in TRACE log:

$ ps ax|grep gluster|grep nfs

Kill the process that is listed here.

Then use the same command line as in the ps output, except change the 
-L and -l arguments so it looks like:


glusterfs -f  -L TRACE -l /tmp/nfs.log

Next:

$ dmesg -c >/dev/null;
to clear dmesg.

Remount and run your apps, and watch for "not responding" message in 
dmesg. When that happens, please email me /tmp/nfs.log.
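
Put together, the restart might look like this (the volume-file path is an
assumption; reuse whatever the ps output shows):

$ ps ax | grep gluster | grep nfs     # note the pid and full command line
$ kill <pid>
$ glusterfs -f /etc/glusterd/nfs/nfs-server.vol -L TRACE -l /tmp/nfs.log
$ dmesg -c > /dev/null                # clear dmesg before re-running the tests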


Thanks
-Shehjar


M. Vale wrote:

Hi, I am using Gluster 3.1 and after 2 days of working it stops the NFS mount
with the following error:

[2010-10-27 14:59:54.687519] I
[client-handshake.c:699:select_server_supported_programs] storage-client-0:
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2010-10-27 14:59:54.720898] I [client-handshake.c:535:client_setvolume_cbk]
storage-client-0: Connected to 192.168.2.2:24009, attached to remote volume
'/mnt'.
[2010-10-27 15:00:48.271921] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available
[2010-10-27 15:06:45.571535] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=(.)
[2010-10-27 15:06:45.571818] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=((null))
[2010-10-27 15:06:45.572129] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=((null))


If I mount the Gluster NFS on another machine everything works OK, but if I
try to copy large data to the NFS mount the mount disappears, and the client
says:

nfs: server IP not responding, still trying

What is this error: storage/inode: possible infinite loop detected, forcing
break. name=(.) ?

Regards
Mauro V.





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-26 Thread Shehjar Tikoo

Bernard Li wrote:

On Tue, Oct 26, 2010 at 9:15 PM, Shehjar Tikoo  wrote:


Regarding this pdf, only the portions which show mount commands and the FAQ
section is applicable to 3.1. In 3.1, NFS gets started by default for a
volume started with the volume start command.


So basically you're saying if I have a 2 server replicated volume on
gluster01 and gluster02, and I did volume start on gluster01, I should
be able to mount the volume via NFS from either of the servers?



Yes. You can test that both servers have exported the same volume by:

$ showmount -e gluster01
$ showmount -e gluster02


Is exporting via glusterfsd still supported?



Yes.

-Shehjar


Cheers,

Bernard


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-26 Thread Shehjar Tikoo

Bernard Li wrote:

Hi all:

I'm trying to setup an NFS export using GlusterFS 3.1.  I have setup a
replicated volume using the gluster CLI as follows:

Volume Name: share
Type: Replicate
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gluster01:/export/share
Brick2: gluster02:/export/share
Brick3: gluster03:/export/share
Brick4: gluster04:/export/share
Brick5: gluster05:/export/share

I then follow the instructions from the following PDF to setup the NFS server:

http://download.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-beta/nfs-beta-rc15/GlusterFS_NFS_Beta_RC15_Release_Notes.pdf

And I got the following errors:

[2010-10-26 16:54:16.671148] E [rpc-clnt.c:338:saved_frames_unwind]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xd4) [0xcacaca]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x65)
[0xcaa764] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0x22)
[0xcaa6f0]))) rpc-clnt: forced unwinding frame type(GF-DUMP)
op(DUMP(1)) called at 2010-10-26 16:54:16.670952
[2010-10-26 16:54:16.671177] M
[client-handshake.c:849:client_dump_version_cbk] : some error, retry
again later
[2010-10-26 16:54:16.671198] E [afr-common.c:2598:afr_notify] share:
All subvolumes are down. Going offline until atleast one of them comes
back up.

This is on CentOS 4, 32-bit.

BTW, I didn't see any documentation on how to setup the native NFS
server in the wiki -- are updated documentations available there?



Is there anything specific you'd like to know? There is nothing more 
needed than volume start followed by an NFS mount command.


-Shehjar


Also, I noticed that /etc/init.d/glusterfsd doesn't exist in the
gluster-core RPM package, should it be included?

Thanks,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-26 Thread Shehjar Tikoo

Bernard Li wrote:

Hi all:

I'm trying to setup an NFS export using GlusterFS 3.1.  I have setup a
replicated volume using the gluster CLI as follows:

Volume Name: share
Type: Replicate
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gluster01:/export/share
Brick2: gluster02:/export/share
Brick3: gluster03:/export/share
Brick4: gluster04:/export/share
Brick5: gluster05:/export/share

I then follow the instructions from the following PDF to setup the NFS server:

http://download.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-beta/nfs-beta-rc15/GlusterFS_NFS_Beta_RC15_Release_Notes.pdf


Regarding this pdf, only the portions which show mount commands and the 
FAQ section is applicable to 3.1. In 3.1, NFS gets started by default 
for a volume started with the volume start command.




And I got the following errors:

[2010-10-26 16:54:16.671148] E [rpc-clnt.c:338:saved_frames_unwind]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xd4) [0xcacaca]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x65)
[0xcaa764] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0x22)
[0xcaa6f0]))) rpc-clnt: forced unwinding frame type(GF-DUMP)
op(DUMP(1)) called at 2010-10-26 16:54:16.670952
[2010-10-26 16:54:16.671177] M
[client-handshake.c:849:client_dump_version_cbk] : some error, retry
again later
[2010-10-26 16:54:16.671198] E [afr-common.c:2598:afr_notify] share:
All subvolumes are down. Going offline until atleast one of them comes
back up.

This is on CentOS 4, 32-bit.


GlusterFS servers are not supported on 32 bit systems.

-Shehjar



BTW, I didn't see any documentation on how to setup the native NFS
server in the wiki -- are updated documentations available there?

Also, I noticed that /etc/init.d/glusterfsd doesn't exist in the
gluster-core RPM package, should it be included?

Thanks,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-25 Thread Shehjar Tikoo

Brent A Nelson wrote:
Alas, that does not work on Solaris 2.6 or 7.  Solaris 7 was apparently 
the first to support the WebNFS URL syntax, but it otherwise has the 
same behavior as 2.6.  Both seem to be hardwired to look for the UDP 
mountd port, even when told to use TCP.  On each, you can specify the 
NFS port, but apparently not the mountd port (there's no mountport 
option documented).


Yes, that's broken in Solaris. Without the WebNFS URL, if we specify proto=tcp, 
it defaults to using NFSv4. If we then try to override using the vers=3 
option, as the man page says we can, it falls back to using UDP even 
when proto=tcp is given.


These versions will need UDP support in gluster nfs to work.

-Shehjar



Perhaps they got it right in Solaris 8 or greater, at least for WebNFS, 
but I no longer have any newer Solaris with which to test...


Thanks,

Brent

On Fri, 22 Oct 2010, Craig Carl wrote:


[Resending due to incomplete response]

Brent,
Thanks for your feedback. To mount with a Solaris client use -
` mount -o proto=tcp,vers=3 nfs://:38467/ 
`


As to UDP access, we want to force users to use TCP. Everything about 
Gluster is designed to be fast; as NFS over UDP approaches line speed 
it becomes increasingly inefficient [1], and we want to avoid that.


I have updated our documentation to reflect the required tcp option 
and Solaris instructions.


[1] http://nfs.sourceforge.net/#faq_b10


Thanks again,

Craig

-->
Craig Carl
Senior Systems Engineer
Gluster


From: "Brent A Nelson" 
To: gluster-users@gluster.org
Sent: Thursday, October 21, 2010 8:18:02 AM
Subject: [Gluster-users] Some client problems with TCP-only NFS in 
Gluster 3.1


I see that the built-in NFS support registers mountd in portmap only with
tcp and not udp. While this makes sense for a TCP-only NFS
implementation, it does cause problems for some clients:

Ubuntu 10.04 and 7.04 mount just fine.

Ubuntu 8.04 gives "requested NFS version or transport protocol is not
supported", unless you specify "-o mountproto=tcp" as a mount option, in
which case it works just fine.
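
Spelled out, that workaround is a mount along these lines (placeholder names):

$ sudo mount -t nfs -o vers=3,mountproto=tcp server:/volname /mnt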

Solaris 2.6 & 7 both give "RPC: Program not registered". Solaris
apparently doesn't support the mountproto=tcp option, so there doesn't
seem to be any way for Solaris clients to mount.

There may be other clients that assume mountd will be contactable via
udp, even though they (otherwise) happily support TCP NFS...

Thanks,

Brent

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs 3.1 with Ubuntu Lucid 32bit

2010-10-20 Thread Shehjar Tikoo


- Original Message -
> From: "Deadpan110" 
> I have posted this to the lists purely to help others - please do not
> consider any of the following suitable for a production environment
> and follow these rough instructions at your own risk.
> 



> 2: glusterfs NFS
> 
> Obviously make sure you have nfs-common and portmap installed and then
> mount in the usual way.
> 
> I found this method had less mem and CPU overheads but locking seemed
> really bad with some of my services (Dovecot, SVN) and the locks
> ultimately caused load to spiral out of control.
> 
> It may have been a misconfiguration on my behalf!
> 
> Simply using NFS mounting as a read filesystem without the need for
> locking worked well... but writing large files seemed to lock up the
> system also (i did not test this with 1024MB of mem and again, it is
> possibly a configuration on my behalf).
> 
> 

If only a single nfs client machine will be running dovecot/svn, you can use
the nolock option at mount time to work around the missing NLM support.

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs/server not loading

2010-08-19 Thread Shehjar Tikoo

Jesse Caldwell wrote:

hi all,

i just built glusterfs-nfs_beta_rc10 on freebsd 8.1. i configured
glusterfs as follows:

  ./configure --disable-fuse-client --prefix=/usr/local/glusterfs


The volume file looks fine. We've never tried anything with the beta 
branch on FreeBSD. Let me see if I can get it set up for a few build tests 
at least. In the meantime, please let me have the complete log file from 
the glusterfsd that runs nfs/server. Use the TRACE log level by setting 
the following command line options:


-L TRACE -l /tmp/nfs-fail.log

Mail me the nfs-fail.log.

Thanks



i also ran this on the source tree before building:

  for file in $(find . -type f -exec grep -l EBADFD {} \;); do
  sed -i -e 's/EBADFD/EBADF/g' ${file};
  done

i used glusterfs-volgen to create some config files:

  glusterfs-volgen -n export --raid 1 --nfs 10.0.0.10:/pool 10.0.0.20:/pool

glusterfsd will start up with 10.0.0.10-export-export.vol or
10.0.0.20-export-export.vol without any complaints. when i try to start
the nfs server, i get:

  nfs1:~ $ sudo /usr/local/glusterfs/sbin/glusterfsd -f ./export-tcp.vol 
  Volume 'nfsxlator', line 31: type 'nfs/server' is not valid or not found on this machine

  error in parsing volume file ./export-tcp.vol
  exiting

the module is present, though, and truss shows that glusterfsd is finding
and opening it:

  
open("/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so",O_RDONLY,0106)
 = 7 (0x7)

nfs/server.so doesn't seem to be tragically mangled:

  nfs1:~ $ ldd 
/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so
  /usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so:
  libglrpcsvc.so.0 => /usr/local/glusterfs/lib/libglrpcsvc.so.0 
(0x800c0)
  libglusterfs.so.0 => /usr/local/glusterfs/lib/libglusterfs.so.0 
(0x800d17000)
  libthr.so.3 => /lib/libthr.so.3 (0x800e6a000)
  libc.so.7 => /lib/libc.so.7 (0x800647000)

is this a freebsd-ism, or did i screw up something obvious? the config
file i am using is obviously nothing special, but here it is:

  nfs1:~ $ grep -v '^#' export-tcp.vol 


  volume 10.0.0.20-1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.20
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
  end-volume

  volume 10.0.0.10-1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.10
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
  end-volume

  volume mirror-0
  type cluster/replicate
  subvolumes 10.0.0.10-1 10.0.0.20-1
  end-volume

  volume nfsxlator
  type nfs/server
  subvolumes mirror-0
  option rpc-auth.addr.mirror-0.allow *
  end-volume

thanks,

jesse
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS re-export a GlusterFS on RHEL 5.4?

2010-08-19 Thread Shehjar Tikoo

Steven Lee wrote:

Hi,

I'd like to re-export a glusterfs to a bunch of NFS clients.  The
server with the mounted glusterfs is running RHEL 5.4.  I'd like to
use the nfsd in the kernel if possible as the same server is already
serving a few other (non glusterfs) NFS shares.


We have native nfs serving functionality, i.e. as a translator, that 
will be part of the 3.1 release. In the meantime, you could use the same 
NFS server from a separate branch that is a lot more stable than the nfs 
in mainline. Note that this branch is only a beta branch for NFS and not 
for the remaining 3.1 targeted items like DVM. The usual caveats that go 
with beta release candidates apply.


That's available at:

http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-beta/nfs-beta-rc10/

The documentation in there will show you how to also export existing 
non-GlusterFS directories through the Gluster NFS server.


Of course, you can always email here for questions.

-Shehjar





On the server, the glusterfs is mounted like this:

--- root 13213  0.0  0.0 126924  2000 ?Ssl  23:25   0:00
/usr/sbin/glusterfs --log-level=NORMAL --disable-direct-io-mode
--volfile=/etc/glusterfs/localhost-tcp.vol /export/jiv2_0001 ---

and the export file looks like this:

--- /export/jiv2_0001/HY299
@biotechhosts(rw,no_subtree_check,no_root_squash,fsid=101) ---

When a NFS client tries to mount the export, it gets a "permission
denied" error.  /var/log/messages on the server says:

--- mountd[8221]:  refused mount request from  for
/exports/jiv2_0001 (/): not exported ---

Any pointers are appreciated.  I am running gluster 3.0.5.


Steven Lee s...@cac.cornell.edu Cornell Center for Advanced Computing











___ Gluster-users mailing
list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HA NFS

2010-07-14 Thread Shehjar Tikoo

Layer7 Consultancy wrote:

Hello Vikas,

How do the NFS clients react to such a failover? Will the I/O just
temporarily stall and then proceed once connectivity is resumed or
will running transactions be aborted?



There may be some delay as the IP migrates but overall, the NFS client's 
retransmission behaviour will be able to continue the transactions with no 
interruptions.



Best regards,
Koen

2010/7/13 Vikas Gorur :
 Your understanding is correct. Whether using the native NFS
translator or re-export through unfsd, NFS clients only connect to the
management IP. If you want failover for the NFS server, you'd setup a
virtual IP using ucarp (http://www.ucarp.org) and the clients would
only use this virtual IP.

--
Vikas Gorur
Engineer - Gluster, Inc.
--

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] transport.remote-port is changing on volume restart

2010-07-04 Thread Shehjar Tikoo

Rafael Pappert wrote:

Hello List,

I'm evaluating the Gluster platform as a "static file backend" for a webserver farm.
First of all, I have to say thank you to the guys at Gluster, you did an 
awesome job.

But there is one really annoying thing: after each restart of a volume in the
volume manager, I have to change the transport.remote-port in the "client.vol"
and remount the volume on all clients.


Why is that required? Is there a specific error message that you 
encounter? If you have the client logs, that will help even more in 
figuring out the problem.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Volume Crash

2010-07-04 Thread Shehjar Tikoo
In the crash log at http://dpaste.com/213817/, it is clear that the 
crash happens in the debug/trace translator but in your log file pasted 
at http://dpaste.com/213489/, I do not see any debug/trace translator 
configured into the volume file. IOW, the crash log does not correspond 
to the volume files that you've pasted. What command did you use to 
start the GlusterFS mount point on the samba server?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client hang when using iozone

2010-06-08 Thread Shehjar Tikoo

Tomasz Chmielewski wrote:

Am 07.06.2010 13:55, Daniel Maher wrote:


Any idea what can be wrong here? Neither the client nor the servers
produce anything in the logs when it happens (I didn't wait for more than 10
minutes though).


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are "more or less" high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same 
or not.


The bug also points to a different bug in io-cache, which I also use - 
so I'll try to disable it and see if it changes anything.


That comment on the io-cache bug is NFS-specific and does not come into play 
when used with FUSE.


-Shehjar







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] DHT translator problem

2010-06-04 Thread Shehjar Tikoo
We need the logs to figure out the exact problem. Please run the 
glusterfs command while mounting through FUSE with the following command 
line option:


glusterfs -f  -L TRACE -l /tmp/dhtlog /mnt/gtest

Then perform the same operations and email us the dhtlog file, and we'll 
see what's going on.


Thanks
-Shehjar

Deyan Chepishev wrote:

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS file locks

2010-05-03 Thread Shehjar Tikoo

Steinmetz, Ian wrote:

I've turned on debug logging for the server and client of GlusterFS and
amended them below.   I've replaced the IP addresses with "x.x.x" and
left the last octet for security.  It appears I'm able to lock the file
when I run the program directly on the gluster mount point, just not
when it's mounted via NFS.  I checked and rpc.statd is running.  One odd
thing, when I run the perl locking program  directly on the mnt point,
it appears to work but spits out the following log message:

[2010-05-03 09:16:39] D [read-ahead.c:468:ra_readv] readahead:
unexpected offset (4096 != 362) resetting


This is nothing to worry about... just debugging output.

.
.
.



[glusterfsd server logfile during above testing]

r...@gluster02:/var/log/glusterfs# /usr/sbin/glusterfsd -p
/var/run/glusterfsd.pid -f /etc/glusterfs/glusterfsd.vol --log-file
/var/log/glusterfsd.log --debug -N


.
.
.


Given volfile:
+---
---+
  1: volume posix
  2:   type storage/posix
  3:   option directory /data/export
  4: end-volume
  5:  
  6: volume locks

  7:   type features/locks
  8:   option manditory on  


I'll continue looking into this, but in the meantime you could test over 
NFS again with "manditory" changed to "mandatory". Let me know if that 
works.
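
In other words, something like this on each server, assuming the volfile path
shown earlier in this thread, followed by a glusterfsd restart:

$ sudo sed -i 's/manditory/mandatory/' /etc/glusterfs/glusterfsd.vol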


Thanks
-Shehjar


  9:   subvolumes posix
 10: end-volume

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS file locks

2010-05-02 Thread Shehjar Tikoo

There are a couple of things we can do:

-> Mail us the Glusterfs log files from the NFS server and the glusterfs 
servers when the lock script fails. Do file a bug if you can.


-> On the NFS client machine, before you run the mount command, make 
sure you run the following command.


$ rpc.statd

-> Run the same perl script but this time at the nfs server over the 
glusterfs mount point, not at the NFS client. If it runs fine, it is 
probably related to locking over NFS and we'll look at other places to 
figure it out.


-Shehjar



Steinmetz, Ian wrote:

I'm seeing an issue where I can't lock files on a NFS exported
GlusterFS mount.  I have two servers connected to each other doing AFR
to provide a high available NFS server (mirror the content, one VIP for
NFS mounts to clients).  Both of the servers have mounted
"/mnt/glusterfs" using GlusterFS with the client pointing to both
servers.  I then export the filesystem with NFS.  I grabbed a quick perl
program that tries to lock a file for testing, which fails only on the
glusterfs.  When I export a normal directory "/mnt/test" the locking
works.
Any ideas appreciated.  I have a feeling I've implemented the
posix/locks option incorrectly.

Both servers are running Ubuntu with identical setups, below are
relevant configs.
r...@gluster01:/mnt/glusterfs# uname -a
Linux gluster01 2.6.31-20-generic-pae #58-Ubuntu SMP Fri Mar 12 06:25:51
UTC 2010 i686 GNU/Linux

r...@gluster01:/mnt/glusterfs# cat /etc/exports
/mnt/glusterfs  /25(rw,no_root_squash,no_all_squash,no_subtree_check,sync,insec
ure,fsid=10)
/mnt/test   /25(rw,no_root_squash,no_all_squash,no_subtree_check,sync,insec
ure,fsid=11)

* I've tried async, rsync, removing all options except FSID.

r...@gluster02:/etc/glusterfs# cat glusterfs.vol 
volume brick1

 type protocol/client
 option transport-type tcp/client
 option remote-host  # IP address of the remote
brick
 option remote-subvolume brick# name of the remote volume
end-volume
 
volume brick2

 type protocol/client
 option transport-type tcp/client
 option remote-host   # IP address of the
remote brick
 option remote-subvolume brick# name of the remote volume
end-volume
 
volume afr1

 type cluster/afr
 subvolumes brick1 brick2
end-volume
 
volume writebehind

  type performance/write-behind
  option window-size 4MB
  subvolumes afr1
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

volume readahead
   type performance/read-ahead
   option page-size 128KB # unit in bytes
   subvolumes cache
end-volume

volume iothreads
   type performance/io-threads
   option thread-count 4
   option cache-size 64MB
   subvolumes readahead
end-volume

r...@gluster02:/etc/glusterfs# cat glusterfsd.vol 
volume posix

  type storage/posix
  option directory /data/export
end-volume
 
volume locks

  type features/posix-locks
  option manditory on  # tried with and without this, found in a search
of earlier post
  subvolumes posix
end-volume
 
volume brick

  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
 
volume server

  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  option transport.socket.nodelay on
  option auth.ip.locks.allow *
  subvolumes brick
end-volume


* file to test locking...
r...@gluster02:/mnt/glusterfs# cat locktest.pl 
#!/usr/bin/perl


use Fcntl qw(:flock);

my $lock_file = 'lockfile';

open(LOCKFILE,">>$lock_file") or die "Cannot open $lock_file: $!\n";
print "Opened file $lock_file\n";
flock(LOCKFILE, LOCK_SH) or die "Can't get shared lock on $lock_file:
$!\n";
print "Got shared lock on file $lock_file\n";
sleep 2;
close LOCKFILE;
print "Closed file $lock_file\n";

exit;

*Test run from gluster02 using normal NFS mount:
r...@gluster02:/# mount :/mnt/test /mnt/test
r...@gluster02:/# cd /mnt/test
r...@gluster02:/mnt/test# ./locktest.pl 
Opened file lockfile

Got shared lock on file lockfile
Closed file lockfile

*Test run from gluster02 using gluster exported NFS mount:
r...@gluster02:/# mount 74.81.128.17:/mnt/glusterfs /mnt/test
r...@gluster02:/# cd /mnt/test
r...@gluster02:/mnt/test# ./locktest.pl 
Opened file lockfile

Can't get shared lock on lockfile:
No locks available

--
Ian Steinmetz
Comcast Engineering - Houston
713-375-7866

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs-alpha feedback

2010-04-13 Thread Shehjar Tikoo

chadr wrote:

I ran the same dd tests from KnowYourNFSAlpha-1.pdf and performance is
inconsistent and causes the server to become unresponsive.

My server freezes every time when I run the following command:
dd if=/dev/zero of=garb bs=256k count=64000



Please confirm whether the server freezes or crashes? I ask because 
we've had a couple of reports of hanging/freezing servers when actually 
they had crashed. The way to figure it out is to let us have the 
complete log file, or at least the last few hundred lines.


I would also like to mount a path like: /volume/some/random/dir 


# mount host:/gluster/tmp /mnt/test
mount: host:/gluster/tmp failed, reason given by server: No such file or
directory

I can mount it up host:/volume_name and /mnt/test/tmp exists

I see you're trying to mount a directory from a GlusterFS subvolume as 
an NFS export by itself. This is not supported in the Alpha release. NFS 
server right now allows each GlusterFS subvolume to be a single NFS 
export. We are, however, planning to introduce this functionality in the 
near future.


Thanks
-Shehjar


dd if=/dev/zero of=garb bs=64K count=100
100+0 records in
100+0 records out
6553600 bytes (6.6 MB) copied, 0.068906 seconds, 95.1 MB/s

dd of=garb if=/dev/zero bs=64K count=100
100+0 records in
100+0 records out
6553600 bytes (6.6 MB) copied, 0.057207 seconds, 115 MB/s

dd if=/dev/zero of=garb bs=64K count=1000
1000+0 records in
1000+0 records out
65536000 bytes (66 MB) copied, 0.523117 seconds, 125 MB/s

dd of=garb if=/dev/zero bs=64K count=1000
1000+0 records in
1000+0 records out
65536000 bytes (66 MB) copied, 1.04666 seconds, 62.6 MB/s

dd if=/dev/zero of=garb bs=64K count=10000
10000+0 records in
10000+0 records out
655360000 bytes (655 MB) copied, 10.9809 seconds, 59.7 MB/s

dd of=garb if=/dev/zero bs=64K count=10000
10000+0 records in
10000+0 records out
655360000 bytes (655 MB) copied, 11.3515 seconds, 57.7 MB/s

dd if=/dev/zero of=garb bs=128K count=100
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 0.105364 seconds, 124 MB/s

dd of=garb if=/dev/zero bs=128K count=100
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 0.254225 seconds, 51.6 MB/s

dd if=/dev/zero of=garb bs=128K count=1000
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 60.1008 seconds, 2.2 MB/s

dd of=garb if=/dev/zero bs=128K count=1000
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 1.51868 seconds, 86.3 MB/s

dd if=/dev/zero of=garb bs=128K count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 18.7755 seconds, 69.8 MB/s

dd of=garb if=/dev/zero bs=128K count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 18.9837 seconds, 69.0 MB/s

dd if=/dev/zero of=garb bs=256k count=64000
My server freezes.


Here is the recent nfs log when the server froze:

[2010-04-09 23:37:33] D [nfs3-helpers.c:2114:nfs3_log_rw_call] nfs-nfsv3:
XID: 6f68c85f, WRITE: args: FH: hashcount 2, xlid 0, gen
5458285267163021319, ino 11856898, offset: 1129578496,  count: 65536,
UNSTABLE
[2010-04-09 23:37:33] D [rpcsvc.c:1790:rpcsvc_request_create] rpc-service:
RPC XID: 7068c85f, Ver: 2, Program: 13, ProgVers: 3, Proc: 7
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [nfs3-helpers.c:2114:nfs3_log_rw_call] nfs-nfsv3:
XID: 7068c85f, WRITE: args: FH: hashcount 2, xlid 0, gen
5458285267163021319, ino 11856898, offset: 1129644032,  count: 65536,
UNSTABLE
[2010-04-09 23:37:33] D [rpcsvc.c:1790:rpcsvc_request_create] rpc-service:
RPC XID: 7168c85f, Ver: 2, Program: 13, ProgVers: 3, Proc: 7
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [rpcsvc.c:1266:rpcsvc_program_actor] rpc-service:
Actor found: NFS3 - WRITE
[2010-04-09 23:37:33] D [nfs3-helpers.c:2114:nfs3_log_rw_call] nfs-nfsv3:
XID: 7168c85f, WRITE: args: FH: hashcount 2, xlid 0, gen
5458285267163021319, ino 11856898, offset: 1129709568,  count: 65536,
UNSTABLE
[2010-04-09 23:38:33] D [rpcsvc.c:1790:rpcsvc_request_create] rpc-service:
RPC XID: 6268c85f, Ver: 2, Program: 13, ProgVers: 3, Proc: 7

Thanks,
Chad Richards

Re: [Gluster-users] nfs-alpha feedback

2010-04-12 Thread Shehjar Tikoo

Hi Chad

I've filed a bug at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=819

Would you please upload the complete log file from NFS server, GlusterFS 
server and the volume files for the same to this bug report?


Thanks
-Shehjar

chadr wrote:

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Old story - glusterfs memory usage

2010-03-29 Thread Shehjar Tikoo

Krzysztof Strasburger wrote:

On Mon, Mar 29, 2010 at 12:35:32PM +0530, Shehjar Tikoo wrote:

Krzysztof Strasburger wrote:

On Mon, Mar 29, 2010 at 11:48:44AM +0530, Shehjar Tikoo wrote:
This is a known problem. See a previous email on the devel list about it 
at:

http://article.gmane.org/gmane.comp.file-systems.gluster.devel/1469

A bug is filed is at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=545

For more info on the drop_caches mentioned in that gluster-devel thread, 
see:

http://www.linuxinsight.com/proc_sys_vm_drop_caches.html

Do let us know if dropping caches as shown in that thread helps.

Dear Shehjar,
thank you, but drop_caches really is not the Holy Grail of this problem. 
It simply does not change anything, to set it to something != 0.

Amon Ott figured it out, that forget() is called, but memory is not freed
anyway. 
Of course, just doing drop_caches is not the silver bullet here. In some 
cases, it needs to be preceded by a sync. What is your experience with that?


In any case, if Ott is right, then we might have a memory leak. The best 
step is to file a bug.

Shehjar - I have sent the response to your private mail; sorry for that.
Syncing is IMHO irrelevant here, as there are no dirty buffers to be
written out to fs. The files are only opened, stated and closed.
I filed related bug report in ancient days, when the bug database was hosted on
savannah. If I remember correctly, it has been closed in the meantime. 
Should I file a new report, or find the old one and reopen it (if it is closed)?

Krzysztof

We'd prefer having the old one re-opened if you can find it; if not, 
feel free to file a new one.


-Shehjar



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Old story - glusterfs memory usage

2010-03-28 Thread Shehjar Tikoo

Krzysztof Strasburger wrote:

I upgraded today from 2.0.8 to 3.0.3.
The first sad impression was that unify no longer works, but it is not that
important. Not a surprise. Switched to dht. Tried my old "du" test:
du /home
(/home is a glusterfs filesystem, of course ;) and saw again the bad old
unlimited increase of glusterfs clients memory usage. I hoped that the
problem went to quick-read, but it didn't.


This is a known problem. See a previous email on the devel list about it 
at:

http://article.gmane.org/gmane.comp.file-systems.gluster.devel/1469

A bug is filed is at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=545

For more info on the drop_caches mentioned in that gluster-devel thread, 
see:

http://www.linuxinsight.com/proc_sys_vm_drop_caches.html

Do let us know if dropping caches as shown in that thread helps.
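
For reference, the usual sequence on Linux is (generic, not GlusterFS-specific):

$ sync
# echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes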

Thanks
-Shehjar


So switched back to 2.0.10rc1, to restore unify.
And then did a simple test, to be sure that no translators are to be blamed.
I created the following, trivial setup:

file loopback.vol:
volume loopback
 type storage/posix
 option directory /root/loopback
end-volume

And then mounted /root/loopback on /root/loop-test:

mount -t glusterfs loopback.vol /root/loop-test

Do you see? No translators, even no server and networking. Just a loopback
to a directory on the local machine.

I copied then the /usr directory to /root/loop-test (c.a. 16 files).
And then ran "du /root/loop-test".
Memory usage of respective glusterfs process went up from 16 MB to 50 MB.
I could reproduce it perfectly by umounting /root/loop-test, mounting it
again and re-running "du".
More files touched mean more memory used by glusterfs.
This is not a memory leak. Repeating this "du" does not cause memory
usage to go up even a single byte. The glusterfs client keeps information 
about _every file touched_ somewhere, and keeps it _forever_. As the situation 
did not improve since the times of glusterfs 1.3, I assume this behavior 
to be a part of its design. IMHO a wrong one. This is not a feature,
this is a bug. It costs memory and probably performance too, if it is a kind
of cache (with millions of entries, potentially).
I filed a bug report almost one year ago - no change. Dear other users of
glusterfs - are you at least able to reproduce this memory consumption
problem? If it occurs on my site only, then the reason is somewhere in my
system setup, otherwise - let us do something about it, finally.
It is not in translators, it is in the core of glusterfs.
With regards
Krzysztof Strasburger
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS

2010-03-19 Thread Shehjar Tikoo

- "hgichon"  wrote:

> wow good news!
> thanks.
> 
> I installed from source,
> but the mount failed.
> 
> Is my config wrong?
> 
> - kpkim
> 
> r...@ccc1:/usr/local/etc/glusterfs# mount -t glusterfs
> /usr/local/etc/glusterfs/nfs.vol /ABCD -o loglevel=DEBUG
Hi

The volfile containing the nfs/server translator 
needs to be started using the glusterfsd command, just like
starting up the glusterfs server, not using the mount command.

Exporting a GlusterFS volume as a NFS export does not require 
the volume to be mounted using the mount command.
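
So, for the nfs.vol quoted below, starting the NFS server would look roughly
like this (the log path is an assumption):

$ glusterfsd -f /usr/local/etc/glusterfs/nfs.vol -l /usr/local/var/log/glusterfs/nfs.log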

Thanks
-Shehjar

> Volume 'nfs-server', line 60: type 'nfs/server' is not valid or not
> found on this machine
> error in parsing volume file /usr/local/etc/glusterfs/nfs.vol
> exiting
> r...@ccc1:/usr/local/etc/glusterfs#
> 
> [pid  4270]
> open("/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so",
> O_RDONLY) = 7
> [pid  4270] read(7,
> "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\337\0\0\0\0\0\0"...,
> 832) = 832
> [pid  4270] fstat(7, {st_mode=S_IFREG|0755, st_size=781248, ...}) = 0
> [pid  4270] mmap(NULL, 2337392, PROT_READ|PROT_EXEC,
> MAP_PRIVATE|MAP_DENYWRITE, 7, 0) = 0x7f4be41bc000
> 
> my config
> -
> r...@ccc1:/usr/local/etc/glusterfs# cat glusterfsd.vol
> ## file auto generated by /usr/local/bin/glusterfs-volgen
> (export.vol)
> # Cmd line:
> # $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export
> 192.168.1.128:/export --nfs --cifs
> 
> volume posix1
>type storage/posix
>option directory /export
> end-volume
> 
> volume locks1
>  type features/locks
>  subvolumes posix1
> end-volume
> 
> volume brick1
>  type performance/io-threads
>  option thread-count 8
>  subvolumes locks1
> end-volume
> 
> volume server-tcp
>  type protocol/server
>  option transport-type tcp
>  option auth.addr.brick1.allow *
>  option transport.socket.listen-port 6996
>  option transport.socket.nodelay on
>  subvolumes brick1
> end-volume
> -
> r...@ccc1:/usr/local/etc/glusterfs# cat nfs.vol
> ## file auto generated by /usr/local/bin/glusterfs-volgen
> (export.vol)
> # Cmd line:
> # $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export
> 192.168.1.128:/export --nfs --cifs
> 
> volume 192.168.1.128-1
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.128
>  option transport.socket.nodelay on
>  option transport.remote-port 6996
>  option remote-subvolume brick1
> end-volume
> 
> volume 192.168.1.127-1
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.127
>  option transport.socket.nodelay on
>  option transport.remote-port 6996
>  option remote-subvolume brick1
> end-volume
> 
> volume distribute
>  type cluster/distribute
>  subvolumes 192.168.1.127-1 192.168.1.128-1
> end-volume
> 
> #volume writebehind
> #type performance/write-behind
> #option cache-size 4MB
> #subvolumes distribute
> #end-volume
> 
> #volume readahead
> #type performance/read-ahead
> #option page-count 4
> #subvolumes writebehind
> #end-volume
> 
> volume iocache
>  type performance/io-cache
>  option cache-size 128MB
>  option cache-timeout 1
>  subvolumes distribute
> end-volume
> 
> #volume quickread
> #type performance/quick-read
> #option cache-timeout 1
> #option max-file-size 64kB
> #subvolumes iocache
> #end-volume
> 
> #volume statprefetch
> #type performance/stat-prefetch
> #subvolumes quickread
> #end-volume
> 
> volume nfs-server
>  type nfs/server
>  subvolumes iocache
>  option rpc-auth.addr.allow *
> end-volume
> 
> 
> 
> Tejas N. Bhise wrote:
> > Dear Community Users,
> > 
> > Gluster is happy to announce the ALPHA release of the native NFS
> Server.
> > The native NFS server is implemented as an NFS Translator and hence
> > integrates very well, the NFS protocol on one side and GlusterFS
> protocol
> > on the other side.
> > 
> > This is an important step in our strategy to extend the benefits of
> > Gluster to other operating system which can benefit from a better
> NFS
> > based data service, while enjoying all the backend smarts that
> Gluster
> > provides.
> > 
> > The new NFS Server also strongly supports our efforts towards
> > becoming a virtualization storage of choice.
> > 
> > The release notes of the NFS ALPHA Release are available at -
> > 
> >
> http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-alpha/GlusterFS_NFS_Alpha_Release_Notes.pdf
> > 
> > The Release notes describe where RPMs and source code can be
> obtained
> > and where bugs found in this ALPHA release can be filed. Some
> examples 
> > on usage are also provided.
> > 
> > Please be aware that this is an ALPHA release and in no way should
> be
> 

Re: [Gluster-users] Using NFS on client side

2010-03-09 Thread Shehjar Tikoo

Aaron Porter wrote:

On Tue, Mar 9, 2010 at 2:10 PM, carlopmart  wrote:

 Is it possible to export a glusterfs volume mounted on a client via NFS
from this client??


You have to use a userspace NFS server (unfs3 seems to work, mostly).


Available at:

http://ftp.gluster.com/pub/gluster/glusterfs/misc/unfs3/

-Shehjar
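
A rough sketch of such a re-export (paths and the client network are
placeholders; unfsd reads a standard exports file):

$ glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs            # FUSE-mount the volume
$ echo '/mnt/glusterfs 10.0.0.0/24(rw,no_root_squash)' | sudo tee -a /etc/exports
$ sudo unfsd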


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] replicating storage for ESX problem

2010-02-14 Thread Shehjar Tikoo

Romans Ščugarevs wrote:

My configuration:
Two node(raid1) cluster based on glusterfs with NFS server.
The system is replicating nicely. Changes are synced. The storage is 
exported over NFS for vmware esx usage.
The problem is when I am using VMware ESX to create a VM on the NFS 
storage, the result is an empty unusable virtual disk. When I export 
some local ext3 from one of the nodes, a correct 8GB VM disk is created. 
When I re-export the glusterfs mount, I'm getting unusable VMs (almost 
zero size). I suppose it has something to do with so-called thin 
provisioning over NFS.

Any advice is appreciated!




Are you using NFS with the Gluster Storage Platform or is
this your customised setup?

-Shehjar



Regards,
Roman.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS problem

2010-02-06 Thread Shehjar Tikoo

Jonas Bulthuis wrote:

Hi Shehjar,

Thanks for your reply. We may be interested in testing an alpha version
in the future. I cannot tell for sure right now, but if you can send me
an e-mail at the time this version becomes available, we can see if we
can fit it in.

We're currently running the Gluster FS on Ubuntu (LTS) servers. I can
access the volumes though the Gluster client on the same machines. Do
you know whether it's possible to export the Gluster client mount point
through nfs-kernel-server instead of the user space NFS server? or would
that be unwise?



It is possible but it is not a real solution. Due to the way knfsd
talks to FUSE, some amount of state in the kernel needs to be kept
around indefinitely, which causes problems of excessive memory usage.
unfsd does not cause such a problem.


-Shehjar



Kind regards / Met vriendelijke groet,
Jonas Bulthuis


Shehjar Tikoo wrote:

Hi

Due to time constraints, booster has gone untested for the last couple
of months. I suggest using unfsd over fuse for the time
being. We'll be releasing an alpha of the NFS translator
somewhere in March. Let me know if you'd be interested in doing
early testing?

Thanks
-Shehjar

Jonas Bulthuis wrote:

Hello,

I'm using Gluster with cluster/replicate on two servers. On each of
these servers I'm exporting the replicated volume through the UNFSv3
booster provided by Gluster.

Multiple nfs clients are using these storage servers and most of the
time it seems to work fine. However, sometimes the clients give error
messages about a 'Stale NFS Handle' when trying to get a directory
listing of some directory on the volume (not all directories gave this
problem). Yesterday it happened after reinstalling the client machines.

All the client machines had the same problem. Rebooting the client
machines did not help. Eventually, restarting the UNFSv3 server solved
the problem.

At least the problem disappeared for now, but, as it happened twice in a
short time now, it seems likely that it will occur again.

Does anyone have any suggestion on how to permanently solve this problem?


This is the nfs booster configuration we're currently using:

/etc/glusterfs/cache_acceptation-tcp.vol /nfsexport_acceptation
glusterfs
subvolume=cache_acceptation,logfile=/usr/local/var/log/glusterfs/booster_acceptation.log,loglevel=DEBUG,attr_timeout=0



Any help will be very much appreciated. Thanks in advance.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS problem

2010-02-04 Thread Shehjar Tikoo

Hi

Due to time constraints, booster has gone untested for the last couple
of months. I suggest using unfsd over fuse for the time
being. We'll be releasing an alpha of the NFS translator
somewhere in March. Let me know if you'd be interested in doing
early testing?

Thanks
-Shehjar

Jonas Bulthuis wrote:

Hello,

I'm using Gluster with cluster/replicate on two servers. On each of
these servers I'm exporting the replicated volume through the UNFSv3
booster provided by Gluster.

Multiple nfs clients are using these storage servers and most of the
time it seems to work fine. However, sometimes the clients give error
messages about a 'Stale NFS Handle' when trying to get a directory
listing of some directory on the volume (not all directories gave this
problem). Yesterday it happened after reinstalling the client machines.

All the client machines had the same problem. Rebooting the client
machines did not help. Eventually, restarting the UNFSv3 server solved
the problem.

At least the problem disappeared for now, but, as it happened twice in a
short time now, it seems likely that it will occur again.

Does anyone have any suggestion on how to permanently solve this problem?


This is the nfs booster configuration we're currently using:

/etc/glusterfs/cache_acceptation-tcp.vol /nfsexport_acceptation
glusterfs
subvolume=cache_acceptation,logfile=/usr/local/var/log/glusterfs/booster_acceptation.log,loglevel=DEBUG,attr_timeout=0


Any help will be very much appreciated. Thanks in advance.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] maxing out cpu

2010-01-08 Thread Shehjar Tikoo

The callgrind output can be opened using kcachegrind. See the attached
png for what I observe here. For some reason, inode_path is consuming an
unusual amount of CPU. This could be a resurfacing of a bug we thought
was fixed by an earlier patch. Do you still have the logs for this
particular run? Those are what we need to start looking deeper.


Thanks
-Shehjar


Dave Hall wrote:

Hi again,


On Wed, 2010-01-06 at 14:38 +0530, Shehjar Tikoo wrote:
You can do this by using the following command to start up the 
glusterfs client process:


$ valgrind --tool=callgrind  -f  -L NONE --no-daemon




Done. See attached.  I must admit that I don't have much experience
with valgrind.  I let it run for a few hours while I did something
else. When it was running, the valgrind process was maxing out a CPU, and
"cd /path/to/mount" and other operations hung.  I started it with:
find -type f > /dev/null

Cheers

Dave


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] maxing out cpu

2010-01-06 Thread Shehjar Tikoo

Dave Hall wrote:

Hi Raghavendra,

Thanks for the response.  I played with it a bit today, but I don't
have any good news - see below.

On Tue, 2010-01-05 at 19:18 +0400, Raghavendra G wrote:
* Do you run into same problems without performance translators 
(write-behind, io-cache)?


Unfortunately it does it with both disabled.  I ran tcpdump on the
machine which was playing up and there didn't appear to be any
traffic, either locally or to the other servers.


If you don't face any problem after removing performance
translators, Is it possible to pin point which among write-behind
and io-cache is causing the problem (Again by retaining only one
of write-behind/io-cache)? Also is it possible to find out
whether the client or server process that is using too much of
cpu ("top" may help here)?


Sorry, I should have been clearer: it is the glusterfs client.
glusterfsd never seems to go above 10% even under load. It is only
one glusterfs process, and it maxes out a whole core.



Is it always the same client that is showing this problem
or do the clients on all machines have CPU usage shoot-ups?

If it is always the same machine, will it be possible to run the
"glusterfs" client process under valgrind on this system? It'll slow
it down but should help us figure out where the CPU usage is going.
You can do this by using the following command to start up the
glusterfs client process:

 $ valgrind --tool=callgrind  -f  -L NONE --no-daemon

This will run the glusterfs process in the foreground while you
can access the mount point from a different shell session.
You can then monitor the CPU while this is running and press
CTRL+C for valgrind once the high CPU usage phase has passed.

Then mail us the *.out file generated by valgrind.
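
Spelled out fully, that comes to something like the following (the glusterfs
binary and volfile paths below are only examples; use the ones from your
install):

 $ valgrind --tool=callgrind /usr/local/sbin/glusterfs -f /etc/glusterfs/glusterfs.vol -L NONE --no-daemon

callgrind writes its data to a callgrind.out.<pid> file in the current
directory; that is the file to send.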

-Shehjar


Cheers

Dave

___ Gluster-users
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Slow unfs3 + booster

2009-09-17 Thread Shehjar Tikoo

Justice London wrote:

I am having issues with slow writes to a gluster replicated setup using
booster, on the order of about 800KB/sec. When writing a file to a fuse
mounted version of the same filesystem I am able to write, of course, at many,
many times that speed. Has anyone gotten this to successfully work at this
point? If so, with what changes or configs? I have tried both the standard
and modified versions of unfs3.

 


Do you have write-behind in the booster volfile?

Which release of GlusterFS?

With what wsize value are you mounting the NFS server at the client?
The preferable size is 64KiB, but I think the Linux client default is 32KiB.

Use the -o wsize=65536 with the mount command to increase this value.

Please tell me the output of:
$ cat /proc/sys/sunrpc/tcp_slot_table_entries

The default is 16 on Linux. Try increasing this to say 64 first, and
then to 128. Do tell me if this increases the throughput.
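
For example, on the client (the server name, volume and mount point below are
placeholders):

 $ echo 64 > /proc/sys/sunrpc/tcp_slot_table_entries
 $ mount -o vers=3,wsize=65536,rsize=65536 server:/volume /mnt/nfs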

-Shehjar



Justice London
E-mail:  jlon...@lawinfo.com






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to do.  
It causes the same problem.  Basically a single server exports 2 volumes 
and the client imports the 2 volumes and runs DHT over them.


Hi

The fix will be available in a day, at most. Please track the bug
here:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=260

Thanks
-Shehjar


Thanks,

- Wei

Shehjar Tikoo wrote:

Wei Dong wrote:

Hi All,

I'm experiencing a problem with booster when the server-side nodes have
more than one volume exported.  The symptom is that when I run "ls
MOUNT_POINT" with booster, I get something like the following:


ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported 
separately.



The problem only appears when I use booster.  Nothing seems to go
wrong when I mount GlusterFS.  Also, everything is fine if I only
export one brick from each server.  There are also no warnings or errors
in the log file in any of these cases.


Does anyone have an idea of what's happening?


Please post the contents of booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.

If the log file is available, do post that also.

Thanks
-Shehjar



- Wei
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to do.  
It causes the same problem.  Basically a single server exports 2 volumes 
and the client imports the 2 volumes and runs DHT over them.



Ok thanks. I am looking into this now.

-Shehjar


Thanks,

- Wei

Shehjar Tikoo wrote:

Wei Dong wrote:

Hi All,

I'm experiencing a problem with booster when the server-side nodes have
more than one volume exported.  The symptom is that when I run "ls
MOUNT_POINT" with booster, I get something like the following:


ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported 
separately.



The problem only appears when I use booster.  Nothing seems to go
wrong when I mount GlusterFS.  Also, everything is fine if I only
export one brick from each server.  There are also no warnings or errors
in the log file in any of these cases.


Does anyone have an idea of what's happening?


Please post the contents of booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.

If the log file is available, do post that also.

Thanks
-Shehjar



- Wei
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] libglusterfsclient

2009-09-14 Thread Shehjar Tikoo

David Saez Padros wrote:

Hi


In the long run, we'd really prefer applications using booster
since that avoids the need to use the custom libglusterfsclient
API. Just slip booster under an app and it works. However, the
disadvantage is that booster does not at this time support all
the system calls an application might need. For a full list, see:


http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page




I suppose that simple applications like cp, mv, rm & mkdir are
fully supported, is that right?




I hope so. ;)
I can't comment on such general tools because all our testing focus
wrt booster has been on unfs3 and apache, both of which work.

Do try these out for us if convenient. We'd really like bug reports
about extending booster to support them.
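
A quick smoke test along these lines would already be useful (the library
path and the mount path below are only examples; use your own booster VMP):

 $ export LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
 $ mkdir /mnt/glusterfs/testdir
 $ cp /etc/hosts /mnt/glusterfs/testdir/
 $ mv /mnt/glusterfs/testdir/hosts /mnt/glusterfs/testdir/hosts.bak
 $ rm /mnt/glusterfs/testdir/hosts.bak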

Thanks
Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster

2009-09-14 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

On Mon, 14 Sep 2009 11:40:03 +0530 Shehjar Tikoo
 wrote:


We only tried to run some bash scripts with preloaded
booster...

Do you mean the scripts contained commands with LD_PRELOADed
booster? Or were you trying to run bash with LD_PRELOADed booster?

The second scenario will not work at this point.

Thanks -Shehjar


Oh, that's bad news. We tried to LD_PRELOAD booster in front of bash
(implicitly: we called the bash script with LD_PRELOAD set). Is this a
general problem or a not-yet-implemented feature?



A general problem, I'd say. The last time, i.e. when we revamped
booster, we tried running with bash but there was some clash with bash
internals.

We haven't done anything special to fix the problem since then because:

1. it requires changes deep inside GlusterFS, and

2. running bash wasn't a very useful scenario when the LD_PRELOAD
variable can be added for the bash environment as a whole. For example,
if you just do "export LD_PRELOAD=" on the command line, you can
actually have every program started from that shell use booster.
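
For instance (the library path below is only an example; it depends on
where glusterfs was installed):

 $ export LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
 $ ls -l /mnt/glusterfs

Every command started from that shell after the export, ls included, runs
with booster preloaded.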

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:

Hi All,

I'm experiencing a problem with booster when the server-side nodes have
more than one volume exported.  The symptom is that when I run "ls
MOUNT_POINT" with booster, I get something like the following:


ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported separately.


The problem only appears when I use booster.  Nothing seems to go wrong
when I mount GlusterFS.  Also, everything is fine if I only export one
brick from each server.  There are also no warnings or errors in the log
file in any of these cases.


Does anyone have an idea of what's happening?


Please post the contents of booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.

If the log file is available, do post that also.
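
For reference, a booster FSTAB line has this shape (all values below are
placeholders):

 /etc/glusterfs/client.vol /mnt/booster glusterfs subvolume=dht0,logfile=/var/log/glusterfs/booster.log,loglevel=DEBUG,attr_timeout=0

The subvolume= field is the part I'm after.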

Thanks
-Shehjar



- Wei
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster with apache

2009-09-13 Thread Shehjar Tikoo

Alan Ivey wrote:
Can someone post some configuration files for using booster with 
apache?


I understand how booster works, but I'm curious to see how to
implement it with the CentOS init.d scripts, specifically for
httpd.


In general, it'd be great to see some real-world examples of how 
booster can be paired with Apache. Thanks.




Hi,

Somehow we missed creating a doc for booster+Apache while we
created the same for unfsd and webdav. Give us a couple of days
and we'll have a page ready on the wiki.
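
Until then, the rough idea on a stock CentOS box is just to preload the
booster library before httpd starts, e.g. by adding a line like this to
/etc/sysconfig/httpd (the library path is only an example; use the one from
your install):

 export LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so

and then restarting httpd with "service httpd restart".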

Thanks
-Shehjar








___ Gluster-users 
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] very low file creation rate with glusterfs -- result updates

2009-09-13 Thread Shehjar Tikoo

Wei Dong wrote:
By using booster, I actually avoid being root on the client side.
It would be perfect if the servers could also be run by regular
users, even if that means that some features have to be dropped.
Can someone explain a little bit why the server side must be run by
root?


There are plenty of reasons why the FUSE approach needs to be run
as root but I am sure others more familiar with FUSE can do a far
better job of explaining exactly why.

With regards to booster, we do not need root since all
file system operations are basically being translated by
libglusterfsclient into network IO operations so the kernel's
file system API is almost completely bypassed.

Know that libglusterfsclient is the library used internally
by booster.



I know that I should not ask for too much when the robustness of
the current codebase is the most important issue at this time.  I just
want to hear a story about that and maybe hack the code myself.



Please don't hesitate to ask any question about Gluster. We'll try to
answer as well as we can, given the time and other constraints.

Thanks
-Shehjar


- Wei

Wei Dong wrote:
I think it is fuse that causes the slowness.  I ran all 
experiments with booster enabled and here's the new figure: 
http://www.cs.princeton.edu/~wdong/gluster/summary-booster.gif . 
The numbers are MUCH better than NFS in most cases except for the
local setting, which is not practically interesting.  The
interesting thing is that all of a sudden, the deletion rate dropped
by 4-10 times -- though I don't really care about file deletion.


I must say that I'm totally satisfied by the results.

- Wei


Wei Dong wrote:

Hi All,

I complained about the low file creation rate with glusterfs on my
cluster weeks ago and Avati suggested I start with a small number of
nodes.  I finally got some time to seriously benchmark glusterfs with
Bonnie++ today and the results confirm that glusterfs is indeed slow
in terms of file creation.  My application is to store a large number
of ~200KB image files.  I use the following bonnie++ command for
evaluation (create 10K files of 200KiB each scattered under 100
directories):

bonnie++ -d . -s 0 -n 10:20:20:100

Since sequential I/O is not that interesting to me, I only keep
 the random I/O results.

My hardware configuration is 2 x quad-core Xeon E5430 2.66GHz,
16GB memory, 4 x Seagate 1500GiB 7200RPM hard drives.  The
machines are connected with gigabit Ethernet.


I ran several GlusterFS configurations, each named N-R-T,
where N is the number of replicated volumes aggregated, R is
the number of replicas and T is the number of server-side I/O
threads.  I use one machine to serve one volume so there are NxR
 servers and one separate client running for each experiment. 
On the client side, the server volumes are first replicated and
 then aggregated -- even with 1-1-2 configuration, the single 
volume is wrapped by a replicate and a distribute translator. 
To show the overhead of those translators, I also run a 
"simple" configuration which is 1-1-2 without the extra 
replicate & distribute translators, and a "local" configuration
 which is "simple" with client & server running on the same 
machine.  These configurations are compared to "nfs" and 
"nfs-local", which is NFS with server and client on the same 
machine.  The GlusterFS volume file templates are attached to 
the email.


The result is at 
http://www.cs.princeton.edu/~wdong/gluster/summary.gif .  The 
bars/numbers shown are operations/second, so the larger the 
better.


Following are the messages shown by the figure:

1. GlusterFS is doing an exceptionally good job of deleting files,
but creates and reads files much more slowly than both NFS setups.

2. At least for the one-node server configuration, the network does
not affect the file creation rate but does affect the file read rate.

3. The extra dummy replicate & distribute translators lower the file
creation rate by almost half.

4. Replication doesn't hurt performance a lot.

5. I'm running only a single-threaded benchmark, so it's hard to say
much about scalability, but adding more servers does help a little
bit even in the single-threaded setting.


Note that my results are not really that different from 
http://gluster.com/community/documentation/index.php/GlusterFS_2.0_I/O_Benchmark_Results,
 where the single node configuration file create rate is about 
30/second.


I see no reason why GlusterFS has to be that much slower than NFS in
file creation in a single-node configuration.  I'm wondering if
someone here can help me figure out what's wrong in my
configuration or what's wrong in the GlusterFS implementation.


- Wei

Server volume:

volume posix
type storage/posix
option directory /state/partition1/wdong/gluster
end-volume

volume lock
type features/locks
subvolumes posix
end-volume

volume brick
type performance/io-threads
option thread-count 2
subvolumes lock
end-volume

volume server
type protocol/server
option transport-type tcp
option auth.

Re: [Gluster-users] libglusterfsclient

2009-09-13 Thread Shehjar Tikoo

David Saez Padros wrote:

Hi

Is libglusterfsclient a library that one can use to build
applications that read/write to glusterfs file systems directly,
bypassing fuse? If so, where can I find documentation/examples on
how to use it?




That is correct. It is like a user-space API. It allows one to build
or customize applications to avoid FUSE and kernel FS API.

There is no documentation for libglusterfsclient at this point but
booster is a user of libglusterfsclient so that is the first place
to look into. Here are some more pointers on how to use it.

When going through booster, do ignore the booster-specific code
related to mapping glusterfs file handles to POSIX integer file
descriptors and also the code for reading the FSTAB files.

To learn how to instantiate a libglusterfsclient context, see
booster/src/booster.c:booster_mount. It shows how to build a
glusterfs_init_params_t structure and pass it to glusterfs_mount.

Once the context is initialized in libglusterfsclient, you can use the
functions with names starting with glusterfs_* or glusterfs_glh_*.

The difference between the two types of API is that the first one,
i.e. glusterfs_ type does not require the application to pass
a handle. A handle here is just an identifier to tell
libglusterfsclient which server(s) it should be talking to.
The reason this first API does not require a file handle is because
libglusterfsclient maintains an internal mapping for you. This mapping
is between a path prefix and the corresponding handle. So for eg, if
you call glusterfs_mount as:

glusterfs_mount ("/test/gluster", &ipars);

where ipars is a glusterfs_init_params_t, any subsequent file system
operation through the API that occurs on the /test/gluster path will
be mapped to go over the handle stored internally for this path
prefix. This of course tells you that libglusterfsclient only supports
working with absolute paths.

The above approach is preferable when an application needs to use
multiple volfiles.

In contrast, if you're not interested in the above approach, you
use glusterfs_init API to initialize a handle and then use this
handle to operate on the files on the servers.

In the long run, we'd really prefer applications using booster since
that avoids the need to use the custom libglusterfsclient API. Just 
slip booster under an app and it works. However, the disadvantage is
that booster does not at this time support all the system calls an 
application might need. For a full list, see:


http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page

-Shehjar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster

2009-09-13 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

Hello all,

we would like to try a simple booster configuration. Reading the 
docs we found this:


(http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page)





"Still, applications that need to use the old approach, can do so 
by simply mounting GlusterFS and pointing the application to 
perform its file system operations over that mount point. In order 
to utilize the mount-point bypass approach, booster will have to be
LD_PRELOAD'ed using the instructions below but without requiring 
the booster configuration file or setting the environment 
variables. "


Does that mean that if we provide a classical gluster mount over e.g.
/mnt/glusterfs we do not have to specify any
GLUSTERFS_BOOSTER_FSTAB in the environment at all to redirect fs actions on
/mnt/glusterfs? Of course we tried, but were unsuccessful apart from
some core dumps (no matter whether a booster config file was provided or not).





Yes. That is expected to work. Please try defining the following
env variable. It will give us some debugging output to work with.

export GLUSTERFS_BOOSTER_LOG=
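
For example (any writable path will do):

 export GLUSTERFS_BOOSTER_LOG=/tmp/booster.log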

If this does not help, let's test again with 2.0.7. The rc1
should be out later today. In the past few days we've fixed a few
causes of crashes in booster.




We only tried to run some bash scripts with preloaded booster...



Do you mean the scripts contained commands with LD_PRELOADed
booster? Or were you trying to run bash with LD_PRELOADed booster?

The second scenario will not work at this point.

Thanks
-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] feature question

2009-09-06 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

Hello all,

after testing around in real world scenarios we found it would be a great
feature if glusterfs could cache whole files that are read-only opened (or
rw-opened and not modified) on the clients. We have lots of setups with small
to medium size config files that have to be read through every few minutes -
and almost never change. Simple timestamp comparison would probably be enough
to decide whether some file has to be transmitted again or if the cached
version is up-to-date.



Is it currently possible to create a setup like this?

Some part of what you mention above is already performed by
the io-cache translator, but there are other aspects that are
yet to be implemented, for example, longer-term caching of read-only
files. Do try it out and let us know how it works for you.
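
For reference, loading io-cache on the client side is just a matter of a
section like the following in the client volfile (the names and values here
are only illustrative; point subvolumes at your top-level client volume):

volume cache
type performance/io-cache
option cache-size 64MB
option cache-timeout 1
subvolumes dist
end-volume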

-Shehjar

Would this be a neat future feature?



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] crash in __socket_ioq_new

2009-08-31 Thread Shehjar Tikoo

Chetan Ahuja wrote:

I posted this on irc (#gluster) a couple of times in the last few days but
got no response. Trying my luck here:

  I'm seeing this crash in 2.0.1 server codebase
 #4  0x7f6a42848a56 in free () from /lib/libc.so.6
 #5  0x7f6a4157ff99 in __socket_ioq_new (this=,
buf=0x1 ,
vector=0x7f6a3c019db0,   count=1006736528, iobref=0x0) at socket.c:313
 #6  0x7f6a41581c08 in socket_event_handler (fd=5883, idx=5883,
data=0x6, poll_in=-1, poll_out=1116630944, poll_err=0) at socket.c:796

  This happened on a volume with 4 bricks: two subvolumes of two replicas
each, with the final volume distributed over the replicas. There is a large
set of clients writing to these volumes. The write patterns are such that
all clients are writing into different files. In other words, no two clients
are writing simultaneously to the same file at any time.

  I haven't seen this stack trace in either the mailing list archives or
bugzilla, and I don't see any code changes in the relevant code in 2.0.6
either.  If the developers would confirm whether it's a known bug and/or has
a known workaround, I'd really appreciate that.



Hi

A bug in this code path was fixed in 2.0.1. For example, see:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=29

Please confirm if you continue to observe the crash in subsequent
releases.
Thanks for reporting anyway.

-Shehjar


Thanks a lot
Chetan





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS replacement

2009-08-31 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:
On Mon, 31 Aug 2009 19:48:46 +0530 Shehjar Tikoo 
 wrote:



Stephan von Krawczynski wrote:

Hello all,

after playing around for some weeks we decided to make some real world
tests with glusterfs. Therefore we took an nfs client and mounted the
very same data with glusterfs. The client does some logfile processing
every 5 minutes and needs around 3,5 mins runtime in an nfs setup. We
found out that it makes no sense to try this setup with gluster
replicate as long as we do not have the same performance in a
single-server setup with glusterfs. So now we have one server mounted
(halfway replicate) and would like to tune performance. Does anyone have
experience with a simple replacement like that? We had to find out that
almost all performance options have exactly zero effect. The only thing
that seems to make at least some difference is read-ahead on the server.
We end up with around 4,5 - 5,5 minutes runtime of the scripts, which is
on the edge as we need something quite below 5 minutes (just like nfs
was). Our goal is to maximise performance in this setup and then try a
real replication setup with two servers. The load itself looks like
around 100 scripts starting at one time and processing their data.

Any ideas?


What nfs server are you using? The in-kernel one?


Yes.

You could try the unfs3booster server, which is the original unfs3 
with our modifications for bug fixes and slight performance 
improvements. It should give better performance in certain cases 
since it avoids the FUSE bottleneck on the server.


For more info, do take a look at this page: 
http://www.gluster.org/docs/index.php/Unfs3boosterConfiguration


When using unfs3booster, please use GlusterFS release 2.0.6 since 
that has the required changes to make booster work with NFS.


I read the docs, but I don't understand the advantage. Why should we
use nfs as a kind of transport layer to an underlying glusterfs
server, when we can easily export the service (i.e. glusterfs)
itself? Remember, we don't want nfs on the client any longer, but a
replicate setup with two servers (though we do not use it right now,
it nevertheless stays our primary goal).


Ok. My answer was simply under the impression that moving to NFS
was the motive. unfs3booster-over-gluster is a better solution than
kernel-nfs-over-gluster because it avoids the FUSE layer completely.


It sounds obvious to me that nfs-over-gluster must be slower than a pure kernel-nfs. On the
 other hand glusterfs per se may even have some advantages on the 
network side, iff performance tuning (and of course the options 
themselves) is well designed. The first thing we noticed is that load
 dropped dramatically both on server and client when not using 
kernel-nfs. Client dropped from around 20 to around 4. Server dropped
 from around 10 to around 5. Since all boxes are pretty much 
dedicated to their respective jobs a lot of caching is going on 
anyways.

Thanks, that is useful information.

So I
would not expect nfs to have advantages only because it is 
kernel-driven. And the current numbers (loss of around 30% in 
performance) show that nfs performance is not completely out of 
reach.

That is true, we do have setups performing as well and in some
cases better than kernel NFS despite the replication overhead. It
is a matter of testing and arriving at a config that works for your
setup.




What advantages would you expect from using unfs3booster at all?


To begin with, unfs3booster must be compared against kernel nfsd and not
against a GlusterFS-only config. So when comparing with kernel-nfsd, one
should understand that knfsd involves the FUSE layer, the kernel's VFS and
network layer, all of which have their advantages and also
disadvantages, especially FUSE when used with the kernel nfsd. Those
bottlenecks in the FUSE+knfsd interaction are well documented elsewhere.

unfs3booster enables you to avoid the FUSE layer, the VFS, etc and talk
directly to the network and through that, to the GlusterFS server. In
our measurements, we found that we could perform better than kernel
nfs-over-gluster by avoiding FUSE and using our own caching(io-cache),
buffering(write-behind, read-ahead) and request scheduling(io-threads).

Another thing we really did not understand is the _negative_ effect 
of adding iothreads on client or server. Our nfs setup needs around 
90 nfs kernel threads to run smoothly. Every number greater than 8 
iothreads reduces the performance of glusterfs measurably.




The main reason why knfsds need a higher number of threads is simply
because knfsd threads are highly io-bound, that is, they wait for the
disk IO to complete in order to serve each NFS request.

On the other hand, with io-threads, the right number actually depends on
the point at which io-threads are being used. For example, if you're using
io-threads just above the posix or features/locks, the s

Re: [Gluster-users] NFS replacement

2009-08-31 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

Hello all,

after playing around for some weeks we decided to make some real world tests
with glusterfs. Therefore we took a nfs-client and mounted the very same data
with glusterfs. The client does some logfile processing every 5 minutes and
needs around 3,5 mins runtime in a nfs setup.
We found out that it makes no sense to try this setup with gluster replicate
as long as we do not have the same performance in a single server setup with
glusterfs. So now we have one server mounted (halfway replicate) and would
like to tune performance.
Does anyone have experience with some simple replacement like that? We had to
find out that almost all performance options have exactly zero effect. The
only thing that seems to make at least some difference is read-ahead on the
server. We end up with around 4,5 - 5,5 minutes runtime of the scripts, which
is on the edge as we need something quite below 5 minutes (just like nfs was).
Our goal is to maximise performance in this setup and then try a real
replication setup with two servers.
The load itself looks like around 100 scripts starting at one time and
processing their data.

Any ideas?


What nfs server are you using? The in-kernel one?

You could try the unfs3booster server, which is the original unfs3
with our modifications for bug fixes and slight performance
improvements. It should give better performance in certain cases
since it avoids the FUSE bottleneck on the server.

For more info, do take a look at this page:
http://www.gluster.org/docs/index.php/Unfs3boosterConfiguration

When using unfs3booster, please use GlusterFS release 2.0.6 since
that has the required changes to make booster work with NFS.
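
In rough outline the setup looks like the booster examples elsewhere on this
list (all paths below are illustrative): one line in the booster FSTAB
describing the volume,

 /etc/glusterfs/client.vol /exports/gluster glusterfs subvolume=replicate0,logfile=/var/log/glusterfs/booster.log,loglevel=ERROR,attr_timeout=0

a matching /etc/exports entry for that virtual mount point,

 /exports/gluster client-host(rw,no_root_squash)

and the unfs3booster unfsd started with the booster library preloaded, so
its file system calls go straight to GlusterFS. The page above has the
exact unfsd invocation.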

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster

2009-08-26 Thread Shehjar Tikoo

Tim Runion - System Administrator wrote:

I also wanted to add to this post about booster. I have tried
booster with apache 2.2



Which GlusterFS release are you using?

Which tool or test against apache results in a core dump?

-Shehjar




LD_PRELOAD="/usr/local/lib/glusterfs/glusterfs-booster.so" /usr/local/apache2/bin/httpd




Apache does load but I’m getting core dumps.



Maybe I don’t have apache compiled correctly with glusterfs?



Anyone that is using booster with Apache, please post your
configure statement for apache.



Thanks,



Tim



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] double traffic usage since upgrade?

2009-08-17 Thread Shehjar Tikoo

Mark Mielke wrote:

Possibly relevant here -

At work, we have used a tool which does something similar to
booster to accelerate an extremely slow remote file system. It
works the same way with LD_PRELOAD, however, it also requires GLIBC
to be compiled with --disable-hidden-plt. Reviewing the Internet
for similar solutions, will find PlasticFS which also has the same
requirement.

Recent versions of GLIBC call open() internally without following
the regular PLT name resolution model. This increases
performance, as the PLT indirect lookup model has an expense
associated with it. For example, GLIBC fopen() calls open() directly rather
than going through the PLT. So, overriding open() does not
intercept calls to fopen()?

Is this something the booster developers are aware of? Have they
found a way around this, or is it possible that booster is only
boosting *some* types of access, and other types of access are
still "falling through" to FUSE?

I've asked the developer who wrote our library what he thought of
glusterfs/booster not requiring GLIBC with --disable-hidden-plt,
and he thinks glusterfs/booster cannot be working (or cannot be
intercepting all calls, and some calls are leaking through to FUSE).
Comments?

If some calls were leaking through, this might have the "double
traffic" effect, since FUSE would have its own cache separate from
booster?



I don't know what a PLT is but I'll attempt to provide some clarity here.

It is true that booster does not support or boost all system calls.
We do not require that glibc be built with --disable-hidden-plt
for those calls which we do support.
For a start, we've aimed at getting apache and unfs3 to work with
booster. The functional support for both in booster is complete in
the 2.0.6 release.

For a list of system calls supported by booster, please see:
http://www.gluster.org/docs/index.php/BoosterConfiguration

There can be applications which need un-boosted syscalls also to be
usable over GlusterFS. For such a scenario we have two ways booster
can be used. Both approaches are described at the page linked above
but in short, you're right in thinking that when the un-supported
syscalls are also needed to go over FUSE, we are, as you said, leaking
or redirecting calls over the FUSE mount point.
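
In that mode the setup is simply a normal FUSE mount plus the preload,
roughly (the volfile, mount point and library path are examples):

 $ glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs
 $ LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so ls -l /mnt/glusterfs

Boosted calls are served by libglusterfsclient directly; anything booster
does not handle falls through to the FUSE mount at the same path.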

That page is a bit long so feel free to ask any questions here.

Thanks
-Shehjar



Cheers, mark



On 08/14/2009 01:22 PM, Anand Avati wrote:

I've been running 2.0.3 with two backend bricks and a frontend
client of mod_gluster/apache 2.2.11+worker for a few weeks now
without much issue. Last night I upgraded to 2.0.6 only to find
out that mod_gluster has been removed and it is recommended to
use the booster library - which is fine, but I didn't have time
to test it last night, so I just mounted the whole filesystem
with a fuse mount and figured I'd test the booster config later
and then swap.  I did try running the 2.0.3 mod_gluster module
with the 2.0.6 bricks but apache kept segfaulting (every 10
seconds) and then would spawn another process which would
reconnect and keep going.  I figured it was dropping a client
request every few seconds, which is why I went with the fuse
mount until I could test the booster library.



That would not work, swapping binaries across versions.



Well, before with mod_gluster, we would be pushing around
200mbit of web traffic and it would evenly distribute that
200mbit between our two bricks - so server1 would be pushing
100mbit and server2 would be pushing another 100mbit.
Basically both inbound from the backend bricks and outbound
from apache were basically identical.  Except of course, if one
of the backend glusterd processes died for whatever reason, the
other remaining brick would take the whole load and its traffic
would double, as you would expect. Perfect, all was happy.

Now using gluster 2.0.6 and fuse, both server bricks are pushing
the full 200mbit of traffic - so I basically have 400mbit of
incoming traffic from the gluster bricks but the same 200mbit
of web traffic.  I can deal, but I only have a shared gigabit
link between my client server and the backend bricks and I'm already
eating up basically 50% of that pipe.  It is also putting a
much larger load on both bricks since I have basically doubled
the disk IO time and traffic.  Is this a feature? A bug?



If I understand correctly, 2.0.3 mod_glusterfs = 1x, 2.0.6 fuse =
2x? Can you describe the files being served? (average file size
and number of files)

Avati ___ 
Gluster-users mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users










___ Gluster-users
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/

Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

2009-08-06 Thread Shehjar Tikoo

Hi

I've filed a bug report which you can track at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=194

Please do add your email to the CC list and also upload the volfiles
which you're using with this test.

Thanks
Shehjar


Somsak Sriprayoonsakul wrote:

The behavior is still the same


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

2009-08-03 Thread Shehjar Tikoo

Somsak Sriprayoonsakul wrote:

I have run html read benchmark test using exactly the same old testset.

- Runing normal httpd (prefork) over NFS-sync option - Average TPS ~ 1500,
Peak TPS ~ 3500

- Running normal httpd (prefork) using booster - Average TPS < 500, Peak TPS
~ 1200, a slight improvement, but each HTTPD still eats up about 50MB+ (and keeps
increasing) of memory instead of about 20MB without booster. I had to cut the
number of httpd processes by half, but the processes still used up some swap
space.

- Running httpd.worker using booster - The benchmark result is very low and
the error rate is very high. There seems to be some trouble in this mode.
 - Here's what I did:
  - Changed from httpd to httpd.worker in /etc/sysconfig/httpd
  - Disabled PHP4
  - Set LD_PRELOAD in /etc/init.d/httpd, then started httpd
 - HTTPD starts correctly and seems to work ok, but not all the time. Two
consecutive wgets on exactly the same URL yield different results.



Hi

I can see a few reasons why this could happen with recent
releases. If possible, could you please try the same test
with the release below. In this release, I've fixed a few bugs that
could result in behaviour seen in your tests.

http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-2.0.6rc2.tar.gz

Thanks
Shehjar




[r...@compute-0-9 ~]# wget -v --header "Host: www.myhost.local" -O /dev/null
http://c0-3/cafe/siam/topic/F7800428/F7800428.html
--15:52:12--  http://c0-3/cafe/siam/topic/F7800428/F7800428.html
Resolving c0-3... 10.1.255.251
Connecting to c0-3|10.1.255.251|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 264373 (258K) [text/html]
Saving to: `/dev/null'

100%[>]
264,373 --.-K/s   in 0.003s

15:52:12 (83.3 MB/s) - `/dev/null' saved [264373/264373]

[r...@compute-0-9 ~]# wget -v --header "Host: www.myhost.local" -O /dev/null
http://c0-3/cafe/siam/topic/F7800428/F7800428.html
--15:52:15--  http://c0-3/cafe/siam/topic/F7800428/F7800428.html
Resolving c0-3... 10.1.255.251
Connecting to c0-3|10.1.255.251|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: `/dev/null'

[
<=>
] 0   --.-K/s   in 0s

15:52:15 (0.00 B/s) - `/dev/null' saved [0/0]

[r...@compute-0-9 ~]#

  (Note the returned content-length. Returned contents, when corrected, is
ok)

Here's the error log in booster log file

(First wget)

[2009-08-03 15:36:21] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /usr/home/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html: /usr/home/
[2009-08-03 15:36:21] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html) to
2626532/2650924
[2009-08-03 15:36:21] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /usr/home/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html: /usr/home/
[2009-08-03 15:36:21] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html) to
2626532/2650924

(second wget)

[2009-08-03 15:36:26] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /usr/home/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html: /usr/home/
[2009-08-03 15:36:26] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html) to
2626532/2650924
[2009-08-03 15:36:26] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /usr/home/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html: /usr/home/
[2009-08-03 15:36:26] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/www/
www.myhost.com/webdoc/cafe/siam/topic/F7800428/F7800428.html) to
2626532/2650924

In the server logs, there are just connect-destroy-connection messages.
Nothing looks particularly wrong to me.

No such failure occurs with Apache+Prefork.

2009/8/2 Somsak Sriprayoonsakul 


After moving the embedded configuration file out of glusterfs, httpd now
boots up ok and the web seems to work.

However, comment posting is not working. It seems that the code that does the
html modification is not working in the GlusterFS context. I found that the code
modifies the web page locally instead. So I think the workaround for my case is to
mount glusterfs with fuse at the same path as booster.

I will give it another benchmark and see how it goes.

2009/8/2 Somsak Sriprayoonsakul 

Ok, I have a chance to run booster over 2.0.4

Please find the attach file for my configuration

I did configure booster and tried a simple ls over my Gluster file system.
Here's the output of l

Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

2009-08-02 Thread Shehjar Tikoo

Somsak Sriprayoonsakul wrote:

Ok, I have a chance to run booster over 2.0.4



Have you tried configuring booster with the help doc available at:
http://www.gluster.org/docs/index.php/BoosterConfiguration

-Shehjar



Please find the attach file for my configuration

I did configure booster and tried a simple ls over my Gluster file system. Here's
the output of ls -al

[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
ls -l /gluster/www/
ls: /gluster/www/: Invalid argument
ls: /gluster/www/members.pantip.com: Invalid argument
ls: /gluster/www/cafe.pantip.com: Invalid argument
ls: /gluster/www/admin.pantip.com: Invalid argument
ls: /gluster/www/www.pantip.com: Invalid argument
ls: /gluster/www/passwd3.sql: Invalid argument
ls: /gluster/www/passwd.sql: Invalid argument
ls: closing directory /gluster/www/: File descriptor in bad state
total 129972
drwxr-xr-x  3 root   root   8192 May 11 16:13 admin.pantip.com
drwxr-xr-x  5 root   root   8192 May 18 11:11 cafe.pantip.com
drwxr-xr-x  3 root   root   8192 May 11 18:48 members.pantip.com
-rw-r--r--  1 root   root   66654820 May 18 10:50 passwd3.sql
-rw-r--r--  1 root   root   66225769 May 18 10:33 passwd.sql
drwxr-xr-x 11 apache apache 8192 May 18 09:47 www.pantip.com
[r...@compute-0-3 ~]#

[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
cp /etc/issue /gluster/
[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
ls -l /gluster/issue
ls: /gluster/issue: Invalid argument
-rw-r--r-- 1 root root 47 Aug  2 14:57 /gluster/issue
[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
cat /gluster/issue
CentOS release 5.3 (Final)
Kernel \r on an \m

[r...@compute-0-3 ~]#


Despite all those errors, the output seems to be fine

And this is what inside my booster.log

[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:56:27] E [libglusterfsclient.c:4194:__glusterfs_stat]
libglusterfsclient: path lookup failed for (/hosts)
[2009-08-02 14:56:37] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:57:00] E [libglusterfsclient.c:4194:__glusterfs_stat]
libglusterfsclient: path lookup failed for (/issue)
[2009-08-02 14:57:07] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value

Then, I tried to LD_PRELOAD apache (prefork). I changed the target from
/gluster to /usr/home instead (the web application needs it). Then I tried
to strace the httpd process and found that httpd crashes at the points where
httpd tries to read a configuration file stored on the Gluster volume (bad file
descriptor). I will try to move this configuration file somewhere else
and test again.

2009/7/31 Raghavendra G 


Hi,

On Thu, Jul 30, 2009 at 11:39 AM, Somsak
Sriprayoonsakul wrote:

Thank you very much for your reply.

At the time we used 2.0.3, and yes, we used stock Apache from CentOS. I
will try 2.0.4 very soon to see if it works.

For Booster, it seems not to be working correctly for me. Booster complains
with lots of errors on a plain 'ls' command (but gives the correct output).

Can you mail those errors?

Also, with booster, the Apache process refuses to start. I will try 2.0.4
to see if it improves. If not, I will attach the error log next time.

Logs are very much appreciated.



2009/7/30 Raghavendra G 

Hi Somsak,

Sorry for the delayed reply. Below you've mentioned that you have problems
with apache and booster. Going forward, Apache over booster will be the
preferred approach. Can you tell us what version of glusterfs you are
using?

And as I understand it, you are using apache 2.2, am I correct?

regards,
- Original Message -
From: "Liam Slusser" 
To: "Somsak Sriprayoonsakul" 
Cc: gluster-users@gluster.org
Sent: Saturday, July 25, 2009 3:46:14 AM GMT +04:00 Abu Dhabi / Muscat
Subject: Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

I haven't tried an apples to apples comparison with Apache+mod_gluster vs
Apache+fuse+gluster, however I do run both setups.  I load tested both setups
to verify they could handle 4x our normal daily load and left it at that.
I didn't actually compare the two (although that might be cool to do
so

Re: [Gluster-users] booster unfs with cluster/distribute doesn't work...

2009-07-23 Thread Shehjar Tikoo

Liam Slusser wrote:

I've been playing with booster unfs and found that I cannot get it to work
with a gluster config that uses cluster/distribute.  I am using Gluster
2.0.3...


Thanks. I've seen the stale handle errors while using both
replicate and distribute. The fixes are in the repo but
not part of a release yet. Release 2.0.5 will contain those
changes. In the meantime, if you're really interested, you can
check out the repo as follows:

$ git clone git://git.sv.gnu.org/gluster.git ./glusterfs
$ cd glusterfs
$ git checkout -b release2.0 origin/release-2.0

Also, we've not yet announced it on the list but a customised version
of unfs3 is available at:
http://ftp.gluster.com/pub/gluster/glusterfs/misc/unfs3/0.5/unfs3-0.9.23booster0.5.tar.gz

It has some bug fixes, performance enhancements and work-arounds
to improve behaviour with booster.

Some documentation is available at:
http://www.gluster.org/docs/index.php/Unfs3boosterConfiguration


Thanks
Shehjar





[r...@box01 /]# mount -t nfs store01:/intstore.booster -o
wsize=65536,rsize=65536 /mnt/store
mount: Stale NFS file handle

(just trying it again and sometimes it will mount...)

[r...@box01 /]# mount -t nfs store01:/store.booster -o
wsize=65536,rsize=65536 /mnt/store
[r...@box01 /]# ls /mnt/store
data
[r...@box01 store]# cd /mnt/store/data
-bash: cd: /mnt/store/data/: Stale NFS file handle
[r...@box01 /]# cd /mnt/store
[r...@box01 store]# cd data
-bash: cd: data/: Stale NFS file handle
[r...@box01 store]#

Sometimes i can get df to show the actual cluster, but most times it gives
me nothing.

[r...@box01 /]# df -h
FilesystemSize  Used Avail Use% Mounted on
<>
store01:/store.booster
   90T   49T   42T  54% /mnt/store
[r...@box01 /]#

[r...@box01 /]# df -h
FilesystemSize  Used Avail Use% Mounted on
<...>
store01:/store.booster
 - - -   -  /mnt/store


However, as soon as I remove the cluster/distribute from my gluster client
configuration file it works fine.  (Missing 2/3 of the files, because my
gluster cluster has a "distribute" of 3 volumes on each of the two servers.)

A strace of unfs during one of the cd commands above outputs:

poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000) = 1 ([{fd=22,
revents=POLLIN|POLLRDNORM}])
poll([{fd=22, events=POLLIN}], 1, 35000) = 1 ([{fd=22, revents=POLLIN}])
read(22,
"\200\0\0\230B\307D\234\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\4\0\0\0\1"...,
4000) = 156
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
futex(0x7fff31c7cb20, FUTEX_WAIT_PRIVATE, 1, NULL) = 0
setresgid(-1, 0, -1)= 0
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresuid(-1, 0, -1)= 0
write(22, "\200\0\0
B\307D\234\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0F"..., 36) = 36
poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000) = 1 ([{fd=22,
revents=POLLIN|POLLRDNORM}])
poll([{fd=22, events=POLLIN}], 1, 35000) = 1 ([{fd=22, revents=POLLIN}])
read(22,
"\200\0\0\230C\307D\234\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\4\0\0\0\1"...,
4000) = 156
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresgid(-1, 0, -1)= 0
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresuid(-1, 0, -1)= 0
write(22, "\200\0\0
C\307D\234\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0F"..., 36) = 36
poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000 

With the booster.fstab log level set to debug, this is all that shows up in
the log file:

[2009-07-23 02:52:16] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/) to 1/1
[2009-07-23 02:52:17] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /store.booster/: /store.booster/

my /etc/booster.conf

/home/gluster/apps/glusterfs-2.0.3/etc/glusterfs/liam.conf /store.booster/
glusterfs
subvolume=d,logfile=/home/gluster/apps/glusterfs-2.0.3/var/log/glusterfs/d.log,loglevel=DEBUG,attr_timeout=0

my /etc/exports

/store.booster myclient(rw,no_root_squash)

my client gluster config (liam.conf):

volume brick1a
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick1a
end-volume

volume brick1b
  type protocol/client
  option transport-type tcp
  option rem

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

This could be tricky, as you don't want to look up too many
alternatives!!
But, as you are doing LD_PRELOAD, can you not ask the application to
specify the paths? (I know it's going to be a little error prone, based on
what the application supplies.)

For example: 

/mnt/glusterfs 
If the application run dir is /mnt/application then, 
../glusterfs is another way to get to the glusterfs. 


So, for both the paths the booster intercepts the calls...


Doing the redirection might be a bit unclean based on
the current working directory. Would it fit your need if we aliased
one VMP to another, so that the underlying GlusterFS is accessible
through both paths?

For example, a new option in booster.conf that says:

vmpalias=/aliased/mount

-Shehjar




Thanks and regards,
Sudipto

-Original Message-----
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, July 20, 2009 11:28 PM

To: Sudipto Mukhopadhyay
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:
Makes sense.
Yes, I am running the same program.
I will be running a couple more tests to verify this.
BTW, two more questions on related topics:


1. How much of performance boost does booster provide?


That it provides a performance boost is evident from some of our tests
and from reports from users. The exact improvement really
depends on various factors such as the volume configuration,
the type of translators used, the network setup, etc.


2. does the following


http://www.gluster.org/docs/index.php/BoosterConfiguration#Virtual_Mount_Points
mean that from the application we always need to use absolute paths to
the files being operated on?



That is a good question. I've also considered the need to remove
the dependency on absolute paths, but till now the use cases have been
too limited or non-existent to help me evolve the exact behaviour. Could you
please describe what exactly you have in mind? I could take it from
there.

One approach is to redirect file system operations into GlusterFS
when booster sees a string identifier or a token prepended to a path.
This token could be specified through the booster.conf file.
Since there will be no / prepended to this string, it does not
remain an absolute path anymore. Also, since this is a global
string identifier for booster, it is very different from a relative
path, so can be used from anywhere in the local file system tree,
as long as booster knows where to redirect these operations.
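
A purely illustrative sketch of that idea (neither the option name nor
the prefix below exists; they are only placeholders for the behaviour
described above): booster.conf could carry something like

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs subvolume=brick,token=gfs:

and an application could then call open("gfs:dir/abc.txt", O_WRONLY)
from any working directory, with booster redirecting the call into the
GlusterFS volume behind the VMP.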

Thanks
Shehjar




Thanks and regards,
Sudipto
 



-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, July 20, 2009 9:46 PM

To: Sudipto Mukhopadhyay
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying the booster.conf, I started getting the logs under
/tmp/booster.log but I see the following error:


[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster
is inited

[2009-07-20 12:24:58] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/

[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk]
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.

[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk]
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.

[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk]
remote2: Connected to 10.16.80.55:6996, attached to remote volume 'afr'.

[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:224:__do_path_resolve]
libglusterfsclient-dentry: loc->parent(1) already present. sending
lookup for 1//abc.txt

[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:245:__do_path_resolve]
libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending
lookup for remaining path

[2009-07-20 12:24:58] D [libglusterfsclient.c:1608:libgf_client_lookup_cbk]
libglusterfsclient: 1: (op_num=0) /abc.txt => -1 (No such file or directory)

[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open]
libglusterfsclient: path lookup failed for (/abc.txt)


Assuming you're using the same program as you'd pasted earlier,
this last line is an error that says the file was not found in
the file system when trying to open it. This is the reason why
your program took the file creation branch in the "if" block.

We can be sure about this error if you could confirm, before running
your test program, whether the file really did not exist on the

backend.

Regards
-Shehjar



 

 


Can you please explain the above errors and what is causing them?

 


Thanks and regards,

Sudipto

 

 

 


-Original Message

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-20 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:
Makes sense. 
Yes, I am running the same program. 
I will be running a couple more tests to verify this. 
BTW, two more questions on related topics: 


1. How much of performance boost does booster provide?


That it provides a performance boost is evident from some of our tests
and from reports from users. The exact improvement really
depends on various factors such as the volume configuration,
the type of translators used, the network setup, etc.


2. does the following
http://www.gluster.org/docs/index.php/BoosterConfiguration#Virtual_Mount_Points
mean that from the application we always need to use absolute paths to
the files being operated on? 



That is a good question. I've also considered the need to remove
the dependency on absolute paths, but until now the use cases have been
too limited (or non-existent) to help me evolve the exact behaviour.
Could you please describe what exactly you have in mind? I could take
it from there.

One approach is to redirect file system operations into GlusterFS
when booster sees a string identifier or a token prepended to a path.
This token could be specified through the booster.conf file.
Since there will be no / prepended to this string, it does not
remain an absolute path anymore. Also, since this is a global
string identifier for booster, it is very different from a relative
path, so can be used from anywhere in the local file system tree,
as long as booster knows where to redirect these operations.

Thanks
Shehjar




Thanks and regards,
Sudipto
 



-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, July 20, 2009 9:46 PM

To: Sudipto Mukhopadhyay
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying the booster.conf, I started getting the logs under 
/tmp/booster.log but, I see the following error:


 

 


[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster



is inited

[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1340:libgf_vmp_search_entry] libglusterfsclient:



VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/

[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume

'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume

'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote2: Connected to 10.16.80.55:6996, attached to remote volume

'afr'.
[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:224:__do_path_resolve] 
libglusterfsclient-dentry: loc->parent(1) already present. sending 
lookup for 1//abc.txt


[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:245:__do_path_resolve] 
libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending 
lookup for remaining path


[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1608:libgf_client_lookup_cbk]
libglusterfsclient: 

1: (op_num=0) /abc.txt => -1 (No such file or directory)

[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open]



libglusterfsclient: path lookup failed for (/abc.txt)



Assuming you're using the same program as you'd pasted earlier,
this last line is an error that says the file was not found in
the file system when trying to open it. This is the reason why
your program took the file creation branch in the "if" block.

We can be sure about this error if you could confirm, before running
your test program, whether the file really did not exist on the backend.

Regards
-Shehjar



 

 


Can you please explain the above errors and what is causing them?

 


Thanks and regards,

Sudipto

 

 

 


-Original Message-
From: Sudipto Mukhopadhyay
Sent: Monday, July 20, 2009 3:59 PM
To: Shehjar Tikoo
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: RE: [Gluster-users] Regarding 2.0.4 booster

 


Hi Shehjar,

 


The contents in booster.conf are in two separate lines.

I will try out the test again by merging the contents of booster.conf 
into one single line after I get my nodes back.


 


Thanks and regards,

Sudipto

 


-Original Message-

From: Shehjar Tikoo [mailto:shehj...@gluster.com]

Sent: Thursday, July 16, 2009 10:08 PM

To: Sudipto Mukhopadhyay

Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org

Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 


Sudipto Mukhopadhyay wrote:


 My mistake, I meant the booster log is not getting generated.
 Thanks and regards,
 Sudipto


Hi

A few questions. Please see inlined text.


 -Original Message-
 From: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Sudipto
 Mukhopadhyay
 Sent: Tuesday, July 14, 2009 2:0

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-20 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying the booster.conf, I started getting the logs under 
/tmp/booster.log but, I see the following error:


 

 

[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster 
is inited


[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1340:libgf_vmp_search_entry] libglusterfsclient: 
VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote2: Connected to 10.16.80.55:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:224:__do_path_resolve] 
libglusterfsclient-dentry: loc->parent(1) already present. sending 
lookup for 1//abc.txt


[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:245:__do_path_resolve] 
libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending 
lookup for remaining path


[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1608:libgf_client_lookup_cbk] libglusterfsclient: 
1: (op_num=0) /abc.txt => -1 (No such file or directory)


[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open] 
libglusterfsclient: path lookup failed for (/abc.txt)


 

 

Can you please explain the above errors and what is causing these
errors?


The other messages are debug and informational messages. If you'd
like to reduce the verbosity of the log, please set the loglevel
parameter in the booster.conf to ERROR.
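
For instance, the booster.conf entry from earlier in this thread would
then read something like:

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs subvolume=brick,logfile=/tmp/booster.log,loglevel=ERROR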

Regards
-Shehjar




 


Thanks and regards,

Sudipto

 

 

 


-Original Message-
From: Sudipto Mukhopadhyay
Sent: Monday, July 20, 2009 3:59 PM
To: Shehjar Tikoo
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: RE: [Gluster-users] Regarding 2.0.4 booster

 


Hi Shehjar,

 


The contents in booster.conf are in two separate lines.

I will try out the test again by merging the contents of booster.conf 
into one single line after I get my nodes back.


 


Thanks and regards,

Sudipto

 


-Original Message-

From: Shehjar Tikoo [mailto:shehj...@gluster.com]

Sent: Thursday, July 16, 2009 10:08 PM

To: Sudipto Mukhopadhyay

Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org

Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 


Sudipto Mukhopadhyay wrote:


 My mistake, I meant the booster log is not getting generated.

 Thanks and regards,
 Sudipto


Hi

A few questions. Please see inlined text.


 -Original Message-
 From: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Sudipto
 Mukhopadhyay
 Sent: Tuesday, July 14, 2009 2:07 PM
 To: Shehjar Tikoo
 Cc: av...@gluster.com; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 Hi Shehjar,

 Thanks for looking into this issue.
 The file abc.txt is getting created; if you look at the C program, the
 following line is basically printing out the file handle on stdout:

 printf ("File handle %d\n",fh);

 But, the booster log is getting generated.

 Thanks and regards,
 Sudipto

 -Original Message-
 From: Shehjar Tikoo [mailto:shehj...@gluster.com]
 Sent: Tuesday, July 14, 2009 2:34 AM
 To: Sudipto Mukhopadhyay
 Cc: gluster-users@gluster.org; av...@gluster.com
 Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 Sudipto Mukhopadhyay wrote:
> Hi,
>
> I am trying to run some tests w/booster client library and I had to
> upgrade the glusterfs to version 2.0.4 from 2.0.2 (due to LD_PRELOAD
> failures of the booster client library,
> http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=a3ece0caa52ad2eacf8a8691aaca53295cde972f).
>
> Now, w/version 2.0.4 the LD_PRELOAD works OK, but the booster logs
> are not being generated and I am not sure whether booster is working
> or not.
>
> I have the following booster.conf and client vol specification:
>
> $cat /etc/glusterfs/booster.conf
>
> /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs
> subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG
>

Please confirm whether the above content in booster.conf is on
a single line or two separate lines. It should all be on one line for
the logfile setting to be associated with /mnt/glusterfs.

> $cat /etc/glusterfs/glusterfs-client.vol
>
> volume brick
>
> type protocol/client
>
> option transport-type t

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-20 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying the booster.conf, I started getting the logs under 
/tmp/booster.log but, I see the following error:


 

 

[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster 
is inited


[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1340:libgf_vmp_search_entry] libglusterfsclient: 
VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] 
remote2: Connected to 10.16.80.55:6996, attached to remote volume 'afr'.


[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:224:__do_path_resolve] 
libglusterfsclient-dentry: loc->parent(1) already present. sending 
lookup for 1//abc.txt


[2009-07-20 12:24:58] D 
[libglusterfsclient-dentry.c:245:__do_path_resolve] 
libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending 
lookup for remaining path


[2009-07-20 12:24:58] D 
[libglusterfsclient.c:1608:libgf_client_lookup_cbk] libglusterfsclient: 
1: (op_num=0) /abc.txt => -1 (No such file or directory)


[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open] 
libglusterfsclient: path lookup failed for (/abc.txt)




Assuming you're using the same program as you'd pasted earlier,
this last line is an error that says the file was not found in
the file system when trying to open it. This is the reason why
your program took the file creation branch in the "if" block.

We can be sure about this error if you could confirm, before running
your test program, whether the file really did not exist on the backend.

Regards
-Shehjar



 

 

Can you please explain the above errors and what is causing these 
errors?


 


Thanks and regards,

Sudipto

 

 

 


-Original Message-
From: Sudipto Mukhopadhyay
Sent: Monday, July 20, 2009 3:59 PM
To: Shehjar Tikoo
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: RE: [Gluster-users] Regarding 2.0.4 booster

 


Hi Shehjar,

 


The contents in booster.conf are in two separate lines.

I will try out the test again by merging the contents of booster.conf 
into one single line after I get my nodes back.


 


Thanks and regards,

Sudipto

 


-Original Message-

From: Shehjar Tikoo [mailto:shehj...@gluster.com]

Sent: Thursday, July 16, 2009 10:08 PM

To: Sudipto Mukhopadhyay

Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org

Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 


Sudipto Mukhopadhyay wrote:


 My mistake, I meant the booster log is not getting generated.

 Thanks and regards,
 Sudipto


Hi

A few questions. Please see inlined text.


 -Original Message-
 From: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Sudipto
 Mukhopadhyay
 Sent: Tuesday, July 14, 2009 2:07 PM
 To: Shehjar Tikoo
 Cc: av...@gluster.com; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 Hi Shehjar,

 Thanks for looking into this issue.
 The file abc.txt is getting created; if you look at the C program, the
 following line is basically printing out the file handle on stdout:

 printf ("File handle %d\n",fh);

 But, the booster log is getting generated.

 Thanks and regards,
 Sudipto

 -Original Message-
 From: Shehjar Tikoo [mailto:shehj...@gluster.com]
 Sent: Tuesday, July 14, 2009 2:34 AM
 To: Sudipto Mukhopadhyay
 Cc: gluster-users@gluster.org; av...@gluster.com
 Subject: Re: [Gluster-users] Regarding 2.0.4 booster

 Sudipto Mukhopadhyay wrote:
> Hi,
>
> I am trying to run some tests w/booster client library and I had to
> upgrade the glusterfs to version 2.0.4 from 2.0.2 (due to LD_PRELOAD
> failures of the booster client library,
> http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=a3ece0caa52ad2eacf8a8691aaca53295cde972f).
>
> Now, w/version 2.0.4 the LD_PRELOAD works OK, but the booster logs
> are not being generated and I am not sure whether booster is working
> or not.
>
> I have the following booster.conf and client vol specification:
>
> $cat /etc/glusterfs/booster.conf
>
> /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs
> subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG
>

Please confirm whether the above content in booster.conf is on
a single line or two separate lines. It should all be on one line for
the logfile setting to be associated with /mnt/glusterfs.

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-16 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:
My mistake, I meant the booster log is not getting generated. 


Thanks and regards,
Sudipto



Hi

A few questions. Please see inlined text.



-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Sudipto
Mukhopadhyay
Sent: Tuesday, July 14, 2009 2:07 PM
To: Shehjar Tikoo
Cc: av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Hi Shehjar, 

Thanks for looking into this issue. 
The file abc.txt is getting created; if you look at the C program, the
following line is basically printing out the file handle on stdout:

printf ("File handle %d\n",fh);

But, the booster log is getting generated. 


Thanks and regards,
Sudipto 


-Original Message-----
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Tuesday, July 14, 2009 2:34 AM

To: Sudipto Mukhopadhyay
Cc: gluster-users@gluster.org; av...@gluster.com
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:

Hi,



I am trying to run some tests w/booster client library and I had to 
upgrade the glusterfs to version 2.0.4 from 2.0.2 (due to LD_PRELOAD 
failures of the booster client library, 
http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=a3ece0caa52ad2eacf8a8691aaca53295cde972f).






Now, w/version 2.0.4 the LD_PRELOAD works OK, but the booster logs 
are not being generated and I am not sure whether booster is working 
or not.


I have the following booster.conf and client vol specification:



$cat /etc/glusterfs/booster.conf

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs

subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG



Please confirm whether the above content in booster.conf is on
a single line or two separate lines. It should all be on one line for
the logfile setting to be associated with /mnt/glusterfs.
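
In other words, the two lines above would be merged into one entry,
something like:

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG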




$cat /etc/glusterfs/glusterfs-client.vol

volume brick

type protocol/client

option transport-type tcp/client # for TCP/IP transport

option remote-host 10.16.80.53   # IP address of the server

option remote-subvolume afr  # name of the remote volume

end-volume



volume writebehind

type performance/write-behind

option window-size 1MB

subvolumes brick

end-volume





I have written a simple program to test a few system calls and the 
booster functionality:


/* Includes added so the snippet compiles standalone. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {

    int fh;
    char buffer[65];   /* unused in this test */

    fh = open("/mnt/glusterfs/abc.txt", O_WRONLY);

    if (fh == -1) {
        /* creat() takes a permission mode as its second argument, not
           open() flags; the original posting passed O_WRONLY here. */
        fh = creat("/mnt/glusterfs/abc.txt", 0644);
    }

    printf("File handle %d\n", fh);

    close(fh);

    return 0;
}



When I run my program with LD_PRELOAD option, I get the following 
message:




LD_PRELOAD="/usr/local/lib/glusterfs/glusterfs-booster.so" 
/root/fstest/a.out


[2009-07-13 16:19:02] E 
[libglusterfsclient.c:2447:glusterfs_glh_open] libglusterfsclient: 
path lookup failed for (/abc.txt)




Is the above log message the only message that is being output? Were
there more lines that resembled this log line that you probably
removed for the purpose of this email?

Thanks
Shehjar



File handle 4



And there is no /tmp/booster.log created. The above log is only 
observed while creating the file (in case abc.txt is not present).


Could you please advise on the booster config file and let me know 
why I am not seeing any booster log?




I'll check why the booster log is not getting generated.
In the meantime, can you check if the file abc.txt really does not
get created on the backend?
If open is failing, it looks like fd 4 is being returned by the
creat call.

-Shehjar



Appreciate your help.



Thanks and regards,

Sudipto














___ Gluster-users mailing
 list Gluster-users@gluster.org 
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-14 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

Hi,



I am trying to run some tests w/booster client library and I had to 
upgrade the glusterfs to version 2.0.4 from 2.0.2 (due to LD_PRELOAD 
failures of the booster client library, 
http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=a3ece0caa52ad2eacf8a8691aaca53295cde972f).







Now, w/version 2.0.4 the LD_PRELOAD works OK, but the booster logs 
are not being generated and I am not sure whether booster is working 
or not.


I have the following booster.conf and client vol specification:



$cat /etc/glusterfs/booster.conf

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs

subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG



$cat /etc/glusterfs/glusterfs-client.vol

volume brick

type protocol/client

option transport-type tcp/client # for TCP/IP transport

option remote-host 10.16.80.53   # IP address of the server

option remote-subvolume afr  # name of the remote volume

end-volume



volume writebehind

type performance/write-behind

option window-size 1MB

subvolumes brick

end-volume





I have written a simple program to test a few system calls and the 
booster functionality:


/* Includes added so the snippet compiles standalone. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {

    int fh;
    char buffer[65];   /* unused in this test */

    fh = open("/mnt/glusterfs/abc.txt", O_WRONLY);

    if (fh == -1) {
        /* creat() takes a permission mode as its second argument, not
           open() flags; the original posting passed O_WRONLY here. */
        fh = creat("/mnt/glusterfs/abc.txt", 0644);
    }

    printf("File handle %d\n", fh);

    close(fh);

    return 0;
}



When I run my program with LD_PRELOAD option, I get the following 
message:




LD_PRELOAD="/usr/local/lib/glusterfs/glusterfs-booster.so" 
/root/fstest/a.out


[2009-07-13 16:19:02] E 
[libglusterfsclient.c:2447:glusterfs_glh_open] libglusterfsclient: 
path lookup failed for (/abc.txt)


File handle 4



And there is no /tmp/booster.log created. The above log is only 
observed while creating the file (in case abc.txt is not present).


Could you please advise on the booster config file and let me know 
why I am not seeing any booster log?




I'll check why the booster log is not getting generated.
In the meantime, can you check if the file abc.txt really does not
get created on the backend?
If open is failing, it looks like fd 4 is being returned by the
creat call.

-Shehjar




Appreciate your help.



Thanks and regards,

Sudipto












___ Gluster-users mailing
 list Gluster-users@gluster.org 
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Performance question

2009-07-02 Thread Shehjar Tikoo

Joe Julian wrote:
I'm using an unpatched fuse 2.7.4-1 and glusterfs 2.0.2-1 with the 
following configs and have this result which surprised me:


# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 14.1538 seconds, 37.9 MB/s
# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 24.4553 seconds, 22.0 MB/s


Why is it slower if the file exists? Should it be?


It is nearly impossible to tell from the difference between these
two runs of dd. We can help better if there are more data points
for us to look at.
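
As a quick sketch of how such data points could be gathered (assuming
foo lives on the GlusterFS mount and five runs per case are enough):

# file created fresh on each run
for i in 1 2 3 4 5; do
    rm -f foo
    dd if=/dev/zero of=foo bs=512k count=1024 2>&1 | tail -1
done

# file already exists and is overwritten on each run
for i in 1 2 3 4 5; do
    dd if=/dev/zero of=foo bs=512k count=1024 2>&1 | tail -1
done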

Once you have figures averaged from more than a few runs, and if you
still see a drop, we'd much appreciate it if you filed a performance bug
with your findings at:

http://bugs.gluster.com

Thanks
Shehjar





# Servers #

volume posix0
  type storage/posix
  option directory /cluster/0
end-volume

volume locks0
  type features/locks
  subvolumes posix0
end-volume

volume brick0
  type performance/io-threads
  option thread-count 8
  subvolumes locks0
end-volume

volume posix1
  type storage/posix
  option directory /cluster/1
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume posix2
  type storage/posix
  option directory /cluster/2
end-volume

volume locks2
  type features/locks
  subvolumes posix2
end-volume

volume brick2
  type performance/io-threads
  option thread-count 8
  subvolumes locks2
end-volume

volume posix3
  type storage/posix
  option directory /cluster/3
end-volume

volume locks3
  type features/locks
  subvolumes posix3
end-volume

volume brick3
  type performance/io-threads
  option thread-count 8
  subvolumes locks3
end-volume

volume server
  type protocol/server
  option transport-type tcp
  subvolumes brick0 brick1 brick2 brick3
  option auth.addr.brick0.allow *
  option auth.addr.brick1.allow *
  option auth.addr.brick2.allow *
  option auth.addr.brick3.allow *
end-volume

# Client #

volume ewcs2_cluster0
  type protocol/client
  option transport-type tcp
  option remote-host ewcs2.ewcs.com
  option remote-subvolume brick0
end-volume

volume ewcs2_cluster1
  type protocol/client
  option transport-type tcp
  option remote-host ewcs2.ewcs.com
  option remote-subvolume brick1
end-volume

volume ewcs2_cluster2
  type protocol/client
  option transport-type tcp
  option remote-host ewcs2.ewcs.com
  option remote-subvolume brick2
end-volume

volume ewcs2_cluster3
  type protocol/client
  option transport-type tcp
  option remote-host ewcs2.ewcs.com
  option remote-subvolume brick3
end-volume

volume ewcs4_cluster0
  type protocol/client
  option transport-type tcp
  option remote-host ewcs4.ewcs.com
  option remote-subvolume brick0
end-volume

volume ewcs4_cluster1
  type protocol/client
  option transport-type tcp
  option remote-host ewcs4.ewcs.com
  option remote-subvolume brick1
end-volume

volume ewcs4_cluster2
  type protocol/client
  option transport-type tcp
  option remote-host ewcs4.ewcs.com
  option remote-subvolume brick2
end-volume

volume ewcs4_cluster3
  type protocol/client
  option transport-type tcp
  option remote-host ewcs4.ewcs.com
  option remote-subvolume brick3
end-volume

volume ewcs7_cluster0
  type protocol/client
  option transport-type tcp
  option remote-host ewcs7.ewcs.com
  option remote-subvolume brick0
end-volume

volume ewcs7_cluster1
  type protocol/client
  option transport-type tcp
  option remote-host ewcs7.ewcs.com
  option remote-subvolume brick1
end-volume

volume ewcs7_cluster2
  type protocol/client
  option transport-type tcp
  option remote-host ewcs7.ewcs.com
  option remote-subvolume brick2
end-volume

volume ewcs7_cluster3
  type protocol/client
  option transport-type tcp
  option remote-host ewcs7.ewcs.com
  option remote-subvolume brick3
end-volume

volume repl1
  type cluster/replicate
  subvolumes ewcs2_cluster0 ewcs4_cluster0 ewcs7_cluster0
end-volume

volume repl2
  type cluster/replicate
  subvolumes ewcs2_cluster1 ewcs4_cluster1 ewcs7_cluster1
end-volume

volume repl3
  type cluster/replicate
  subvolumes ewcs2_cluster2 ewcs4_cluster2 ewcs7_cluster2
end-volume

volume repl4
  type cluster/replicate
  subvolumes ewcs2_cluster3 ewcs4_cluster3 ewcs7_cluster3
end-volume

volume distribute
  type cluster/distribute
  subvolumes repl1 repl2 repl3 repl4
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option cache-size 1MB
  subvolumes distribute
end-volume

volume ioc
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume


###

mount -t glusterfs /etc/glusterfs/glusterfs-client.vol /mnt/gluster



___ Gluster-users mailing
 list Gluster-users@gluster.org 
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://

Re: [Gluster-users] intended use cases

2009-07-02 Thread Shehjar Tikoo

Kent Tong wrote:

Hi,

What are the intended use cases for gluster? For example, is it 
suitable for, say, replacing a SAN? For example, is it good for the 
following?

Using GlusterFS doesn't always require you to make a choice between
a SAN and GlusterFS. You could continue to use your existing SAN
infrastructure as the underlying storage for GlusterFS. Besides the
file-level clustering ability from GlusterFS, you'll also benefit from
the high performance provided by RAIDed SAN architectures.
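
For example, a server-side brick could simply point at a directory that
lives on a SAN-backed filesystem; a minimal sketch, where
/san/lun0/export is just an assumed mount point for a SAN LUN:

volume posix0
  type storage/posix
  option directory /san/lun0/export   # directory on the SAN-backed filesystem
end-volume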

-Shehjar


[ ] Storing a huge volume of seldom accessed files (file archive)
[ ] Storing frequently read/write files (file server)
[ ] Storing frequently read files (web server)
[ ] Hosting databases
[ ] Hosting VM images




___ Gluster-users mailing
 list Gluster-users@gluster.org 
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users

