Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Lalatendu Mohanty

On 02/11/2014 10:05 PM, Matt Miller wrote:
Yesterday was my first day on the list, so I had not yet seen that
thread.  Appears to be working though.  Will have to set up some load
tests.


I have written a blog post about how to use the glusterfs VFS plugin with
Samba. I'm pasting the link below, so that anybody searching/reading this
email in the future will find it.


http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/

Thanks,
Lala



On Tue, Feb 11, 2014 at 12:42 AM, Daniel Müller <muel...@tropenklinik.de> wrote:


No, not really:
Look at my thread: samba vfs objects glusterfs is it now working?
I am just waiting for an answer to fix this.
The only way I succeeded in making it work is how you described
(exporting the fuse mount thru samba)



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de 
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"




From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Matt Miller
Sent: Monday, 10 February 2014 16:43
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster, Samba, and VFS

Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in the
past with mounting the Gluster volume as a Fuse mount, then exporting the
Fuse mount thru Samba.  As I found out after setting up the cluster this is
somewhat expected when serving out lots of small files.  Was hoping VFS
would provide better performance when serving out lots and lots of small
files.
Is anyone using VFS extensions in production?  Is it ready for prime time?
I could not find a single reference to it on Gluster's main website (maybe I
am looking in the wrong place), so not sure of the stability or
supported-ness of this.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster-fuse and proftpd/vsftpd issue

2014-02-11 Thread Venky Shankar
Could you provide this information from the server (also the client/server
logs):

cat /proc/<pid>/task/*/stack
ls -l /proc/<pid>/task/*/fd

<pid> == process ID of glusterfsd
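
A minimal sketch of collecting that for every glusterfsd process (assuming
pgrep is available and this is run as root on the brick server):

  for pid in $(pgrep -x glusterfsd); do
      echo "=== glusterfsd pid $pid ==="
      cat /proc/$pid/task/*/stack      # kernel stacks of all threads
      ls -l /proc/$pid/task/*/fd       # open file descriptors
  done > /tmp/glusterfsd-debug.txt 2>&1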


On Fri, Feb 7, 2014 at 6:14 PM, Barry Stetler  wrote:

> We started using gluster in a custom solution recently.  Basically we
> set up a gluster volume that we mount at /home. Then we give users their own
> home directory with a gluster quota. They have FTP, SFTP and SSHMOUNT
> access. I first used vsftpd and noticed that ftp connections will not close
> and the load goes up every couple of days. The funny thing is the load does
> not seem to slow down the server. The load went up to 20 in the last two
> months but the server was not slow at all... At first I thought it was
> vsftpd, so I changed to proftpd; then the issue came back.  If I reboot the
> server it goes away for a few days, depending on how many ftp connections. I
> noticed this in the dmesg log, making me think the issue was some kind of
> leak in gluster-fuse.
>
> This is on CentOS 6.5 64 bit.
>
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_close+0x3c/0x90
>  [] sys_close+0xa5/0x100
>  [] system_call_fastpath+0x16/0x1b
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_close+0x3c/0x90
>  [] sys_close+0xa5/0x100
>  [] system_call_fastpath+0x16/0x1b
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_close+0x3c/0x90
>  [] sys_close+0xa5/0x100
>  [] system_call_fastpath+0x16/0x1b
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_close+0x3c/0x90
>  [] sys_close+0xa5/0x100
>  [] system_call_fastpath+0x16/0x1b
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_close+0x3c/0x90
>  [] sys_close+0xa5/0x100
>  [] system_call_fastpath+0x16/0x1b
> INFO: task proftpd:7831 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> proftpd   D  0  7831   4384 0x0004
>  88031f3bde38 0082  81051439
>  88031f3bddc8 0003 88031f3bddd8 880335936040
>  8803365cfab8 88031f3bdfd8 fb88 8803365cfab8
> Call Trace:
>  [] ? __wake_up_common+0x59/0x90
>  [] fuse_request_send+0xe5/0x290 [fuse]
>  [] ? autoremove_wake_function+0x0/0x40
>  [] fuse_flush+0x106/0x140 [fuse]
>  [] filp_c

Re: [Gluster-users] geo-replication errors

2014-02-11 Thread Venky Shankar
Is this from the latest master branch?


On Tue, Feb 11, 2014 at 4:35 PM, John Ewing  wrote:

> I am trying to use geo-replication but it is running slowly and I keep
> getting the
> following logged in the geo-replication log.
>
> [2014-02-11 10:56:42.831517] I [monitor(monitor):80:monitor] Monitor:
> 
> [2014-02-11 10:56:42.832226] I [monitor(monitor):81:monitor] Monitor:
> starting gsyncd worker
> [2014-02-11 10:56:42.951199] I [gsyncd:354:main_i] : syncing:
> gluster://localhost:xxx -> ssh://gluster-as...@xx.xx.xx.xx
> :gluster://localhost:x
> [2014-02-11 10:56:53.79632] I [master:284:crawl] GMaster: new master is
> acfda6fc-d995-4bf0-b13e-da789afb28c7
> [2014-02-11 10:56:53.80282] I [master:288:crawl] GMaster: primary master
> with volume id acfda6fc-d995-4bf0-b13e-da789afb28c7 ...
> [2014-02-11 10:56:57.453376] E [syncdutils:190:log_raise_exception] :
> FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 216,
> in twrap
> tf(*aa)
>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 123,
> in tailer
> poe, _ ,_ = select([po.stderr for po in errstore], [], [], 1)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 276,
> in select
> return eintr_wrap(oselect.select, oselect.error, *a)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 269,
> in eintr_wrap
> return func(*a)
> error: (9, 'Bad file descriptor')
> [2014-02-11 10:56:57.462110] I [syncdutils:142:finalize] : exiting.
>
> I'm unsure what to do to debug and fix this.
>
> Thanks
>
> John.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs

2014-02-11 Thread Justin Clift
Cool.  People that know what they're talking about for Gluster NFS. :)

+ Justin


On 11/02/2014, at 9:59 PM, Antonio Messina wrote:
> Just to clarify: what I just wrote is true for any nfs server, and
> it's not specific to gluster.
> 
> .a.
> 
> On Tue, Feb 11, 2014 at 10:58 PM, Antonio Messina
>  wrote:
>> You can check if the service is enabled with "rpcinfo -p"
>> 
>> you should see a list of services like (output is from Gluster 3.4):
>> 
>>   program vers proto   port  service
>>    100000    4   tcp    111  portmapper
>>    100000    3   tcp    111  portmapper
>>    100000    2   tcp    111  portmapper
>>    100000    4   udp    111  portmapper
>>    100000    3   udp    111  portmapper
>>    100000    2   udp    111  portmapper
>>    100227    3   tcp   2049
>>    100005    3   tcp  38465  mountd
>>    100005    1   tcp  38466  mountd
>>    100003    3   tcp   2049  nfs
>>    100024    1   udp  51025  status
>>    100024    1   tcp  51853  status
>>    100021    4   tcp  38468  nlockmgr
>>    100021    1   udp    857  nlockmgr
>>    100021    1   tcp    858  nlockmgr
>> 
>> You need to have lines for "nfs" and "mountd". If you don't have them
>> the nfs server is not running.
>> If it's running you can see which filesystems are exported with "showmount 
>> -e":
>> 
>> Export list for gluster-frontend001:
>> /default *
>> 
>> In my case "default" is the name of the gluster volume exported, and
>> "*" is the list of allowed clients (in this case: any client)
>> 
>> .a.
>> 
>> On Tue, Feb 11, 2014 at 10:50 PM, Joe Julian  wrote:
>>> 
>>> On 02/11/2014 12:41 PM, John G. Heim wrote:
 
 
 
 On 02/11/14 13:31, Justin Clift wrote:
> 
> On 10/02/2014, at 4:18 PM, John G. Heim wrote:
>> 
>> On 02/10/14 07:23, Justin Clift wrote:
>>> 
>>> On Thu, 06 Feb 2014 14:52:44 -0600
>>> "John G. Heim"  wrote:
 
 Maybe this is a dumb question but do I have to set up an nfs server on
 one of the server peers in my gluster volume in order to connect to
 the
 volume with nfs?
>>> 
>>> 
>>> In theory, NFS is supposed to be enabled/running by default.
>> 
>> 
>> On all the servers? I have 51 servers in my cluster.  I just ran a port
>> scan and none of them have port 2049 open.
>> 
>>> If you run "gluster volume status", what does it show?
>> 
>> 
>> Do you mean 'gluster volume info'?  That command says "nfs.disable: off"
>> I'm running 3.2.7, the version in debian stable (wheezy).
> 
> 
> 
> Heh, nah I'm definitely meaning "status" not info.  Here's the output from
 
 
 That gives me an error message "Unrecognized word".
 
 Difference between 3.2 and 3.5?
>>> 
>>> Yep, that's 3.2 and it wouldn't be listening on 2049. It'll just use the
>>> portmapper on 111. Mount it using the options, tcp,vers=3 .
>>> 
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>> 
>> 
>> 
>> --
>> antonio.s.mess...@gmail.com
>> antonio.mess...@uzh.ch +41 (0)44 635 42 22
>> GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
>> University of Zurich
>> Winterthurerstrasse 190
>> CH-8057 Zurich Switzerland
> 
> 
> 
> -- 
> antonio.s.mess...@gmail.com
> antonio.mess...@uzh.ch +41 (0)44 635 42 22
> GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
> University of Zurich
> Winterthurerstrasse 190
> CH-8057 Zurich Switzerland
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Antonio Messina
Just to clarify: what I just wrote is true for any nfs server, and
it's not specific to gluster.

.a.

On Tue, Feb 11, 2014 at 10:58 PM, Antonio Messina
 wrote:
> You can check if the service is enabled with "rpcinfo -p"
>
> you should see a list of services like (output is from Gluster 3.4):
>
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
>     100227    3   tcp   2049
>     100005    3   tcp  38465  mountd
>     100005    1   tcp  38466  mountd
>     100003    3   tcp   2049  nfs
>     100024    1   udp  51025  status
>     100024    1   tcp  51853  status
>     100021    4   tcp  38468  nlockmgr
>     100021    1   udp    857  nlockmgr
>     100021    1   tcp    858  nlockmgr
>
> You need to have lines for "nfs" and "mountd". If you don't have them
> the nfs server is not running.
> If it's running you can see which filesystems are exported with "showmount 
> -e":
>
> Export list for gluster-frontend001:
> /default *
>
> In my case "default" is the name of the gluster volume exported, and
> "*" is the list of allowed clients (in this case: any client)
>
> .a.
>
> On Tue, Feb 11, 2014 at 10:50 PM, Joe Julian  wrote:
>>
>> On 02/11/2014 12:41 PM, John G. Heim wrote:
>>>
>>>
>>>
>>> On 02/11/14 13:31, Justin Clift wrote:

 On 10/02/2014, at 4:18 PM, John G. Heim wrote:
>
> On 02/10/14 07:23, Justin Clift wrote:
>>
>> On Thu, 06 Feb 2014 14:52:44 -0600
>> "John G. Heim"  wrote:
>>>
>>> Maybe this is a dumb question but do I have to set up an nfs server on
>>> one of the server peers in my gluster volume in order to connect to
>>> the
>>> volume with nfs?
>>
>>
>> In theory, NFS is supposed to be enabled/running by default.
>
>
> On all the servers? I have 51 servers in my cluster.  I just ran a port
> scan and none of them have port 2049 open.
>
>> If you run "gluster volume status", what does it show?
>
>
> Do you mean 'gluster volume info'?  That command says "nfs.disable: off"
> I'm running 3.2.7, the version in debian stable (wheezy).



 Heh, nah I'm definitely meaning "status" not info.  Here's the output from
>>>
>>>
>>> That gives me an error message "Unrecognized word".
>>>
>>> Difference between 3.2 and 3.5?
>>
>> Yep, that's 3.2 and it wouldn't be listening on 2049. It'll just use the
>> portmapper on 111. Mount it using the options, tcp,vers=3 .
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> antonio.s.mess...@gmail.com
> antonio.mess...@uzh.ch +41 (0)44 635 42 22
> GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
> University of Zurich
> Winterthurerstrasse 190
> CH-8057 Zurich Switzerland



-- 
antonio.s.mess...@gmail.com
antonio.mess...@uzh.ch +41 (0)44 635 42 22
GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Antonio Messina
You can check if the service is enabled with "rpcinfo -p"

you should see a list of services like (output is from Gluster 3.4):

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100227    3   tcp   2049
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100024    1   udp  51025  status
    100024    1   tcp  51853  status
    100021    4   tcp  38468  nlockmgr
    100021    1   udp    857  nlockmgr
    100021    1   tcp    858  nlockmgr

You need to have lines for "nfs" and "mountd". If you don't have them
the nfs server is not running.
If it's running you can see which filesystems are exported with "showmount -e":

Export list for gluster-frontend001:
/default *

In my case "default" is the name of the gluster volume exported, and
"*" is the list of allowed clients (in this case: any client)

.a.

On Tue, Feb 11, 2014 at 10:50 PM, Joe Julian  wrote:
>
> On 02/11/2014 12:41 PM, John G. Heim wrote:
>>
>>
>>
>> On 02/11/14 13:31, Justin Clift wrote:
>>>
>>> On 10/02/2014, at 4:18 PM, John G. Heim wrote:

 On 02/10/14 07:23, Justin Clift wrote:
>
> On Thu, 06 Feb 2014 14:52:44 -0600
> "John G. Heim"  wrote:
>>
>> Maybe this is a dumb question but do I have to set up an nfs server on
>> one of the server peers in my gluster volume in order to connect to
>> the
>> volume with nfs?
>
>
> In theory, NFS is supposed to be enabled/running by default.


 On all the servers? I have 51 servers in my cluster.  I just ran a port
 scan and none of them have port 2049 open.

> If you run "gluster volume status", what does it show?


 Do you mean 'gluster volume info'?  That command says "nfs.disable: off"
 I'm running 3.2.7, the version in debian stable (wheezy).
>>>
>>>
>>>
>>> Heh, nah I'm definitely meaning "status" not info.  Here's the output from
>>
>>
>> That gives me an error message "Unrecognized word".
>>
>> Difference between 3.2 and 3.5?
>
> Yep, that's 3.2 and it wouldn't be listening on 2049. It'll just use the
> portmapper on 111. Mount it using the options, tcp,vers=3 .
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



-- 
antonio.s.mess...@gmail.com
antonio.mess...@uzh.ch +41 (0)44 635 42 22
GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] 3.5.0beta3 RPMs are available now

2014-02-11 Thread Kaleb KEITHLEY
3.5.0beta3 RPMs for el5, el6, el7, fedora 19, fedora 20, and fedora 21 
(rawhide) are available in YUM repos at 
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta3/


Gluster Test is slated to start Thursday. Watch for an announcement here.

Debian and Ubuntu dpkgs coming soon too (I hope).

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Joe Julian


On 02/11/2014 12:41 PM, John G. Heim wrote:



On 02/11/14 13:31, Justin Clift wrote:

On 10/02/2014, at 4:18 PM, John G. Heim wrote:

On 02/10/14 07:23, Justin Clift wrote:

On Thu, 06 Feb 2014 14:52:44 -0600
"John G. Heim"  wrote:
Maybe this is a dumb question but do I have to set up an nfs server on
one of the server peers in my gluster volume in order to connect to the
volume with nfs?


In theory, NFS is supposed to be enabled/running by default.


On all the servers? I have 51 servers in my cluster.  I just ran a 
port scan and none of them have port 2049 open.



If you run "gluster volume status", what does it show?


Do you mean 'gluster volume info'?  That command says "nfs.disable: 
off" I'm running 3.2.7, the version in debian stable (wheezy).



Heh, nah I'm definitely meaning "status" not info.  Here's the output from


That gives me an error message "Unrecognized word".

Difference between 3.2 and 3.5?
Yep, that's 3.2 and it wouldn't be listening on 2049. It'll just use the
portmapper on 111. Mount it using the options tcp,vers=3.
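
A minimal example of such a mount (server and volume names are placeholders):

  mount -t nfs -o proto=tcp,vers=3 server1:/myvol /mnt/myvol
  # or as an fstab entry:
  # server1:/myvol  /mnt/myvol  nfs  proto=tcp,vers=3,_netdev  0 0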

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Justin Clift
On 11/02/2014, at 9:46 PM, Justin Clift wrote:

> From a Gluster 6.4 (now) vm just created:

Typo.  Gluster 3.4 I meant.  It's been a long day... ;)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Justin Clift
On 11/02/2014, at 8:41 PM, John G. Heim wrote:
> On 02/11/14 13:31, Justin Clift wrote:
>> On 10/02/2014, at 4:18 PM, John G. Heim wrote:
>>> On 02/10/14 07:23, Justin Clift wrote:
 On Thu, 06 Feb 2014 14:52:44 -0600
 "John G. Heim"  wrote:
> Maybe this is a dumb question but do I have to set up an nfs server on
> one of the server peers in my gluster volume in order to connect to  the
> volume with nfs?
 
 In theory, NFS is supposed to be enabled/running by default.
>>> 
>>> On all the servers? I have 51 servers in my cluster.  I just ran a port 
>>> scan and none of them have port 2049 open.
>>> 
 If you run "gluster volume status", what does it show?
>>> 
>>> Do you mean 'gluster volume info'?  That command says "nfs.disable: off" 
>>> I'm running 3.2.7, the version in debian stable (wheezy).
>> 
>> Heh, nah I'm definitely meaning "status" not info.  Here's the output from
> 
> That gives me an error message "Unrecognized word".
> 
> Difference between 3.2 and 3.5?


Looks like it was introduced in 3.3.  Sorry about that, that was unhelpful of
me. :/

As a thought, does the version of netstat on Debian Wheezy support "-nltp"
as an option when run as root?  If so, that will show all listening ports
and their process pid/name.  From a Gluster 6.4 (now) vm just created:

  $ sudo gluster volume status
  Status of volume: playground
  Gluster process                                Port    Online  Pid
  ------------------------------------------------------------------
  Brick centos65:/export/brick/brick1            49152   Y       6630
  NFS Server on localhost                        2049    Y       6641

  There are no active volume tasks
  $ sudo netstat -nltp | grep gluster
  tcp        0      0 0.0.0.0:49152     0.0.0.0:*     LISTEN      6630/glusterfsd
  tcp        0      0 0.0.0.0:2049      0.0.0.0:*     LISTEN      6641/glusterfs
  tcp        0      0 0.0.0.0:38465     0.0.0.0:*     LISTEN      6641/glusterfs
  tcp        0      0 0.0.0.0:38466     0.0.0.0:*     LISTEN      6641/glusterfs
  tcp        0      0 0.0.0.0:38469     0.0.0.0:*     LISTEN      6641/glusterfs
  tcp        0      0 0.0.0.0:24007     0.0.0.0:*     LISTEN      6551/glusterd

(lsof can show this info too, I'm just not as familiar with it)

On the second line of netstat results, we can see glusterfs is binding to
port 2049 on this server.

Try it on one of your boxes.  It will help figure out if something is
actually listening on the NFS port, but not showing up in a port scan for
some reason (iptables, etc).
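
A small sketch of that check, run as root on one of the storage nodes:

  netstat -nltp | grep -w 2049                       # is anything bound to the NFS port?
  iptables -L INPUT -n | grep -E '2049|REJECT|DROP'  # is a firewall rule in the way?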

If glusterfs is NOT binding to port 2049, it might be crashing on startup too.

For me with Gluster 3.5 dev, the crashing NFS server is leaving a bunch of
core.<pid> files around (one per crash), e.g.:

  /core.1323
  /core.1523
  /core.1952

Running "file" on any of them shows its a core file from glusterfs trying to
start (and failing).  If you have that happening too, this helps narrow things
down. ;)

Let us know how it goes?

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread John G. Heim



On 02/11/14 13:31, Justin Clift wrote:

On 10/02/2014, at 4:18 PM, John G. Heim wrote:

On 02/10/14 07:23, Justin Clift wrote:

On Thu, 06 Feb 2014 14:52:44 -0600
"John G. Heim"  wrote:

Maybe this is a dumb question but do I have to set up an nfs server on
one of the server peers in my gluster volume in order to connect to  the
volume with nfs?


In theory, NFS is supposed to be enabled/running by default.


On all the servers? I have 51 servers in my cluster.  I just ran a port scan 
and none of them have port 2049 open.


If you run "gluster volume status", what does it show?


Do you mean 'gluster volume info'?  That command says "nfs.disable: off" I'm 
running 3.2.7, the version in debian stable (wheezy).



Heh, nah I'm definitely meaning "status" not info.  Here's the output from


That gives me an error message "Unrecognized word".

Difference between 3.2 and 3.5?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread Justin Clift
On 10/02/2014, at 4:18 PM, John G. Heim wrote:
> On 02/10/14 07:23, Justin Clift wrote:
>> On Thu, 06 Feb 2014 14:52:44 -0600
>> "John G. Heim"  wrote:
>>> Maybe this is a dumb question but do I have to set up an nfs server on
>>> one of the server peers in my gluster volume in order to connect to  the
>>> volume with nfs?
>> 
>> In theory, NFS is supposed to be enabled/running by default.
> 
> On all the servers? I have 51 servers in my cluster.  I just ran a port scan 
> and none of them have port 2049 open.
> 
>> If you run "gluster volume status", what does it show?
> 
> Do you mean 'gluster volume info'?  That command says "nfs.disable: off" I'm 
> running 3.2.7, the version in debian stable (wheezy).


Heh, nah I'm definitely meaning "status" not info.  Here's the output from
a vm I'm doing development on using the Gluster 3.5 code base:

  $ sudo gluster volume status
  Status of volume: patchy
  Gluster process                                      Port    Online  Pid
  -------------------------------------------------------------------------
  Brick f19laptop.uk.gluster.org:/d/backends/patchy1   49153   Y       1788
  Brick f19laptop.uk.gluster.org:/d/backends/patchy2   49154   Y       1793
  Brick f19laptop.uk.gluster.org:/d/backends/patchy3   49155   Y       1798
  Brick f19laptop.uk.gluster.org:/d/backends/patchy4   49156   Y       1802
  NFS Server on localhost                              N/A     N       N/A
  Self-heal Daemon on localhost                        N/A     Y       1854

  Task Status of Volume patchy
  -------------------------------------------------------------------------
  There are no active volume tasks

With the line that says "NFS Server on localhost", it *should* be showing a port
number, pid number, and "Y" for the Online row.

In my case it's not because the NFS server is crashing at startup (Gluster 3.5
bug I'm chasing down).
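
As a hedged aside (assuming 3.3 or later, where this command exists), the NFS
server entry can also be queried on its own:

  gluster volume status <VOLNAME> nfs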

Hopefully yours does have entries there, otherwise you too could be having a
crashing NFS server. (!)

What's the output of yours look like?

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread David Gibbons
We use samba VFS quite heavily in our infrastructure. Integrated with AD
via winbind, load balanced SMB front-ends based on HA with auto failover.
This system has been in production for about 4 months. So far it's worked
very well.

Dave


On Tue, Feb 11, 2014 at 11:35 AM, Matt Miller  wrote:

> Yesterday was my first day on the list, so I had not yet seen that thread.
>  Appears to be working though.  Will have to setup some load tests.
>
>
> On Tue, Feb 11, 2014 at 12:42 AM, Daniel Müller 
> wrote:
>
>> No, not really:
>> Look at my thread: samba vfs objects glusterfs is it now working?
>> I am just waiting for an answer to fix this.
>> The only way I succeeded to make it work is how you described (exporting
>> fuse mount thru samba)
>>
>>
>>
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus
>> Paul-Lechler-Str. 24
>> 72076 Tübingen
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> "Der Mensch ist die Medizin des Menschen"
>>
>>
>>
>>
>> From: gluster-users-boun...@gluster.org
>> [mailto:gluster-users-boun...@gluster.org] On behalf of Matt Miller
>> Sent: Monday, 10 February 2014 16:43
>> To: gluster-users@gluster.org
>> Subject: [Gluster-users] Gluster, Samba, and VFS
>>
>> Stumbled upon
>>
>> https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
>> when trying to find info on how to make Gluster and Samba play nice as a
>> general purpose file server.  I have had severe performance problems in
>> the
>> past with mounting the Gluster volume as a Fuse mount, then exporting the
>> Fuse mount thru Samba.  As I found out after setting up the cluster this
>> is
>> somewhat expected when serving out lots of small files.  Was hoping VFS
>> would provide better performance when serving out lots and lots of small
>> files.
>> Is anyone using VFS extensions in production?  Is it ready for prime
>> time?
>> I could not find a single reference to it on Gluster's main website
>> (maybe I
>> am looking in the wrong place), so not sure of the stability or
>> supported-ness of this.
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Matt Miller
Yesterday was my first day on the list, so I had not yet seen that thread.
 Appears to be working though.  Will have to set up some load tests.


On Tue, Feb 11, 2014 at 12:42 AM, Daniel Müller wrote:

> No, not really:
> Look at my thread: samba vfs objects glusterfs is it now working?
> I am just waiting for an answer to fix this.
> The only way I succeeded to make it work is how you described (exporting
> fuse mount thru samba)
>
>
>
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> "Der Mensch ist die Medizin des Menschen"
>
>
>
>
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On behalf of Matt Miller
> Sent: Monday, 10 February 2014 16:43
> To: gluster-users@gluster.org
> Subject: [Gluster-users] Gluster, Samba, and VFS
>
> Stumbled upon
>
> https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
> when trying to find info on how to make Gluster and Samba play nice as a
> general purpose file server.  I have had severe performance problems in the
> past with mounting the Gluster volume as a Fuse mount, then exporting the
> Fuse mount thru Samba.  As I found out after setting up the cluster this is
> somewhat expected when serving out lots of small files.  Was hoping VFS
> would provide better performance when serving out lots and lots of small
> files.
> Is anyone using VFS extensions in production?  Is it ready for prime time?
> I could not find a single reference to it on Gluster's main website (maybe
> I
> am looking in the wrong place), so not sure of the stability or
> supported-ness of this.
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] glusterfs-3.5.0beta3 released

2014-02-11 Thread John Mark Walker
Excellent...

And with that, we'll start the next GlusterFest week from this Thursday, 
February 13, through Wednesday, February 19. 

Stay tuned for details.

-JM


- Original Message -
> 
> 
> SRC:
> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.0beta3.tar.gz
> 
> This release is made off jenkins-release-58
> 
> -- Gluster Build System
> 
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs

2014-02-11 Thread John G. Heim
It's just not clear in the documentation what it even means to say that 
nfs is enabled. Does that mean each peer becomes an nfs server?
Again,  the nfs port isn't open on any of the 51 peers in my cluster.  I 
take it that is an indication of a problem?


What if I join a machine that is already running nfs to the cluster?







On 02/10/14 20:54, Paul Cuzner wrote:

Hi John,

I think on gluster 3.3 and 3.4, nfs is enabled by default.

nfs is actually a translator, so it's in the 'stack' already - this is
why you don't need the nfs-kernel-server package.

When mounting from a client, you just need to ensure the mount options
are right
i.e. use the following -o proto=tcp,vers=3   (for linux)

The other consideration is that by default the volume will expose 64bit
inodes - if you have 32bit apps or a 32bit OS, you'll need to tweak the
gluster volume with "vol set <VOLNAME> nfs.enable-ino32 on"

I haven't got a 3.3 system handy, but on a 3.4 system if you run the
following
gluster vol set help | grep "^Option: nfs."

you'll get a view of all the nfs tweaks that can be made with the
translator.
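
A sketch collecting the commands mentioned above (<VOLNAME> and the server
name are placeholders):

  gluster volume set <VOLNAME> nfs.disable off       # make sure the NFS translator is on
  gluster volume set <VOLNAME> nfs.enable-ino32 on   # only needed for 32bit apps/OS
  gluster vol set help | grep "^Option: nfs."        # list all nfs.* tunables
  mount -t nfs -o proto=tcp,vers=3 <server>:/<VOLNAME> /mnt/<VOLNAME>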

Cheers,

Paul C




*From: *"John G. Heim" 
*To: *"gluster-users" 
*Sent: *Friday, 7 February, 2014 9:52:44 AM
*Subject: *[Gluster-users] nfs


Maybe this is a dumb question but do I have to set up an nfs server on
one of the server peers in my gluster volume in order to connect to the
volume with nfs?  I did a port scan on a couple of the peers in my
cluster and port 2049 was closed. I'm thinking maybe you have to
configure an nfs server on one of the peers and it can read/write to the
gluster volume like it would any disk. But then what do these commands do:


   gluster volume set <VOLNAME> nfs.disable off
   gluster volume set <VOLNAME> nfs.disable on

The documentation on the gluster.org web site seems to imply that you
don't need an nfs server. It specifically says you need the nfs-common
package on your servers. That would imply you don't need the
nfs-kernel-server package, right? See:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Using_NFS_to_Mount_Volumes
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




--
---
John G. Heim, 608-263-4189, jh...@math.wisc.edu
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterfs-3.5.0beta3 released

2014-02-11 Thread Gluster Build System


SRC: 
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.0beta3.tar.gz

This release is made off jenkins-release-58

-- Gluster Build System
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Problem with duplicate files

2014-02-11 Thread tegner
Hi,

have a system consisting of 4 bricks, distributed, 3.4.1, and I have noticed
that some of the files are stored on three of the bricks. Typically a listing
can look something like this:

brick1: ---------T 2 root root     0 Feb 11 15:47 /mnt/raid6/file
brick2: -rw------- 2 2686 2022 10545 Mar  6  2012 /mnt/raid6/file
brick3: -rw------- 2 2686 2022 10545 Mar  6  2012 /mnt/raid6/file
brick4: no such file or directory

There are quite a few of these files, and it would be tedious to clean them up
manually. Is there a way to have gluster fix these?

Thanks!

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Testing replication and HA

2014-02-11 Thread James
On Tue, Feb 11, 2014 at 9:43 AM, David Bierce  wrote:
> When the timeout is reached for the failed brick, it does have to
> recreate handles for all the files in the volume, which is apparently quite
> an expensive operation.  In our environment, with only 100s of files, this
> has been livable, but if you have 100k files, I’d imagine it is quite a wait
> to get the client's state of the volume back to usable.

I'm interested in hearing more about this situation. How expensive,
and where do you see the cost? As CPU usage on the client side? On the
brick side? Or what?

Cheers,
James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Testing replication and HA

2014-02-11 Thread James
Thanks to everyone for their replies...

On Tue, Feb 11, 2014 at 2:37 AM, Kaushal M  wrote:
> The 42 second hang is most likely the ping timeout of the client translator.
Indeed I think it is...

>
> What most likely happened was that, the brick on annex3 was being used
> for the read when you pulled its plug. When you pulled the plug, the
> connection between the client and annex3 isn't gracefully terminated
> and the client translator still sees the connection as alive. Because
> of this the next fop is also sent to annex3, but it will timeout as
> annex3 is dead. After the timeout happens, the connection is marked as
> dead, and the associated client xlator is marked as down. Since afr
> now know annex3 is dead, it sends the next fop to annex4 which is
> still alive.
I think this sounds right... My thought was that maybe Gluster could
do better somehow. For example, if the timeout counter passes (say 1
sec) it immediately starts looking for a different brick to continue
from. This way a routine failover wouldn't interrupt activity for 42
seconds. Maybe this is a feature that could be part of the new style
replication?

>
> These kinds of unclean connection terminations are only handled by
> request/ping timeouts currently. You could set the ping timeout values
> to be lower, to reduce the detection time.
The reason I don't want to set this value significantly lower, is that
in the case of a _real_ disaster, or high load condition, I want to
have the 42 seconds to give things a chance to recover without having
to kill the "in process" client mount. So it makes sense to keep it
like this.

>
> ~kaushal

Cheers,
James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterfs volume sync: failed: , is not a friend after reboot

2014-02-11 Thread Jefferson Carlos Machado

Hi,

I get the error "glusterfs volume sync: failed: ,  is not a friend"
after reboot.

How can I fix this?

Regards,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Testing replication and HA

2014-02-11 Thread David Bierce
Isn’t that working as intended?  I’ve had the fuse client failover just fine in 
testing and in production when a memory error caused a kernel panic.

That timeout is tunable, but when a brick in the cluster goes down writes by 
the client are suspended until the timeout is reached.  In our environment, we 
have 100s of VM images that are running live so we’ve had to set the timeout 
down to 2 seconds, to avoid the client file systems remounting in read only or 
causing excessive errors to applications in the VM that get grumpy when there 
is a write lock of more than a few 100ms.

The timeout can be set per volume 
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options

network.ping-timeout
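
A hedged example of tuning it (the volume name "myvol" is a placeholder and
the 10 second value is arbitrary):

  gluster volume set myvol network.ping-timeout 10
  gluster volume info myvol    # reconfigured options are listed at the end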

When the timeout is reached for the failed brick, it does have to
recreate handles for all the files in the volume, which is apparently quite an
expensive operation.  In our environment, with only 100s of files, this has
been livable, but if you have 100k files, I’d imagine it is quite a wait to get
the client's state of the volume back to usable.


On Feb 11, 2014, at 2:49 AM, Sharuzzaman Ahmat Raslan  
wrote:

> Hi all,
> 
> Is the 42s timeout tunable?
> 
> Should the default be made lower, eg. 3 second?
> 
> Thanks.
> 
> 
> 
> 
> On Tue, Feb 11, 2014 at 3:37 PM, Kaushal M  wrote:
> The 42 second hang is most likely the ping timeout of the client translator.
> 
> What most likely happened was that, the brick on annex3 was being used
> for the read when you pulled its plug. When you pulled the plug, the
> connection between the client and annex3 isn't gracefully terminated
> and the client translator still sees the connection as alive. Because
> of this the next fop is also sent to annex3, but it will timeout as
> annex3 is dead. After the timeout happens, the connection is marked as
> dead, and the associated client xlator is marked as down. Since afr
> now know annex3 is dead, it sends the next fop to annex4 which is
> still alive.
> 
> These kinds of unclean connection terminations are only handled by
> request/ping timeouts currently. You could set the ping timeout values
> to be lower, to reduce the detection time.
> 
> ~kaushal
> 
> On Tue, Feb 11, 2014 at 11:57 AM, Krishnan Parthasarathi
>  wrote:
> > James,
> >
> > Could you provide the logs of the mount process, where you see the hang for 
> > 42s?
> > My initial guess, seeing 42s, is that the client translator's ping timeout
> > is in play.
> >
> > I would encourage you to report a bug and attach relevant logs.
> > If the issue (observed) turns out to be an acceptable/explicable behavioural
> > quirk of glusterfs, then we could close the bug :-)
> >
> > cheers,
> > Krish
> > - Original Message -
> >> It's been a while since I did some gluster replication testing, so I
> >> spun up a quick cluster *cough, plug* using puppet-gluster+vagrant (of
> >> course) and here are my results.
> >>
> >> * Setup is a 2x2 distributed-replicated cluster
> >> * Hosts are named: annex{1..4}
> >> * Volume name is 'puppet'
> >> * Client vm's mount (fuse) the volume.
> >>
> >> * On the client:
> >>
> >> # cd /mnt/gluster/puppet/
> >> # dd if=/dev/urandom of=random.51200 count=51200
> >> # sha1sum random.51200
> >> # rsync -v --bwlimit=10 --progress random.51200 root@localhost:/tmp
> >>
> >> * This gives me about an hour to mess with the bricks...
> >> * By looking on the hosts directly, I see that the random.51200 file is
> >> on annex3 and annex4...
> >>
> >> * On annex3:
> >> # poweroff
> >> [host shuts down...]
> >>
> >> * On client1:
> >> # time ls
> >> random.51200
> >>
> >> real0m42.705s
> >> user0m0.001s
> >> sys 0m0.002s
> >>
> >> [hangs for about 42 seconds, and then returns successfully...]
> >>
> >> * I then powerup annex3, and then pull the plug on annex4. The same sort
> >> of thing happens... It hangs for 42 seconds, but then everything works
> >> as normal. This is of course the cluster timeout value and the answer to
> >> life the universe and everything.
> >>
> >> Question: Why doesn't glusterfs automatically flip over to using the
> >> other available host right away? If you agree, I'll report this as a
> >> bug. If there's a way to do this, let me know.
> >>
> >> Apart from the delay, glad that this is of course still HA ;)
> >>
> >> Cheers,
> >> James
> >> @purpleidea (twitter/irc)
> >> https://ttboj.wordpress.com/
> >>
> >>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@nongnu.org
> >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 
> 
> 
> -- 
> Sharuzzaman Ahmat Raslan
> 

[Gluster-users] gluster on VM and RDBMS

2014-02-11 Thread Suvendu Mitra
Hi,
We are planning to run glusterfs on an openstack based VM cluster. I
understand that gluster replication works at the file level. What are the
drawbacks of running an RDBMS on gluster?

-- 
Suvendu Mitra
GSM - +358504821066
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] geo-replication errors

2014-02-11 Thread John Ewing
I am trying to use geo-replication but it is running slowly and I keep
getting the
following logged in the geo-replication log.

[2014-02-11 10:56:42.831517] I [monitor(monitor):80:monitor] Monitor:

[2014-02-11 10:56:42.832226] I [monitor(monitor):81:monitor] Monitor:
starting gsyncd worker
[2014-02-11 10:56:42.951199] I [gsyncd:354:main_i] : syncing:
gluster://localhost:xxx -> ssh://gluster-as...@xx.xx.xx.xx
:gluster://localhost:x
[2014-02-11 10:56:53.79632] I [master:284:crawl] GMaster: new master is
acfda6fc-d995-4bf0-b13e-da789afb28c7
[2014-02-11 10:56:53.80282] I [master:288:crawl] GMaster: primary master
with volume id acfda6fc-d995-4bf0-b13e-da789afb28c7 ...
[2014-02-11 10:56:57.453376] E [syncdutils:190:log_raise_exception] :
FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 216,
in twrap
tf(*aa)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 123, in
tailer
poe, _ ,_ = select([po.stderr for po in errstore], [], [], 1)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 276,
in select
return eintr_wrap(oselect.select, oselect.error, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 269,
in eintr_wrap
return func(*a)
error: (9, 'Bad file descriptor')
[2014-02-11 10:56:57.462110] I [syncdutils:142:finalize] : exiting.

I'm unsure what to do to debug and fix this.

Thanks

John.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Question on replicated volumes with bricks on the same server

2014-02-11 Thread Antonio Messina
Hi all,

I would like to know how gluster distributes the data when two bricks
of the same volume are on the same server. Specifically, I would like
to know if there is any way to spread the replicas on different nodes
whenever possible, in order not to lose any data if the node goes
down.

I did a simple test and it seems that the way replicas are spread over
the bricks is related to the way the volume is created, that is if I
create a volume with:

gluster volume create vol1 replica 2\
gluster-data001:/srv/gluster/vol1.1 \
gluster-data001:/srv/gluster/vol1.2 \
gluster-data002:/srv/gluster/vol1.1 \
gluster-data002:/srv/gluster/vol1.2

replicas of a file will be stored on the two bricks of the same
server, while if I create the volume with

gluster volume create vol1 replica 2\
gluster-data001:/srv/gluster/vol1.1 \
gluster-data002:/srv/gluster/vol1.1 \
gluster-data001:/srv/gluster/vol1.2 \
gluster-data002:/srv/gluster/vol1.2

replicas will be saved on two bricks of different servers.

So, my guess is that if I create a "replica N" replicated+distributed
volumes using the bricks:

  gluster-1:/srv/gluster
  ...
  gluster-[N*M]:/srv/gluster

gluster internally creates a distributed volumes made of the following
replicated "volumes":

  replicated volume 1: gluster-[1..N]:/srv/gluster
  replicated volume 2: gluster-[N+1..2N]:/srv/gluster
  ...
  replicated volume M: gluster-[N*(M-1)+1..N*M]:/srv/gluster

Is that correct, or is there a more complex algorithm involved?

.a.

-- 
antonio.s.mess...@gmail.com
antonio.mess...@uzh.ch +41 (0)44 635 42 22
GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Daniel Müller
That did the trick. Thank you all!!!

 

Greetings

Daniel 

 



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 

Tel.: 07071/206-463, Fax: 07071/206-499
eMail:   muel...@tropenklinik.de
Internet: www.tropenklinik.de 

"Der Mensch ist die Medizin des Menschen"

 

 



 

From: Xavier Hernandez [mailto:xhernan...@datalab.es]
Sent: Tuesday, 11 February 2014 09:48
To: muel...@tropenklinik.de
Cc: m...@mattandtiff.net; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster, Samba, and VFS

 

Hi Daniel,

have you tried to set the following option into the share definition of 
smb.conf as Lalatendu said (see the bug report 
https://bugzilla.redhat.com/show_bug.cgi?id=1062674) ?

kernel share modes = no

I had a very similar problem and this option solved it.

Best regards,

Xavi

On 11/02/14 07:42, Daniel Müller wrote:

No, not really:
Look at my thread: samba vfs objects glusterfs is it now working?
I am just waiting for an answer to fix this.
The only way I succeeded to make it work is how you described (exporting
fuse mount thru samba)
 
 
 
EDV Daniel Müller
 
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"
 
 
 
 
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Matt Miller
Sent: Monday, 10 February 2014 16:43
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster, Samba, and VFS
 
Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in the
past with mounting the Gluster volume as a Fuse mount, then exporting the
Fuse mount thru Samba.  As I found out after setting up the cluster this is
somewhat expected when serving out lots of small files.  Was hoping VFS
would provide better performance when serving out lots and lots of small
files.
Is anyone using VFS extensions in production?  Is it ready for prime time? 
I could not find a single reference to it on Gluster's main website (maybe I
am looking in the wrong place), so not sure of the stability or
supported-ness of this.
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Vijay Bellur

On 02/10/2014 09:13 PM, Matt Miller wrote:

Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in
the past with mounting the Gluster volume as a Fuse mount, then
exporting the Fuse mount thru Samba.  As I found out after setting up
the cluster this is somewhat expected when serving out lots of small
files.  Was hoping VFS would provide better performance when serving out
lots and lots of small files.

Is anyone using VFS extensions in production?  Is it ready for prime
time?  I could not find a single reference to it on Gluster's main
website (maybe I am looking in the wrong place), so not sure of the
stability or supported-ness of this.



We do support this VFS extension and actively welcome your feedback in 
this area. FWIW, Red Hat Storage 2.1 uses this plugin in its offering [1].


-Vijay

[1] 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-Administration_Guide-GlusterFS_Client-CIFS.html


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Testing replication and HA

2014-02-11 Thread Sharuzzaman Ahmat Raslan
Hi all,

Is the 42s timeout tunable?

Should the default be made lower, eg. 3 second?

Thanks.




On Tue, Feb 11, 2014 at 3:37 PM, Kaushal M  wrote:

> The 42 second hang is most likely the ping timeout of the client
> translator.
>
> What most likely happened was that, the brick on annex3 was being used
> for the read when you pulled its plug. When you pulled the plug, the
> connection between the client and annex3 isn't gracefully terminated
> and the client translator still sees the connection as alive. Because
> of this the next fop is also sent to annex3, but it will timeout as
> annex3 is dead. After the timeout happens, the connection is marked as
> dead, and the associated client xlator is marked as down. Since afr
> now know annex3 is dead, it sends the next fop to annex4 which is
> still alive.
>
> These kinds of unclean connection terminations are only handled by
> request/ping timeouts currently. You could set the ping timeout values
> to be lower, to reduce the detection time.
>
> ~kaushal
>
> On Tue, Feb 11, 2014 at 11:57 AM, Krishnan Parthasarathi
>  wrote:
> > James,
> >
> > Could you provide the logs of the mount process, where you see the hang
> for 42s?
> > My initial guess, seeing 42s, is that the client translator's ping
> timeout
> > is in play.
> >
> > I would encourage you to report a bug and attach relevant logs.
> > If the issue (observed) turns out to be an acceptable/explicable
> behavioural
> > quirk of glusterfs, then we could close the bug :-)
> >
> > cheers,
> > Krish
> > - Original Message -
> >> It's been a while since I did some gluster replication testing, so I
> >> spun up a quick cluster *cough, plug* using puppet-gluster+vagrant (of
> >> course) and here are my results.
> >>
> >> * Setup is a 2x2 distributed-replicated cluster
> >> * Hosts are named: annex{1..4}
> >> * Volume name is 'puppet'
> >> * Client vm's mount (fuse) the volume.
> >>
> >> * On the client:
> >>
> >> # cd /mnt/gluster/puppet/
> >> # dd if=/dev/urandom of=random.51200 count=51200
> >> # sha1sum random.51200
> >> # rsync -v --bwlimit=10 --progress random.51200 root@localhost:/tmp
> >>
> >> * This gives me about an hour to mess with the bricks...
> >> * By looking on the hosts directly, I see that the random.51200 file is
> >> on annex3 and annex4...
> >>
> >> * On annex3:
> >> # poweroff
> >> [host shuts down...]
> >>
> >> * On client1:
> >> # time ls
> >> random.51200
> >>
> >> real0m42.705s
> >> user0m0.001s
> >> sys 0m0.002s
> >>
> >> [hangs for about 42 seconds, and then returns successfully...]
> >>
> >> * I then powerup annex3, and then pull the plug on annex4. The same sort
> >> of thing happens... It hangs for 42 seconds, but then everything works
> >> as normal. This is of course the cluster timeout value and the answer to
> >> life the universe and everything.
> >>
> >> Question: Why doesn't glusterfs automatically flip over to using the
> >> other available host right away? If you agree, I'll report this as a
> >> bug. If there's a way to do this, let me know.
> >>
> >> Apart from the delay, glad that this is of course still HA ;)
> >>
> >> Cheers,
> >> James
> >> @purpleidea (twitter/irc)
> >> https://ttboj.wordpress.com/
> >>
> >>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@nongnu.org
> >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
Sharuzzaman Ahmat Raslan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Xavier Hernandez

Hi Daniel,

have you tried to set the following option into the share definition of 
smb.conf as Lalatendu said (see the bug report 
https://bugzilla.redhat.com/show_bug.cgi?id=1062674) ?


kernel share modes = no

I had a very similar problem and this option solved it.

Best regards,

Xavi

On 11/02/14 07:42, Daniel Müller wrote:

No, not really:
Look at my thread: samba vfs objects glusterfs is it now working?
I am just waiting for an answer to fix this.
The only way I succeeded to make it work is how you described (exporting
fuse mount thru samba)



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
"Der Mensch ist die Medizin des Menschen"




From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Matt Miller
Sent: Monday, 10 February 2014 16:43
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster, Samba, and VFS

Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in the
past with mounting the Gluster volume as a Fuse mount, then exporting the
Fuse mount thru Samba.  As I found out after setting up the cluster this is
somewhat expected when serving out lots of small files.  Was hoping VFS
would provide better performance when serving out lots and lots of small
files.
Is anyone using VFS extensions in production?  Is it ready for prime time?
I could not find a single reference to it on Gluster's main website (maybe I
am looking in the wrong place), so not sure of the stability or
supported-ness of this.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs objects glusterfs is it now working?

2014-02-11 Thread Lalatendu Mohanty

On 02/07/2014 10:31 PM, Lalatendu Mohanty wrote:

Hi Daniel,

I was trying out Samba 4.1.3 + the Samba vfs plugin + glusterfs 3.4.2 and 
have hit a bug. I was using the packages available in Fedora 20. I think 
the bug might be with the vfs plugin, which is in the Samba 4 tree. Please 
update the bug with your comments if you think your issue is also 
related to this.


https://bugzilla.redhat.com/show_bug.cgi?id=1062674 : Write is failing 
on a cifs mount with samba-4.1.3-2.fc20 + glusterfs samba vfs plugin


Daniel,

The above-mentioned bug turned out not to be a bug. After setting 
"kernel share modes = No" for the share (which is expected), it is 
working fine for me. I have also updated the bug with the relevant 
information. Below are the settings for the share that worked for me; I 
used one of the Gluster nodes as the Samba server. Kindly try this out 
and let us know if it works for you.


[testvol]
comment = For samba share of volume testvol
path = /
read only = No
guest ok = Yes
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
glusterfs:volume = testvol
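
For a quick check from a client once smbd is restarted, something like the 
following should work (the server name, mount point and guest mount are just 
an example based on the share above):

# mkdir -p /mnt/smbtest
# mount -t cifs //<samba-server>/testvol /mnt/smbtest -o guest
# dd if=/dev/zero of=/mnt/smbtest/write-test bs=1M count=10
# umount /mnt/smbtest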

Thanks,
Lala



On 02/07/2014 12:57 PM, Daniel Müller wrote:

Dear all,

I am trying to set up the glusterfs vfs object.
My settings (Samba 4.1.4 share definition):

[home]
comment=gluster test
vfs objects=glusterfs
glusterfs:volume= sambacluster
glusterfs:volfile_server = 172.17.1.1
path=/ads/home
Actually, with the vfs plugin we don't need to give the actual path. 
However, I am not sure whether that is causing the issue in your case. I 
use the following entries:


[testvol]
comment = For samba share of volume smb-vol-2
path = /
read only = No
guest ok = Yes
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:logfile = /var/log/samba/glusterfs-smb-vol-2.%M.log
glusterfs:volume = testvol
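
As a quick sanity check on the Samba node before retesting (the module path 
and service name below are the usual ones on Fedora and may differ on your 
install): confirm the glusterfs VFS module is present, restart Samba, and, 
after a client connects, look at the per-client log configured above.

# ls /usr/lib64/samba/vfs/glusterfs.so
# systemctl restart smb
# tail -n 20 /var/log/samba/glusterfs-smb-vol-2.*.log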


read only=no
posix locking =NO


Mount gluster in fstab:

172.17.1.1:/sambacluster   /mnt/glusterfs glusterfs
defaults,acl,_netdev  0  0

On /dev/sdb1 (the brick filesystem): mkfs.xfs -i size=512 /dev/sdb1
/dev/sdb1   /raid5hs   xfs defaults   1 2
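
(A quick way to double-check the layout described here, assuming the mount 
points above: confirm the brick filesystem really has 512-byte inodes and 
that both the brick and the fuse mount are in place.)

# xfs_info /raid5hs | grep isize
# mount | grep -E 'raid5hs|glusterfs'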

[root@s4master ~]# gluster volume info

Volume Name: sambacluster
Type: Replicate
Volume ID: 4fd0da03-8579-47cc-926b-d7577dac56cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s4master:/raid5hs/glusterfs/samba
Brick2: s4slave:/raid5hs/glusterfs/samba
Options Reconfigured:
performance.quick-read: on
network.ping-timeout: 5
performance.stat-prefetch: off

Samba reports that it loaded the volume:

Feb  7 08:20:52 s4master GlusterFS[6867]: [2014/02/07 08:20:52.455982,  0]
  ../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
Feb  7 08:20:52 s4master GlusterFS[6867]:   sambacluster: Initialized volume
  from server 172.17.1.1

But when I try to write Office files and txt files from a Windows client 
to the share, there is an error: "the file: x could not be created, the 
system could not find the file". After a refresh the file shows up anyway, 
but these files cannot be changed, only deleted.
Directories can be created without this issue!?

Can you give me a hint to make things work? Or is the plugin not working 
in its current state?


Greetings
Daniel



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
"Der Mensch ist die Medizin des Menschen"






___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Testing replication and HA

2014-02-11 Thread haiwei.xie-soulinfo

   It's an interesting problem. After 42s, your client becomes aware that 
some bricks are offline and I/O continues; if your app's timeout is too 
short, an error will occur.
   If the ping timeout is set too low, you may run into trouble in a heavy 
I/O environment.
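
   For reference, the timeout in question is per volume and can be tuned; a 
rough sketch using the volume name from the original mail (pick a value that 
suits your environment, since a very low value can cause spurious 
disconnects under heavy I/O):

# gluster volume set puppet network.ping-timeout 10
# gluster volume info puppet | grep ping-timeout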


On Tue, 11 Feb 2014 13:07:36 +0530
Kaushal M  wrote:

> The 42 second hang is most likely the ping timeout of the client translator.
> 
> What most likely happened was that the brick on annex3 was being used
> for the read when you pulled its plug. When you pulled the plug, the
> connection between the client and annex3 isn't gracefully terminated
> and the client translator still sees the connection as alive. Because
> of this the next fop is also sent to annex3, but it will timeout as
> annex3 is dead. After the timeout happens, the connection is marked as
> dead, and the associated client xlator is marked as down. Since afr
> now knows annex3 is dead, it sends the next fop to annex4, which is
> still alive.
> 
> These kinds of unclean connection terminations are only handled by
> request/ping timeouts currently. You could set the ping timeout values
> to be lower, to reduce the detection time.
> 
> ~kaushal
> 
> On Tue, Feb 11, 2014 at 11:57 AM, Krishnan Parthasarathi
>  wrote:
> > James,
> >
> > Could you provide the logs of the mount process, where you see the hang for 
> > 42s?
> > My initial guess, seeing 42s, is that the client translator's ping timeout
> > is in play.
> >
> > I would encourage you to report a bug and attach relevant logs.
> > If the issue (observed) turns out to be an acceptable/explicable behavioural
> > quirk of glusterfs, then we could close the bug :-)
> >
> > cheers,
> > Krish
> > - Original Message -
> >> It's been a while since I did some gluster replication testing, so I
> >> spun up a quick cluster *cough, plug* using puppet-gluster+vagrant (of
> >> course) and here are my results.
> >>
> >> * Setup is a 2x2 distributed-replicated cluster
> >> * Hosts are named: annex{1..4}
> >> * Volume name is 'puppet'
> >> * Client vm's mount (fuse) the volume.
> >>
> >> * On the client:
> >>
> >> # cd /mnt/gluster/puppet/
> >> # dd if=/dev/urandom of=random.51200 count=51200
> >> # sha1sum random.51200
> >> # rsync -v --bwlimit=10 --progress random.51200 root@localhost:/tmp
> >>
> >> * This gives me about an hour to mess with the bricks...
> >> * By looking on the hosts directly, I see that the random.51200 file is
> >> on annex3 and annex4...
> >>
> >> * On annex3:
> >> # poweroff
> >> [host shuts down...]
> >>
> >> * On client1:
> >> # time ls
> >> random.51200
> >>
> >> real    0m42.705s
> >> user    0m0.001s
> >> sys     0m0.002s
> >>
> >> [hangs for about 42 seconds, and then returns successfully...]
> >>
> >> * I then powerup annex3, and then pull the plug on annex4. The same sort
> >> of thing happens... It hangs for 42 seconds, but then everything works
> >> as normal. This is of course the cluster timeout value and the answer to
> >> life, the universe, and everything.
> >>
> >> Question: Why doesn't glusterfs automatically flip over to using the
> >> other available host right away? If you agree, I'll report this as a
> >> bug. If there's a way to do this, let me know.
> >>
> >> Apart from the delay, glad that this is of course still HA ;)
> >>
> >> Cheers,
> >> James
> >> @purpleidea (twitter/irc)
> >> https://ttboj.wordpress.com/
> >>
> >>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@nongnu.org
> >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users