Re: [Gluster-users] GlusterD uses 50% of RAM

2015-03-24 Thread RASTELLI Alessandro
Hi,
today the issue happened once again. 
Glusterd process was using 80% of RAM and its log was filling up /var/log.
One month ago, when the issue last happened, you suggested to install a patch, 
so I did this:
git fetch git://review.gluster.org/glusterfs refs/changes/28/9328/4
git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
Is this enough to install the patch, or did I miss something?
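
(For reference: checking out FETCH_HEAD only updates the source tree. A sketch
of the usual end-to-end flow, assuming the standard GlusterFS autotools build,
would be:

git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
./autogen.sh && ./configure && make && make install    # rebuild and reinstall the binaries
service glusterd restart                               # restart so the new binary is used

Without the rebuild and restart, the running glusterd is unchanged.)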

Thank you
Alessandro


-Original Message-
From: RASTELLI Alessandro 
Sent: Tuesday, 24 February 2015 10:28
To: 'Atin Mukherjee'
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] GlusterD uses 50% of RAM

Hi Atin,
 I managed to install the patch and it fixed the issue. Thank you. A.

-Original Message-
From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Tuesday, 24 February 2015 08:03
To: RASTELLI Alessandro
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterD uses 50% of RAM



On 02/20/2015 07:20 PM, RASTELLI Alessandro wrote:
 I get this:
 
 [root@gluster03-mi glusterfs]# git fetch git://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
 fatal: Couldn't find remote ref refs/changes/28/9328/4
 
 What's wrong with that?
Is your current branch at 3.6?
 
 A.
 
 -Original Message-
 From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Friday, 20 February 2015 12:54
 To: RASTELLI Alessandro
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
 
From the cmd log history I could see that lots of volume status commands were 
triggered in parallel. This is a known issue for 3.6 and it causes a 
memory leak. http://review.gluster.org/#/c/9328/ should solve it.
 
 ~Atin
 
 On 02/20/2015 04:36 PM, RASTELLI Alessandro wrote:
 10 MB log, sorry :)

 -Original Message-
 From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Friday, 20 February 2015 10:49
 To: RASTELLI Alessandro; gluster-users@gluster.org
 Subject: Re: [Gluster-users] GlusterD uses 50% of RAM

Could you please share the cmd_history.log & glusterd log files to analyze 
this high memory usage.

 ~Atin

 On 02/20/2015 03:10 PM, RASTELLI Alessandro wrote:
 Hi,
 I've noticed that one of our 6 gluster 3.6.2 nodes has the glusterd
 process using 50% of RAM, while on the other nodes usage is about 5%.
 Could this be a bug?
 Should I restart the glusterd daemon?
 Thank you
 A

 From: Volnei Puttini [mailto:vol...@vcplinux.com.br]
Sent: Monday, 9 February 2015 18:06
 To: RASTELLI Alessandro; gluster-users@gluster.org
 Subject: Re: [Gluster-users] cannot access to CIFS export

 Hi Alessandro,

 My system:

 CentOS 7

 samba-vfs-glusterfs-4.1.1-37.el7_0.x86_64
 samba-winbind-4.1.1-37.el7_0.x86_64
 samba-libs-4.1.1-37.el7_0.x86_64
 samba-common-4.1.1-37.el7_0.x86_64
 samba-winbind-modules-4.1.1-37.el7_0.x86_64
 samba-winbind-clients-4.1.1-37.el7_0.x86_64
 samba-4.1.1-37.el7_0.x86_64
 samba-client-4.1.1-37.el7_0.x86_64

 glusterfs 3.6.2 built on Jan 22 2015 12:59:57

 Try this, it works fine for me:

 [GFSVOL]
 browseable = No
 comment = Gluster share of volume gfsvol
 path = /
 read only = No
 guest ok = Yes
 kernel share modes = No
 posix locking = No
 vfs objects = glusterfs
 glusterfs:loglevel = 7
 glusterfs:logfile = /var/log/samba/glusterfs-gfstest.log
 glusterfs:volume = vgtest
 glusterfs:volfile_server = 192.168.2.21
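
 A quick way to sanity-check the share from the server itself (a sketch; 'user'
 stands in for any valid Samba account, GFSVOL is the share name above):

 testparm -s                                    # validate smb.conf, dump the effective config
 smbclient -L localhost -U user                 # list shares; GFSVOL should appear
 smbclient //localhost/GFSVOL -U user -c 'ls'   # exercise vfs_glusterfs with a real listing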

 On 09-02-2015 14:45, RASTELLI Alessandro wrote:
 Hi,
 I've created and started a new replica volume downloadstat with CIFS 
 export enabled on GlusterFS 3.6.2.
 I can see the following piece has been added automatically to smb.conf:
 [gluster-downloadstat]
 comment = For samba share of volume downloadstat
 vfs objects = glusterfs
 glusterfs:volume = downloadstat
 glusterfs:logfile = /var/log/samba/glusterfs-downloadstat.%M.log
 glusterfs:loglevel = 7
 path = /
 read only = no
 guest ok = yes

 I restarted the smb service without errors.
 When I try to access \\gluster01-mi\gluster-downloadstat from a Win7 client,
 it asks me for a login (which user do I need to put?) and then gives me the
 error "The network path was not found",
 and on Gluster smb.log I see:
 [2015/02/09 17:21:13.111639,  0] smbd/vfs.c:173(vfs_init_custom)
   error probing vfs module 'glusterfs': NT_STATUS_UNSUCCESSFUL
 [2015/02/09 17:21:13.111709,  0] smbd/vfs.c:315(smbd_vfs_init)
   smbd_vfs_init: vfs_init_custom failed for glusterfs
 [2015/02/09 17:21:13.111741,  0] smbd/service.c:902(make_connection_snum)
   vfs_init failed for service gluster-downloadstat

 Can you explain how to fix?
 Thanks

 Alessandro

 From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of David F. Robinson
 Sent: Sunday, 8 February 2015 18:19
 To: Gluster Devel; gluster-users@gluster.org
 Subject: [Gluster-users] cannot delete non-empty directory

 I am seeing these 

Re: [Gluster-users] What should I do to improve performance ?

2015-03-24 Thread noc
On 24-3-2015 12:59, marianna cattani wrote:
 Hello Ben,
 the whole infrastructure runs on 4 servers with 7,200 rpm SAS drives and an
 LSI raid controller.

 Probably the network is oversized compared to the disks and controllers.

 To verify that libgfapi is operating, is it enough that my VM's disks
 are named /dev/vd*?


do a 'virsh -r dumpxml yourvm' and have a look at the output; it should look
like:

<disk type='network' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source protocol='gluster' name='GlusterSSD/e11866bf-1120-4c87-a992-9be32f110b8d/images/09ca5046-ab7a-11d8-9f2a-ffac3e89a6ad/042a9f7d-15be-443d-bcf7-a2ba8db43f72'>
    <host name='test.com' port='0'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <serial>09ca5046-ab7a-4bd8-9f2a-ffac3e89a6ad</serial>
  <boot order='1'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

The first line is important: type='network' indicates that it's using libgfapi.
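
For a quick check without reading the whole XML (a sketch, keeping the
hypothetical domain name 'yourvm'):

virsh -r dumpxml yourvm | grep -E "disk type|protocol"

A disk type='network' with source protocol='gluster' means libgfapi; a disk
type='file' pointing at a path under a FUSE mount means the native client.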

Joop

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What should I do to improve performance ?

2015-03-24 Thread marianna cattani
OpenStack doesn't have vdsm; there it should be a configuration option:
qemu_allowed_storage_drivers=gluster

However, the machine is generated with the XML that you see.

Now I'll try writing to the OpenStack mailing list.

tnx

M

2015-03-24 15:14 GMT+01:00 noc n...@nieuwland.nl:

 On 24-3-2015 14:39, marianna cattani wrote:
  Many many thanks ...
 
  Mine is different ...
 
  :'(
 
  root@nodo-4:~# virsh -r dumpxml instance-002c | grep disk
  <disk type='file' device='disk'>
    <source file='/var/lib/nova/instances/ef84920d-2009-42a2-90de-29d9bd5e8512/disk'/>
    <alias name='virtio-disk0'/>
  </disk>
 
 
  That's a FUSE connection. I'm running oVirt + glusterfs where vdsm is a
  special version with libgfapi support. Don't know if OpenStack has that
  too?

 Joop


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What should I do to improve performance ?

2015-03-24 Thread marianna cattani
Many many thanks ...

Mine is different ...

:'(

root@nodo-4:~# virsh -r dumpxml instance-002c | grep disk
<disk type='file' device='disk'>
  <source file='/var/lib/nova/instances/ef84920d-2009-42a2-90de-29d9bd5e8512/disk'/>
  <alias name='virtio-disk0'/>
</disk>

Now I investigate more.

BR.

M.


2015-03-24 14:29 GMT+01:00 noc n...@nieuwland.nl:

 On 24-3-2015 12:59, marianna cattani wrote:
  Hello Ben,
  the whole infrastructure runs on 4 servers with 7,200 rpm SAS drives and an
  LSI raid controller.
 
  Probably the network is oversized compared to the disks and controllers.
 
  To verify that libgfapi is operating, is it enough that my VM's disks
  are named /dev/vd*?
 
 
 do a 'virsh -r dumpxml yourvm' and have a look at the output; it should look
 like:

 <disk type='network' device='disk' snapshot='no'>
   <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
   <source protocol='gluster' name='GlusterSSD/e11866bf-1120-4c87-a992-9be32f110b8d/images/09ca5046-ab7a-11d8-9f2a-ffac3e89a6ad/042a9f7d-15be-443d-bcf7-a2ba8db43f72'>
     <host name='test.com' port='0'/>
   </source>
   <target dev='vda' bus='virtio'/>
   <serial>09ca5046-ab7a-4bd8-9f2a-ffac3e89a6ad</serial>
   <boot order='1'/>
   <alias name='virtio-disk0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
 </disk>

 The first line is important: type='network' indicates that it's using libgfapi.

 Joop

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] lots of nfs.log activity since upgrading to 3.4.6

2015-03-24 Thread Matt

Hello list,

I have a few Wordpress sites served via NFS on Gluster. Since upgrading to 
3.4.6, I'm seeing a non-trivial number of entries like this (1-2 million, 
depending on how busy the blogs are; about 4 GB in the last three weeks) 
appear in nfs.log:


[2015-03-24 13:49:17.314003] I 
[dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: 
STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null
[2015-03-24 13:49:17.355722] I 
[dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: 
STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null
[2015-03-24 13:49:17.616073] I 
[dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: 
STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null


It doesn't seem to be a big problem, it's just an INFO log, but it 
definitely wasn't there with 3.4.5. Can anyone give me any insight into 
what's going on?


-Matt
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What should I do to improve performance ?

2015-03-24 Thread noc
On 24-3-2015 14:39, marianna cattani wrote:
 Many many thanks ...

 Mine is different ...

 :'(

 root@nodo-4:~# virsh -r dumpxml instance-002c | grep disk
 <disk type='file' device='disk'>
   <source file='/var/lib/nova/instances/ef84920d-2009-42a2-90de-29d9bd5e8512/disk'/>
   <alias name='virtio-disk0'/>
 </disk>


That's a FUSE connection. I'm running oVirt + glusterfs where vdsm is a
special version with libgfapi support. Don't know if OpenStack has that too?

Joop

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What should I do to improve performance ?

2015-03-24 Thread marianna cattani
Hello Ben,
the whole infrastructure runs on 4 servers with 7,200 rpm SAS drives and an LSI
raid controller.

Probably the network is oversized compared to the disks and controllers.

To verify that libgfapi is operating, is it enough that my VM's disks are
named /dev/vd*?

I'm not sure that I understand the test which you suggest. If
/var/lib/nova/instances is my mount point, I could run:

bonnie++ -d /var/lib/nova/instances -r 2048

and then run the same thing inside a VM that uses /var/lib/nova/instances?

Is that right?
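
(That is the idea, as a sketch: run the identical test in both places and
compare the numbers. The -u root flag, needed when bonnie++ runs as root, and
the in-VM directory are assumptions.)

bonnie++ -d /var/lib/nova/instances -r 2048 -u root   # on the hypervisor, against the gluster mount
bonnie++ -d /root -r 2048 -u root                     # the same test inside one of the VMs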

BR

M.

2015-03-23 19:57 GMT+01:00 Ben Turner btur...@redhat.com:

 - Original Message -
  From: marianna cattani marianna.catt...@gmail.com
  To: gluster-users@gluster.org
  Sent: Monday, March 23, 2015 6:09:41 AM
  Subject: [Gluster-users] What should I do to improve performance ?
 
  Dear all,
  I followed the tutorial I read at this link:
  http://www.gluster.org/documentation/use_cases/Virt-store-usecase/
 
  I have 4 nodes configured as a linked list; each node also runs virtual
  machines with KVM and mounts on its own IP address, like this:
 
  172.16.155.12:/nova /var/lib/nova/instances glusterfs defaults,_netdev 0 0
 
  Each node has two NICs (ten gigabit) bonded in mode 4.
 
  What can I do to further improve the speed?

 What kind of disks are back ending your 10G NICs?  Are you using FUSE or
 libgfapi to connect to gluster from your hypervisor?  What kind of speeds
 are you expecting vs seeing in your environment?  We need to understand
 what your HW can do first then gather some data running on gluster and
 compare the two.  As a rule of thumb with replica 2 you should see about:

 throughput = ( NIC line speed / 2 ) - 20% overhead

 As long as your disks can service it.  If you are seeing about that on the
 gluster mounts then go inside one of the VMs and run the same test, the VM
 should get something similar.  If you aren't seeing at least 400 MB/sec
 on sequential writes and 500-700 MB/sec on reads then there may be
 something off in your storage stack.
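
 As a worked example of that rule of thumb: a 10G NIC tops out around 1250
 MB/sec, so replica 2 gives 1250 / 2 = 625 MB/sec, and taking off ~20%
 overhead leaves roughly 500 MB/sec, which is where the sequential numbers
 above come from.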

 -b

  BR.
 
  M.
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Out of memory Gluster 3.6.2

2015-03-24 Thread Atin Mukherjee
http://review.gluster.org/9269 is going to solve this problem; it will
be available in the coming 3.6 z-stream release. You can refer to the
patch to understand the issue and the solution. Let me know in case any
clarification is required.

~Atin

On 03/24/2015 02:44 PM, Atin Mukherjee wrote:
 From the cmd_log_history it looks like there are frequent gluster
 profile commands being executed, which may have led to the OOM issue. I
 will analyze it further and get back to you.
 
 ~Atin
 
 On 03/24/2015 02:20 PM, Félix de Lelelis wrote:
 Hi,
 Only a monitoring system. I funnel all checks through one script that
 locks out the other monitoring checks, so only one process checks
 gluster at a time. I am sending you the cmd_log_history of the 2 nodes.
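
 The lock wrapper boils down to something like this (a sketch, with a
 hypothetical lock path; flock ensures only one gluster check runs at a time):

 #!/bin/sh
 # serialize every monitoring probe behind a single lock file
 exec flock -n /var/run/gluster-check.lock gluster volume status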

 Thanks.



 2015-03-24 9:18 GMT+01:00 Atin Mukherjee amukh...@redhat.com:

 Could you tell us what activities were run in the cluster?
 cmd_log_history across all the nodes would give a clear picture of it.

 ~Atin

 On 03/24/2015 01:03 PM, Félix de Lelelis wrote:
 Hi,

 Today, the glusterd daemon has been killed due to excessive memory
 consumption:

 [3505254.762715] Out of memory: Kill process 7780 (glusterd) score 581 or sacrifice child
 [3505254.763451] Killed process 7780 (glusterd) total-vm:3537640kB, anon-rss:1205240kB, file-rss:672kB

 I have installed gluster 3.6.2 on CentOS 7. Is there any way to avoid
 this without the need to kill the process? Or do I simply need more memory?

 Thanks.



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


 --
 ~Atin


 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster peer probe error (v3.6.2)

2015-03-24 Thread Andreas
Sure I am. Unfortunately it didn't change the result...

# killall glusterd
# ps -ef | grep gluster
root 15755   657  0 18:35 ttyS0    00:00:00 grep gluster
# rm /var/lib/glusterd/peers/*
# /usr/sbin/glusterd -p /var/run/glusterd.pid
# gluster peer probe 10.32.1.144
#
(I killed glusterd and removed the files on both servers.)

Regards
Andreas


On 03/24/15 05:36, Atin Mukherjee wrote:
 If you are okay to do a fresh setup, I would recommend you clean up
 /var/lib/glusterd/peers/*, restart glusterd on both the nodes,
 and then try peer probing.

 ~Atin

 On 03/23/2015 06:44 PM, Andreas wrote:
 Hi,

 # gluster peer detach 10.32.1.144
 (No output here. Similar to the problem with 'gluster peer probe'.)
 # gluster peer detach 10.32.1.144 force
 peer detach: failed: Peer is already being detached from cluster.
 Check peer status by running gluster peer status
 # gluster peer status
 Number of Peers: 1

 Hostname: 10.32.1.144
 Uuid: 82cdb873-28cc-4ed0-8cfe-2b6275770429
 State: Probe Sent to Peer (Connected)

 # ping 10.32.1.144
 PING 10.32.1.144 (10.32.1.144): 56 data bytes
 64 bytes from 10.32.1.144: seq=0 ttl=64 time=1.811 ms
 64 bytes from 10.32.1.144: seq=1 ttl=64 time=1.834 ms
 ^C
 --- 10.32.1.144 ping statistics ---
 2 packets transmitted, 2 packets received, 0% packet loss
 round-trip min/avg/max = 1.811/1.822/1.834 ms


 As previously stated, this problem seems to be similar to what I experienced
 with 'gluster peer probe'. I can reboot the server, but the situation will be
 the same (I've tried this many times).
 Any ideas of which ports to investigate, and how to do it to get the most
 reliable result?
 Anything else that could cause this?
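
 (A quick way to test the relevant ports, as a sketch: for peer probe itself
 the critical one is glusterd's management port, 24007/tcp, in both
 directions; brick ports, 49152 and up on 3.4+, only matter once volumes
 exist.

 nc -vz 10.32.1.144 24007
 nmap -p 24007 10.32.1.144

 If 24007 answers from both servers and the probe still hangs, the problem is
 likely not the network.)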



 Regards
 Andreas


 On 03/23/15 11:10, Atin Mukherjee wrote:
 On 03/23/2015 03:28 PM, Andreas Hollaus wrote:
 Hi,

 This network problem is persistent. However, I can ping the server, so I
 guess it depends on the port number, right?
 I tried to telnet to port 24007, but I was not sure how to interpret the
 result as I got no response and no timeout (it just seemed to be waiting for
 something).
 That's why I decided to install nmap, but according to that tool the port was
 accessible. Are there any other ports that are vital to gluster peer probe?

 When you say 'deprobe', I guess you mean 'gluster peer detach'? That command
 shows similar behaviour to gluster peer probe.
 Yes, I meant peer detach. How about 'gluster peer detach force'?

 Regards
 Andreas

 On 03/23/15 05:34, Atin Mukherjee wrote:
 On 03/22/2015 07:11 PM, Andreas Hollaus wrote:
 Hi,

 I hope that these are the logs that you requested.

 Logs from 10.32.0.48:
 --
 # more /var/log/glusterfs/.cmd_log_history
 [2015-03-19 13:52:03.277438]  : peer probe 10.32.1.144 : FAILED : Probe returned with unknown errno -1

 # more /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
 [2015-03-19 13:41:31.241768] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
 [2015-03-19 13:41:31.245352] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
 [2015-03-19 13:41:31.245432] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
 [2015-03-19 13:41:31.247826] I [glusterd-store.c:2063:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 30600
 [2015-03-19 13:41:31.247902] I [glusterd-store.c:3497:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
 Final graph:
 Final graph:
 +--+
   1: volume management
   2: type mgmt/glusterd
   3: option rpc-auth.auth-glusterfs on
   4: option rpc-auth.auth-unix on
   5: option rpc-auth.auth-null on
   6: option transport.socket.listen-backlog 128
   7: option ping-timeout 30
   8: option transport.socket.read-fail-log off
   9: option transport.socket.keepalive-interval 2
  10: option transport.socket.keepalive-time 10
  11: option transport-type socket
  12: option working-directory /var/lib/glusterd
  13: end-volume
  14: 
 +--+
 [2015-03-19 13:42:02.258403] I [glusterd-handler.c:1015:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 10.32.1.144 24007
 [2015-03-19 13:42:02.259456] I [glusterd-handler.c:3165:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 10.32.1.144 (24007)
 [2015-03-19 13:42:02.259664] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
 [2015-03-19 13:42:02.260488] I [glusterd-handler.c:3098:glusterd_friend_add] 0-management: connect returned 0
 [2015-03-19 13:42:02.270316] I [glusterd.c:176:glusterd_uuid_generate_save] 0-management: generated UUID: 

Re: [Gluster-users] [Gluster-devel] Revamping the GlusterFS Documentation...

2015-03-24 Thread Shravan Chandrashekar
Yes, we are thinking about a front/landing page for the documentation with
better UI/UX.

The minor changes that you suggested have been made :)

-Shravan


- Original Message -
From: Justin Clift jus...@gluster.org
To: Shravan Chandrashekar schan...@redhat.com
Cc: gluster-users@gluster.org, gluster-de...@gluster.org, Anjana Suparna 
Sriram asri...@redhat.com, s...@redhat.com
Sent: Tuesday, March 24, 2015 3:04:39 AM
Subject: Re: [Gluster-devel] Revamping the GlusterFS Documentation...

On 23 Mar 2015, at 07:01, Shravan Chandrashekar schan...@redhat.com wrote:
 Hi All, 
 
 The Gluster Filesystem documentation is not user friendly, and it is
 fragmented; this has been the feedback we have been receiving. 
 
 We got back to our drawing board and blueprints and realized that the content 
 was scattered at various places. These include: 
 
 [Static HTML] http://www.gluster.org/documentation/ 
 [Mediawiki] http://www.gluster.org/community/documentation/ 
 [In-source] https://github.com/gluster/glusterfs/tree/master/doc 
 [Markdown] https://github.com/GlusterFS/Notes 
 
 and so on… 
 
 Hence, we started by curating content from various sources including 
 gluster.org static HTML documentation, glusterfs github repository, 
 various blog posts and the Community wiki. We also felt the need to improve 
 the community members' experience with Gluster documentation. This led us to 
 put some thought into the user interface. As a result we came up with a page 
 which links all content into a single landing page: 
 
 http://www.gluster.org/community/documentation/index.php/Staged_Docs 
  
 This is just our first step to improve our community docs and enhance the 
 community contribution towards documentation. I would like to thank Humble 
 Chirammal and Anjana Sriram for the suggestions and directions during the 
 entire process. I am sure there is a lot of scope for improvement. 
 Hence, I request you all to review the content and provide your suggestions. 

Looks like a good effort.  Is the general concept for this to
become the front/landing page for the main wiki?

Also some initial thoughts:

 * Gluster Ant Logo image - The first letter REALLY looks like a C
   (to me), not a G.  Reads as "Cluster" for me...

   That aside, it looks really good. :)


 * Getting Started section ... move it up maybe, before the
   Terminology / Architecture / Additional Resources bit

   This is to make it more obvious for new people.


 * "Terminologies" should probably be "Terminology", as
   "Terminology" is kind of both singular and plural.


 * "All that Developers need to know" → "Everything Developers
   need to know"

They're my first thoughts anyway. :)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Revamping the GlusterFS Documentation...

2015-03-24 Thread John Gardeniers

Hi Shravan,

Having recently set up geo-replication of Gluster v3.5.3 I can tell you 
that there is effectively almost no documentation. The documentation 
that does exist is primarily focused on describing the differences 
between the current and previous versions. That's completely useless to 
someone wanting to set it up for the first time and not a whole lot 
better for someone who has upgraded. The first, and perhaps most 
crucial, piece of information that's missing is installation 
requirements. Nowhere have I been able to find out exactly which 
components are required on either the master or the slave. In my case 
this was determined by pure trial and error, i.e. install what I think 
should be needed and then continue installing components until it starts 
to work. Even once that part is done, there is a LOT of documentation 
missing. I recall that when I first set up geo-replication (v3.2 or 
v3.3?) I was able to follow clear and simple step by step instructions 
that almost guaranteed success.


regards,
John


On 23/03/15 18:01, Shravan Chandrashekar wrote:

Hi All,

The Gluster Filesystem documentation is not user friendly, and it is
fragmented; this has been the feedback we have been receiving.


We got back to our drawing board and blueprints and realized that the
content was scattered at various places. These include:


[Static HTML] http://www.gluster.org/documentation/
[Mediawiki] http://www.gluster.org/community/documentation/
[In-source] https://github.com/gluster/glusterfs/tree/master/doc
[Markdown] https://github.com/GlusterFS/Notes

and so on…

Hence, we started by curating content from various sources including
gluster.org static HTML documentation, the glusterfs github repository,
various blog posts and the Community wiki. We also felt the need to
improve the community members' experience with Gluster documentation.
This led us to put some thought into the user interface. As a result
we came up with a page which links all content into a single landing
page:


http://www.gluster.org/community/documentation/index.php/Staged_Docs

This is just our first step to improve our community docs and enhance
the community contribution towards documentation. I would like to
thank Humble Chirammal and Anjana Sriram for the suggestions and
directions during the entire process. I am sure there is a lot of scope
for improvement.
Hence, I request you all to review the content and provide your
suggestions.


Regards,
Shravan Chandra




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster peer probe error (v3.6.2)

2015-03-24 Thread Andreas
Hi,

The problem seems to be solved now(!). We discovered that the global options file
(/var/lib/glusterd/options) generated an error in the log file:
 [1970-01-01 00:01:24.423024] E [glusterd-utils.c:5760:glusterd_compare_friend_data] 0-management: Importing global options failed
For some reason the file was missing previously, but that didn't cause any major
problems for glusterd (except for an error message about the missing file). However,
when an empty options file was created to get rid of that error message, this new
message appeared, which seems to be more serious than the previous one. When the
line 'global-option-version=0' was added to that options file, all those error
messages disappeared and 'gluster peer probe' started to work as expected again.
Not that obvious, at least not for me.
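
In shell terms the fix boils down to something like this (a sketch; the
restart mirrors the killall/start steps from the earlier mail):

echo 'global-option-version=0' > /var/lib/glusterd/options
killall glusterd && /usr/sbin/glusterd -p /var/run/glusterd.pid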

Anyway, thanks for your efforts in trying to solve this problem.


Regards
Andreas

On 03/24/15 10:32, Andreas wrote:
 Sure I am. Unfortunately it didn't change the result...

 # killall glusterd
 # ps -ef | grep gluster
 root 15755   657  0 18:35 ttyS0    00:00:00 grep gluster
 # rm /var/lib/glusterd/peers/*
 # /usr/sbin/glusterd -p /var/run/glusterd.pid
 # gluster peer probe 10.32.1.144
 #
 (I killed glusterd and removed the files on both servers.)

 Regards
 Andreas



Re: [Gluster-users] tune2fs exited with non-zero exit status

2015-03-24 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Ah, that is handy to know.

Will this patch get applied to the 3.5 release stream, or am I going to have to 
look at moving onto 3.6 at some point?

Thanks

Paul

--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Atin Mukherjee
Sent: 18 March 2015 16:24
To: Vitaly Lipatov; gluster-users@gluster.org
Subject: Re: [Gluster-users] tune2fs exited with non-zero exit status



On 03/18/2015 08:04 PM, Vitaly Lipatov wrote:
  
 
 Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote on 2015-03-16 19:22:
 
 Hi,

 I am just looking through my logs and am seeing a lot of entries of the form:

 [2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki

 [2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status

 [2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size

 Having had a rummage I *SUSPECT* it is because gluster is trying to get the
 volume status by querying the superblock on the filesystem for a brick volume.
 However, this is an issue as when the volume was created it was done so in the form:

 I believe it is a bug with a missing device argument to tune2fs, introduced by
 this patch: http://review.gluster.org/#/c/8134/
Could you send a patch to fix this problem? You can refer to [1] for the 
workflow to send a patch.


[1]
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
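
(As a sketch of the Gerrit flow that page describes: clone the repo, commit
the fix on a topic branch, and submit it with the rfc.sh helper in the source
tree; the branch name here is hypothetical.)

git clone git://review.gluster.org/glusterfs && cd glusterfs
git checkout -b tune2fs-device-arg origin/master
# fix glusterd_add_inode_size_to_dict(), commit with a Change-Id, then:
./rfc.sh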


~Atin
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

--
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterD uses 50% of RAM

2015-03-24 Thread Atin Mukherjee
Can you share the recent cmd_log_history again? Have you triggered lots
of volume set commands? In the recent past we have discovered that volume
set has a potential memory leak.

~Atin

On 03/24/2015 07:49 PM, RASTELLI Alessandro wrote:
 Hi,
 today the issue happened once again. 
 Glusterd process was using 80% of RAM and its log was filling up /var/log.
 One month ago, when the issue last happened, you suggested to install a 
 patch, so I did this:
 git fetch git://review.gluster.org/glusterfs refs/changes/28/9328/4
 git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4 && git checkout FETCH_HEAD
 Is this enough to install the patch, or did I miss something?
 
 Thank you
 Alessandro
 
 

Re: [Gluster-users] Out of memory Gluster 3.6.2

2015-03-24 Thread Atin Mukherjee
From the cmd_log_history it looks like there are frequent gluster
profile commands being executed, which may have led to the OOM issue. I
will analyze it further and get back to you.

~Atin

On 03/24/2015 02:20 PM, Félix de Lelelis wrote:
 Hi,
 Only a monitoring system. I funnel all checks through one script that
 locks out the other monitoring checks, so only one process checks
 gluster at a time. I am sending you the cmd_log_history of the 2 nodes.
 
 Thanks.
 
 
 
 2015-03-24 9:18 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
 
 Could you tell us what activities were run in the cluster?
 cmd_log_history across all the nodes would give a clear picture of it.

 ~Atin

 On 03/24/2015 01:03 PM, Félix de Lelelis wrote:
 Hi,

 Today, the glusterd daemon has been killed due to excessive memory
 consumption:

 [3505254.762715] Out of memory: Kill process 7780 (glusterd) score 581 or sacrifice child
 [3505254.763451] Killed process 7780 (glusterd) total-vm:3537640kB, anon-rss:1205240kB, file-rss:672kB

 I have installed gluster 3.6.2 on CentOS 7. Is there any way to avoid
 this without the need to kill the process? Or do I simply need more memory?

 Thanks.



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


 --
 ~Atin

 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Fwd: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

2015-03-24 Thread Lalatendu Mohanty

On 03/23/2015 12:49 PM, Anand Subramanian wrote:

FYI.

GlusterFS vols can now be accessed via the NFSv4.1 pNFS protocol (mount -t 
nfs -o minorversion=1 ...) from nfs-ganesha 2.2-rc5 onwards.
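
For example, a sketch with hypothetical server and volume names; the
minorversion=1 option is what selects NFSv4.1/pNFS:

mount -t nfs -o minorversion=1 ganesha-host:/gv0 /mnt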


Note: one fix is to go into libgfapi to fix up using anonymous fd's in 
ds_write/make_ds_handle() (Avati's suggestion, which really helps here).
Once Jiffin or I get that fix in, good large-file performance 
can be seen with pNFS vs V4.


All thanks and credit to Jiffin for his terrific effort in coding 
things up quickly and for fixing bugs.


Anand


Great news!

I did a quick check in the docs directory i.e. 
https://github.com/gluster/glusterfs/tree/master/doc to see if we have 
any documentation about nfs-ganesha or pNFS and glusterfs integration, 
but did not find any.


I think the lack of howtos around this will hamper the adoption of this 
feature among users. So if we can get some documentation for this, it 
will be awesome.



Thanks,
Lala



 Forwarded Message 
Subject: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

Date:   Sat, 21 Mar 2015 01:04:30 +0100
From:   GerritHub supp...@gerritforge.com
Reply-To:   ffilz...@mindspring.com
To: Anand Subramanian ana...@redhat.com
CC: onnfrhvruutnzhnaq.-g...@noclue.notk.org



 From Frank Filz ffilz...@mindspring.com:

Frank Filz has submitted this change and it was merged.

Change subject: pNFS code drop enablement and checkpatch warnings fixed
..


pNFS code drop enablement and checkpatch warnings fixed

Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
Signed-off-by: Anand Subramanian ana...@redhat.com
---
A src/FSAL/FSAL_GLUSTER/ds.c
M src/FSAL/FSAL_GLUSTER/export.c
M src/FSAL/FSAL_GLUSTER/gluster_internal.h
M src/FSAL/FSAL_GLUSTER/handle.c
M src/FSAL/FSAL_GLUSTER/main.c
A src/FSAL/FSAL_GLUSTER/mds.c
6 files changed, 993 insertions(+), 0 deletions(-)



--
To view, visit https://review.gerrithub.io/221683
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Anand Subramanian ana...@redhat.com
Gerrit-Reviewer: Frank Filz ffilz...@mindspring.com
Gerrit-Reviewer: onnfrhvruutnzhnaq.-g...@noclue.notk.org




___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Do i need to configure vol files?

2015-03-24 Thread Alexander Marx

Dear List.

I am very happy with my glusterfs replica 2 so far.

I have two volumes (one 16 TB RAID5 on each server), divided into two 8 TB 
partitions which are used as volumes.


I configured some volume options via

gluster volume set <volname> <option> <value>

the options so far:
performance.cache-size: 34359738368
performance.io-thread-count: 32
cluster.data-self-heal-algorithm: full
performance.quick-read: on
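
i.e. each of them was applied with a command of this form (a sketch;
substitute your own volume name):

gluster volume set <volname> performance.cache-size 34359738368
gluster volume set <volname> performance.quick-read on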

Now I have read many posts on the internet about editing vol files. Do I need 
to configure my vol files?


I have one volfile under /etc/gluster/glusterd.vol
and for each volume another two (one for each server) under 
/var/lib/glusterd/volname/


But in neither volfile am I able to find a translator section for the 
options that I reconfigured.

Do I need to create these sections manually?

Background:

currently we are running a 5-node Proxmox cluster which uses NFS as 
storage from a fileserver.


Now we want to replace this setup with a 3-node Proxmox cluster, of 
which 2 machines act as a gluster replica 2.


All tests showed that the new setup is 200% faster than the old one, 
except for a Linux DB2 VM, which takes double the time to write into the 
database (on insert and delete operations).


I want to optimize gluster for use with the databases and will now test 
the quick-read option; do you think this will bring any performance gain?


Thank you

Alex



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Access data directly from underlying storage

2015-03-24 Thread Rumen Telbizov
Thank you once again for your input. It's highly appreciated.


On Sat, Mar 21, 2015 at 9:53 AM, Melkor Lord melkor.l...@gmail.com wrote:

 On Thu, Mar 19, 2015 at 9:11 PM, Rumen Telbizov telbi...@gmail.com
 wrote:

 Thank you for your answer Melkor.


 You're welcome!


 This is the kind of experience I was looking for actually. I am happy
 that it has worked fine for you.

 Anybody coming across any issues while reading directly from the
 underlying disk?


  I don't think you'll find any issues, at least in the replica scenario,
  since the copied data is exactly the same for every brick in the cluster.
  After all, Gluster only exports a directory's content across the network. I
  don't really see how it could mess things up, especially if all you do is
  read the files in the exported directory.
 
  I see the workflow basically as NFS or Samba. You can access the exported
  root and remove/add files from it without creating havoc for the clients,
  except the usual "can't access xxx" if you remove/rename a file before the
  client refreshes the directory listing.
 
  I won't try such things though (not outside a test environment only) in a
  distributed scenario, since in that particular case the files are split
  into bits. Still, I think read-only is always safe :-)

 --
 Unix _IS_ user friendly, it's just selective about who its friends are.




-- 
Rumen Telbizov
Unix Systems Administrator http://telbizov.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Out of memory Gluster 3.6.2

2015-03-24 Thread Félix de Lelelis
Hi,

Today, the glusterd daemon has been killed due to excessive memory consumption:

[3505254.762715] Out of memory: Kill process 7780 (glusterd) score 581 or sacrifice child
[3505254.763451] Killed process 7780 (glusterd) total-vm:3537640kB, anon-rss:1205240kB, file-rss:672kB

I have installed gluster 3.6.2 on CentOS 7. Is there any way to avoid this
without the need to kill the process? Or do I simply need more memory?

Thanks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Fwd: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

2015-03-24 Thread Jiffin Tony Thottan



On 24/03/15 12:37, Lalatendu Mohanty wrote:

On 03/23/2015 12:49 PM, Anand Subramanian wrote:

FYI.

GlusterFS vols can now be accessed via the NFSv4.1 pNFS protocol (mount 
-t nfs -o minorversion=1 ...) from nfs-ganesha 2.2-rc5 onwards.


Note: one fix is to go into libgfapi to fix up using anonymous fd's 
in ds_write/make_ds_handle() (Avati's suggestion, which really helps here).
Once Jiffin or I get that fix in, good large-file performance 
can be seen with pNFS vs V4.


All thanks and credit to Jiffin for his terrific effort in coding 
things up quickly and for fixing bugs.


Anand


Great news!

I did a quick check in the docs directory i.e. 
https://github.com/gluster/glusterfs/tree/master/doc to see if we have 
any documentation about nfs-ganesha or pNFS and glusterfs integration, 
but did not find any.


I think the lack of howtos around this will hamper the adoption of this 
feature among users. So if we can get some documentation for this, it 
will be awesome.



Thanks,
Lala

Documentation for glusterfs-nfs-ganesha integration is already present:

https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration

http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/

For pNFS, I will send documentation as soon as possible.

Thanks,
Jiffin







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Out of memory Gluster 3.6.2

2015-03-24 Thread Atin Mukherjee
Could you tell us what activities were run in the cluster?
cmd_log_history across all the nodes would give a clear picture of it.

~Atin

On 03/24/2015 01:03 PM, Félix de Lelelis wrote:
 Hi,
 
 Today, the glusterd daemon has been killed due to excessive memory consumption:
 
 [3505254.762715] Out of memory: Kill process 7780 (glusterd) score 581 or sacrifice child
 [3505254.763451] Killed process 7780 (glusterd) total-vm:3537640kB, anon-rss:1205240kB, file-rss:672kB
 
 I have installed gluster 3.6.2 on CentOS 7. Is there any way to avoid this
 without the need to kill the process? Or do I simply need more memory?
 
 Thanks.
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users