[Gluster-users] Revamping the GlusterFS Documentation...

2015-03-25 Thread Shravan Chandrashekar
Hi John,

Thank you, that's really valuable feedback.
We are working on updating the documentation and will make sure to address
this gap.

Thanks and regards,
Shravan



 Forwarded Message 
Subject:Re: [Gluster-users] Revamping the GlusterFS Documentation...
Date:   Wed, 25 Mar 2015 08:59:10 +1100
From:   John Gardeniers 
To: gluster-users@gluster.org



Hi Shravan,

Having recently set up geo-replication on Gluster v3.5.3, I can tell you
that there is effectively no documentation. The documentation that does
exist is primarily focused on describing the differences between the
current and previous versions. That's completely useless to someone
wanting to set it up for the first time and not a whole lot better for
someone who has upgraded. The first, and perhaps most crucial, piece of
missing information is the installation requirements. Nowhere have I been
able to find out exactly which components are required on either the
master or the slave. In my case this was determined by pure trial and
error, i.e. install what I think should be needed and then keep installing
components until it starts to work. Even once that part is done, there is
a LOT of documentation missing. I recall that when I first set up
geo-replication (v3.2 or v3.3?) I was able to follow clear and simple
step-by-step instructions that almost guaranteed success.
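
For anyone else searching, the command sequence itself boils down to roughly
the following sketch (hypothetical volume and host names; it assumes
passwordless SSH from a master node to the slave and that the slave volume
already exists):

    # run on one of the master nodes
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status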

regards,
John


On 23/03/15 18:01, Shravan Chandrashekar wrote:
> Hi All,
>
> "The Gluster Filesystem documentation is not user friendly and
> fragmented": this has been the feedback we have been receiving.
>
> We went back to our drawing board and blueprints and realized that the
> content was scattered across various places. These include:
>
> [Static HTML] http://www.gluster.org/documentation/
> [Mediawiki] http://www.gluster.org/community/documentation/
> [In-source] https://github.com/gluster/glusterfs/tree/master/doc
> [Markdown] https://github.com/GlusterFS/Notes
>
> and so on…
>
> Hence, we started by curating content from various sources, including the
> gluster.org static HTML documentation, the glusterfs GitHub repository,
> various blog posts and the community wiki. We also felt the need to
> improve community members' experience with the Gluster documentation.
> This led us to put some thought into the user interface. As a result
> we came up with a page which links all content into a single landing
> page:
>
> http://www.gluster.org/community/documentation/index.php/Staged_Docs
>
> This is just our first step to improve our community docs and enhance
> community contribution towards documentation. I would like to thank
> Humble Chirammal and Anjana Sriram for their suggestions and direction
> during the entire process. I am sure there is a lot of scope for
> improvement.
> Hence, I request you all to review the content and provide your
> suggestions.
>
> Regards,
> Shravan Chandra
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] libgfapi as non-root

2015-03-25 Thread Raghavendra Talur
On Thu, Mar 26, 2015 at 10:06 AM, Behrooz Shafiee 
wrote:

> The issue was that I had to restart glusterfs-server or daemon.
> Thanks,
>
> On Thu, Mar 26, 2015 at 12:34 AM, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Hi Behrooz,
>>
>> The error says about "testvol2" where the 'gluster volume info' you
>> provided is on "testvol"
>>
>>  [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_
>> cbk] 0-glfs-mgmt: failed to fetch volume file (key:testvol2)
>>
>> Can you please cross-check on this ?
>>
>> --Humble
>>
>>
>> On Thu, Mar 26, 2015 at 4:30 AM, Behrooz Shafiee 
>> wrote:
>>
>>> Hello everyone,
>>>
>>>  I want to use libgfapi c api in my code. I have installed glusterfs on
>>> my host with two bricks and created a volume called 'testvol'; however, in
>>> my code 'glfs_init' fails with -1 when I run my program without sudo
>>> access. If I run it as root it will connect fine. I saw an email in the
>>> mailing list with this solution:
>>> 1. add ” option rpc-auth-allow-insecure on” to *.vol file
>>> 2. set  server.allow-insecure on
>>>
>>
For future reference: the option rpc-auth-allow-insecure only needs to be set
in the glusterd.vol file, on all nodes.
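
A minimal sketch of the full sequence (volume name taken from this thread; the
management-daemon service name depends on your distribution):

    # on every server node, inside the 'volume management' block of
    # /etc/glusterfs/glusterd.vol add:
    #     option rpc-auth-allow-insecure on
    # then restart the management daemon
    service glusterfs-server restart   # Debian/Ubuntu; 'service glusterd restart' on EL
    # allow insecure client ports on the volume and cycle it once
    gluster volume set testvol server.allow-insecure on
    gluster volume stop testvol
    gluster volume start testvol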



> 3. stop and start the volume
>>>
>>> so I added ” option rpc-auth-allow-insecure on” to every .vol file in
>>> /var/lib/glusterd/vols/testvol including
>>> (testvol.IP.media-behrooz-Backup-gluster-brick1.vol  testvol-rebalance.vol
>>> trusted-testvol.tcp-fuse.vol
>>> testvol.IP.media-behrooz-Backup-gluster-brick2.vol
>>> testvol.tcp-fuse.vol) as well as /etc/glusterfs/glusterd.vol. And also set
>>> testvol server.allow-insecure on and stop and start. Here is the info of my
>>> volume:
>>>
>>> Volume Name: testvol
>>> Type: Replicate
>>> Volume ID: 3754bf24-d6b0-4af7-a4c9-d2a63c55455c
>>> Status: Started
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: IP:/media/behrooz/Backup/gluster/brick1
>>> Brick2: IP:/media/behrooz/Backup/gluster/brick2
>>> Options Reconfigured:
>>> server.allow-insecure: on
>>> auth.allow: *
>>>
>>> But still no luck and here is what I see in my program output:
>>> [2015-03-25 22:50:08.079273] W [socket.c:611:__socket_rwv] 0-gfapi:
>>> readv on 129.97.170.232:24007 failed (No data available)
>>> [2015-03-25 22:50:08.079806] E [rpc-clnt.c:362:saved_frames_unwind] (-->
>>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fdb253f3da6]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fdb2589cc7e]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fdb2589cd8e]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x82)[0x7fdb2589e602]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fdb2589ed98]
>>> ) 0-gfapi: forced unwinding frame type(GlusterFS Handshake)
>>> op(GETSPEC(2)) called at 2015-03-25 22:50:08.079120 (xid=0x1)
>>> [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_cbk]
>>> 0-glfs-mgmt: failed to fetch volume file (key:testvol2)
>>> [2015-03-25 22:50:08.079908] E [glfs-mgmt.c:696:mgmt_rpc_notify]
>>> 0-glfs-mgmt: failed to connect with remote-host: IP (No data available)
>>> [2015-03-25 22:50:08.079924] I [glfs-mgmt.c:701:mgmt_rpc_notify]
>>> 0-glfs-mgmt: Exhausted all volfile servers
>>>
>>>
>>> And I use:
>>> glusterfs 3.6.2 built on Feb  7 2015 06:29:50
>>>
>>>
>>>
>>> Any help or comment is highly appreciated!
>>>
>>> Thanks,
>>> --
>>> Behrooz
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
>
> --
> Behrooz
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
*Raghavendra Talur *
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] libgfapi as non-root

2015-03-25 Thread Behrooz Shafiee
The issue was that I had to restart the glusterfs-server service (the glusterd daemon).
Thanks,

On Thu, Mar 26, 2015 at 12:34 AM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> Hi Behrooz,
>
> The error says about "testvol2" where the 'gluster volume info' you
> provided is on "testvol"
>
>  [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_
> cbk] 0-glfs-mgmt: failed to fetch volume file (key:testvol2)
>
> Can you please cross-check on this ?
>
> --Humble
>
>
> On Thu, Mar 26, 2015 at 4:30 AM, Behrooz Shafiee 
> wrote:
>
>> Hello everyone,
>>
>>  I want to use libgfapi c api in my code. I have installed glusterfs on
>> my host with two bricks and created a volume called 'testvol'; however, in
>> my code 'glfs_init' fails with -1 when I run my program without sudo
>> access. If I run it as root it will connect fine. I saw an email in the
>> mailing list with this solution:
>> 1. add ” option rpc-auth-allow-insecure on” to *.vol file
>> 2. set  server.allow-insecure on
>> 3. stop and start the volume
>>
>> so I added ” option rpc-auth-allow-insecure on” to every .vol file in
>> /var/lib/glusterd/vols/testvol including
>> (testvol.IP.media-behrooz-Backup-gluster-brick1.vol  testvol-rebalance.vol
>> trusted-testvol.tcp-fuse.vol
>> testvol.IP.media-behrooz-Backup-gluster-brick2.vol  testvol.tcp-fuse.vol)
>> as well as /etc/glusterfs/glusterd.vol. And also set testvol
>> server.allow-insecure on and stop and start. Here is the info of my volume:
>>
>> Volume Name: testvol
>> Type: Replicate
>> Volume ID: 3754bf24-d6b0-4af7-a4c9-d2a63c55455c
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: IP:/media/behrooz/Backup/gluster/brick1
>> Brick2: IP:/media/behrooz/Backup/gluster/brick2
>> Options Reconfigured:
>> server.allow-insecure: on
>> auth.allow: *
>>
>> But still no luck and here is what I see in my program output:
>> [2015-03-25 22:50:08.079273] W [socket.c:611:__socket_rwv] 0-gfapi: readv
>> on 129.97.170.232:24007 failed (No data available)
>> [2015-03-25 22:50:08.079806] E [rpc-clnt.c:362:saved_frames_unwind] (-->
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fdb253f3da6]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fdb2589cc7e]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fdb2589cd8e]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x82)[0x7fdb2589e602]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fdb2589ed98]
>> ) 0-gfapi: forced unwinding frame type(GlusterFS Handshake)
>> op(GETSPEC(2)) called at 2015-03-25 22:50:08.079120 (xid=0x1)
>> [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_cbk]
>> 0-glfs-mgmt: failed to fetch volume file (key:testvol2)
>> [2015-03-25 22:50:08.079908] E [glfs-mgmt.c:696:mgmt_rpc_notify]
>> 0-glfs-mgmt: failed to connect with remote-host: IP (No data available)
>> [2015-03-25 22:50:08.079924] I [glfs-mgmt.c:701:mgmt_rpc_notify]
>> 0-glfs-mgmt: Exhausted all volfile servers
>>
>>
>> And I use:
>> glusterfs 3.6.2 built on Feb  7 2015 06:29:50
>>
>>
>>
>> Any help or comment is highly appreciated!
>>
>> Thanks,
>> --
>> Behrooz
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>


-- 
Behrooz
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] libgfapi as non-root

2015-03-25 Thread Humble Devassy Chirammal
Hi Behrooz,

The error refers to "testvol2", whereas the 'gluster volume info' output you
provided is for "testvol":

 [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_
cbk] 0-glfs-mgmt: failed to fetch volume file (key:testvol2)

Can you please cross-check this?
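
If it helps, a quick way to double-check which volume names glusterd actually
knows about (the second command here is illustrative only):

    gluster volume list
    gluster volume info testvol2   # should report that the volume does not exist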

--Humble


On Thu, Mar 26, 2015 at 4:30 AM, Behrooz Shafiee 
wrote:

> Hello everyone,
>
>  I want to use libgfapi c api in my code. I have installed glusterfs on my
> host with two bricks and created a volume called 'testvol'; however, in my
> code 'glfs_init' fails with -1 when I run my program without sudo access.
> If I run it as root it will connect fine. I saw an email in the mailing
> list with this solution:
> 1. add ” option rpc-auth-allow-insecure on” to *.vol file
> 2. set  server.allow-insecure on
> 3. stop and start the volume
>
> so I added ” option rpc-auth-allow-insecure on” to every .vol file in
> /var/lib/glusterd/vols/testvol including
> (testvol.IP.media-behrooz-Backup-gluster-brick1.vol  testvol-rebalance.vol
> trusted-testvol.tcp-fuse.vol
> testvol.IP.media-behrooz-Backup-gluster-brick2.vol  testvol.tcp-fuse.vol)
> as well as /etc/glusterfs/glusterd.vol. And also set testvol
> server.allow-insecure on and stop and start. Here is the info of my volume:
>
> Volume Name: testvol
> Type: Replicate
> Volume ID: 3754bf24-d6b0-4af7-a4c9-d2a63c55455c
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: IP:/media/behrooz/Backup/gluster/brick1
> Brick2: IP:/media/behrooz/Backup/gluster/brick2
> Options Reconfigured:
> server.allow-insecure: on
> auth.allow: *
>
> But still no luck and here is what I see in my program output:
> [2015-03-25 22:50:08.079273] W [socket.c:611:__socket_rwv] 0-gfapi: readv
> on 129.97.170.232:24007 failed (No data available)
> [2015-03-25 22:50:08.079806] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fdb253f3da6]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fdb2589cc7e]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fdb2589cd8e]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x82)[0x7fdb2589e602]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fdb2589ed98]
> ) 0-gfapi: forced unwinding frame type(GlusterFS Handshake)
> op(GETSPEC(2)) called at 2015-03-25 22:50:08.079120 (xid=0x1)
> [2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_cbk]
> 0-glfs-mgmt: failed to fetch volume file (key:testvol2)
> [2015-03-25 22:50:08.079908] E [glfs-mgmt.c:696:mgmt_rpc_notify]
> 0-glfs-mgmt: failed to connect with remote-host: IP (No data available)
> [2015-03-25 22:50:08.079924] I [glfs-mgmt.c:701:mgmt_rpc_notify]
> 0-glfs-mgmt: Exhausted all volfile servers
>
>
> And I use:
> glusterfs 3.6.2 built on Feb  7 2015 06:29:50
>
>
>
> Any help or comment is highly appreciated!
>
> Thanks,
> --
> Behrooz
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster probe node by hostname

2015-03-25 Thread 可樂我
Thanks for your answer!
I am using glusterfs-3.6 and I understand the situation you describe.
I do see the 'Other Names' entry when I reverse-probe the node.

However, I still see both the hostname and the IP in peer status after
restarting GlusterD on Node3, even after modifying the peer file on Node3.
I don't want to see any IP at all in peer status; how can I do that?
Thank you very much
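
(For reference, the probe sequence from the quoted reply below boils down to
roughly the following, with hypothetical node names:)

    gluster peer probe node2   # run on node1
    gluster peer probe node3   # run on node1
    gluster peer probe node1   # reverse probe, run on node2 or node3
    gluster peer status        # on 3.6+ the hostname shows up under 'Other Names'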

2015-03-26 11:37 GMT+08:00 可樂我 :

> Thanks for your answer!
> I am using glusterfs-3.6 and I know the situation you talk about.
> I will see the 'Other Names' when I reverse probe node.
>
> I still see the hostname and ip peer status when I restart GlusterD of
> Node3 after modify peer file in Node3
> But I don't want to see the any ip in peer status, how can I do?
> Thank you very much
>
>
> 2015-03-25 16:50 GMT+08:00 Kaushal M :
>
>> You have just modified the on-disk stored information. The in-memory
>> information still is the IP. GlusterD only reads the on-disk file when
>> starting up, and after that uses the in-memory information for all
>> operations. This is why you still see the IP in peer status. You have to
>> restart GlusterD on Node2 and Node3 to load the information from the
>> modified file.
>>
>> But instead of the above, you have simpler option to set the hostname for
>> Node1.
>> - Probe Node2 and Node3 from Node1 as normal ("gluster peer probe node2",
>> "gluster peer probe node3"). After this Node1 will be referred to by it's
>> IP on Node2 and Node3.
>> - From one of Node2 or Node3, do a reverse probe on Node1 ("gluster peer
>> probe node1"). This will update the IP to hostname everywhere (in-memory,
>> on-disk and on all nodes).
>>
>> If you are using glusterfs-3.6 and above, doing the reverse probe on
>> Node1 will not change the IP to hostname. Instead an extra name is attached
>> to Node1 and will be displayed in peer status under 'Other Names'.
>>
>> ~kaushal
>>
>> On Wed, Mar 25, 2015 at 12:54 PM, 可樂我  wrote:
>>
>>> Hi all,
>>> I have a problem about probe new node by hostname
>>> i have three nodes, (Node1, Node-2, Node3)
>>> Node1 hostname: node1
>>> Node2 hostname: node2
>>> Node3 hostname: node3
>>>
>>> Step 1:
>>> Node1 probe Node2
>>> # gluster probe node2
>>>
>>> Step 2:
>>> modify the peer file of Node2
>>> hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)
>>>
>>> Step 3:
>>> Node1 probe Node3
>>> #gluster probe node3
>>>
>>> Step 4:
>>> modify the peer file of Node2
>>> hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)
>>>
>>> but it still shows the IP of Node1 in the hostname field when I execute the
>>> *gluster peer status* command.
>>> If I want peer status to show only hostnames for all peers in the cluster,
>>> how can I do that?
>>> Is there any solution to this problem?
>>> Thank you very much!!
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi Kaushal,

I am really thankful to you and to "huntxu" from the ovirt-china community for
helping me resolve this issue... once again, thanks to all...

Punit

On Wed, Mar 25, 2015 at 6:52 PM, Kaushal M  wrote:

> Awesome Punit! I'm happy to have been a part of the debugging process.
>
> ~kaushal
>
> On Wed, Mar 25, 2015 at 3:09 PM, Punit Dambiwal  wrote:
>
>> Hi All,
>>
>> With the help of gluster community and ovirt-china community...my issue
>> got resolved...
>>
>> The main root cause was the following :-
>>
>> 1. the glob operation takes quite a long time, longer than the ioprocess
>> default 60s..
>> 2. python-ioprocess updated which makes a single change of configuration
>> file doesn't work properly, only because this we should hack the code
>> manually...
>>
>>  Solution (Need to do on all the hosts) :-
>>
>>  1. Add the the ioprocess timeout value in the /etc/vdsm/vdsm.conf file
>> as  :-
>>
>> 
>> [irs]
>> process_pool_timeout = 180
>> -
>>
>> 2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71 and see whether
>> there is  still "IOProcess(DEFAULT_TIMEOUT)" in it,if yes...then changing
>> the configuration file takes no effect because now timeout is the third
>> parameter not the second of IOProcess.__init__().
>>
>> 3. Change IOProcess(DEFAULT_TIMEOUT) to
>> IOProcess(timeout=DEFAULT_TIMEOUT) and remove the
>>  /usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
>> supervdsm service on all hosts
>>
>> Thanks,
>> Punit Dambiwal
>>
>>
>> On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal 
>> wrote:
>>
>>> Hi All,
>>>
>>> Still i am facing the same issue...please help me to overcome this
>>> issue...
>>>
>>> Thanks,
>>> punit
>>>
>>> On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink <
>>> thomas.holkenbr...@fibercloud.com> wrote:
>>>
  I’ve seen this before. The system thinks the storage system is up and
 running and then attempts to utilize it.

 The way I got around it was to put a delay in the startup of the
 gluster Node on the interface that the clients use to communicate.



 I use a bonded link, I then add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. This then
 causes Network dependent features to wait for the network to finish.

 It adds about 10 seconds to the startup time; in our environment it
 works well, you may not need as long of a delay.



 CentOS

 root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0



 DEVICE=bond0

 ONBOOT=yes

 BOOTPROTO=static

 USERCTL=no

 NETMASK=255.255.248.0

 IPADDR=10.10.1.17

 MTU=9000

 IPV6INIT=no

 IPV6_AUTOCONF=no

 NETWORKING_IPV6=no

 NM_CONTROLLED=no

 LINKDELAY=10

 NAME="System Storage Bond0"









 Hi Michal,



 The Storage domain is up and running and mounted on all the host
 nodes...as i updated before that it was working perfectly before but just
 after reboot can not make the VM poweron...



 [image: Inline image 1]



 [image: Inline image 2]



 [root@cpu01 log]# gluster volume info



 Volume Name: ds01

 Type: Distributed-Replicate

 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6

 Status: Started

 Number of Bricks: 48 x 2 = 96

 Transport-type: tcp

 Bricks:

 Brick1: cpu01:/bricks/1/vol1

 Brick2: cpu02:/bricks/1/vol1

 Brick3: cpu03:/bricks/1/vol1

 Brick4: cpu04:/bricks/1/vol1

 Brick5: cpu01:/bricks/2/vol1

 Brick6: cpu02:/bricks/2/vol1

 Brick7: cpu03:/bricks/2/vol1

 Brick8: cpu04:/bricks/2/vol1

 Brick9: cpu01:/bricks/3/vol1

 Brick10: cpu02:/bricks/3/vol1

 Brick11: cpu03:/bricks/3/vol1

 Brick12: cpu04:/bricks/3/vol1

 Brick13: cpu01:/bricks/4/vol1

 Brick14: cpu02:/bricks/4/vol1

 Brick15: cpu03:/bricks/4/vol1

 Brick16: cpu04:/bricks/4/vol1

 Brick17: cpu01:/bricks/5/vol1

 Brick18: cpu02:/bricks/5/vol1

 Brick19: cpu03:/bricks/5/vol1

 Brick20: cpu04:/bricks/5/vol1

 Brick21: cpu01:/bricks/6/vol1

 Brick22: cpu02:/bricks/6/vol1

 Brick23: cpu03:/bricks/6/vol1

 Brick24: cpu04:/bricks/6/vol1

 Brick25: cpu01:/bricks/7/vol1

 Brick26: cpu02:/bricks/7/vol1

 Brick27: cpu03:/bricks/7/vol1

 Brick28: cpu04:/bricks/7/vol1

 Brick29: cpu01:/bricks/8/vol1

 Brick30: cpu02:/bricks/8/vol1

 Brick31: cpu03:/bricks/8/vol1

 Brick32: cpu04:/bricks/8/vol1

 Brick33: cpu01:/bricks/9/vol1

 Brick34: cpu02:/bricks/9/vol1

 Brick35: cpu03:/bricks/9/vol1

 Brick36: cpu04:/bricks/9/vol1

[Gluster-users] libgfapi as non-root

2015-03-25 Thread Behrooz Shafiee
Hello everyone,

 I want to use the libgfapi C API in my code. I have installed glusterfs on my
host with two bricks and created a volume called 'testvol'; however, in my
code 'glfs_init' fails with -1 when I run my program without sudo access.
If I run it as root it connects fine. I saw an email on the mailing
list with this solution:
1. add "option rpc-auth-allow-insecure on" to the *.vol files
2. set server.allow-insecure on
3. stop and start the volume

So I added "option rpc-auth-allow-insecure on" to every .vol file in
/var/lib/glusterd/vols/testvol, including
(testvol.IP.media-behrooz-Backup-gluster-brick1.vol  testvol-rebalance.vol
trusted-testvol.tcp-fuse.vol
testvol.IP.media-behrooz-Backup-gluster-brick2.vol  testvol.tcp-fuse.vol)
as well as /etc/glusterfs/glusterd.vol. I also set server.allow-insecure on
for testvol and stopped and started it. Here is the info of my volume:

Volume Name: testvol
Type: Replicate
Volume ID: 3754bf24-d6b0-4af7-a4c9-d2a63c55455c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: IP:/media/behrooz/Backup/gluster/brick1
Brick2: IP:/media/behrooz/Backup/gluster/brick2
Options Reconfigured:
server.allow-insecure: on
auth.allow: *

But still no luck and here is what I see in my program output:
[2015-03-25 22:50:08.079273] W [socket.c:611:__socket_rwv] 0-gfapi: readv
on 129.97.170.232:24007 failed (No data available)
[2015-03-25 22:50:08.079806] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fdb253f3da6]
(-->
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fdb2589cc7e]
(-->
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fdb2589cd8e]
(-->
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x82)[0x7fdb2589e602]
(-->
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fdb2589ed98]
) 0-gfapi: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-03-25 22:50:08.079120 (xid=0x1)
[2015-03-25 22:50:08.079884] E [glfs-mgmt.c:599:mgmt_getspec_cbk]
0-glfs-mgmt: failed to fetch volume file (key:testvol2)
[2015-03-25 22:50:08.079908] E [glfs-mgmt.c:696:mgmt_rpc_notify]
0-glfs-mgmt: failed to connect with remote-host: IP (No data available)
[2015-03-25 22:50:08.079924] I [glfs-mgmt.c:701:mgmt_rpc_notify]
0-glfs-mgmt: Exhausted all volfile servers


And I use:
glusterfs 3.6.2 built on Feb  7 2015 06:29:50



Any help or comment is highly appreciated!

Thanks,
-- 
Behrooz
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Revamping the GlusterFS Documentation...

2015-03-25 Thread Peter B.

Happy to hear that there are plans to improve the GlusterFS documentation!

I myself once wanted to contribute to the docs, but wasn't sure "where to".
If the diverse sources of gluster's docs are now going to be merged into a
central place, that would very likely improve contributions from the
community.



On 03/23/2015 10:34 PM, Justin Clift wrote:
> * Gluster Ant Logo image - The first letter REALLY looks like a C
>   (to me), not a G.  Reads as "Cluster" for me...

I personally also prefer the font used for the "Gluster community" logo
in the upper left corner.
It "feels" a bit more like a proper trademark. I like that :)



Something else I wanted to mention is that I found it very confusing /
intimidating that some articles referred to specific version numbers in
their title.
For example "Gluster 3.2: Migrating Volumes" [1]. I wasn't sure whether I
should follow the instructions listed there if I'm using a different
gluster version (e.g. 3.4), especially because there is also one for
"3.1", for example.
So I tried replacing "3.2" with "3.4" in the URL, as I would have
expected a naming convention in that case.
The search also didn't list anything. So, in the end, I gave up and used
the 3.2 docs.

I assume that's because nothing has changed with "Migrating Volumes"
since 3.2, for example - but how will I know whether the docs I find
apply to my version?

That's something I think would be nice to have solved somehow in the
new docs.



Thanks and regards,
Pb


== References:
[1]
http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Migrating_Volumes
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster/NFS mount issues

2015-03-25 Thread Ben Turner
Normally when I see this the NICs are not fully initialized.  I have done a
couple of different things to work around this:

-Try adding the LINKDELAY parameter to the ifcfg script:

LINKDELAY=time
where time is the number of seconds to wait for link negotiation before
configuring the device.

-Try turning on portfast on your switch to speed up negotiation.

-Try putting a sleep in your init scripts just before they go to mount your
fstab items.

-Try putting the mount command in rc.local or whatever is the last thing your
system does before it finishes booting (see the sketch below).

Last time I looked at the _netdev code it only looked for an active link; it
didn't ensure that the NIC was up and able to send traffic.  I would start with
the linkdelay and go from there.  LMK how this works out for ya, I am not very
well versed in the Ubuntu boot process :/
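
For example, on Ubuntu a quick-and-dirty fallback (a sketch only; the mount
point comes from your fstab, everything else here is an assumption) would be
to retry the mount from /etc/rc.local:

    # /etc/rc.local runs late in boot, after networking is up
    # retry the NFS mount of the gluster volume a few times before giving up
    for i in 1 2 3 4 5; do
        mountpoint -q /data && break
        sleep 5
        mount /data
    done
    exit 0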

-b

- Original Message -
> From: "Alun James" 
> To: gluster-users@gluster.org
> Sent: Wednesday, March 25, 2015 6:33:05 AM
> Subject: [Gluster-users] Gluster/NFS mount issues
> 
> Hi folks,
> 
> I am having some issues getting NFS to mount the glusterfs volume on boot-up,
> I have tried all the usual mount options in fstab, but thus far none have
> helped I am using NFS as it seems to give better performance for my workload
> compared with glusterfs client.
> 
> [Node Setup]
> 
> 3 x Nodes mounting vol locally.
> Ubuntu 14.04 3.13.0-45-generic
> GlusterFS: 3.6.2-ubuntu1~trusty3
> nfs-common 1:1.2.8-6ubuntu1.1
> 
> Type: Replicate
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/export/brick0
> Brick2: node 02:/export/brick0
> Brick3: node 03:/export/brick0
> 
> /etc/fstab:
> 
> /dev/mapper/gluster--vg-brick0 /export/brick0 xfs defaults 0 0
> 
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,vers=3 0 0
> 
> 
> [Issue]
> 
> On boot, the /data partition is not mounted, however, I can jump on each node
> and simply run "mount /data" without any problems, so I assume my fstab
> options are OK. I have noticed the following log:
> 
> /var/log/upstart/mountall.log:
> 
> mount.nfs: requested NFS version or transport protocol is not supported
> mountall: mount /data [1178] terminated with status 32
> 
> I have attempted the following fstab options without success and similar log
> message:
> 
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock 0 0
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,vers=3 0 0
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock,vers=3 0 0
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock,nfsvers=3 0 0
> localhost:/my_filestore_vol /data nfs
> defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,nfsvers=3 0 0
> 
> Anything else I can try?
> 
> Regards,
> 
> ALUN JAMES
> Senior Systems Engineer
> Tibus
> 
> T: +44 (0)28 9033 1122
> E: aja...@tibus.com
> W: www.tibus.com
> 
> Follow us on Twitter @tibus
> 
> Tibus is a trading name of The Internet Business Ltd, a company limited by
> share capital and registered in Northern Ireland, NI31235. It is part of UTV
> Media Plc.
> 
> This email and any attachment may contain confidential information for the
> sole use of the intended recipient. Any review, use, distribution or
> disclosure by others is strictly prohibited. If you are not the intended
> recipient (or authorised to receive for the recipient), please contact the
> sender by reply email and delete all copies of this message.
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What should I do to improve performance ?

2015-03-25 Thread Joe Topjian
Just to clarify: the Cinder Gluster driver, which is comparable to the
Cinder NFS driver, uses FUSE:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py#L489-L499

Unless I'm mistaken, bootable volumes attached through Nova will use libgfapi
(if configured with qemu_allowed_storage_drivers), but the best ephemeral
instances can do is to be stored on a GlusterFS-mounted
/var/lib/nova/instances.
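
For reference, the nova.conf knob mentioned above looks roughly like this (the
[libvirt] section placement is my assumption and may vary by release; restart
nova-compute afterwards):

    # /etc/nova/nova.conf on the compute nodes
    [libvirt]
    qemu_allowed_storage_drivers = gluster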

We've also seen performance issues when running ephemeral instances on a
glusterfs mounted directory. However, we're using the Gluster Cinder driver
for volumes and have been mostly happy with it.

On Tue, Mar 24, 2015 at 9:22 AM, Joe Julian  wrote:

> Nova ephemeral disks don't use libgfapi, only Cinder volumes.
>
> On March 24, 2015 7:27:52 AM PDT, marianna cattani <
> marianna.catt...@gmail.com> wrote:
>
>> OpenStack doesn't have vdsm; it should be a configuration option:
>> "qemu_allowed_storage_drivers=gluster"
>>
>> But, however , the machine is generated with the xml that you see.
>>
>> Now I try to write to the OpenStack's mailing list .
>>
>> tnx
>>
>> M
>>
>> 2015-03-24 15:14 GMT+01:00 noc :
>>
>>> On 24-3-2015 14:39, marianna cattani wrote:
>>> > Many many thanks 
>>> >
>>> > Mine is different 
>>> >
>>> > :'(
>>> >
>>> > root@nodo-4:~# virsh -r dumpxml instance-002c | grep disk
>>> > 
>>> >   >> >
>>> file='/var/lib/nova/instances/ef84920d-2009-42a2-90de-29d9bd5e8512/disk'/>
>>> >   
>>> > 
>>> >
>>> >
>>> Thats a fuse connection. I'm running ovirt + glusterfs where vdsm is a
>>> special version with libgfapi support. Don't know if Openstack has that
>>> too?
>>>
>>> Joop
>>>
>>>
>> --
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] REMINDER: Weekly Gluster Community meeting today at 12:00 UTC

2015-03-25 Thread Vijay Bellur

On 03/25/2015 04:23 PM, Vijay Bellur wrote:

Hi all,

In about 70 minutes from now we will have the regular weekly Gluster
Community meeting.




Meeting minutes from today can be found at [1].

Thanks,
Vijay

[1] 
http://meetbot.fedoraproject.org/gluster-meeting/2015-03-25/gluster-meeting.2015-03-25-12.00.html


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Rebalance never seems to start - Could be coursed by..

2015-03-25 Thread Jesper Led Lauridsen TS Infra server
Hi

I had an issue where a rebalance of a volume never seemed to start. After I
started the rebalance, the status looked like the output below for about a week.

# gluster volume rebalance rhevtst_dr2_g_data_01 status
                    Node  Rebalanced-files    size  scanned  failures  skipped       status  run time in secs
               ---------  ----------------  ------  -------  --------  -------  -----------  ----------------
               localhost                 0  0Bytes        0         0        0  in progress              0.00
    glustore04.net.dr.dk                 0  0Bytes        0         0        0  in progress              0.00
    glustore03.net.dr.dk                 0  0Bytes        0         0        0  in progress              0.00
    glustore02.net.dr.dk                 0  0Bytes        0         0        0  in progress              0.00
volume rebalance: rhevtst_dr2_g_data_01: success:

I found that 'auth.allow' was the cause of the problem. The IPs of the 4
gluster nodes (glustore0x.net.dr.dk) were not included in the 'auth.allow'
value.  After adding the IPs I can now rebalance successfully.

I have only confirmed this on a Distributed-Replicate volume on gluster 3.6.2.

Hope this may help others..
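
For anyone hitting the same thing, the fix boiled down to something like the
following (volume name from above, IP addresses hypothetical):

    # check the current value
    gluster volume info rhevtst_dr2_g_data_01 | grep auth.allow
    # include the gluster nodes' own IPs alongside the client IPs
    gluster volume set rhevtst_dr2_g_data_01 auth.allow "10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14,<client IPs>"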

Thanks
Jesper

**
# rpm -qa | grep gluster
glusterfs-api-3.6.2-1.el6.x86_64
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-geo-replication-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
glusterfs-fuse-3.6.2-1.el6.x86_64
glusterfs-rdma-3.6.2-1.el6.x86_64
vdsm-gluster-4.14.17-0.el6.noarch
glusterfs-3.6.2-1.el6.x86_64
glusterfs-server-3.6.2-1.el6.x86_64



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is rebalance completely broken on 3.5.3 ?

2015-03-25 Thread Alessandro Ipe
Hi Nithya,


Thanks for your reply. I am glad that improving the rebalance status will be
addressed in the (near) future. From my perspective, if the status gave the
total number of files to be scanned together with the number of files already
scanned, that would be sufficient information. The user could then estimate
when it would complete (by running "gluster volume rebalance status" several
times and comparing the differences against the elapsed time between them).
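
(Something as trivial as the following sketch would do; the volume name is
taken from the volume info below:)

    # take periodic snapshots of the rebalance status to estimate the scan rate
    while true; do
        date
        gluster volume rebalance home status
        sleep 600
    done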

Please find below the answers to your questions:
1. Server and client are version 3.5.3
2. Indeed, I stopped the rebalance through the associated commdn from CLI, 
i.e. gluster  rebalance stop
3. Very limited file operations were carried out through a single client mount 
(servers were almost idle)
4.gluster volume info :
Volume Name: home
Type: Distributed-Replicate
Volume ID: 501741ed-4146-4022-af0b-41f5b1297766
Status: Started
Number of Bricks: 12 x 2 = 24
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/home/brick1
Brick2: tsunami2:/data/glusterfs/home/brick1
Brick3: tsunami1:/data/glusterfs/home/brick2
Brick4: tsunami2:/data/glusterfs/home/brick2
Brick5: tsunami1:/data/glusterfs/home/brick3
Brick6: tsunami2:/data/glusterfs/home/brick3
Brick7: tsunami1:/data/glusterfs/home/brick4
Brick8: tsunami2:/data/glusterfs/home/brick4
Brick9: tsunami3:/data/glusterfs/home/brick1
Brick10: tsunami4:/data/glusterfs/home/brick1
Brick11: tsunami3:/data/glusterfs/home/brick2
Brick12: tsunami4:/data/glusterfs/home/brick2
Brick13: tsunami3:/data/glusterfs/home/brick3
Brick14: tsunami4:/data/glusterfs/home/brick3
Brick15: tsunami3:/data/glusterfs/home/brick4
Brick16: tsunami4:/data/glusterfs/home/brick4
Brick17: tsunami5:/data/glusterfs/home/brick1
Brick18: tsunami6:/data/glusterfs/home/brick1
Brick19: tsunami5:/data/glusterfs/home/brick2
Brick20: tsunami6:/data/glusterfs/home/brick2
Brick21: tsunami5:/data/glusterfs/home/brick3
Brick22: tsunami6:/data/glusterfs/home/brick3
Brick23: tsunami5:/data/glusterfs/home/brick4
Brick24: tsunami6:/data/glusterfs/home/brick4
Options Reconfigured:
performance.cache-size: 512MB
performance.io-thread-count: 64
performance.flush-behind: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
nfs.disable: on
features.quota: off
cluster.read-hash-mode: 2
diagnostics.brick-log-level: CRITICAL
cluster.lookup-unhashed: on
server.allow-insecure: on
cluster.ensure-durability: on

For the logs, it will be more difficult because it happened several days ago
and they have been rotated. But I can dig... By the way, do you need a specific
logfile? Gluster produces a lot of them...

I read in some discussion on the gluster-users mailing list that rebalance on
version 3.5.x could leave the system with errors when stopped (or even when run
to completion?) and that rebalance underwent a complete rewrite in
3.6.x.  The issue is that I will put gluster back online next week, so my
colleagues will definitely put it under high load, and I was planning to run
the rebalance again in the background. However, is that advisable? Or should I
wait until after upgrading to 3.6.3?

I also noticed (the volume is currently undergoing a full heal) that accessing
some files on the client returned a "Transport endpoint is not connected" error
the first time, but any subsequent access was OK (probably due to self-healing).
However, is it possible to set a client or volume parameter to just wait
(and make the calling process wait) for the self-healing to complete and
deliver the file the first time without raising an error (extremely useful in
batch/operational processing)?


Regards,


Alessandro.


On Wednesday 25 March 2015 05:09:38 Nithya Balachandran wrote:
> Hi Alessandro,
> 
> 
> I am sorry to hear that you are facing problems with rebalance.
> 
> Currently rebalance does not have the information as to how many files exist
> on the volume and so cannot calculate/estimate the time it will take to
> complete. Improving the rebalance status output to provide that info is on
> our to-do list already and we will be working on that.
> 
> I have a few questions :
> 
> 1. Which version of Glusterfs are you using?
> 2. How did you stop the rebalance ? I assume you ran "gluster 
> rebalance stop" but just wanted confirmation. 3. What file operations were
> being performed during the rebalance? 4. Can you send the "gluster volume
> info" output as well as the gluster log files?
> 
> Regards,
> Nithya
> 
> - Original Message -
> From: "Alessandro Ipe" 
> To: gluster-users@gluster.org
> Sent: Friday, March 20, 2015 4:52:35 PM
> Subject: [Gluster-users] Is rebalance completely broken on 3.5.3 ?
> 
> 
> 
> Hi,
> 
> 
> 
> 
> 
> After lauching a "rebalance" on an idle gluster system one week ago, its
> status told me it has scanned
> 
> more than 23 millions files on each of my 6 bricks. However, without knowing
> at least the total files to
> 
> be scanned, this status is USELESS from an end-user perspective, because it
> does not allow you to

Re: [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Kaushal M
Awesome Punit! I'm happy to have been a part of the debugging process.

~kaushal

On Wed, Mar 25, 2015 at 3:09 PM, Punit Dambiwal  wrote:

> Hi All,
>
> With the help of gluster community and ovirt-china community...my issue
> got resolved...
>
> The main root cause was the following :-
>
> 1. the glob operation takes quite a long time, longer than the ioprocess
> default 60s..
> 2. python-ioprocess updated which makes a single change of configuration
> file doesn't work properly, only because this we should hack the code
> manually...
>
>  Solution (Need to do on all the hosts) :-
>
>  1. Add the the ioprocess timeout value in the /etc/vdsm/vdsm.conf file as
>  :-
>
> 
> [irs]
> process_pool_timeout = 180
> -
>
> 2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71 and see whether
> there is  still "IOProcess(DEFAULT_TIMEOUT)" in it,if yes...then changing
> the configuration file takes no effect because now timeout is the third
> parameter not the second of IOProcess.__init__().
>
> 3. Change IOProcess(DEFAULT_TIMEOUT) to
> IOProcess(timeout=DEFAULT_TIMEOUT) and remove the
>  /usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
> supervdsm service on all hosts
>
> Thanks,
> Punit Dambiwal
>
>
> On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal  wrote:
>
>> Hi All,
>>
>> Still i am facing the same issue...please help me to overcome this
>> issue...
>>
>> Thanks,
>> punit
>>
>> On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink <
>> thomas.holkenbr...@fibercloud.com> wrote:
>>
>>>  I’ve seen this before. The system thinks the storage system is up and
>>> running and then attempts to utilize it.
>>>
>>> The way I got around it was to put a delay in the startup of the gluster
>>> Node on the interface that the clients use to communicate.
>>>
>>>
>>>
>>> I use a bonded link, I then add a LINKDELAY to the interface to get the
>>> underlying system up and running before the network comes up. This then
>>> causes Network dependent features to wait for the network to finish.
>>>
>>> It adds about 10 seconds to the startup time; in our environment it works
>>> well, you may not need as long of a delay.
>>>
>>>
>>>
>>> CentOS
>>>
>>> root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
>>>
>>>
>>>
>>> DEVICE=bond0
>>>
>>> ONBOOT=yes
>>>
>>> BOOTPROTO=static
>>>
>>> USERCTL=no
>>>
>>> NETMASK=255.255.248.0
>>>
>>> IPADDR=10.10.1.17
>>>
>>> MTU=9000
>>>
>>> IPV6INIT=no
>>>
>>> IPV6_AUTOCONF=no
>>>
>>> NETWORKING_IPV6=no
>>>
>>> NM_CONTROLLED=no
>>>
>>> LINKDELAY=10
>>>
>>> NAME="System Storage Bond0"
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Hi Michal,
>>>
>>>
>>>
>>> The Storage domain is up and running and mounted on all the host
>>> nodes...as i updated before that it was working perfectly before but just
>>> after reboot can not make the VM poweron...
>>>
>>>
>>>
>>> [image: Inline image 1]
>>>
>>>
>>>
>>> [image: Inline image 2]
>>>
>>>
>>>
>>> [root@cpu01 log]# gluster volume info
>>>
>>>
>>>
>>> Volume Name: ds01
>>>
>>> Type: Distributed-Replicate
>>>
>>> Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6
>>>
>>> Status: Started
>>>
>>> Number of Bricks: 48 x 2 = 96
>>>
>>> Transport-type: tcp
>>>
>>> Bricks:
>>>
>>> Brick1: cpu01:/bricks/1/vol1
>>>
>>> Brick2: cpu02:/bricks/1/vol1
>>>
>>> Brick3: cpu03:/bricks/1/vol1
>>>
>>> Brick4: cpu04:/bricks/1/vol1
>>>
>>> Brick5: cpu01:/bricks/2/vol1
>>>
>>> Brick6: cpu02:/bricks/2/vol1
>>>
>>> Brick7: cpu03:/bricks/2/vol1
>>>
>>> Brick8: cpu04:/bricks/2/vol1
>>>
>>> Brick9: cpu01:/bricks/3/vol1
>>>
>>> Brick10: cpu02:/bricks/3/vol1
>>>
>>> Brick11: cpu03:/bricks/3/vol1
>>>
>>> Brick12: cpu04:/bricks/3/vol1
>>>
>>> Brick13: cpu01:/bricks/4/vol1
>>>
>>> Brick14: cpu02:/bricks/4/vol1
>>>
>>> Brick15: cpu03:/bricks/4/vol1
>>>
>>> Brick16: cpu04:/bricks/4/vol1
>>>
>>> Brick17: cpu01:/bricks/5/vol1
>>>
>>> Brick18: cpu02:/bricks/5/vol1
>>>
>>> Brick19: cpu03:/bricks/5/vol1
>>>
>>> Brick20: cpu04:/bricks/5/vol1
>>>
>>> Brick21: cpu01:/bricks/6/vol1
>>>
>>> Brick22: cpu02:/bricks/6/vol1
>>>
>>> Brick23: cpu03:/bricks/6/vol1
>>>
>>> Brick24: cpu04:/bricks/6/vol1
>>>
>>> Brick25: cpu01:/bricks/7/vol1
>>>
>>> Brick26: cpu02:/bricks/7/vol1
>>>
>>> Brick27: cpu03:/bricks/7/vol1
>>>
>>> Brick28: cpu04:/bricks/7/vol1
>>>
>>> Brick29: cpu01:/bricks/8/vol1
>>>
>>> Brick30: cpu02:/bricks/8/vol1
>>>
>>> Brick31: cpu03:/bricks/8/vol1
>>>
>>> Brick32: cpu04:/bricks/8/vol1
>>>
>>> Brick33: cpu01:/bricks/9/vol1
>>>
>>> Brick34: cpu02:/bricks/9/vol1
>>>
>>> Brick35: cpu03:/bricks/9/vol1
>>>
>>> Brick36: cpu04:/bricks/9/vol1
>>>
>>> Brick37: cpu01:/bricks/10/vol1
>>>
>>> Brick38: cpu02:/bricks/10/vol1
>>>
>>> Brick39: cpu03:/bricks/10/vol1
>>>
>>> Brick40: cpu04:/bricks/10/vol1
>>>
>>> Brick41: cpu01:/bricks/11/vol1
>>>
>>> Brick42: cpu02:/bricks/11/vol1
>>>
>>> Brick43: cpu03:/bricks/11/vol1
>>>
>>> Brick44: cpu04:/bricks/11/vol1
>>>
>>> Brick45: cpu01:/bricks/12/vol1
>>>
>>> Brick46: cpu02:/bricks/12/vol1
>>>
>>> Brick47: cpu03:/bricks/12/vol1

[Gluster-users] REMINDER: Weekly Gluster Community meeting today at 12:00 UTC

2015-03-25 Thread Vijay Bellur

Hi all,

In about 70 minutes from now we will have the regular weekly Gluster 
Community meeting.


Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 8:00 EDT, 12:00 UTC, 13:00 CET, 17:30 IST (in your terminal, 
run: date -d "12:00 UTC")

- agenda: available at [1]

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* GlusterFS 3.6
* GlusterFS 3.5
* GlusterFS 3.4
* GlusterFS Next
* Open Floor
   - Fix regression tests with spurious failures
   - docs
   - Awesum Web Presence
   - Gluster Summit Barcelona, second week in May
   - Gluster Summer of Code


The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.

Thanks,
Vijay

[1] https://public.pad.fsfe.org/p/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster/NFS mount issues

2015-03-25 Thread Alun James

Hi folks, 


I am having some issues getting NFS to mount the glusterfs volume on boot-up. I
have tried all the usual mount options in fstab, but thus far none have helped.
I am using NFS as it seems to give better performance for my workload compared
with the glusterfs client.


[Node Setup] 



3 x Nodes mounting vol locally. 

Ubuntu 14.04 3.13.0-45-generic 
GlusterFS: 3.6.2-ubuntu1~trusty3 
nfs-common 1:1.2.8-6ubuntu1.1 



Type: Replicate 

Status: Started 
Number of Bricks: 1 x 3 = 3 
Transport-type: tcp 
Bricks: 
Brick1: node01:/export/brick0 
Brick2: node 02:/export/brick0 
Brick3: node 03:/export/brick0 


/etc/fstab: 


/dev/mapper/gluster--vg-brick0 /export/brick0 xfs defaults 0 0 



localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,vers=3 0 0 




[Issue] 


On boot, the /data partition is not mounted, however, I can jump on each node 
and simply run "mount /data" without any problems, so I assume my fstab options 
are OK. I have noticed the following log: 



/var/log/upstart/mountall.log: 




mount.nfs: requested NFS version or transport protocol is not supported 
mountall: mount /data [1178] terminated with status 32 


I have attempted the following fstab options without success and similar log 
message: 


localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock 0 0 
localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,vers=3 0 0 
localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock,vers=3 0 0 
localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock,nfsvers=3 0 0 
localhost:/my_filestore_vol /data nfs 
defaults,nobootwait,noatime,_netdev,nolock,mountproto=tcp,nfsvers=3 0 0 



Anything else I can try? 


Regards, 

ALUN JAMES 
Senior Systems Engineer 
Tibus 

T: +44 (0)28 9033 1122 
E: aja...@tibus.com 
W: www.tibus.com 

Follow us on Twitter @tibus 

Tibus is a trading name of The Internet Business Ltd, a company limited by 
share capital and registered in Northern Ireland, NI31235. It is part of UTV 
Media Plc. 

This email and any attachment may contain confidential information for the sole 
use of the intended recipient. Any review, use, distribution or disclosure by 
others is strictly prohibited. If you are not the intended recipient (or 
authorised to receive for the recipient), please contact the sender by reply 
email and delete all copies of this message. 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bad volume specification | Connection timeout

2015-03-25 Thread Punit Dambiwal
Hi All,

With the help of the gluster community and the ovirt-china community, my issue
got resolved...

The main root causes were the following:

1. The glob operation takes quite a long time, longer than the ioprocess
default of 60s.
2. python-ioprocess was updated such that changing the configuration file alone
no longer works properly; because of this we had to hack the code manually...

Solution (needs to be done on all the hosts):

1. Add the ioprocess timeout value to the /etc/vdsm/vdsm.conf file as follows:

--------------------------------------------------
[irs]
process_pool_timeout = 180
--------------------------------------------------

2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71, and see whether
there is still "IOProcess(DEFAULT_TIMEOUT)" in it. If yes, then changing
the configuration file has no effect, because timeout is now the third
parameter of IOProcess.__init__(), not the second.

3. Change IOProcess(DEFAULT_TIMEOUT) to
IOProcess(timeout=DEFAULT_TIMEOUT), remove the
/usr/share/vdsm/storage/outOfProcess.pyc file, and restart the vdsm and
supervdsm services on all hosts.
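
A minimal sketch of the same steps as shell commands (service names and the
sed pattern are illustrative and may differ on your hosts; skip the config
append if vdsm.conf already has an [irs] section):

    # 1. raise the ioprocess timeout
    cat >> /etc/vdsm/vdsm.conf <<'EOF'
    [irs]
    process_pool_timeout = 180
    EOF

    # 2./3. pass the timeout as a keyword argument and drop the stale .pyc
    sed -i 's/IOProcess(DEFAULT_TIMEOUT)/IOProcess(timeout=DEFAULT_TIMEOUT)/' \
        /usr/share/vdsm/storage/outOfProcess.py
    rm -f /usr/share/vdsm/storage/outOfProcess.pyc

    # restart the daemons on every host
    service vdsmd restart
    service supervdsmd restart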

Thanks,
Punit Dambiwal

On Thu, Mar 19, 2015 at 10:08 AM, Punit Dambiwal  wrote:

> Hi,
>
> Is there anybody who can help me to solve this issue? I have been struggling
> with this issue for the last 3 days
>
> Thanks,
> Punit
>
> On Wed, Mar 18, 2015 at 11:30 AM, Punit Dambiwal 
> wrote:
>
>> Hi Vijay,
>>
>> Please find the gluster clinet logs from here :-
>> http://paste.ubuntu.com/10618869/
>>
>> On Wed, Mar 18, 2015 at 10:45 AM, Vijay Bellur 
>> wrote:
>>
>>> On 03/18/2015 07:37 AM, Punit Dambiwal wrote:
>>>
 Where i can find the gluster clinet logs :-

 [root@cpu07 glusterfs]# ls

>>>
>>>  rhev-data-center-mnt-glusterSD-10.10.0.14:_ds01.log

>>>
>>> This would be the one.
>>>
>>> -Vijay
>>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi All,

With the help of the gluster community and the ovirt-china community, my issue
got resolved...

The main root causes were the following:

1. The glob operation takes quite a long time, longer than the ioprocess
default of 60s.
2. python-ioprocess was updated such that changing the configuration file alone
no longer works properly; because of this we had to hack the code manually...

Solution (needs to be done on all the hosts):

1. Add the ioprocess timeout value to the /etc/vdsm/vdsm.conf file as follows:

--------------------------------------------------
[irs]
process_pool_timeout = 180
--------------------------------------------------

2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71, and see whether
there is still "IOProcess(DEFAULT_TIMEOUT)" in it. If yes, then changing
the configuration file has no effect, because timeout is now the third
parameter of IOProcess.__init__(), not the second.

3. Change IOProcess(DEFAULT_TIMEOUT) to
IOProcess(timeout=DEFAULT_TIMEOUT), remove the
/usr/share/vdsm/storage/outOfProcess.pyc file, and restart the vdsm and
supervdsm services on all hosts.

Thanks,
Punit Dambiwal


On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal  wrote:

> Hi All,
>
> Still i am facing the same issue...please help me to overcome this issue...
>
> Thanks,
> punit
>
> On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink <
> thomas.holkenbr...@fibercloud.com> wrote:
>
>>  I’ve seen this before. The system thinks the storage system is up and
>> running and then attempts to utilize it.
>>
>> The way I got around it was to put a delay in the startup of the gluster
>> Node on the interface that the clients use to communicate.
>>
>>
>>
>> I use a bonded link, I then add a LINKDELAY to the interface to get the
>> underlying system up and running before the network comes up. This then
>> causes Network dependent features to wait for the network to finish.
>>
>> It adds about 10 seconds to the startup time; in our environment it works
>> well, you may not need as long of a delay.
>>
>>
>>
>> CentOS
>>
>> root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
>>
>>
>>
>> DEVICE=bond0
>>
>> ONBOOT=yes
>>
>> BOOTPROTO=static
>>
>> USERCTL=no
>>
>> NETMASK=255.255.248.0
>>
>> IPADDR=10.10.1.17
>>
>> MTU=9000
>>
>> IPV6INIT=no
>>
>> IPV6_AUTOCONF=no
>>
>> NETWORKING_IPV6=no
>>
>> NM_CONTROLLED=no
>>
>> LINKDELAY=10
>>
>> NAME="System Storage Bond0"
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Hi Michal,
>>
>>
>>
>> The Storage domain is up and running and mounted on all the host
>> nodes...as i updated before that it was working perfectly before but just
>> after reboot can not make the VM poweron...
>>
>>
>>
>> [image: Inline image 1]
>>
>>
>>
>> [image: Inline image 2]
>>
>>
>>
>> [root@cpu01 log]# gluster volume info
>>
>>
>>
>> Volume Name: ds01
>>
>> Type: Distributed-Replicate
>>
>> Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6
>>
>> Status: Started
>>
>> Number of Bricks: 48 x 2 = 96
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: cpu01:/bricks/1/vol1
>>
>> Brick2: cpu02:/bricks/1/vol1
>>
>> Brick3: cpu03:/bricks/1/vol1
>>
>> Brick4: cpu04:/bricks/1/vol1
>>
>> Brick5: cpu01:/bricks/2/vol1
>>
>> Brick6: cpu02:/bricks/2/vol1
>>
>> Brick7: cpu03:/bricks/2/vol1
>>
>> Brick8: cpu04:/bricks/2/vol1
>>
>> Brick9: cpu01:/bricks/3/vol1
>>
>> Brick10: cpu02:/bricks/3/vol1
>>
>> Brick11: cpu03:/bricks/3/vol1
>>
>> Brick12: cpu04:/bricks/3/vol1
>>
>> Brick13: cpu01:/bricks/4/vol1
>>
>> Brick14: cpu02:/bricks/4/vol1
>>
>> Brick15: cpu03:/bricks/4/vol1
>>
>> Brick16: cpu04:/bricks/4/vol1
>>
>> Brick17: cpu01:/bricks/5/vol1
>>
>> Brick18: cpu02:/bricks/5/vol1
>>
>> Brick19: cpu03:/bricks/5/vol1
>>
>> Brick20: cpu04:/bricks/5/vol1
>>
>> Brick21: cpu01:/bricks/6/vol1
>>
>> Brick22: cpu02:/bricks/6/vol1
>>
>> Brick23: cpu03:/bricks/6/vol1
>>
>> Brick24: cpu04:/bricks/6/vol1
>>
>> Brick25: cpu01:/bricks/7/vol1
>>
>> Brick26: cpu02:/bricks/7/vol1
>>
>> Brick27: cpu03:/bricks/7/vol1
>>
>> Brick28: cpu04:/bricks/7/vol1
>>
>> Brick29: cpu01:/bricks/8/vol1
>>
>> Brick30: cpu02:/bricks/8/vol1
>>
>> Brick31: cpu03:/bricks/8/vol1
>>
>> Brick32: cpu04:/bricks/8/vol1
>>
>> Brick33: cpu01:/bricks/9/vol1
>>
>> Brick34: cpu02:/bricks/9/vol1
>>
>> Brick35: cpu03:/bricks/9/vol1
>>
>> Brick36: cpu04:/bricks/9/vol1
>>
>> Brick37: cpu01:/bricks/10/vol1
>>
>> Brick38: cpu02:/bricks/10/vol1
>>
>> Brick39: cpu03:/bricks/10/vol1
>>
>> Brick40: cpu04:/bricks/10/vol1
>>
>> Brick41: cpu01:/bricks/11/vol1
>>
>> Brick42: cpu02:/bricks/11/vol1
>>
>> Brick43: cpu03:/bricks/11/vol1
>>
>> Brick44: cpu04:/bricks/11/vol1
>>
>> Brick45: cpu01:/bricks/12/vol1
>>
>> Brick46: cpu02:/bricks/12/vol1
>>
>> Brick47: cpu03:/bricks/12/vol1
>>
>> Brick48: cpu04:/bricks/12/vol1
>>
>> Brick49: cpu01:/bricks/13/vol1
>>
>> Brick50: cpu02:/bricks/13/vol1
>>
>> Brick51: cpu03:/bricks/13/vol1
>>
>> Brick52: cpu04:/bricks/13/vol1
>>
>> Brick53: cpu01:/bricks/14/vol1
>>
>> Brick54: cpu02:/bricks/14/vol1
>>
>> Brick55: cpu03:/bricks/14/vol1
>>
>> Brick56: cpu04:/bricks/14/vol1
>>
>> Brick57: cpu01:/bricks/15/vol1
>>
>> Brick58: cpu02:/bricks/15/v

Re: [Gluster-users] Is rebalance completely broken on 3.5.3 ?

2015-03-25 Thread Nithya Balachandran
Hi Alessandro,


I am sorry to hear that you are facing problems with rebalance.

Currently rebalance does not have the information as to how many files exist on 
the volume and so cannot calculate/estimate the time it will take to complete. 
Improving the rebalance status output to provide that info is on our to-do list 
already and we will be working on that.

I have a few questions :

1. Which version of Glusterfs are you using? 
2. How did you stop the rebalance ? I assume you ran "gluster  
rebalance stop" but just wanted confirmation.
3. What file operations were being performed during the rebalance?
4. Can you send the "gluster volume info" output as well as the gluster log 
files?

Regards,
Nithya

- Original Message -
From: "Alessandro Ipe" 
To: gluster-users@gluster.org
Sent: Friday, March 20, 2015 4:52:35 PM
Subject: [Gluster-users] Is rebalance completely broken on 3.5.3 ?



Hi, 





After launching a "rebalance" on an idle gluster system one week ago, its
status told me it has scanned more than 23 million files on each of my 6
bricks. However, without knowing at least the total number of files to be
scanned, this status is USELESS from an end-user perspective, because it does
not allow you to know WHEN the rebalance could eventually complete (one day,
one week, one year or never). From my point of view, the total files per brick
could be obtained and maintained when activating quota, since the whole
filesystem has to be crawled...

After one week of being offline and still no clue when the rebalance would
complete, I decided to stop it... Enormous mistake... It seems that rebalance
cannot manage to not screw up some files. For example, on the only client
mounting the gluster system, "ls -la /home/seviri" returns

ls: cannot access /home/seviri/.forward: Stale NFS file handle
ls: cannot access /home/seviri/.forward: Stale NFS file handle
-? ? ? ? ? ? .forward
-? ? ? ? ? ? .forward

while this file could perfectly well be accessed before (being rebalanced) and
has not been modified for at least 3 years.



Getting the extended attributes on the various bricks 3, 4, 5, 6 (3-4 replicate, 5-6 replicate):

Brick 3:

ls -l /data/glusterfs/home/brick?/seviri/.forward
-rw-r--r-- 2 seviri users 68 May 26 2014 /data/glusterfs/home/brick1/seviri/.forward
-rw-r--r-- 2 seviri users 68 Mar 10 10:22 /data/glusterfs/home/brick2/seviri/.forward

getfattr -d -m . -e hex /data/glusterfs/home/brick?/seviri/.forward
# file: data/glusterfs/home/brick1/seviri/.forward
trusted.afr.home-client-8=0x
trusted.afr.home-client-9=0x
trusted.gfid=0xc1d268beb17443a39d914de917de123a

# file: data/glusterfs/home/brick2/seviri/.forward
trusted.afr.home-client-10=0x
trusted.afr.home-client-11=0x
trusted.gfid=0x14a1c10eb1474ef2bf72f4c6c64a90ce
trusted.glusterfs.quota.4138a9fa-a453-4b8e-905a-e02cce07d717.contri=0x0200
trusted.pgfid.4138a9fa-a453-4b8e-905a-e02cce07d717=0x0001



Brick 4:

ls -l /data/glusterfs/home/brick?/seviri/.forward
-rw-r--r-- 2 seviri users 68 May 26 2014 /data/glusterfs/home/brick1/seviri/.forward
-rw-r--r-- 2 seviri users 68 Mar 10 10:22 /data/glusterfs/home/brick2/seviri/.forward

getfattr -d -m . -e hex /data/glusterfs/home/brick?/seviri/.forward
# file: data/glusterfs/home/brick1/seviri/.forward
trusted.afr.home-client-8=0x
trusted.afr.home-client-9=0x
trusted.gfid=0xc1d268beb17443a39d914de917de123a

# file: data/glusterfs/home/brick2/seviri/.forward
trusted.afr.home-client-10=0x
trusted.afr.home-client-11=0x
trusted.gfid=0x14a1c10eb1474ef2bf72f4c6c64a90ce
trusted.glusterfs.quota.4138a9fa-a453-4b8e-905a-e02cce07d717.contri=0x0200
trusted.pgfid.4138a9fa-a453-4b8e-905a-e02cce07d717=0x0001



Brick 5:

ls -l /data/glusterfs/home/brick?/seviri/.forward
---------T 2 root root 0 Mar 18 08:19 /data/glusterfs/home/brick2/seviri/.forward

getfattr -d -m . -e hex /data/glusterfs/home/brick?/seviri/.forward
# file: data/glusterfs/home/brick2/seviri/.forward
trusted.gfid=0x14a1c10eb1474ef2bf72f4c6c64a90ce
trusted.glusterfs.dht.linkto=0x686f6d652d7265706c69636174652d3400



Brick 6:

ls -l /data/glusterfs/home/brick?/seviri/.forward
---------T 2 root root 0 Mar 18 08:19 /data/glusterfs/home/brick2/seviri/.forward

getfattr -d -m . -e hex /data/glusterfs/home/brick?/seviri/.forward
# file: data/glusterfs/home/brick2/seviri/.forward
trusted.gfid=0x14a1c10eb1474ef2bf72f4c6c64a90ce
trusted.glusterfs.dht.linkto=0x686f6d652d7265706c69636174652d3400
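
As an aside, the trusted.glusterfs.dht.linkto value is just a hex-encoded ASCII string naming the subvolume the link file points to; a quick way to decode it, assuming xxd is available:

echo 686f6d652d7265706c69636174652d3400 | xxd -r -p
# prints "home-replicate-4" (plus a trailing NUL byte)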



Looking at the results from bricks 3 & 4 shows something weird. The file exists in two brick storage directories (brick1 and brick2), while it should only be found once on each brick server. Or is the issue lying in the results of bricks 5

Re: [Gluster-users] gluster probe node by hostname

2015-03-25 Thread Kaushal M
You have only modified the on-disk information; the in-memory information is still the IP. GlusterD reads the on-disk file only when starting up, and after that uses the in-memory information for all operations. This is why you still see the IP in peer status. You have to restart GlusterD on Node2 and Node3 to load the information from the modified files.
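
For reference, a rough sketch of the restart, assuming a typical packaged install (the service name may be glusterfs-server on some distributions):

# on Node2 and Node3
service glusterd restart      # SysV/upstart-style systems
systemctl restart glusterd    # systemd-based systems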

But instead of the above, you have a simpler option to set the hostname for Node1 (see the command sketch after this list):
- Probe Node2 and Node3 from Node1 as normal ("gluster peer probe node2",
"gluster peer probe node3"). After this, Node1 will be referred to by its
IP on Node2 and Node3.
- From either Node2 or Node3, do a reverse probe on Node1 ("gluster peer
probe node1"). This will update the IP to the hostname everywhere (in-memory,
on-disk and on all nodes).
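
A minimal sketch of that sequence, using the hostnames from your example:

# on Node1
gluster peer probe node2
gluster peer probe node3

# on Node2 (or Node3): reverse probe so Node1 is known by hostname
gluster peer probe node1

# verify
gluster peer status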

If you are using glusterfs-3.6 or above, doing the reverse probe on Node1
will not change the IP to the hostname. Instead, an extra name is attached to
Node1 and will be displayed in peer status under 'Other Names'.

~kaushal

On Wed, Mar 25, 2015 at 12:54 PM, 可樂我  wrote:

> Hi all,
> I have a problem probing a new node by hostname.
> I have three nodes (Node1, Node2, Node3):
> Node1 hostname: node1
> Node2 hostname: node2
> Node3 hostname: node3
>
> Step 1:
> Node1 probes Node2
> # gluster probe node2
>
> Step 2:
> Modify the peer file on Node2:
> hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)
>
> Step 3:
> Node1 probes Node3
> # gluster probe node3
>
> Step 4:
> Modify the peer file on Node3:
> hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)
>
> However, it still shows the IP of Node1 instead of its hostname when I execute the *gluster
> peer status* command.
> If I want the peer status on all peers in the cluster to show only hostnames, what
> can I do?
> Is there any solution to this problem?
> Thank you very much!!
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] lots of nfs.log activity since upgrading to 3.4.6

2015-03-25 Thread Nithya Balachandran
Hi,

This was inadvertently introduced by another patch. The fix was made to 
master (http://review.gluster.org/#/c/8621/) and to the 3.6 and 3.5 branches, but it looks 
like it was not backported to the 3.4 branch.

Regards,
Nithya

- Original Message -
From: "Matt" 
To: gluster-users@gluster.org
Sent: Tuesday, March 24, 2015 7:29:23 PM
Subject: [Gluster-users] lots of nfs.log activity since upgrading to 3.4.6

Hello list, 

I have a few WordPress sites served via NFS on Gluster. Since upgrading to 3.4.6, 
I'm seeing a non-trivial number of entries like the following appear in nfs.log 
(1-2 million, depending on how busy the blogs are; about 4 GB of log in the last three weeks):

[2015-03-24 13:49:17.314003] I [dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null
[2015-03-24 13:49:17.355722] I [dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null
[2015-03-24 13:49:17.616073] I [dht-common.c:1000:dht_lookup_everywhere_done] 0-wpblog-storage-dht: STATUS: hashed_subvol wpblog-storage-replicate-0 cached_subvol null
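
For anyone wanting to gauge how much of the log these messages account for, a quick count, assuming the default log location:

grep -c 'dht_lookup_everywhere_done' /var/log/glusterfs/nfs.log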

It doesn't seem to be a big problem, it's just an INFO log, but it definitely 
wasn't there with 3.4.5. Can anyone give me any insight into what's going on? 

-Matt 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster probe node by hostname

2015-03-25 Thread A Ghoshal
Hi,

Do you probe by hostname or by IP? We probe our servers by hostname. 
Provided both servers are up and each one knows which IPs the hostnames 
resolve to, gluster peer status displays only hostnames.
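
A minimal way to guarantee that resolution, assuming you are not running DNS for these hosts, is identical /etc/hosts entries on every node (the IPs below are placeholders):

192.168.1.11  node1
192.168.1.12  node2
192.168.1.13  node3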

Thanks,
Anirban



From:   可樂我 
To: gluster-users@gluster.org
Date:   03/25/2015 12:54 PM
Subject:[Gluster-users] gluster probe node by hostname
Sent by:gluster-users-boun...@gluster.org



Hi all,
I have a problem probing a new node by hostname.
I have three nodes (Node1, Node2, Node3):
Node1 hostname: node1
Node2 hostname: node2
Node3 hostname: node3

Step 1:
Node1 probes Node2
# gluster probe node2

Step 2:
Modify the peer file on Node2:
hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)

Step 3:
Node1 probes Node3
# gluster probe node3

Step 4:
Modify the peer file on Node3:
hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)

However, it still shows the IP of Node1 instead of its hostname when I execute the gluster peer 
status command.
If I want the peer status on all peers in the cluster to show only hostnames, what 
can I do?
Is there any solution to this problem?
Thank you very much!!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster probe node by hostname

2015-03-25 Thread 可樂我
Hi all,
I have a problem probing a new node by hostname.
I have three nodes (Node1, Node2, Node3):
Node1 hostname: node1
Node2 hostname: node2
Node3 hostname: node3

Step 1:
Node1 probes Node2
# gluster probe node2

Step 2:
Modify the peer file on Node2:
hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)

Step 3:
Node1 probes Node3
# gluster probe node3

Step 4:
Modify the peer file on Node3:
hostname1=IP of Node1 => hostname1=node1 (hostname of Node1)

However, it still shows the IP of Node1 instead of its hostname when I execute the *gluster peer
status* command.
If I want the peer status on all peers in the cluster to show only hostnames, what can I do?
Is there any solution to this problem?
Thank you very much!!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users