Re: [Gluster-devel] New project on the Forge - gstatus

2014-05-16 Thread Anand Avati
KP, Vipul,

It will be awesome to get io-stats-like instrumentation on the client side.
Here are some further thoughts on how to implement that. If you have a
recent git HEAD build, I would suggest that you explore the latency stats
on the client side exposed through meta at
$MNT/.meta/graphs/active/$xlator/profile. You can enable latency
measurement with "echo 1 > $MNT/.meta/measure_latency". I would suggest
extending these stats with the extra ones io-stats has, and making
glusterfsiostat expose them.
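
For instance, a quick way to poke at these stats (a minimal sketch; the
mount path is just an example):

  MNT=/mnt/glusterfs                    # wherever the volume is mounted
  echo 1 > $MNT/.meta/measure_latency   # measurement is off by default
  # dump the per-fop profile of every xlator in the active graph
  for x in $MNT/.meta/graphs/active/*/; do
      [ -f "$x/profile" ] || continue
      echo "== $x =="
      cat "$x/profile"
  done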

If you compare libglusterfs/src/latency.c:gf_latency_begin(),
gf_latency_end() and gf_latency_update() with the io-stats.c macros
UPDATE_PROFILE_STATS() and START_FOP_LATENCY(), you will quickly see how
much logic is duplicated between io-stats and latency.c. If you enhance
latency.c to capture the remaining stats that io-stats captures, the
benefits of this approach would be:

- stats are captured at every xlator level, not just at the position where
io-stats is inserted
- the file-like interface makes the stats easy to inspect and consume, and
they are updated on the fly
- it conforms with the way the rest of the internals are exposed through
$MNT/.meta

In order to do this, you might want to look into:

- latency.c as of today captures fop count, mean latency and total time,
whereas io-stats measures these along with min-time, max-time and a
block-size histogram.
- extend gf_proc_dump_latency_info() to dump the new stats
- either prettify that output like the 'volume profile info' output, or
JSONify it like xlators/meta/src/frames-file.c
- add support for cumulative vs interval stats (store an extra copy of
this->latencies[]; a consumer-side approximation is sketched below)

etc..
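
Until interval support lands in latency.c, a consumer can approximate
interval stats externally by sampling the cumulative profile twice and
diffing (the xlator name below is just an example):

  P=$MNT/.meta/graphs/active/r2-client-0/profile
  cat $P > /tmp/profile.0
  sleep 10
  cat $P > /tmp/profile.1
  diff /tmp/profile.0 /tmp/profile.1    # counters that moved in the window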

Thanks!


On Fri, Apr 25, 2014 at 9:09 PM, Krishnan Parthasarathi  wrote:

> [Resending due to gluster-devel mailing list issue]
>
> Apologies for the late reply.
>
> glusterd uses its socket connection with brick processes (where the
> io-stats xlator is loaded) to gather information from io-stats via an RPC
> request. As it stands today, this facility is restricted to brick
> processes.
>
> Some background ...
> The io-stats xlator is loaded both in GlusterFS mounts and in brick
> processes, so we have the capability to monitor I/O statistics on both
> sides. To collect I/O statistics at the server side, we have
>
> # gluster volume profile VOLNAME [start | info | stop]
> AND
> # gluster volume top VOLNAME info [and other options]
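>
> For example, with a hypothetical volume name:
>
>   # gluster volume profile myvol start
>   # gluster volume profile myvol info
>   # gluster volume top myvol read-perf bs 4096 count 1024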
>
> We don't have a usable way of gathering I/O statistics (not monitoring,
> though the counters could be enhanced) at the client side, i.e. for a
> given mount point. This is the gap glusterfsiostat aims to fill. We need
> to remember that the machines hosting GlusterFS mounts may not have
> glusterd installed on them.
>
> We are considering rrdtool as a possible statistics database because it
> seems like a natural choice for storing time-series data. rrdtool can
> answer high-level statistical queries on the statistics that io-stats
> logs into it, in addition to printing running counters periodically.
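>
> For illustration, a minimal rrdtool round-trip for a single counter could
> look like this (the data-source name, step and values are made up):
>
>   # one RRD per counter, sampled every 10 seconds
>   rrdtool create read_fops.rrd --step 10 \
>     DS:read_fops:COUNTER:20:0:U \
>     RRA:AVERAGE:0.5:1:8640
>   rrdtool update read_fops.rrd N:12345        # periodic feed from io-stats
>   rrdtool fetch read_fops.rrd AVERAGE --start -5min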
>
> Hope this gives some more clarity on what we are thinking.
>
> thanks,
> Krish
> - Original Message -
>
> > Probably me not understanding.
>
> > The comment "iostats making data available to glusterd over RPC" is what
> > I latched on to. I wondered whether this meant that a socket could be
> > opened that way to get at the iostats data flow.
>
> > Cheers,
>
> > PC
>
> > - Original Message -
>
> > > From: "Vipul Nayyar" 
> >
> > > To: "Paul Cuzner" , "Krishnan Parthasarathi"
> > > 
> >
> > > Cc: "Vijay Bellur" , "gluster-devel"
> > > 
> >
> > > Sent: Thursday, 20 February, 2014 5:06:27 AM
> >
> > > Subject: Re: [Gluster-devel] New project on the Forge - gstatus
> >
>
> > > Hi Paul,
> >
>
> > > I'm really not sure if this can be done in Python (at least
> > > comfortably). Maybe we can tread the same path as Justin's glusterflow
> > > in Python, but I don't think all the io-stats counters will be
> > > available with the way Justin used Jeff Darcy's previous work to build
> > > his tool. I could be wrong; my knowledge is a bit incomplete and based
> > > on limited experience as a user and an amateur Gluster developer.
> > > Please do correct me if I am.
> >
>
> > > Regards
> >
> > > Vipul Nayyar
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Emmanuel Dreyfus
Niels de Vos  wrote:

>  $ ssh review.gerrit.org gerrit review --code-review +1 7722,7

Excellent! Thank you for the tip.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Niels de Vos
On Sat, May 17, 2014 at 01:06:13AM +0200, Niels de Vos wrote:
> On Fri, May 16, 2014 at 04:42:27PM +, Emmanuel Dreyfus wrote:
> > Hi
> > 
> > Since I have not recovered authenticated access to gerrit, I say it here:
> > glusterfs master branch builds again (and works as client and server), with
> > the following changes:
> > http://review.gluster.org/#/c/7783/ patchset 9
> > http://review.gluster.com/#/c/7757/ patchset 5
> > http://review.gluster.com/#/c/7722/ patchset 7
> > 
> > Can someone review +1 for me?
> 
> If you can post patches, you can use ssh to add your +1 to the 
> changesets:
> 
>  $ ssh review.gerrit.org gerrit review --code-review +1 7722,7

Of course, the server is called 'review.gluster.org' :)

> 
> More details in the Gerrit documentation at 
> http://review.gluster.org/Documentation/cmd-review.html .
> 
> HTH,
> Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Niels de Vos
On Fri, May 16, 2014 at 04:42:27PM +, Emmanuel Dreyfus wrote:
> Hi
> 
> Since I have not recovered authenticated access to gerrit, I say it here:
> glusterfs master branch builds again (and works as client and server), with
> the following changes:
> http://review.gluster.org/#/c/7783/ patchset 9
> http://review.gluster.com/#/c/7757/ patchset 5
> http://review.gluster.com/#/c/7722/ patchset 7
> 
> Can someone review +1 for me?

If you can post patches, you can use ssh to add your +1 to the 
changesets:

 $ ssh review.gerrit.org gerrit review --code-review +1 7722,7

More details in the Gerrit documentation at 
http://review.gluster.org/Documentation/cmd-review.html .
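
Note that Gerrit's sshd normally listens on a non-standard port, so a
fuller invocation would look like this (the port and username are
assumptions, and the host is review.gluster.org as corrected elsewhere in
this thread):

 $ ssh -p 29418 USERNAME@review.gluster.org gerrit review --code-review +1 7722,7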

HTH,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Harshavardhana
>
> duplo# ls -li /mnt
> total 4
>   4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
>   4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
>   3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp
>   3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp
>

Haven't seen this on OSX - need to check the current master branch, but I
do have an infinite symlink loop issue which I still need to debug.

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Emmanuel Dreyfus
Hi

Since I have not recovered authenticated access to gerrit, I say it here:
glusterfs master branch builds again (and works as client and server), with
the following changes:
http://review.gluster.org/#/c/7783/ patchset 9
http://review.gluster.com/#/c/7757/ patchset 5
http://review.gluster.com/#/c/7722/ patchset 7

Can someone review +1 for me?

There is a funny bug: if I mount the volume from two clients, I see 
all the objects at the root of the filesystem as duplicated:

duplo# ls -li /mnt
total 4
  4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
  4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
  3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp
  3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp

If I unmount the second client, everything reverts to normal on the
remaining one. In lower-level directories, the bug does not appear.

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Volume failed to create (but did)

2014-05-16 Thread James
On Fri, May 16, 2014 at 8:06 AM, Vijay Bellur  wrote:
> On 05/16/2014 08:59 AM, James wrote:
>>
>> Hi,
>>
>> When automatically building volumes, a volume create failed:
>>
>> volume create: puppet: failed: Commit failed on
>> ----. Please check log file for
>> details.
>>
>> The funny thing was that 'gluster volume info' showed a normal-looking
>> volume, and starting it worked fine.
>>
>> Attached are all the logs. Hopefully someone can decipher this, and
>> maybe kill a gluster bug.
>>
>> HTH,
>> James
>>
>> PS:
>> Cluster was a two-host, Replica=2, single-volume setup, with two disks
>> per host, all running in VMs.
>>
>
> Does "gluster peer status" look right in this two host setup?

It did, yes, or at least if there was something wrong, I didn't notice
anything unusual.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Need inputs for command deprecation output

2014-05-16 Thread Niels de Vos
On Fri, May 16, 2014 at 05:35:06PM +0530, Vijay Bellur wrote:
> On 05/16/2014 07:23 AM, Pranith Kumar Karampuri wrote:
> >
> >
> >- Original Message -
> >>From: "Ravishankar N" 
> >>To: "Pranith Kumar Karampuri" , "Gluster Devel" 
> >>
> >>Sent: Friday, May 16, 2014 7:15:58 AM
> >>Subject: Re: [Gluster-devel] Need inputs for command deprecation output
> >>
> >>On 05/16/2014 06:25 AM, Pranith Kumar Karampuri wrote:
> >>>Hi,
> >>> As part of changing the behaviour of 'volume heal' commands, I want
> >>> the commands to show the following output. Any feedback on making them
> >>> better would be awesome :-).
> >>>
> >>>root@pranithk-laptop - ~
> >>>06:20:10 :) ⚡ gluster volume heal r2 info healed
> >>>This command has been deprecated
> >>>
> >>>root@pranithk-laptop - ~
> >>>06:20:13 :( ⚡ gluster volume heal r2 info heal-failed
> >>>This command has been deprecated
> >>When a command is deprecated, it still works the way it did, but gives
> >>out a warning that it is not maintained, along with possible alternatives.
> >>If I understand http://review.gluster.org/#/c/7766/ correctly, we are
> >>not supporting these commands any more, in which case the right message
> >>would be "Command not supported"
> >
> >I am wondering if we should even let the command be sent to 
> >self-heal-daemons from glusterd.
> >
> >How about
> >06:20:10 :) ⚡ gluster volume heal r2 info healed
> >Command not supported.
> >
> 
> Since we no longer intend to support this command, it might be better to
> withdraw it from the CLI and have the documentation reflect the possible
> alternatives.

I'd like to see an error message stating that the command has been
replaced. The message should point to 'gluster volume help $whatever' or to
the man-page in case it has been updated for the new command. Accessing
online documentation is not always possible, so try to restrict references
to resources available on the local system.

In addition, a non-zero exit code should be returned. This will help 
authors/users of scripts (in case they exist) to detect unexpected 
behavior.
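
A script could then detect the replaced command cheaply, along these lines
(a sketch; the exact wording and exit code are assumptions):

  if ! gluster volume heal r2 info healed; then
      # non-zero exit: command was withdrawn, use the documented alternative
      echo "'info healed' has been replaced, see 'gluster volume help'" >&2
  fi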

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Volume failed to create (but did)

2014-05-16 Thread Vijay Bellur

On 05/16/2014 08:59 AM, James wrote:

Hi,

When automatically building volumes, a volume create failed:

volume create: puppet: failed: Commit failed on
----. Please check log file for
details.

The funny thing was that 'gluster volume info' showed a normal-looking
volume, and starting it worked fine.

Attached are all the logs. Hopefully someone can decipher this, and
maybe kill a gluster bug.

HTH,
James

PS:
Cluster was a two-host, Replica=2, single-volume setup, with two disks per
host, all running in VMs.



Does "gluster peer status" look right in this two host setup?

-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Need inputs for command deprecation output

2014-05-16 Thread Vijay Bellur

On 05/16/2014 07:23 AM, Pranith Kumar Karampuri wrote:



- Original Message -

From: "Ravishankar N" 
To: "Pranith Kumar Karampuri" , "Gluster Devel" 

Sent: Friday, May 16, 2014 7:15:58 AM
Subject: Re: [Gluster-devel] Need inputs for command deprecation output

On 05/16/2014 06:25 AM, Pranith Kumar Karampuri wrote:

Hi,
 As part of changing the behaviour of 'volume heal' commands, I want the
 commands to show the following output. Any feedback on making them
 better would be awesome :-).

root@pranithk-laptop - ~
06:20:10 :) ⚡ gluster volume heal r2 info healed
This command has been deprecated

root@pranithk-laptop - ~
06:20:13 :( ⚡ gluster volume heal r2 info heal-failed
This command has been deprecated

When a command is deprecated, it still works the way it did, but gives out
a warning that it is not maintained, along with possible alternatives.
If I understand http://review.gluster.org/#/c/7766/ correctly, we are
not supporting these commands any more, in which case the right message
would be "Command not supported"


I am wondering if we should even let the command be sent to self-heal-daemons 
from glusterd.

How about
06:20:10 :) ⚡ gluster volume heal r2 info healed
Command not supported.



Since we no longer intend to support this command, it might be better to
withdraw it from the CLI and have the documentation reflect the possible
alternatives.


-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures because of nfs and snapshots

2014-05-16 Thread Joseph Fernandes

Hi All,

tests/bugs/bug-1090042.t : 

I was able to reproduce the issue, i.e. when this test is run in a loop:

for i in {1..135}; do ./bugs/bug-1090042.t; done

When I checked the logs:
[2014-05-16 10:49:49.003978] I [rpc-clnt.c:973:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-05-16 10:49:49.004035] I [rpc-clnt.c:988:rpc_clnt_connection_init] 0-management: defaulting ping-timeout to 30secs
[2014-05-16 10:49:49.004303] I [rpc-clnt.c:973:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-05-16 10:49:49.004340] I [rpc-clnt.c:988:rpc_clnt_connection_init] 0-management: defaulting ping-timeout to 30secs

The issue is with ping-timeout and is tracked under the bug 

https://bugzilla.redhat.com/show_bug.cgi?id=1096729


The workaround is mentioned in 
https://bugzilla.redhat.com/show_bug.cgi?id=1096729#c8


Regards,
Joe

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Gluster Devel" 
Cc: "Joseph Fernandes" 
Sent: Friday, May 16, 2014 6:19:54 AM
Subject: Spurious failures because of nfs and snapshots

hi,
The latest build I fired for review.gluster.com/7766
(http://build.gluster.org/job/regression/4443/console) failed because of a
spurious failure. The script doesn't wait for the NFS export to be
available. I fixed that, but interestingly I found quite a few scripts with
the same problem. Some of the scripts rely on 'sleep 5', which could also
lead to spurious failures if the export is not available within 5 seconds.
We found that waiting for up to 20 seconds is better, but 'sleep 20' would
unnecessarily delay the build execution. So if you are going to write any
scripts that do NFS mounts, please do it the following way:

EXPECT_WITHIN 20 "1" is_nfs_export_available;
TEST mount -t nfs -o vers=3 $H0:/$V0 $N0;
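
(is_nfs_export_available is a helper in the test framework; a rough sketch
of what such a check can look like, based on showmount:)

  function is_nfs_export_available {
      # print "1" once the volume shows up in the NFS export list
      showmount -e localhost 2>/dev/null | grep -q "^/$V0" && echo 1 || echo 0
  }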

Please review http://review.gluster.com/7773 :-)

I saw one more spurious failure in a snapshot-related script,
tests/bugs/bug-1090042.t, on the next build fired by Niels.
Joseph (CCed) is debugging it. He agreed to reply with what he finds and
share it with us so that we won't introduce similar bugs in the future.

I encourage you guys to share what you fix to prevent spurious failures in
the future.

Thanks
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS and the logging framework

2014-05-16 Thread Nithya Balachandran
Agreed. Such a format would make log messages far more readable, as well as
making them easy for applications to parse.

Nithya

- Original Message -
From: "Marcus Bointon" 
To: "gluster-users" , gluster-devel@gluster.org
Sent: Tuesday, 13 May, 2014 5:06:22 PM
Subject: Re: [Gluster-devel] [Gluster-users] GlusterFS and the logging  
framework

On 13 May 2014, at 13:30, Sahina Bose  wrote:

> If the message was in a format like: {"msgid":xxx,"msg": "Usage is above
> soft limit: 300.0KB used by /test/ ","volume": "test-vol", "dir":"/test"}
> 
> This helps the applications that parse the logs to identify affected
> entities. Otherwise we need to resort to pattern matching, which is kinda
> flaky. (Currently we monitor logs for a Nagios monitoring plugin.)

This is a great idea - Drupal does exactly this. If you use things like 
logstash, simple, reliable log formatting makes automated log processing much 
simpler.
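
For example, with the format Sahina suggested, pulling the affected
entities out of a log becomes a one-liner (the log path and msgid value
here are made up):

  grep -h '"msgid"' /var/log/glusterfs/*.log \
    | jq -r 'select(.msgid == 1234) | [.volume, .dir] | @tsv'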

Marcus
-- 
Marcus Bointon
Technical Director, Synchromedia Limited

Creators of http://www.smartmessages.net/
UK 1CRM solutions http://www.syniah.com/
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel