Re: [Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-16 Thread Sachin Pandit
One more spurious failure.

./tests/bugs/bug-1038598.t  (Wstat: 0 Tests: 28 Failed: 1)
  Failed test:  28
Files=237, Tests=4632, 4619 wallclock secs ( 2.13 usr  1.48 sys + 832.41 cusr 697.97 csys = 1533.99 CPU)
Result: FAIL

Patch : http://review.gluster.org/#/c/8060/
Build URL : http://build.gluster.org/job/rackspace-regression-2GB/186/consoleFull

~ Sachin.


- Original Message -
From: "Justin Clift" 
To: "Pranith Kumar Karampuri" 
Cc: "Gluster Devel" 
Sent: Sunday, June 15, 2014 3:55:05 PM
Subject: Re: [Gluster-devel] Want more spurious regression failure alerts... ?

On 15/06/2014, at 3:36 AM, Pranith Kumar Karampuri wrote:
> On 06/13/2014 06:41 PM, Justin Clift wrote:
>> Hi Pranith,
>> 
>> Do you want me to keep sending you spurious regression failure
>> notification?
>> 
>> There's a fair few of them, isn't there?
> I am doing one run on my VM. I will get back with the ones that fail on my 
> VM. You can also do the same on your machine.

Cool, that should help. :)

These are the spurious failures found when running the rackspace-regression-2GB
tests over Friday and yesterday:

  * bug-859581.t -- SPURIOUS
    * 4846 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
    * 6009 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:20:24:58.tgz
    * 6652 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:22:04:16.tgz
    * 7796 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:14:22:53.tgz
    * 7987 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:15:21:04.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8054 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:13:15:50.tgz
    * 8062 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:13:28:48.tgz

  * mgmt_v3-locks.t -- SPURIOUS
    * 6483 - build.gluster.org -> http://build.gluster.org/job/regression/4847/consoleFull
    * 6630 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140614:15:42:39.tgz
    * 6946 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:20:57:27.tgz
    * 7392 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:13:57:20.tgz
    * 7852 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:19:23:17.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8015 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:14:26:01.tgz
    * 8048 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:18:13:07.tgz

  * bug-918437-sh-mtime.t -- SPURIOUS
    * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
    * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
    * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

  * fops-sanity.t -- SPURIOUS
    * 8014 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:18:18:33.tgz
    * 8066 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:21:35:57.tgz

  * bug-857330/xml.t -- SPURIOUS
    * 7523 - logs may (?) be hard to parse due to other failure data for this CR in them
    * 8029 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:16:46:03.tgz

If we resolve these five, our regression testing should be a *lot* more
predictable. :)

Text file (attached to this email) has the bulk test results.  Manually
cut-n-pasted from browser to the text doc, so be wary of possible typos. ;)


> Give the output of "for i in `cat problematic-ones.txt`; do echo $i $(git log 
> $i | grep Author| tail -1); done"
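
(As a sketch, a slightly more robust variant of that one-liner; untested,
and it assumes the same problematic-ones.txt file list. Each test file is
printed with the author of its earliest commit:)

    while read -r t; do
        # oldest commit comes last in git log output, so tail -1 gives
        # the original author of the test
        printf '%s %s\n' "$t" "$(git log --format='%an <%ae>' -- "$t" | tail -1)"
    done < problematic-ones.txt
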
>> 
>> Maybe we should make 1 BZ for the lot, and attach the logs
>> to that BZ for later analysis?
> I am already using 1092850 for this.

Good info. :)

+ Justin



--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-16 Thread Varun Shastry



> - Original Message -
>> hi,
>>   Could you guys remove 'sleep' from the quota tests authored by you guys
>> if it can be done. They are leading to spurious failures.



I don't get how sleep can cause the failures. But for the script 
bug-1087198.t, authored by me, the sleep is part of the test. I can reduce 
it to a smaller value, but we need the test to wait for a small amount 
of time.
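
(For reference, a minimal sketch of the usual sleep replacement in .t
tests, using the EXPECT_WITHIN helper from tests/include.rc; the volume,
path, limit value and awk column below are illustrative, not taken from
bug-1087198.t:)

    function quota_usage {
            # print the usage column for /dir; the column position
            # is an assumption, adjust to the actual CLI output
            gluster volume quota $V0 list /dir | awk '{print $4}'
    }

    # poll for up to 30 seconds instead of sleeping a fixed interval
    EXPECT_WITHIN 30 "10.0MB" quota_usage
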


- Varun Shastry


>> I will be sending out a patch removing 'sleep' in other tests.
>>
>> Pranith
>>






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Better-SSL thought

2014-06-16 Thread James
On Tue, Jun 17, 2014 at 12:39 AM, Jeff Darcy  wrote:
> Unfortunately, *distributing* those keys and
> certificates securely is always going to be a bit of a problem.


Well, as we had discussed, puppet-gluster could be an easy way to
solve this... Maybe this doesn't meet everyone's use case, but I do
think that software bundled with config management is the future. A
wise man once put on a t-shirt: "It's not done until it's automated".
(yes this has two meanings!)

https://twitter.com/Obdurodon/status/471456328335245312/photo/1

As a side note for the rest of gluster-devel, from a config management
point of view, the earlier SSL stuff had a perfectly workable interface
for config management to drive. Not sure whether that's better or worse
with the 3.6 interface. Is that stable/finalized yet? Either way, if you
know what it looks like, I can maybe provide feedback about how hackable
it is for puppet to glue into.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Better-SSL thought

2014-06-16 Thread Jeff Darcy
> Do you reckon your "Better SSL" stuff for 3.6 will make it practical
> for not-in-depth-encryption-experts to use?  eg our normal everyday
> SysAdmin audience
> 
> The greater public awareness of encryption from the NSA's badness
> might make this a very useful thing.  Both in practical terms
> and for general marketing.
> 
> Especially if we can make the docs around it simple to follow, so
> that the less experienced admins out there can get it working right.

As it is right now, the SSL stuff is hard to use even for developer
types.  My goal for this week is to fix that, by providing easy to
use scripts around the rather cryptic OpenSSL commands to generate
keys and certificates.  Unfortunately, *distributing* those keys and
certificates securely is always going to be a bit of a problem.
Somewhere, somehow, someone has to enter a password that is turned
into a key that is used to secure a session in which that material
gets transferred.  That's going to be a weak point.  I think the
best we can do is ensure forward secrecy; anyone who's not able to
intercept traffic *at that precise moment* will be unable to do so
at any subsequent time.
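
(A minimal sketch of what such a wrapper script might look like; this is
not Jeff's actual script, and the /etc/ssl/glusterfs.* paths and one-year
validity are assumptions:)

    #!/bin/sh
    set -e
    CN=${1:-$(hostname -f)}     # certificate common name, defaults to this host
    KEY=/etc/ssl/glusterfs.key
    PEM=/etc/ssl/glusterfs.pem

    # 2048-bit RSA private key, readable only by root
    openssl genrsa -out "$KEY" 2048
    chmod 0400 "$KEY"

    # self-signed certificate for this node
    openssl req -new -x509 -key "$KEY" -subj "/CN=$CN" -days 365 -out "$PEM"

    # each node's certificate still has to be appended to the CA list
    # (assumed to be /etc/ssl/glusterfs.ca) on every peer -- the
    # distribution problem described above
    echo "Now append $PEM to /etc/ssl/glusterfs.ca on all nodes."
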

The other problem is that glusterd can't handle the multi-threaded
transport that goes with SSL.  I'm trying to figure out whether
running with SSL but a single transport thread - a very lightly
tested combination because it would be a terrible idea for the I/O
path - will work.  If I can get that to work, glusterd will use
SSL *by default* so that we get strong authentication.  If the
I/O path stuff is also easier to use, then I do think it's a
positive differentiator worth crowing about.
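
(For context, a sketch of how the existing I/O-path SSL is switched on per
volume; "myvol" is a placeholder name, and the option names come from the
existing SSL work, so they may change for 3.6:)

    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    # keys and certificates are read from /etc/ssl/glusterfs.key,
    # /etc/ssl/glusterfs.pem and the peer CA list /etc/ssl/glusterfs.ca
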
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-16 Thread Pranith Kumar Karampuri


On 06/17/2014 09:21 AM, Krutika Dhananjay wrote:





*From: *"Pranith Kumar Karampuri" 
*To: *"Krishnan Parthasarathi" 
*Cc: *"Raghavendra Gowdappa" ,
vshas...@redhat.com, "Krutika Dhananjay" ,
"Gluster Devel" , "Anuradha Talur"
, "Susant Palai" 
*Sent: *Monday, June 16, 2014 10:24:36 PM
*Subject: *Re: quota tests and usage of sleep

These are the test files which need attention. I added the authors to
the mail.

bug-1038598.t - Anuradha
bug-1040423.t - Susant
bug-1087198.t - Varun
bug-1099890.t - Krutika


sleep cannot be removed from bug-1099890.t until the way DHT currently 
refreshes cached values of its subvolumes' struct statvfs info is 
resolved.


-Krutika
Thanks for the update, Krutika. Is that a bug or is it by design? If it 
is a bug, please log one. Could the rest of you also respond, please?


Pranith



Pranith

On 06/16/2014 09:08 AM, Krishnan Parthasarathi wrote:
> All,
>
> I have just one quota test authored by me in the regression test
suite and that doesn't use sleep. Let me know if you need my help
in reviewing your changes.
>
> thanks,
> Krish
>
> - Original Message -
>> hi,
>>   Could you guys remove 'sleep' from the quota tests authored by you guys
>> if it can be done. They are leading to spurious failures.
>> I will be sending out a patch removing 'sleep' in other tests.
>>
>> Pranith
>>




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-16 Thread Krutika Dhananjay
- Original Message -

> From: "Pranith Kumar Karampuri" 
> To: "Krishnan Parthasarathi" 
> Cc: "Raghavendra Gowdappa" , vshas...@redhat.com,
> "Krutika Dhananjay" , "Gluster Devel"
> , "Anuradha Talur" , "Susant
> Palai" 
> Sent: Monday, June 16, 2014 10:24:36 PM
> Subject: Re: quota tests and usage of sleep

> These are the test files which need attention. I added the authors to
> the mail.

> bug-1038598.t - Anuradha
> bug-1040423.t - Susant
> bug-1087198.t - Varun
> bug-1099890.t - Krutika
sleep cannot be removed from bug-1099890.t until the way DHT currently 
refreshes cached values of its subvolumes' struct statvfs info is resolved. 

-Krutika 

> Pranith

> On 06/16/2014 09:08 AM, Krishnan Parthasarathi wrote:
> > All,
> >
> > I have just one quota test authored by me in the regression test suite and
> > that doesn't use sleep. Let me know if you need my help in reviewing your
> > changes.
> >
> > thanks,
> > Krish
> >
> > - Original Message -
> >> hi,
> >> Could you guys remove 'sleep' from the quota tests authored by you guys
> >> if it can be done. They are leading to spurious failures.
> >> I will be sending out a patch removing 'sleep' in other tests.
> >>
> >> Pranith
> >>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Better-SSL thought

2014-06-16 Thread Justin Clift
Hi Jeff,

Do you reckon your "Better SSL" stuff for 3.6 will make it practical
for not-in-depth-encryption-experts to use?  eg our normal everyday
SysAdmin audience

The greater public awareness of encryption from the NSA's badness
might make this a very useful thing.  Both in practical terms
and for general marketing.

Especially if we can make the docs around it simple to follow, so
that the less experienced admins out there can get it working right.

?

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Vijay Bellur

On 06/16/2014 09:20 PM, Jeff Darcy wrote:

Can't thank you enough for this :-)


+100

Justin has done a lot of hard, tedious work whipping this infrastructure into 
better shape, and has significantly improved the project as a result.  Such 
efforts deserve to be recognized.  Justin, I owe you a beer.


I have realised that keeping track of beers for Justin is a futile 
exercise as the count keeps incrementing every other day.


On a related note, my sincere thanks to Justin and Pranith for working 
tirelessly on regression tests to make the test suite better for all of us!


-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-16 Thread Pranith Kumar Karampuri
These are the test files which need attention. I added the authors to 
the mail.


bug-1038598.t - Anuradha
bug-1040423.t - Susant
bug-1087198.t - Varun
bug-1099890.t - Krutika

Pranith

On 06/16/2014 09:08 AM, Krishnan Parthasarathi wrote:

All,

I have just one quota test authored by me in the regression test suite and that 
doesn't use sleep. Let me know if you need my help in reviewing your changes.

thanks,
Krish

- Original Message -

hi,
  Could you guys remove 'sleep' from the quota tests authored by you guys
if it can be done. They are leading to spurious failures.
I will be sending out a patch removing 'sleep' in other tests.

Pranith



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Pranith Kumar Karampuri


On 06/16/2014 09:24 PM, Justin Clift wrote:

On 16/06/2014, at 4:50 PM, Jeff Darcy wrote:

Can't thank you enough for this :-)

+100

Justin has done a lot of hard, tedious work whipping this infrastructure into 
better shape, and has significantly improved the project as a result.  Such 
efforts deserve to be recognized.  Justin, I owe you a beer.


Written thanks to my manager are definitely welcome: :D

   Daniel Veillard 

Just saying. ;)

Done.


+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Justin Clift
On 16/06/2014, at 4:50 PM, Jeff Darcy wrote:
>> Can't thank you enough for this :-)
> 
> +100
> 
> Justin has done a lot of hard, tedious work whipping this infrastructure into 
> better shape, and has significantly improved the project as a result.  Such 
> efforts deserve to be recognized.  Justin, I owe you a beer.


Written thanks to my manager are definitely welcome: :D

  Daniel Veillard 

Just saying. ;)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Jeff Darcy
> Can't thank you enough for this :-)

+100

Justin has done a lot of hard, tedious work whipping this infrastructure into 
better shape, and has significantly improved the project as a result.  Such 
efforts deserve to be recognized.  Justin, I owe you a beer.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests and umount

2014-06-16 Thread Pranith Kumar Karampuri


On 06/16/2014 09:00 PM, Jeff Darcy wrote:

   I see that most of the tests are doing umount and these may fail
sometimes because of EBUSY etc. I am wondering if we should change all
of them to umount -l.
Let me know if you foresee any problems.

I think I'd try "umount -f" first.  Using -l too much can cause an
accumulation of zombie mounts.  When I'm hacking around on my own, I
sometimes have to do "umount -f" twice but that's always sufficient.
Cool, I will do some kind of EXPECT_WITHIN with umount -f, maybe 5 times, 
just to be on the safe side.
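
(A rough sketch of what that retry could look like in a .t test, using the
EXPECT_WITHIN helper from tests/include.rc; the timeout and function name
are illustrative, not the actual patch:)

    function force_umount {
            umount -f $M0 2>/dev/null
            # report success once the mount point is gone from the mount table
            mount | grep -q " $M0 " || echo "done"
    }

    # re-runs umount -f until it sticks, for up to 10 seconds
    EXPECT_WITHIN 10 "done" force_umount
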


If no one has any objections, I will send out a patch tomorrow for this.

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Pranith Kumar Karampuri


On 06/16/2014 09:00 PM, Justin Clift wrote:

On 14/06/2014, at 12:02 AM, Justin Clift wrote:

Small update.  The new "rackspace-regression-2GB" queue on build.gluster.org
has been running all day doing regression tests:

  http://build.gluster.org/job/rackspace-regression-2GB/


This is going well.  It's doing its 200th regression test at the moment. :)

This regression test queue has 7 slave VMs for doing the tests in parallel.

These slave VMs go offline (on purpose) when they're inactive.

sweet :-)


After you submit a Gerrit CR to the queue, give it a minute and a new slave
will come online and start testing your CR.

Hope that helps. :)

Can't thank you enough for this :-)

Pranith


Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression testing status report

2014-06-16 Thread Justin Clift
On 14/06/2014, at 12:02 AM, Justin Clift wrote:
> Small update.  The new "rackspace-regression-2GB" queue on build.gluster.org
> has been running all day doing regression tests:
> 
>  http://build.gluster.org/job/rackspace-regression-2GB/


This is going well.  It's doing its 200th regression test at the moment. :)

This regression test queue has 7 slave VMs for doing the tests in parallel.

These slave VMs go offline (on purpose) when they're inactive.

After you submit a Gerrit CR to the queue, give it a minute and a new slave
will come online and start testing your CR.

Hope that helps. :)

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests and umount

2014-06-16 Thread Jeff Darcy
>   I see that most of the tests are doing umount and these may fail
> sometimes because of EBUSY etc. I am wondering if we should change all
> of them to umount -l.
> Let me know if you foresee any problems.

I think I'd try "umount -f" first.  Using -l too much can cause an
accumulation of zombie mounts.  When I'm hacking around on my own, I
sometimes have to do "umount -f" twice but that's always sufficient.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Alexey Zilber
And, I found it myself:
https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.4.md




On Mon, Jun 16, 2014 at 11:13 PM, Alexey Zilber 
wrote:

> Changelog?
>
>
> On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY 
> wrote:
>
>>
>> RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide), are
>> now available in YUM repos at
>>
>>   http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST
>>
>> There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.
>>
>> Debian and Ubuntu DPKGs should also be appearing soon.
>>
>> --
>>
>> Kaleb
>>
>>
>>
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Alexey Zilber
Changelog?


On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY 
wrote:

>
> RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide), are
> now available in YUM repos at
>
>   http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST
>
> There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.
>
> Debian and Ubuntu DPKGs should also be appearing soon.
>
> --
>
> Kaleb
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tests and umount

2014-06-16 Thread Pranith Kumar Karampuri

hi,
 I see that most of the tests are doing umount and these may fail 
sometimes because of EBUSY etc. I am wondering if we should change all 
of them to umount -l.

Let me know if you foresee any problems.

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Gene Liverman
How well does Gluster work on Pidora? Does the Raspberry Pi's limited RAM
hinder it any?

Thanks,
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Jun 16, 2014 9:25 AM, "Kaleb S. KEITHLEY"  wrote:

>
> RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide), are
> now available in YUM repos at
>
>   http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST
>
> There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.
>
> Debian and Ubuntu DPKGs should also be appearing soon.
>
> --
>
> Kaleb
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Gene Liverman
Makes sense. Thanks!





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.



On Mon, Jun 16, 2014 at 9:34 AM, Kaleb S. KEITHLEY 
wrote:

> On 06/16/2014 09:31 AM, Gene Liverman wrote:
>
>> How well does Gluster work on Pidora? Does the Raspberry Pi's limited
>> RAM hinder it any?
>>
>>
> It seems to work well enough. I've heard of several people who have built
> clusters of pis running GlusterFS.
>
> It's certainly not going to set any speed records though.
>
> --
>
> Kaleb
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Kaleb S. KEITHLEY

On 06/16/2014 09:31 AM, Gene Liverman wrote:

How well does Gluster work on Pidora? Does the Raspberry Pi's limited
RAM hinder it any?



It seems to work well enough. I've heard of several people who have 
built clusters of pis running GlusterFS.


It's certainly not going to set any speed records though.

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Kaleb S. KEITHLEY


RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide), are 
now available in YUM repos at


  http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST

There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.

Debian and Ubuntu DPKGs should also be appearing soon.
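
(For CentOS/RHEL users, installation is roughly the following sketch; the
exact .repo filename under the LATEST directory is an assumption:)

    cd /etc/yum.repos.d/
    wget http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/CentOS/glusterfs-epel.repo
    yum install glusterfs glusterfs-server glusterfs-fuse
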

--

Kaleb







___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Re: some questions about gfapi glfs_open

2014-06-16 Thread Gong XiaoHui
I want to test multiple clients reading and writing the same remote file on
GlusterFS. My test code:

bool testGFAPI(){
    std::string volName = "rvonly";
    std::string serverIP = "10.100.3.110";
    int port = 24007;
    glfs_t* glfs = glfs_new (volName.c_str());
    if (glfs == NULL) {
        std::cout<<"glfs_new("
[... rest of the message truncated in the mail archive ...]

> Hi
> 
> When I use libgfapi, I need to open and write a file. I call
> ”glfs_open(glfs, path, O_WRONLY|O_TRUNC);”, which returns a glfs_fd_t.
> 
> Before I close it, there is a new request to read the same file.
> 
> I think the read request returns NULL.

Does glfs_read() return -1? If so, this seems to be in line with expected 
behavior. Is there any reason for your application to send a read on an fd 
that has been opened with O_WRONLY?


> but if I call “glfs_open(glfs, path, O_RDONLY);”, it returns an available
> glfs_fd_t, so what should I do to resolve this problem?
> 

Opening fd with O_RDWR | O_TRUNC seems to be the right thing to do if
both read and write operations are expected on the fd.

-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] some questions about gfapi glfs_open

2014-06-16 Thread Vijay Bellur
On 06/16/2014 08:47 AM, Gong XiaoHui wrote:
> Hi
> 
> When I use libgfapi, I need to open and write a file. I call
> ”glfs_open(glfs, path, O_WRONLY|O_TRUNC);”, which returns a glfs_fd_t.
> 
> Before I close it, there is a new request to read the same file.
> 
> I think the read request returns NULL.

Does glfs_read() return -1? If so, this seems to be in line with
expected behavior. Is there any reason for your application to send a read
on an fd that has been opened with O_WRONLY?


> but if I call “glfs_open(glfs, path, O_RDONLY);”, it returns an available
> glfs_fd_t, so what should I do to resolve this problem?
> 

Opening fd with O_RDWR | O_TRUNC seems to be the right thing to do if
both read and write operations are expected on the fd.

-Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS

2014-06-16 Thread Lalatendu Mohanty


FYI,

To fix these Coverity issues, please see the link below for how-tos and 
guidelines:


http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks,
Lala


 Original Message 
Subject:New Defects reported by Coverity Scan for GlusterFS
Date:   Sun, 15 Jun 2014 23:52:47 -0700
From:   scan-ad...@coverity.com



Hi,


Please find the latest report on new defect(s) introduced to GlusterFS found 
with Coverity Scan.

Defect(s) Reported-by: Coverity Scan
Showing 8 of 8 defect(s)


** CID 1223039:  Dereference after null check  (FORWARD_NULL)
/xlators/features/changelog/src/changelog.c: 2057 in init()

** CID 1223041:  Data race condition  (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2768 in init()

** CID 1223040:  Data race condition  (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2770 in init()

** CID 1223046:  Resource leak  (RESOURCE_LEAK)
/xlators/features/snapview-server/src/snapview-server.c: 378 in mgmt_get_snapinfo_cbk()

** CID 1223045:  Resource leak  (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 3826 in glusterd_update_fstype()

** CID 1223044:  Resource leak  (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 5503 in glusterd_snapshot_config_commit()

** CID 1223043:  Resource leak  (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-geo-rep.c: 1497 in _get_slave_status()

** CID 1223042:  Resource leak  (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-geo-rep.c: 1035 in _get_status_mst_slv()



*** CID 1223039:  Dereference after null check  (FORWARD_NULL)
/xlators/features/changelog/src/changelog.c: 2057 in init()
2051 GF_FREE (priv->changelog_brick);
2052 GF_FREE (priv->changelog_dir);
2053 if (cond_lock_init)
2054 changelog_pthread_destroy (priv);
2055 GF_FREE (priv);
2056 }

CID 1223039:  Dereference after null check  (FORWARD_NULL)
Dereferencing null pointer "this".

2057 this->private = NULL;
2058 } else
2059 this->private = priv;
2060
2061 return ret;
2062 }


*** CID 1223041:  Data race condition  (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2768 in init()
2762 goto out;
2763
2764 this->private = priv;
2765
2766 GF_OPTION_INIT ("volname", priv->volname, str, out);
2767 pthread_mutex_init (&(priv->snaplist_lock), NULL);

CID 1223041:  Data race condition  (MISSING_LOCK)
Accessing "priv->is_snaplist_done" without holding lock "svs_private.snaplist_lock". Elsewhere, 
"priv->is_snaplist_done" is accessed with "svs_private.snaplist_lock" held 2 out of 2 times.

2768 priv->is_snaplist_done = 0;
2769 priv->num_snaps = 0;
2770 snap_worker_resume = _gf_false;
2771
2772 /* get the list of snaps first to return to client xlator */
2773 ret = svs_get_snapshot_list (this);


*** CID 1223040:  Data race condition  (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2770 in init()
2764 this->private = priv;
2765
2766 GF_OPTION_INIT ("volname", priv->volname, str, out);
2767 pthread_mutex_init (&(priv->snaplist_lock), NULL);
2768 priv->is_snaplist_done = 0;
2769 priv->num_snaps = 0;

CID 1223040:  Data race condition  (MISSING_LOCK)
Accessing "snap_worker_resume" without holding lock "mutex". Elsewhere, 
"snap_worker_resume" is accessed with "mutex" held 3 out of 3 times.

2770 snap_worker_resume = _gf_false;
2771
2772 /* get the list of snaps first to return to client xlator */
2773 ret = svs_get_snapshot_list (this);
2774 if (ret) {
2775 gf_log (this->name, GF_LOG_ERROR,


*** CID 1223046:  Resource leak  (RESOURCE_LEAK)
/xlators/features/snapview-server/src/snapview-server.c: 378 in mgmt_get_snapinfo_cbk()
372 free (rsp.op_errstr);
373
374 if (myframe)
375 SVS_STACK_DESTROY (myframe);
376
377 error_out:

CID 1223046:  Resource leak  (RESOURCE_LEAK)
Variable "dirents" going out of scope leaks the storage it points to.

378 return ret;
379 }
380
381 int
382 svs_get_snapshot_list (xlator_t