Re: [Gluster-devel] Review request - change pid file location to /var/run/gluster

2016-11-14 Thread Atin Mukherjee
Patch has been reviewed with some comments.
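For context, the patch under review moves brick pid files from /var/lib/glusterd/* to /var/run/gluster. For anyone testing it locally, a stale-pidfile scan over such a directory can be sketched as below; the temporary-directory fixture and file names are purely illustrative so the sketch runs anywhere, and none of it is gluster-specific code.

```shell
#!/bin/sh
# Illustrative only: scan a pid-file directory (the patch relocates brick
# pid files to /var/run/gluster) and report which pids are still alive.
# A temp dir stands in for the real location so this runs anywhere.
piddir=$(mktemp -d)
echo $$ > "$piddir/brick-a.pid"       # our own pid: alive
echo 999999 > "$piddir/brick-b.pid"   # almost certainly stale

scan_pidfiles() {   # print liveness of every *.pid file under $1
    for f in "$1"/*.pid; do
        pid=$(cat "$f")
        if kill -0 "$pid" 2>/dev/null; then
            echo "$(basename "$f"): pid $pid is running"
        else
            echo "$(basename "$f"): pid $pid is stale"
        fi
    done
}

result=$(scan_pidfiles "$piddir")
echo "$result"
rm -rf "$piddir"
```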

On Thu, Oct 27, 2016 at 11:56 AM, Atin Mukherjee 
wrote:

> Saravana,
>
> Thank you for working on this. We'll be considering this patch for 3.10.
>
> On Thu, Oct 27, 2016 at 11:54 AM, Saravanakumar Arumugam <
> sarum...@redhat.com> wrote:
>
>> Hi,
>>
>> I have refreshed this patch (originally authored by Gaurav), addressing
>> review comments; it moves brick pid files from /var/lib/glusterd/* to
>> /var/run/gluster.
>>
>> It would be great if you could review this:
>> http://review.gluster.org/#/c/13580/
>>
>> Thank you
>>
>> Regards,
>> Saravana
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 

~ Atin (atinm)

Re: [Gluster-devel] Upstream smoke test failures

2016-11-14 Thread Atin Mukherjee
On Tue, Nov 15, 2016 at 9:04 AM, Nithya Balachandran 
wrote:

>
>
> On 14 November 2016 at 21:38, Vijay Bellur  wrote:
>
>> I would prefer that we disable dbench only if we have an owner for
>> fixing the problem and re-enabling it as part of smoke tests. Running
>> dbench seamlessly on gluster has worked for a long while and if it is
>> failing today, we need to address this regression asap.
>>
>> Does anybody have more context or clues on why dbench is failing now?
>>
> While I agree that it needs to be looked at asap, leaving it in until we
> get an owner seems rather pointless as all it does is hold up various
> patches and waste machine time. Re-triggering it multiple times so that it
> eventually passes does not add anything to the regression test processes or
> validate the patch as we know there is a problem.
>

I echo the same.
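For the record, the "re-trigger until it passes" pattern criticized above amounts to the generic wrapper below; the flaky test it drives is simulated (fails twice, then passes) so the sketch is runnable anywhere, and it shows exactly why the practice hides regressions: an intermittently failing bug still comes out green.

```shell
#!/bin/sh
# Sketch of the "re-trigger until green" anti-pattern: retry a flaky
# command up to N times and report only the final verdict.
retry() {   # retry <max_attempts> <command...>
    max=$1; shift
    attempt=1
    while [ "$attempt" -le "$max" ]; do
        if "$@"; then
            echo "passed on attempt $attempt"
            return 0
        fi
        attempt=$((attempt + 1))
    done
    echo "failed after $max attempts"
    return 1
}

# Simulated flaky test: fails on the first two calls, passes on the third.
state=$(mktemp)
echo 0 > "$state"
flaky() {
    n=$(cat "$state")
    n=$((n + 1))
    echo "$n" > "$state"
    [ "$n" -ge 3 ]
}

verdict=$(retry 5 flaky)
echo "$verdict"
rm -f "$state"
```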


>
> I would vote for removing it and assigning someone to look at it
> immediately.
>
> Regards,
> Nithya
>
> Thanks!
>> Vijay
>>
>> On Mon, Nov 14, 2016 at 3:45 AM, Nigel Babu  wrote:
>> > It looks like it. Nithya asked me to get some numbers.
>> >
>> > The first failure was on Aug 2nd [1]. Here are the monthly numbers since
>> > August:
>> >
>> > Aug: 31
>> > Sep: 59
>> > Oct: 107
>> > Nov: 43 (in 14 days)
>> >
>> > [1]: https://build.gluster.org/job/smoke/29605/consoleText
>> >
>> > On Sun, Nov 13, 2016 at 10:04:58PM -0800, Joe Julian wrote:
>> >> Does this mean race conditions are in master and are just being
>> retried until they're not hit?
>> >>
>> >> On November 13, 2016 9:33:51 PM PST, Nithya Balachandran <
>> nbala...@redhat.com> wrote:
>> >> >Hi,
>> >> >
>> >> >Our smoke tests have been failing quite frequently of late. While
>> >> >re-triggering smoke several times in order to get a +1 works
>> >> >eventually,
>> >> >this does not really help anything IMO.
>> >> >
>> >> >I believe Jeff has already proposed this earlier but can we remove the
>> >> >failing dbench tests from smoke until we figure out what is going on?
>> >
>> > --
>> > nigelb
>>
>
>
>



-- 

~ Atin (atinm)

Re: [Gluster-devel] Test failure stats

2016-11-14 Thread Niels de Vos
On Mon, Nov 14, 2016 at 04:15:41PM +0530, Nigel Babu wrote:
> Hello,
> 
> I've been working on a dashboard for test failure statistics. This is built on
> the work done by Poornima on `extras/failed-tests.py`. The site can be viewed
> at http://fstat.gluster.org. Please file any bugs against the Github project
> here: http://github.com/gluster/fstat/issues

Great, that should be very helpful!

Are these statistics from the normal patch-submission regression runs,
or from the looping job on HEAD of the master branch? If it's the former,
there might be a lot of failures for certain patches that are still works
in progress...

Thanks,
Niels



Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Gandalf Corvotempesta
On 14 Nov 2016 7:28 PM, "Joe Julian" wrote:
>
> IMHO, if a command will result in data loss, fail it. Period.
>
> It should never be OK for a filesystem to lose data. If someone wanted to
> do that with ext or xfs, they would have to format.
>

Exactly. I wrote something similar in an earlier mail.
Gluster should preserve data consistency at any cost.
If you are trying to do something destructive, it should be blocked or, at
a minimum, a confirmation should be required.

Like running fsck on a mounted FS.

Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Joe Julian
IMHO, if a command will result in data loss, fail it. Period.

It should never be OK for a filesystem to lose data. If someone wanted to do
that with ext or xfs, they would have to format.

On November 14, 2016 8:15:16 AM PST, Ravishankar N  
wrote:
>On 11/14/2016 05:57 PM, Atin Mukherjee wrote:
>> This would be a straightforward thing to implement at glusterd;
>> anyone up for it? If not, we will take this into consideration for
>> GlusterD 2.0.
>>
>> On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C 
>> > wrote:
>>
>> I think it is worth implementing a lock option.
>>
>> +1
>>
>>
>> Rafi KC
>>
>>
>> On 11/14/2016 06:12 AM, David Gossage wrote:
>>> On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson
>>> >> > wrote:
>>>
>>> As discussed recently, it is way too easy to make destructive changes
>>> to a volume, e.g. change the shard size. This can corrupt the data with
>>> no warnings, and it's all too easy to make a typo or access the wrong
>>> volume when doing 3am maintenance ...
>>>
>>> So I'd like to suggest something like the following:
>>>
>>>   gluster volume lock 
>>>
>
>
>I don't think this is a good idea. It would make more sense to give out
>verbose warnings in the individual commands themselves. A volume lock
>doesn't prevent users from unlocking and still inadvertently running
>those commands without knowing the implications. The remove-brick set of
>commands provides verbose messages nicely:
>
>$ gluster volume remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6} commit
>Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
>volume remove-brick commit: success
>Check the removed bricks to ensure all files are migrated.
>If files with data are found on the brick path, copy them via a gluster
>mount point before re-purposing the removed brick
>
>My 2 cents,
>Ravi
>
>
>>>
>>> Setting this would fail all:
>>> - setting changes
>>> - add bricks
>>> - remove bricks
>>> - delete volume
>>>
>>>   gluster volume unlock 
>>>
>>> would allow all changes to be made.
>>>
>>> Just a thought, open to alternate suggestions.
>>>
>>> Thanks
>>>
>>> +1
>>> sounds handy
>>>
>>> --
>>> Lindsay
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org 
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>> 
>>>
>>>
>>>
>>>
>>
>> -- 
>> ~ Atin (atinm)
>>
>
>
>
>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [Gluster-devel] Test failure stats

2016-11-14 Thread Vijay Bellur
On Mon, Nov 14, 2016 at 5:45 AM, Nigel Babu  wrote:
> Hello,
>
> I've been working on a dashboard for test failure statistics. This is built on
> the work done by Poornima on `extras/failed-tests.py`. The site can be viewed
> at http://fstat.gluster.org. Please file any bugs against the Github project
> here: http://github.com/gluster/fstat/issues
>

Thank you Nigel for putting this together!

One effective way of reducing spurious failures in regressions would
be to knock off the problems causing tests to fail frequently (>5 times?)
in this list. Can maintainers and component owners address frequently
failing tests in the same week that this report gets sent out? If we
can have this commitment, we will be in a much better position soon.

Regards,
Vijay


Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Ravishankar N

On 11/14/2016 05:57 PM, Atin Mukherjee wrote:
This would be a straightforward thing to implement at glusterd;
anyone up for it? If not, we will take this into consideration for
GlusterD 2.0.


On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C 
> wrote:


I think it is worth implementing a lock option.

+1


Rafi KC


On 11/14/2016 06:12 AM, David Gossage wrote:

On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson
> wrote:

As discussed recently, it is way too easy to make destructive changes
to a volume, e.g. change the shard size. This can corrupt the data with no
warnings, and it's all too easy to make a typo or access the wrong volume
when doing 3am maintenance ...

So I'd like to suggest something like the following:

  gluster volume lock 




I don't think this is a good idea. It would make more sense to give out 
verbose warnings in the individual commands themselves. A volume lock 
doesn't prevent users from unlocking and still inadvertently running 
those commands without knowing the implications. The remove-brick set of
commands provides verbose messages nicely:

$ gluster volume remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6} commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster 
mount point before re-purposing the removed brick


My 2 cents,
Ravi




Setting this would fail all:
- setting changes
- add bricks
- remove bricks
- delete volume

  gluster volume unlock 

would allow all changes to be made.

Just a thought, open to alternate suggestions.

Thanks

+1
sounds handy

--
Lindsay









--
~ Atin (atinm)




Re: [Gluster-devel] Upstream smoke test failures

2016-11-14 Thread Vijay Bellur
I would prefer that we disable dbench only if we have an owner for
fixing the problem and re-enabling it as part of smoke tests. Running
dbench seamlessly on gluster has worked for a long while and if it is
failing today, we need to address this regression asap.

Does anybody have more context or clues on why dbench is failing now?

Thanks!
Vijay

On Mon, Nov 14, 2016 at 3:45 AM, Nigel Babu  wrote:
> It looks like it. Nithya asked me to get some numbers.
>
> The first failure was on Aug 2nd [1]. Here are the monthly numbers since
> August:
>
> Aug: 31
> Sep: 59
> Oct: 107
> Nov: 43 (in 14 days)
>
> [1]: https://build.gluster.org/job/smoke/29605/consoleText
>
> On Sun, Nov 13, 2016 at 10:04:58PM -0800, Joe Julian wrote:
>> Does this mean race conditions are in master and are just being retried 
>> until they're not hit?
>>
>> On November 13, 2016 9:33:51 PM PST, Nithya Balachandran 
>>  wrote:
>> >Hi,
>> >
>> >Our smoke tests have been failing quite frequently of late. While
>> >re-triggering smoke several times in order to get a +1 works
>> >eventually,
>> >this does not really help anything IMO.
>> >
>> >I believe Jeff has already proposed this earlier but can we remove the
>> >failing dbench tests from smoke until we figure out what is going on?
>
> --
> nigelb


Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Atin Mukherjee
This would be a straightforward thing to implement at glusterd; anyone up
for it? If not, we will take this into consideration for GlusterD 2.0.
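The gating behaviour being proposed is simple to model. The sketch below is entirely hypothetical: there is no real `gluster volume lock` command, and the flag-file mechanism here only simulates how a per-volume lock could make destructive operations fail until the volume is explicitly unlocked.

```shell
#!/bin/sh
# Hypothetical model of the proposed per-volume lock. Nothing here is
# real gluster CLI; a flag file stands in for persisted glusterd state.
lockdir=$(mktemp -d)

lock_volume()   { touch "$lockdir/$1.locked"; }
unlock_volume() { rm -f "$lockdir/$1.locked"; }

destructive_op() {   # destructive_op <volname> <description...>
    vol=$1; shift
    if [ -e "$lockdir/$vol.locked" ]; then
        echo "volume $vol is locked; refusing: $*"
        return 1
    fi
    echo "volume $vol: $*"
}

lock_volume testvol
first=$(destructive_op testvol remove-brick host:/bricks/b1)   # refused
unlock_volume testvol
second=$(destructive_op testvol remove-brick host:/bricks/b1)  # allowed
echo "$first"
echo "$second"
rm -rf "$lockdir"
```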

On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C 
wrote:

> I think it is worth implementing a lock option.
>
> +1
>
>
> Rafi KC
>
> On 11/14/2016 06:12 AM, David Gossage wrote:
>
> On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> As discussed recently, it is way too easy to make destructive changes
>> to a volume, e.g. change the shard size. This can corrupt the data with no
>> warnings, and it's all too easy to make a typo or access the wrong volume
>> when doing 3am maintenance ...
>>
>> So I'd like to suggest something like the following:
>>
>>   gluster volume lock 
>>
>> Setting this would fail all:
>> - setting changes
>> - add bricks
>> - remove bricks
>> - delete volume
>>
>>   gluster volume unlock 
>>
>> would allow all changes to be made.
>>
>> Just a thought, open to alternate suggestions.
>>
>> Thanks
>>
> +1
> sounds handy
>
>> --
>> Lindsay
>>
>
>
>
>
>
>
>



-- 

~ Atin (atinm)

[Gluster-devel] Test failure stats

2016-11-14 Thread Nigel Babu
Hello,

I've been working on a dashboard for test failure statistics. This is built on
the work done by Poornima on `extras/failed-tests.py`. The site can be viewed
at http://fstat.gluster.org. Please file any bugs against the Github project
here: http://github.com/gluster/fstat/issues
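As a sketch of the kind of aggregation such a dashboard performs, the snippet below tallies failures per test from "test,status" records and lists the most-failed tests first. The input data is fabricated for illustration; the real tooling (`extras/failed-tests.py` and fstat) pulls results from Jenkins instead.

```shell
#!/bin/sh
# Illustrative aggregation only: count FAIL records per test and sort by
# failure count, descending. Input records are made up for the example.
summary=$(awk -F, '$2 == "FAIL" { fails[$1]++ }
    END { for (t in fails) print fails[t], t }' <<'EOF' | sort -rn
tests/basic/afr.t,FAIL
tests/basic/afr.t,PASS
tests/basic/afr.t,FAIL
tests/bugs/dht.t,FAIL
tests/basic/quota.t,PASS
EOF
)
echo "$summary"
```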

--
nigelb