Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Sobhan Samantaray


- Original Message -
From: "Varun Shastry" 
To: "Sobhan Samantaray" , ana...@redhat.com
Cc: gluster-devel@gluster.org, "gluster-users" , 
"Anand Avati" 
Sent: Thursday, May 8, 2014 11:16:11 AM
Subject: Re: [Gluster-users] User-serviceable snapshots design

Hi Sobhan,

On Wednesday 07 May 2014 09:12 PM, Sobhan Samantaray wrote:
> I think it's a good idea to include auto-removal of snapshots based on a
> time or space threshold, as mentioned in the link below.
>
> http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/


I think this feature is already implemented (partially?) as part of the 
snapshot feature.


The feature proposed here only concentrates on the user serviceability 
of the snapshots taken.

I understand that it would be part of the core snapshot feature. I was talking 
with respect to Paul's suggestion of scheduling snapshots based on threshold 
levels, which will be part of phase-2.
 
- Varun Shastry

>
> - Original Message -
> From: "Anand Subramanian" 
> To: "Paul Cuzner" 
> Cc: gluster-devel@gluster.org, "gluster-users" , 
> "Anand Avati" 
> Sent: Wednesday, May 7, 2014 7:50:30 PM
> Subject: Re: [Gluster-users] User-serviceable snapshots design
>
> Hi Paul, that is definitely doable and a very nice suggestion. It is just 
> that we probably won't be able to get to that in the immediate code drop 
> (what we like to call phase-1 of the feature). But yes, let us try to 
> implement what you suggest for phase-2. Soon :-)
>
> Regards,
> Anand
>
> On 05/06/2014 07:27 AM, Paul Cuzner wrote:
>
>
>
> Just one question relating to thoughts around how you apply a filter to the 
> snapshot view from a user's perspective.
>
> In the "considerations" section, it states - "We plan to introduce a 
> configurable option to limit the number of snapshots visible under the USS 
> feature."
> Would it not be possible to take the metadata from the snapshots to form a 
> tree hierarchy when the number of snapshots present exceeds a given 
> threshold, effectively organising the snaps by time? I think this would work 
> better from an end-user workflow perspective.
>
> i.e.
> .snaps
> \/ Today
> +-- snap01_20140503_0800
+-- snap02_20140503_1400
>> Last 7 days
>> 7-21 days
>> 21-60 days
>> 60-180 days
>> 180 days
>
>
>
>
>
>
> From: "Anand Subramanian" 
> To: gluster-de...@nongnu.org , "gluster-users" 
> Cc: "Anand Avati" 
> Sent: Saturday, 3 May, 2014 2:35:26 AM
> Subject: [Gluster-users] User-serviceable snapshots design
>
> Attached is a basic write-up of the user-serviceable snapshot feature
> design (Avati's). Please take a look and let us know if you have
> questions of any sort...
>
> We have a basic implementation up now; reviews and upstream commit
> should follow very soon over the next week.
>
> Cheers,
> Anand
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Varun Shastry

Hi Sobhan,

On Wednesday 07 May 2014 09:12 PM, Sobhan Samantaray wrote:

I think it's a good idea to include auto-removal of snapshots based on a 
time or space threshold, as mentioned in the link below.

http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/



I think this feature is already implemented (partially?) as part of the 
snapshot feature.


The feature proposed here only concentrates on the user serviceability 
of the snapshots taken.


- Varun Shastry



- Original Message -
From: "Anand Subramanian" 
To: "Paul Cuzner" 
Cc: gluster-devel@gluster.org, "gluster-users" , "Anand 
Avati" 
Sent: Wednesday, May 7, 2014 7:50:30 PM
Subject: Re: [Gluster-users] User-serviceable snapshots design

Hi Paul, that is definitely doable and a very nice suggestion. It is just that 
we probably won't be able to get to that in the immediate code drop (what we 
like to call phase-1 of the feature). But yes, let us try to implement what you 
suggest for phase-2. Soon :-)

Regards,
Anand

On 05/06/2014 07:27 AM, Paul Cuzner wrote:



Just one question relating to thoughts around how you apply a filter to the 
snapshot view from a user's perspective.

In the "considerations" section, it states - "We plan to introduce a configurable 
option to limit the number of snapshots visible under the USS feature."
Would it not be possible to take the metadata from the snapshots to form a 
tree hierarchy when the number of snapshots present exceeds a given threshold, 
effectively organising the snaps by time? I think this would work better from 
an end-user workflow perspective.

i.e.
.snaps
\/ Today
+-- snap01_20140503_0800
+-- snap02_20140503_1400

Last 7 days
7-21 days
21-60 days
60-180 days
180 days
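Paul's age-bucketed view could be sketched roughly like this (a hypothetical illustration only, not part of the proposed design; the `snapNN_YYYYMMDD_HHMM` naming scheme and the bucket labels are assumed from the example above):

```python
from datetime import datetime

# Hypothetical age buckets: (upper bound in days, label).
BUCKETS = [(1, "Today"), (7, "Last 7 days"), (21, "7-21 days"),
           (60, "21-60 days"), (180, "60-180 days"),
           (float("inf"), "180+ days")]

def bucket_snapshots(names, now=None):
    """Group snapshot names like 'snap01_20140503_0800' by age."""
    now = now or datetime.now()
    grouped = {label: [] for _, label in BUCKETS}
    for name in names:
        # Parse the trailing YYYYMMDD and HHMM fields from the name.
        _, date_s, time_s = name.rsplit("_", 2)
        taken = datetime.strptime(date_s + time_s, "%Y%m%d%H%M")
        age_days = (now - taken).days
        for limit, label in BUCKETS:
            if age_days < limit:
                grouped[label].append(name)
                break
    return grouped
```

A directory view could then expose one virtual subdirectory of .snaps per non-empty bucket.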







From: "Anand Subramanian" 
To: gluster-de...@nongnu.org , "gluster-users" 
Cc: "Anand Avati" 
Sent: Saturday, 3 May, 2014 2:35:26 AM
Subject: [Gluster-users] User-serviceable snapshots design

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

We have a basic implementation up now; reviews and upstream commit
should follow very soon over the next week.

Cheers,
Anand

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Whats meaning about the inode link in fuse kernel such as fuse_direntplus_link

2014-05-07 Thread Jianguo Bao
Dear,

While reading the fuse kernel code and glusterfs, I keep coming across the
term 'inode link'. What is its exact meaning?

Thanks

-- 
_

Baul
e-mail: roidi...@gmail.com
_
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS and the logging framework

2014-05-07 Thread Nithya Balachandran
Thanks Vijay. I will go ahead with approach 2.

Regards,
Nithya

- Original Message -
From: "Vijay Bellur" 
To: "Nithya Balachandran" 
Cc: "Dan Lambright" , "gluster-users" 
, gluster-devel@gluster.org
Sent: Wednesday, 7 May, 2014 2:34:49 PM
Subject: Re: [Gluster-devel] GlusterFS and the logging framework

On 05/07/2014 10:21 AM, Nithya Balachandran wrote:
> We have had some feedback/concerns raised regarding not including the 
> messages in the header file. Some external products do include the message 
> strings in the header files, which helps with documentation as well as easier 
> editing.

Is there more detail on the concerns being raised? For documentation 
ease, we can evolve a script to generate a consolidated file of all 
messages in a component. The consolidated file can then be subject to 
i18n etc. in the future.

From a developer perspective, editing a message would involve an 
additional git grep for the message - it shouldn't be too hard?
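As a rough illustration of the "approach #2" discussed in this thread — stable message IDs kept in one central place, free-form strings left at the call site — here is a hypothetical sketch (the IDs, names, and format are invented for illustration, not GlusterFS's actual logging macros):

```python
import logging

# Hypothetical message-ID "classes", kept in one central module
# (the analogue of a master header file); only the IDs live here.
GF_MSG_VOLUME_START_FAILED = 106001
GF_MSG_BRICK_DISCONNECTED = 106002

def gf_log(component, level, msgid, fmt, *args):
    """Log '[component] [MSGID: n] text': the ID is stable and easy to
    document or grep; the text stays free-form and editable in place."""
    line = "[%s] [MSGID: %d] %s" % (component, msgid, fmt % args)
    logging.log(level, "%s", line)
    return line
```

Multiple call sites may share one ID with different strings, which is exactly what makes editing a message a local change.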


>
> Does anyone have any thoughts on this? The advantages are listed above. 
> Disadvantages were listed in earlier emails. If we decide to include messages 
> in the header file, we will need to consolidate all messages that fall into 
> various classes and come up with a single format string - currently there 
> seem to be too many messages that mean the same thing but use different 
> formats to say it.


I suggest we finalize an approach and go ahead with implementation. My 
obvious preference at this point in time is approach #2 described 
earlier in this thread. In scenarios like this where there are multiple 
options and there is no obvious winner, it is always better to implement 
an approach and listen to feedback from the intended audience of the 
feature. That will let us know whether we are on the right track or not.

Regards,
Vijay

>
>
> Regards,
> Nithya
>
> - Original Message -
> From: "Vijay Bellur" 
> To: "Dan Lambright" , "Nithya Balachandran" 
> 
> Cc: "gluster-users" , gluster-devel@gluster.org
> Sent: Thursday, 1 May, 2014 1:31:04 PM
> Subject: Re: [Gluster-devel] GlusterFS and the logging framework
>
> On 05/01/2014 04:07 AM, Dan Lambright wrote:
>> Hello,
>>
>> In a previous job, an engineer in our storage group modified our I/O stack 
>> logs in a manner similar to your proposal #1 (except he did not tell anyone, 
>> and did it for DEBUG messages as well as ERRORS and WARNINGS, over the 
>> weekend). Developers came to work Monday and found over a thousand log 
>> message strings had been buried in a new header file, and any new logs 
>> required a new message id, along with a new string entry in the header file.
>>
>> This did render the code harder to read. The ensuing uproar closely mirrored 
>> the arguments (1) and (2) you listed. Logs are like comments. If you move 
>> them out of the source, the code is harder to follow. And you probably want 
>> fewer message IDs than comments.
>>
>> The developer retracted his work. After some debate, his V2 solution 
>> resembled your "approach #2". Developers were once again free to use plain 
>> text strings directly in logs, but the notion of "classes" (message ID) was 
>> kept. We allowed multiple text strings to be used against a single class, 
>> and any new classes went in a master header file. The "debug" message ID 
>> class was a general purpose bucket and what most coders used day to day.
>>
>> So basically, your email sounded very familiar to me and I think your 
>> proposal #2 is on the right track.
>>
>
> +1. Proposal #2 seems to be better IMO.
>
> Thanks,
> Vijay
>
>
>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in glusterfs[master]: build: Support for unit tests using Cmockery2

2014-05-07 Thread Justin Clift
On 07/05/2014, at 8:40 PM, Luis Pabon wrote:
> I agree that this is a major issue.  Justin and I for a while tried to build 
> the regressions on different VMs (other than build.gluster.org).  I was never 
> successful in running the regression on either CentOS 6.5 or Fedora.  Once we 
> are able to run them on any VM, we can then parallelize (is that a word) the 
> regression workload over many (N) VMs.

There is working Python code (on the Forge) which kicks off the regression
tests in Rackspace.  It's been a learning process, so the code is becoming
a bit messy now and could do with a refactor... but it's functional.

  https://forge.gluster.org/glusterfs-rackspace-regression-tester

It needs a bit more work before we can use it for automated testing:

* At the moment it only compiles git HEAD for a given branch.

eg master, release-3.4, release-3.5, etc

  I need to update the code so it can be passed a Change Set #, which
  it then uses to grab the right branch + proposed patch from gerrit,
  and test them.

* Then need to hook it up to Jenkins

  This I haven't investigated, apart from knowing we can call a script
  through Jenkins to kick things off.  As per existing regression test
  kick off script.

We should also get this BZ fixed, as it impacts the regression tests
from another direction:

  https://bugzilla.redhat.com/show_bug.cgi?id=1084175

To work around this problem, keeping the regression test hostnames very
short works (eg "jc0").  Otherwise the "volume status" output
wraps wrongly and tests/bugs/bug-861542.t fails (every time).

Probably not as urgent to get done as the rackspace-regression-testing
code though.


> I like your stage-1 idea.  In previous jobs we had a script called 
> "presubmit.sh" which did all you have described there.  I'm not sure if 
> forcing developers is a good idea, though.  I think that if we shape up 
> Jenkins to do the right thing, with the stages implemented there (and run 
> optionally by the developers -- I would like to run them before I submit), 
> then this issue would be resolved.

Yeah.  Though we *must* also make our regression tests run reliably.

The problems found running regression tests in Rackspace are very likely
not Rackspace specific.  Every test failure I've looked at in depth (so
far) has turned out to be a bug either in GlusterFS or in the test
itself.

If we can't get the tests to run reliably, then we're going to have
to do silly things like "run each test up to 3 times, if any of them
pass 100% then report back SUCCESS, else report back FAILURE".  While
it'd probably technically work for a while, it'd also be kind of
unbelievably lousy.  (but if that's what it takes for now... ) ;)
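The "run each test up to 3 times" stopgap described above might look like the following (a sketch only; the runner and the idea of per-test `.t` scripts are assumptions based on this thread, and it knowingly masks flaky failures):

```python
import subprocess

def run_with_retries(test_path, attempts=3):
    """Run one regression test script up to `attempts` times and report
    SUCCESS if any run exits 0 -- the admittedly lousy stopgap of
    papering over flaky tests until they are made reliable."""
    for _ in range(attempts):
        if subprocess.call(["bash", test_path]) == 0:
            return "SUCCESS"
    return "FAILURE"
```

A genuinely broken patch still fails all three runs, so the gate keeps some value even while flakiness is tolerated.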


> On 05/07/2014 03:27 PM, Harshavardhana wrote:

>> stage-1 tests - runs on the Author's laptop (i.e Linux) - git hook
>> perhaps which runs for each ./rfc.sh (reports build issues, other
>> apparent compilation problems, segfaults on init etc.)
>> This could comprise of
>> - smoke.sh
>> - 'make -j16, make -j32' for parallel build test
>> - Unittests

My understanding is that patch authors are already supposed to run the
full regression test before submitting a patch using rfc.sh.

It doesn't seem to be happening consistently though.  One of the problems
with doing that is that it pretty much ties up the patch author's laptop
until it's finished, unless it's run in a VM or something (recommended).



>> On Wed, May 7, 2014 at 12:00 PM, Luis Pabon (Code Review)
>>  wrote:

> 
>>> Good point, but unit tests take no more time to compile, and only take 0.55 
>>> secs to run all of them (at the moment).  Is this really an issue?

For 0.55 seconds, not really.  Was just mentioning the principle. ;)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] OS X porting merged

2014-05-07 Thread Justin Clift
On 05/05/2014, at 8:57 AM, Bernhard Glomm wrote:
> fwiw
> 
> I use cfengine to automate MAC, (don't like messing with launchd and
> 
> shell scripts anymore ;-)
> 
> http://www.cfengineers.net/downloads/cfengine-community-packages/

That's a good point.  Once GlusterFS is definitely happy on OSX,
it'd be a good idea to use something like cfengine, puppet, or
ansible.

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] OS X porting merged

2014-05-07 Thread Dan Mons
> Anyways moving on - can you open a bug for this? This is an interesting issue;
> perhaps `mac-compat` could be a real culprit here.

Done.  Bug 1095525.

-Dan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Volume Create Failed

2014-05-07 Thread Yang Ye
Have you tried using a name other than snoopy?
 On 6 May 2014 11:25, "Cary Tsai"  wrote:

> # gluster peer status
> Number of Peers: 3
>
> Hostname: us-east-2
> Uuid: 3b102df3-74a7-4794-b300-b93bccfe8072
> State: Peer in Cluster (Connected)
>
> Hostname: us-west-1
> Uuid: 98906a76-dd5b-4db9-99d5-1d51b1ee3d2a
> State: Peer in Cluster (Connected)
>
> Hostname: us-west-2
> Uuid: 16eff965-ec88-4d12-adea-8512350bdaa7
> State: Peer in Cluster (Connected)
>
> # gluster volume  create  snoopy replica 4 transport tcp 192.168.255.5:/brick1
> us-east-2:/brick1 us-west-1:/brick1 us-west-2:/brick1 force
> volume create: snoopy: failed
> ---
> When I check the debug log, /var/log/glusterfs/cli.log , it shows:
>
> [2014-05-06 00:17:29.988414] W [rpc-transport.c:175:rpc_transport_load]
> 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
> [2014-05-06 00:17:29.988909] I [socket.c:3480:socket_init] 0-glusterfs:
> SSL support is NOT enabled
> [2014-05-06 00:17:29.988930] I [socket.c:3495:socket_init] 0-glusterfs:
> using system polling thread
> [2014-05-06 00:17:30.022545] I
> [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate cluster
> type found. Checking brick order.
> [2014-05-06 00:17:30.022706] I
> [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
> [2014-05-06 00:17:30.273942] I
> [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to create
> volume
> [2014-05-06 00:17:30.274027] I [input.c:36:cli_batch] 0-: Exiting with: -1
>
> What did I do wrong? Are there more details I can read to figure out why my
> volume create failed?
> Thanks
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in glusterfs[master]: build: Support for unit tests using Cmockery2

2014-05-07 Thread Luis Pabon
I agree that this is a major issue.  Justin and I for a while tried to 
build the regressions on different VMs (other than build.gluster.org).  
I was never successful in running the regression on either CentOS 6.5 or 
Fedora.  Once we are able to run them on any VM, we can then parallelize 
(is that a word) the regression workload over many (N) VMs.


I like your stage-1 idea.  In previous jobs we had a script called 
"presubmit.sh" which did all you have described there.  I'm not sure if 
forcing developers is a good idea, though.  I think that if we shape up 
Jenkins to do the right thing, with the stages implemented there (and 
run optionally by the developers -- I would like to run them before I 
submit), then this issue would be resolved.


- Luis

On 05/07/2014 03:27 PM, Harshavardhana wrote:

This has been really bothering me a bit as our queues are getting
bigger and bigger upstream to even get smallest of the patches to get
fixed quickly.

In these scenarios, could decentralized regression testing be made mandatory?

stage-1 tests - runs on the Author's laptop (i.e Linux) - git hook
perhaps which runs for each ./rfc.sh (reports build issues, other
apparent compilation problems, segfaults on init etc.)
 This could comprise of
 - smoke.sh
 - 'make -j16, make -j32' for parallel build test
 - Unittests

stage-2 tests - run on the initial review post.
 - build rpms EL5, EL6, FC20, future
 - mockbuild
 - ./tests/basic/*
 - any others?

stage-3 tests - run on the final Verification process.
 - full blown ./tests/bugs/*

Currently, if you look at the regression test suite, it is getting bigger
and bigger (as is our overall regression test completion time). Just a
thought, since simple build failures, compilation failures and other
really simple bugs shouldn't tie up upstream servers. One can
leverage the author's laptop :-)

Don't know what you guys think?

On Wed, May 7, 2014 at 12:00 PM, Luis Pabon (Code Review)
 wrote:

Luis Pabon has posted comments on this change.

Change subject: build: Support for unit tests using Cmockery2
..


Patch Set 6:

Good point, but unit tests take no more time to compile, and only take 0.55 
secs to run all of them (at the moment).  Is this really an issue?

--
To view, visit http://review.gluster.org/7538
To unsubscribe, visit http://review.gluster.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I1b36cb1f56fd10916f9bf535e8ad080a3358289f
Gerrit-PatchSet: 6
Gerrit-Project: glusterfs
Gerrit-Branch: master
Gerrit-Owner: Luis Pabon 
Gerrit-Reviewer: Gluster Build System 
Gerrit-Reviewer: Harshavardhana 
Gerrit-Reviewer: Jeff Darcy 
Gerrit-Reviewer: Justin Clift 
Gerrit-Reviewer: Kaleb KEITHLEY 
Gerrit-Reviewer: Luis Pabon 
Gerrit-Reviewer: Rajesh Joseph 
Gerrit-Reviewer: Ravishankar N 
Gerrit-Reviewer: Vijay Bellur 
Gerrit-HasComments: No





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] OS X porting merged

2014-05-07 Thread Dan Mons
A bug reported by one of our testers:

When a symlink exists to another directory in the tree, an infinite
loop of directories occurs.

We're running OSX 10.8.5 with OSXFUSE 2.6.4
# make -v
GNU Make 3.81
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
# gcc -v
Configured with:
--prefix=/Applications/Xcode.app/Contents/Developer/usr
--with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix

Test:

# cd /prod/backups/glustertest
# mkdir -p a/b/c/d
# mkdir -p 1/2/3/4
# find `pwd`
/prod/backups/glustertest
/prod/backups/glustertest/a
/prod/backups/glustertest/a/b
/prod/backups/glustertest/a/b/c
/prod/backups/glustertest/a/b/c/d
/prod/backups/glustertest/a/b/1
/prod/backups/glustertest/1
/prod/backups/glustertest/1/2
/prod/backups/glustertest/1/2/3
/prod/backups/glustertest/1/2/3/4
# cd a/b
# ls
c
# ln -s ../../1
# ls -al
total 32
drwxr-xr-x@ 3 root wheel 50 8 May 08:52 .
drwxr-xr-x@ 3 root wheel 42 8 May 08:49 ..
lrwxrwxrwx@ 1 root wheel 7 8 May 08:52 1 -> ../../1
drwxr-xr-x@ 3 root wheel 42 8 May 08:49 c
# cd 1
# ls
2
# cd 2 ; pwd ; ls
/prod/backups/glustertest/a/b/1/2/2
2
# cd 2 ; pwd ; ls
/prod/backups/glustertest/a/b/1/2/2/2
2
# cd 2 ; pwd ; ls
/prod/backups/glustertest/a/b/1/2/2/2/2
2
# cd 2 ; pwd ; ls
/prod/backups/glustertest/a/b/1/2/2/2/2/2
2
# cd 2 ; pwd ; ls
/prod/backups/glustertest/a/b/1/2/2/2/2/2/2
2
# find `pwd`
/prod/backups/glustertest/a/b/1/2/2/2/2/2/2
/prod/backups/glustertest/a/b/1/2/2/2/2/2/2/2
/prod/backups/glustertest/a/b/1/2/2/2/2/2/2/2/3
/prod/backups/glustertest/a/b/1/2/2/2/2/2/2/2/3/4
#
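For reference, the runaway traversal above is a symlink-cycle problem: each pass through the link hands back a directory the walker has already visited. Independent of GlusterFS, a walker can terminate by remembering (st_dev, st_ino) pairs — a minimal sketch, not the fix for the FUSE bug itself:

```python
import os

def walk_no_cycles(root):
    """Depth-first walk that follows symlinks but stops when a
    (device, inode) pair repeats -- the cycle the transcript shows."""
    seen = set()
    def _walk(path):
        st = os.stat(path)          # os.stat follows symlinks
        key = (st.st_dev, st.st_ino)
        if key in seen:
            return                  # already visited: symlink cycle
        seen.add(key)
        yield path
        for entry in sorted(os.listdir(path)):
            child = os.path.join(path, entry)
            if os.path.isdir(child):
                yield from _walk(child)
    return list(_walk(root))
```

The same bookkeeping is what `find -L` and similar tools use to avoid looping forever.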

-Dan



Dan Mons
Unbreaker of broken things
Cutting Edge
http://cuttingedge.com.au


On 5 May 2014 17:57, Bernhard Glomm  wrote:
> For b), I'm not sure yet. Might have to do get the Mac moved into
> a DMZ. Or we could get the Mac to initiate connection to Jenkins
> via cronjob or something to query for any new job to run.
>
>
> fwiw
>
> I use cfengine to automate MAC, (don't like messing with launchd and
>
> shell scripts anymore ;-)
>
> http://www.cfengineers.net/downloads/cfengine-community-packages/
>
>
> hth
>
>
> b
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in glusterfs[master]: build: Support for unit tests using Cmockery2

2014-05-07 Thread Harshavardhana
This has been really bothering me a bit as our queues are getting
bigger and bigger upstream to even get smallest of the patches to get
fixed quickly.

In these scenarios, could decentralized regression testing be made mandatory?

stage-1 tests - runs on the Author's laptop (i.e Linux) - git hook
perhaps which runs for each ./rfc.sh (reports build issues, other
apparent compilation problems, segfaults on init etc.)
This could comprise of
- smoke.sh
- 'make -j16, make -j32' for parallel build test
- Unittests

stage-2 tests - run on the initial review post.
- build rpms EL5, EL6, FC20, future
- mockbuild
- ./tests/basic/*
- any others?

stage-3 tests - run on the final Verification process.
- full blown ./tests/bugs/*

Currently, if you look at the regression test suite, it is getting bigger
and bigger (as is our overall regression test completion time). Just a
thought, since simple build failures, compilation failures and other
really simple bugs shouldn't tie up upstream servers. One can
leverage the author's laptop :-)

Don't know what you guys think?
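The stage-1 gate could be wired into ./rfc.sh roughly like this (purely illustrative; smoke.sh and the unit-test target are assumed names from the list above, not actual repo paths):

```python
import subprocess

# Hypothetical stage-1 checks from the proposal above; the exact
# commands are assumptions, not the project's real scripts.
STAGE1 = [
    ["bash", "smoke.sh"],
    ["make", "-j16"],
    ["make", "check"],  # unit tests
]

def run_stage1(commands=STAGE1):
    """Run each check in order, stopping at the first failure, so a
    submit hook can refuse a patch that doesn't even build."""
    for cmd in commands:
        if subprocess.call(cmd) != 0:
            return False
    return True
```

A git pre-push hook (or ./rfc.sh itself) would simply abort when run_stage1() returns False.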

On Wed, May 7, 2014 at 12:00 PM, Luis Pabon (Code Review)
 wrote:
> Luis Pabon has posted comments on this change.
>
> Change subject: build: Support for unit tests using Cmockery2
> ..
>
>
> Patch Set 6:
>
> Good point, but unit tests take no more time to compile, and only take 0.55 
> secs to run all of them (at the moment).  Is this really an issue?
>
> --
> To view, visit http://review.gluster.org/7538
> To unsubscribe, visit http://review.gluster.org/settings
>
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I1b36cb1f56fd10916f9bf535e8ad080a3358289f
> Gerrit-PatchSet: 6
> Gerrit-Project: glusterfs
> Gerrit-Branch: master
> Gerrit-Owner: Luis Pabon 
> Gerrit-Reviewer: Gluster Build System 
> Gerrit-Reviewer: Harshavardhana 
> Gerrit-Reviewer: Jeff Darcy 
> Gerrit-Reviewer: Justin Clift 
> Gerrit-Reviewer: Kaleb KEITHLEY 
> Gerrit-Reviewer: Luis Pabon 
> Gerrit-Reviewer: Rajesh Joseph 
> Gerrit-Reviewer: Ravishankar N 
> Gerrit-Reviewer: Vijay Bellur 
> Gerrit-HasComments: No



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Volume Create Failed

2014-05-07 Thread Cary Tsai
Does the volume name really matter?
Not to mention snoopy is a trademark;
the chance is small that glusterfs would use it internally.
I can try another name.
Thanks
Cary


On Wed, May 7, 2014 at 5:15 PM, Yang Ye  wrote:

> Have you tried using a name other than snoopy?
>  On 6 May 2014 11:25, "Cary Tsai"  wrote:
>
>> # gluster peer status
>> Number of Peers: 3
>>
>> Hostname: us-east-2
>> Uuid: 3b102df3-74a7-4794-b300-b93bccfe8072
>> State: Peer in Cluster (Connected)
>>
>> Hostname: us-west-1
>> Uuid: 98906a76-dd5b-4db9-99d5-1d51b1ee3d2a
>> State: Peer in Cluster (Connected)
>>
>> Hostname: us-west-2
>> Uuid: 16eff965-ec88-4d12-adea-8512350bdaa7
>> State: Peer in Cluster (Connected)
>>
>> # gluster volume  create  snoopy replica 4 transport tcp 
>> 192.168.255.5:/brick1
>> us-east-2:/brick1 us-west-1:/brick1 us-west-2:/brick1 force
>> volume create: snoopy: failed
>> ---
>> When I check the debug log, /var/log/glusterfs/cli.log , it shows:
>>
>> [2014-05-06 00:17:29.988414] W [rpc-transport.c:175:rpc_transport_load]
>> 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
>> [2014-05-06 00:17:29.988909] I [socket.c:3480:socket_init] 0-glusterfs:
>> SSL support is NOT enabled
>> [2014-05-06 00:17:29.988930] I [socket.c:3495:socket_init] 0-glusterfs:
>> using system polling thread
>> [2014-05-06 00:17:30.022545] I
>> [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate cluster
>> type found. Checking brick order.
>> [2014-05-06 00:17:30.022706] I
>> [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
>> [2014-05-06 00:17:30.273942] I
>> [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to create
>> volume
>> [2014-05-06 00:17:30.274027] I [input.c:36:cli_batch] 0-: Exiting with: -1
>>
>> What did I do wrong? Are there more details I can read to figure out why my
>> volume create failed?
>> Thanks
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] OS X porting merged

2014-05-07 Thread Harshavardhana
> We're running OSX 10.8.5 with OSXFUSE 2.6.4
> # make -v
> GNU Make 3.81
> Copyright (C) 2006  Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.
> There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
> PARTICULAR PURPOSE.
> This program built for i386-apple-darwin11.3.0

Looks like your clang version makes glusterfs work on OSX even without
suppressing optimization flags; as expected, it would seem like a compiler issue.

Anyways moving on - can you open a bug for this? This is an interesting issue;
perhaps `mac-compat` could be a real culprit here.

I will take a look at it this week.
-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Ira Cooper
Anand, I also have a concern regarding the user-serviceable snapshot feature.

You rightfully call out the lack of scaling caused by maintaining the gfid -> 
gfid mapping tables, and correctly point out that this will limit the use cases 
this feature will be applicable to, on the client side.

If in fact gluster generates its gfids randomly, and has always done so, I 
propose that we can change the algorithm used to determine the mapping, to 
eliminate the lack of scaling of our solution.

We can create a fixed constant per-snapshot.  (Can be in just the client's 
memory, or stored on disk, that is an implementation detail here.)  We will 
call this constant "n".

I propose we just add the constant to the gfid to determine the new gfid.  It 
turns out that this new gfid has the same chance of collision as any random 
gfid.  (It will take a moment for you to convince yourself of this, but the 
argument is fairly intuitive.)  If we do this, I'd suggest we do it on the 
first 32 bits of the gfid, because we can use simple unsigned math and let it 
just overflow.  (If we get up to 2^32 snapshots, we can revisit this aspect of 
the design, but we'll have other issues at that number.)

By using addition this way, we also allow for subtraction to be used for a 
later purpose.

Note: This design relies on our random gfid generator not turning out a linear 
range of numbers.  If it has in the past, or will in the future, clearly this 
design has flaws.  But I know of no such plans.  As long as the randomness is 
sufficient, there should be no issue (i.e. it doesn't turn out linear results).
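A minimal sketch of the arithmetic (illustrative only; the function names are invented, and "first 32 bits" is taken to mean the leading four bytes of the 16-byte gfid in big-endian order):

```python
import uuid

MASK32 = 0xFFFFFFFF

def snap_gfid(gfid, n):
    """Map a parent-volume gfid to its snapshot gfid by adding the
    per-snapshot constant n to the first 32 bits, letting it wrap."""
    b = bytearray(gfid.bytes)
    top = int.from_bytes(b[:4], "big")
    b[:4] = ((top + n) & MASK32).to_bytes(4, "big")
    return uuid.UUID(bytes=bytes(b))

def parent_gfid(gfid, n):
    """Inverse mapping: subtract n (mod 2^32) from the first 32 bits."""
    return snap_gfid(gfid, (-n) & MASK32)
```

Because the map is a bijection on the first 32 bits, no per-file gfid table is needed, which is the scaling win being argued for.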

Thanks,

-Ira / ira@(redhat.com|samba.org)

PS: +1 to Jeff here.  He's spotting major issues that should be looked at, 
beyond the issue above.

- Original Message -
> > Attached is a basic write-up of the user-serviceable snapshot feature
> > design (Avati's). Please take a look and let us know if you have
> > questions of any sort...
> 
> A few.
> 
> The design creates a new type of daemon: snapview-server.
> 
> * Where is it started?  One server (selected how) or all?
> 
> * How do clients find it?  Are we dynamically changing the client
>   side graph to add new protocol/client instances pointing to new
>   snapview-servers, or is snapview-client using RPC directly?  Are
>   the snapview-server ports managed through the glusterd portmapper
>   interface, or patched in some other way?
> 
> * Since a snap volume will refer to multiple bricks, we'll need
>   more brick daemons as well.  How are *those* managed?
> 
> * How does snapview-server manage user credentials for connecting
>   to snap bricks?  What if multiple users try to use the same
>   snapshot at the same time?  How does any of this interact with
>   on-wire or on-disk encryption?
> 
> I'm sure I'll come up with more later.  Also, next time it might
> be nice to use the upstream feature proposal template *as it was
> designed* to make sure that questions like these get addressed
> where the whole community can participate in a timely fashion.
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Jeff Darcy
> Attached is a basic write-up of the user-serviceable snapshot feature
> design (Avati's). Please take a look and let us know if you have
> questions of any sort...

A few.

The design creates a new type of daemon: snapview-server.

* Where is it started?  One server (selected how) or all?

* How do clients find it?  Are we dynamically changing the client
  side graph to add new protocol/client instances pointing to new
  snapview-servers, or is snapview-client using RPC directly?  Are
  the snapview-server ports managed through the glusterd portmapper
  interface, or patched in some other way?

* Since a snap volume will refer to multiple bricks, we'll need
  more brick daemons as well.  How are *those* managed?

* How does snapview-server manage user credentials for connecting
  to snap bricks?  What if multiple users try to use the same
  snapshot at the same time?  How does any of this interact with
  on-wire or on-disk encryption?

I'm sure I'll come up with more later.  Also, next time it might
be nice to use the upstream feature proposal template *as it was
designed* to make sure that questions like these get addressed
where the whole community can participate in a timely fashion.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Sobhan Samantaray
I think it's a good idea to include auto-removal of snapshots based on time 
or space thresholds, as mentioned in the link below.

http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/

- Original Message -
From: "Anand Subramanian" 
To: "Paul Cuzner" 
Cc: gluster-devel@gluster.org, "gluster-users" , 
"Anand Avati" 
Sent: Wednesday, May 7, 2014 7:50:30 PM
Subject: Re: [Gluster-users] User-serviceable snapshots design

Hi Paul, that is definitely doable and a very nice suggestion. It is just that 
we probably won't be able to get to that in the immediate code drop (what we 
like to call phase-1 of the feature). But yes, let us try to implement what you 
suggest for phase-2. Soon :-) 

Regards, 
Anand 

On 05/06/2014 07:27 AM, Paul Cuzner wrote: 



Just one question relating to thoughts around how you apply a filter to the 
snapshot view from a user's perspective. 

In the "considerations" section, it states - "We plan to introduce a 
configurable option to limit the number of snapshots visible under the USS 
feature." 
Would it not be possible to take the metadata from the snapshots to form a 
tree hierarchy when the number of snapshots present exceeds a given threshold, 
effectively organising the snaps by time? I think this would work better from 
an end-user workflow perspective. 

i.e. 
.snaps 
\/ Today 
+-- snap01_20140503_0800 
+-- snap02_20140503_1400 
> Last 7 days 
> 7-21 days 
> 21-60 days 
> 60-180 days 
> 180 days 
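The age-bucket layout sketched above could be computed purely from snapshot timestamps. The following Python sketch is an illustration only, not part of the proposed design; it assumes snapshot names follow the snapNN_YYYYMMDD_HHMM pattern used in the example.

```python
from datetime import datetime

# Age buckets matching the example layout above: (upper bound in days, label).
BUCKETS = [(1, "Today"), (7, "Last 7 days"), (21, "7-21 days"),
           (60, "21-60 days"), (180, "60-180 days"), (float("inf"), "180+ days")]

def bucket_for(snap_name: str, now: datetime) -> str:
    """Place a snapshot named snapNN_YYYYMMDD_HHMM into an age bucket."""
    _, date_s, time_s = snap_name.split("_")
    taken = datetime.strptime(date_s + time_s, "%Y%m%d%H%M")
    age_days = (now - taken).total_seconds() / 86400
    for limit, label in BUCKETS:
        if age_days < limit:
            return label
    return BUCKETS[-1][1]

print(bucket_for("snap01_20140503_0800", datetime(2014, 5, 6)))  # prints "Last 7 days"
```

A snapview-style directory listing could then group snapshot entries under these bucket labels instead of showing one flat list.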







From: "Anand Subramanian"  
To: gluster-de...@nongnu.org , "gluster-users"  
Cc: "Anand Avati"  
Sent: Saturday, 3 May, 2014 2:35:26 AM 
Subject: [Gluster-users] User-serviceable snapshots design 

Attached is a basic write-up of the user-serviceable snapshot feature 
design (Avati's). Please take a look and let us know if you have 
questions of any sort... 

We have a basic implementation up now; reviews and upstream commit 
should follow very soon over the next week. 

Cheers, 
Anand 

___ 
Gluster-users mailing list 
gluster-us...@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 



___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Anand Subramanian
Hi Paul, that is definitely doable and a very nice suggestion. It is 
just that we probably won't be able to get to that in the immediate code 
drop (what we like to call phase-1 of the feature). But yes, let us try 
to implement what you suggest for phase-2. Soon :-)


Regards,
Anand

On 05/06/2014 07:27 AM, Paul Cuzner wrote:
Just one question relating to thoughts around how you apply a filter 
to the snapshot view from a user's perspective.


In the "considerations" section, it states - "We plan to introduce a 
configurable option to limit the number of snapshots visible under the 
USS feature."
Would it not be possible to take the metadata from the snapshots to 
form a tree hierarchy when the number of snapshots present exceeds a 
given threshold, effectively organising the snaps by time? I think 
this would work better from an end-user workflow perspective.


i.e.
.snaps
  \/  Today
+-- snap01_20140503_0800
+-- snap02_20140503_1400
  > Last 7 days
> 7-21 days
> 21-60 days
> 60-180 days
> 180 days





*From: *"Anand Subramanian" 
*To: *gluster-de...@nongnu.org, "gluster-users"

*Cc: *"Anand Avati" 
*Sent: *Saturday, 3 May, 2014 2:35:26 AM
*Subject: *[Gluster-users] User-serviceable snapshots design

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

We have a basic implementation up now; reviews and upstream commit
should follow very soon over the next week.

Cheers,
Anand

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Addition of sub-maintainers

2014-05-07 Thread Shyamsundar Ranganathan
Awesome! Congratulations, folks.

Shyam

On Mon, May 5, 2014 at 7:15 AM, Vijay Bellur  wrote:
> On 04/30/2014 09:28 AM, Vijay Bellur wrote:
>
>>
>> We plan to update gerrit to provide access to sub-maintainers by the
>> end of this week (i.e. 4th May). If you have any objections, concerns or
>> feedback on this process, please feel free to provide that before then
>> on this thread or to me in person.
>>
>
> I did not receive any objections either on this thread or in person. Gerrit
> has now been updated to provide access to sub-maintainers. My
> congratulations to Krishnan Parthasarathi, Kaushal Madappa, Pranith Kumar,
> Venky Shankar, Raghavendra G & Niels de Vos on becoming sub-maintainers of
> GlusterFS.
>
> Please extend your congratulations and co-operation to the new
> sub-maintainers :).
>
> Thanks,
> Vijay
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-07 Thread Kaleb S. KEITHLEY

On 05/07/2014 10:33 AM, Kaleb S. KEITHLEY wrote:


Reminder!!!

The weekly Gluster Community meeting is in 30 mins, in #gluster-meeting
on IRC.

This is a completely public meeting, everyone is encouraged to attend
and be a part of it. :)

To add Agenda items
***

Just add them to the main text of the Google Doc, and **be at the
meeting**. :)


Short meeting, a few key people were not in attendance.

Meeting minutes here:


http://meetbot.fedoraproject.org/gluster-meeting/2014-05-07/gluster-meeting.2014-05-07-15.00.html

Full log here:


http://meetbot.fedoraproject.org/gluster-meeting/2014-05-07/gluster-meeting.2014-05-07-15.00.log.html

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-07 Thread Kaleb S. KEITHLEY

On 05/07/2014 10:33 AM, Kaleb S. KEITHLEY wrote:


Reminder!!!

The weekly Gluster Community meeting is in 8 mins, in #gluster-meeting
on IRC.


25 minutes, not 8.




This is a completely public meeting, everyone is encouraged to attend
and be a part of it. :)

To add Agenda items
***

Just add them to the main text of the Google Doc, and **be at the
meeting**. :)

Agenda - http://goo.gl/XDv6jf



--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-07 Thread Kaleb S. KEITHLEY


Reminder!!!

The weekly Gluster Community meeting is in 8 mins, in #gluster-meeting 
on IRC.


This is a completely public meeting, everyone is encouraged to attend 
and be a part of it. :)


To add Agenda items
***

Just add them to the main text of the Google Doc, and **be at the 
meeting**. :)


Agenda - http://goo.gl/XDv6jf

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Anand Subramanian

Hi Sobhan,

Thanks for the comments. It was a very quick writeup so that there is at 
least some clarity on the implementation internals. I will try to find 
time and plug in some more details.


I am not quite sure what you mean by "the default value of the option of 
uss", but I am assuming it's the option for turning the feature on/off? If 
so, this will be set to "off" by default initially. The admin would need 
to turn uss on for a given volume.


Thanks,
Anand

On 05/06/2014 10:08 PM, Sobhan Samantaray wrote:

Hi Anand,
Thanks for coming up with the nice design. I have a couple of comments.

1. The design should mention the access protocols to be used (NFS/CIFS, 
etc.), although the requirement states that.
2. Consideration section:
"Again, this is not a performance oriented feature. Rather, the goal is to allow a 
seamless user-experience by allowing easy and useful access to snapshotted volumes and 
individual data stored in those volumes".

If fops performance would not be impacted by the introduction of this 
feature, that should be clarified.

3. It would be good to mention the default value of the uss option.

Regards
Sobhan


From: "Paul Cuzner" 
To: ana...@redhat.com
Cc: gluster-devel@gluster.org, "gluster-users" , "Anand 
Avati" 
Sent: Tuesday, May 6, 2014 7:27:29 AM
Subject: Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

Just one question relating to thoughts around how you apply a filter to the 
snapshot view from a user's perspective.

In the "considerations" section, it states - "We plan to introduce a configurable 
option to limit the number of snapshots visible under the USS feature."
Would it not be possible to take the metadata from the snapshots to form a 
tree hierarchy when the number of snapshots present exceeds a given threshold, 
effectively organising the snaps by time? I think this would work better from 
an end-user workflow perspective.

i.e.
.snaps
\/ Today
+-- snap01_20140503_0800
+-- snap02_20140503_1400

Last 7 days
7-21 days
21-60 days
60-180 days
180 days







From: "Anand Subramanian" 
To: gluster-de...@nongnu.org, "gluster-users" 
Cc: "Anand Avati" 
Sent: Saturday, 3 May, 2014 2:35:26 AM
Subject: [Gluster-users] User-serviceable snapshots design

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

We have a basic implementation up now; reviews and upstream commit
should follow very soon over the next week.

Cheers,
Anand

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Community Weekly Meeting

2014-05-07 Thread Vijay Bellur
BEGIN:VCALENDAR
PRODID:Zimbra-Calendar-Provider
VERSION:2.0
METHOD:REQUEST
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:16010101T00
TZOFFSETTO:+0530
TZOFFSETFROM:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:7b49ac44-ffca-4de4-a179-f9edde80bf1e
RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=WE
SUMMARY:Gluster Community Weekly Meeting
DESCRIPTION:[Updating agenda link to point to google docs] \n\nGreetings\, \
 n\nThis is the weekly slot to discuss all aspects concerning the Gluster com
 munity. \n\nAgenda - http://goo.gl/XDv6jf \n\nPlease feel free to add your a
 genda items before the meeting. \n\nCheers\, \nVijay \n
LOCATION:#gluster-meeting on irc.freenode.net
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:gluster
 -us...@gluster.org
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:gluster
 -de...@gluster.org
ORGANIZER;CN=Vijay Bellur:mailto:vbel...@redhat.com
DTSTART;TZID="Asia/Kolkata":20131204T203000
DTEND;TZID="Asia/Kolkata":20131204T213000
STATUS:CONFIRMED
CLASS:PUBLIC
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
TRANSP:OPAQUE
LAST-MODIFIED:20140507T120523Z
DTSTAMP:20140507T120523Z
SEQUENCE:4
EXDATE;TZID="Asia/Kolkata":20131225T203000
EXDATE;TZID="Asia/Kolkata":20140101T203000
EXDATE;TZID="Asia/Kolkata":20140226T203000
BEGIN:VALARM
ACTION:DISPLAY
TRIGGER;RELATED=START:-PT5M
DESCRIPTION:Reminder
END:VALARM
END:VEVENT
END:VCALENDAR___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression tests: Should we test non-XFS too?

2014-05-07 Thread Kaleb S. KEITHLEY

On 05/06/2014 10:44 PM, B.K.Raghuram wrote:

For those of us who are toying with the idea of using ZFS as the
underlying filesystem but are hesitating only because it is not widely
tested, a regression test on ZFS would be very welcome. If there are
some issues running it at Red Hat for license reasons,


Yes, there are issues with running it at Red Hat for exactly those reasons.


 would it help if
someone outside ran the tests and reported the results periodically?


Yes, if someone were to do that I'm sure it would be appreciated.

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-07 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Vijay Bellur" , gluster-devel@gluster.org, "Anand 
> Avati" 
> Sent: Wednesday, May 7, 2014 3:42:16 PM
> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for 
> setxattr
> 
> I think with "repetitive log message suppression" patch being merged, we
> don't really need gf_log_occasionally (except if they are logged in DEBUG or
> TRACE levels).

That definitely helps. But still, setxattr calls are not supposed to fail with 
ENOTSUP on filesystems where we support gluster. If there are special keys which 
fail with ENOTSUP, could we conditionally log setxattr failures only when the 
key is something new?

Pranith
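The conditional logging suggested here -- emit the ENOTSUP message only the first time a given key fails -- could look roughly like the sketch below. This is an illustrative Python analogue only; GlusterFS's logging is C code, and the function name and key-tracking set here are hypothetical.

```python
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s")

# Keys whose ENOTSUP failure has already been reported once.
_seen_enotsup_keys: set = set()

def log_setxattr_enotsup(key: str) -> bool:
    """Log an ENOTSUP setxattr failure only on the first occurrence of a key.

    Returns True if a message was emitted, False if it was suppressed.
    """
    if key in _seen_enotsup_keys:
        return False
    _seen_enotsup_keys.add(key)
    logging.warning("setxattr on key %s failed with ENOTSUP", key)
    return True

assert log_setxattr_enotsup("trusted.glusterfs.foo") is True   # first: logged
assert log_setxattr_enotsup("trusted.glusterfs.foo") is False  # repeat: suppressed
assert log_setxattr_enotsup("trusted.glusterfs.bar") is True   # new key: logged
```

Unlike time-based occasional logging, this keeps one message per distinct key, so changing keys remain visible without flooding the log.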

> 
> - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Vijay Bellur" 
> > Cc: gluster-devel@gluster.org, "Anand Avati" 
> > Sent: Wednesday, 7 May, 2014 3:12:10 PM
> > Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for
> > setxattr
> > 
> > 
> > 
> > - Original Message -
> > > From: "Vijay Bellur" 
> > > To: "Pranith Kumar Karampuri" , "Anand Avati"
> > > 
> > > Cc: gluster-devel@gluster.org
> > > Sent: Tuesday, May 6, 2014 7:16:12 PM
> > > Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for
> > > setxattr
> > > 
> > > On 05/06/2014 01:07 PM, Pranith Kumar Karampuri wrote:
> > > > hi,
> > > >Why is there occasional logging for ENOTSUP errno when setxattr
> > > >fails?
> > > >
> > > 
> > > In the absence of occasional logging, the log files would be flooded
> > > with this message every time there is a setxattr() call.
> > 
> > How to know which keys are failing setxattr with ENOTSUPP if it is not
> > logged
> > when the key keeps changing?
> > 
> > Pranith
> > > 
> > > -Vijay
> > > 
> > > 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> > 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-07 Thread Raghavendra Gowdappa
I think with "repetitive log message suppression" patch being merged, we don't 
really need gf_log_occasionally (except if they are logged in DEBUG or TRACE 
levels).

- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Vijay Bellur" 
> Cc: gluster-devel@gluster.org, "Anand Avati" 
> Sent: Wednesday, 7 May, 2014 3:12:10 PM
> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for 
> setxattr
> 
> 
> 
> - Original Message -
> > From: "Vijay Bellur" 
> > To: "Pranith Kumar Karampuri" , "Anand Avati"
> > 
> > Cc: gluster-devel@gluster.org
> > Sent: Tuesday, May 6, 2014 7:16:12 PM
> > Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for
> > setxattr
> > 
> > On 05/06/2014 01:07 PM, Pranith Kumar Karampuri wrote:
> > > hi,
> > >Why is there occasional logging for ENOTSUP errno when setxattr fails?
> > >
> > 
> > In the absence of occasional logging, the log files would be flooded
> > with this message every time there is a setxattr() call.
> 
> How to know which keys are failing setxattr with ENOTSUPP if it is not logged
> when the key keeps changing?
> 
> Pranith
> > 
> > -Vijay
> > 
> > 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-07 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Vijay Bellur" 
> To: "Pranith Kumar Karampuri" , "Anand Avati" 
> 
> Cc: gluster-devel@gluster.org
> Sent: Tuesday, May 6, 2014 7:16:12 PM
> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for 
> setxattr
> 
> On 05/06/2014 01:07 PM, Pranith Kumar Karampuri wrote:
> > hi,
> >Why is there occasional logging for ENOTSUP errno when setxattr fails?
> >
> 
> In the absence of occasional logging, the log files would be flooded
> with this message every time there is a setxattr() call.

How can we know which keys are failing setxattr with ENOTSUP if it is not 
logged when the key keeps changing?

Pranith
> 
> -Vijay
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS and the logging framework

2014-05-07 Thread Vijay Bellur

On 05/07/2014 10:21 AM, Nithya Balachandran wrote:

We have had some feedback/concerns raised regarding not including the messages 
in the header file. Some external products do include the message strings in 
the header files, which helps with documentation as well as easier editing.


Is there more detail on the concerns being raised? For documentation 
ease, we can evolve a script to generate a consolidated file of all 
messages in a component. The consolidated file can then be subject to 
i18n etc. in the future.
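A consolidation script of the kind mentioned above could start out as simple as the sketch below. This is an illustration only, not a description of an actual GlusterFS tool: the `gf_log("domain", LEVEL, "message", ...)` call shape and the single-line string-literal assumption are mine.

```python
import re
from pathlib import Path

# Matches the message-string argument of a gf_log("domain", LEVEL, "msg", ...)
# style call.  Assumes the message literal sits on one line; a real tool would
# also need to handle multi-line and concatenated literals, and non-literal
# domain arguments such as this->name.
LOG_CALL = re.compile(
    r'\bgf_log\s*\(\s*"[^"]*"\s*,\s*[A-Za-z_]\w*\s*,\s*"((?:[^"\\]|\\.)*)"')

def collect_messages(src_dir: str) -> list:
    """Return a sorted, de-duplicated list of log messages under src_dir."""
    messages = set()
    for path in Path(src_dir).rglob("*.c"):
        for match in LOG_CALL.finditer(path.read_text(errors="ignore")):
            messages.add(match.group(1))
    return sorted(messages)
```

The generated file could then be the single artifact subjected to i18n or documentation review, without moving the strings out of the source.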


From a developer perspective, editing a message would involve an 
additional git grep for the message - it shouldn't be too hard?





Does anyone have any thoughts on this? The advantages are listed above. 
Disadvantages were listed in earlier emails. If we decide to include messages 
in the header file, we will need to consolidate all messages that fall into 
various classes and come up with a single format string - currently there seem 
to be too many messages that mean the same thing but use different foramts to 
say it.



I suggest we finalize an approach and go ahead with implementation. My 
obvious preference at this point in time is approach #2 described 
earlier in this thread. In scenarios like this where there are multiple 
options and there is no obvious winner, it is always better to implement 
an approach and listen to feedback from the intended audience of the 
feature. That will let us know whether we are on the right track or not.


Regards,
Vijay




Regards,
Nithya

- Original Message -
From: "Vijay Bellur" 
To: "Dan Lambright" , "Nithya Balachandran" 

Cc: "gluster-users" , gluster-devel@gluster.org
Sent: Thursday, 1 May, 2014 1:31:04 PM
Subject: Re: [Gluster-devel] GlusterFS and the logging framework

On 05/01/2014 04:07 AM, Dan Lambright wrote:

Hello,

In a previous job, an engineer in our storage group modified our I/O stack logs 
in a manner similar to your proposal #1 (except he did not tell anyone, and did 
it for DEBUG messages as well as ERRORS and WARNINGS, over the weekend). 
Developers came to work Monday and found over a thousand log message strings 
had been buried in a new header file, and any new logs required a new message 
id, along with a new string entry in the header file.

This did render the code harder to read. The ensuing uproar closely mirrored 
the arguments (1) and (2) you listed. Logs are like comments. If you move them 
out of the source, the code is harder to follow. And you probably want fewer 
message IDs than comments.

The developer retracted his work. After some debate, his V2 solution resembled your "approach #2". 
Developers were once again free to use plain text strings directly in logs, but the notion of 
"classes" (message ID) was kept. We allowed multiple text strings to be used against a single 
class, and any new classes went in a master header file. The "debug" message ID class was a general 
purpose bucket and what most coders used day to day.
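The "approach #2" shape described here -- a fixed set of message-ID classes in one master header, with free-form text per call site -- might look roughly like this. A hedged Python sketch for illustration; the class names, numbering, and the `gf_msg` analogue are invented, not GlusterFS's actual API.

```python
import logging

# "Master header": the only place message-ID classes are defined.  Call sites
# keep free-form text but must tag it with one of these classes.
MSGID_DEBUG_GENERAL = 1000   # general-purpose bucket for day-to-day logging
MSGID_BRICK_CONNECT = 2001
MSGID_XATTR_FAILURE = 2002

log = logging.getLogger("gluster")

def gf_msg(msgid: int, level: int, text: str) -> str:
    """Log free-form text tagged with a stable, grep-able class ID."""
    line = "[MSGID %d] %s" % (msgid, text)
    log.log(level, "%s", line)
    return line

# Multiple different strings may share one class:
gf_msg(MSGID_XATTR_FAILURE, logging.WARNING, "setxattr of key trusted.foo failed")
gf_msg(MSGID_XATTR_FAILURE, logging.WARNING, "removexattr returned ENOTSUP")
```

This keeps the "logs as comments" readability of proposal #1's critics intact while still giving support and documentation a stable ID to search on.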

So basically, your email sounded very familiar to me and I think your proposal 
#2 is on the right track.



+1. Proposal #2 seems to be better IMO.

Thanks,
Vijay





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel