Re: [Gluster-devel] [Gluster-users] What's the impact of enabling the profiler?

2014-07-22 Thread Pranith Kumar Karampuri


On 07/22/2014 11:56 AM, Joe Julian wrote:


On 07/21/2014 11:20 PM, Pranith Kumar Karampuri wrote:


On 07/22/2014 11:39 AM, Joe Julian wrote:


On 07/17/2014 07:30 PM, Pranith Kumar Karampuri wrote:


On 07/18/2014 03:05 AM, Joe Julian wrote:
What impact, if any, does starting profiling (gluster volume 
profile $vol start) have on performance?

Joe,
According to the code, the only extra things it does are calling 
gettimeofday() at the beginning and end of each FOP to calculate 
latency and incrementing some variables. So I guess not much?
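For reference, a minimal standalone sketch of that kind of accounting around a FOP; the structure and names below are illustrative only, not the actual io-stats code:

```c
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>
#include <unistd.h>

/* Illustrative per-FOP counters; io-stats keeps similar fields per fop type. */
struct fop_stats {
        uint64_t count;
        double   total_latency_us;
};

static double
elapsed_us (struct timeval *begin, struct timeval *end)
{
        return (end->tv_sec - begin->tv_sec) * 1000000.0 +
               (end->tv_usec - begin->tv_usec);
}

int
main (void)
{
        struct fop_stats stats = {0, 0.0};
        struct timeval   begin, end;

        gettimeofday (&begin, NULL);   /* taken when the FOP is wound         */
        usleep (500);                  /* stand-in for the actual FOP work    */
        gettimeofday (&end, NULL);     /* taken in the callback on completion */

        stats.count++;
        stats.total_latency_us += elapsed_us (&begin, &end);

        printf ("fops: %llu, avg latency: %.2f us\n",
                (unsigned long long) stats.count,
                stats.total_latency_us / stats.count);
        return 0;
}
```

Two gettimeofday() calls plus a couple of additions per FOP, which is why the expected overhead is small compared to network and disk latency.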




So far so good. Is the only way to clear the stats to restart the 
brick?

I think when the feature was initially proposed we wanted two things:
1) cumulative stats
2) Interval stats

Interval stats get cleared whenever 'gluster volume profile volname 
info' is executed (although counting restarts with the next set of fops 
that happen after this command execution). But there is no way to 
clear the cumulative stats. It would be nice if you could give some 
feedback about what you liked/what you think should change to make 
better use of it. So I am guessing there wasn't a big performance hit?


Pranith

No noticeable performance hit, no.

I'm writing a whitepaper on best practices for OpenStack on 
GlusterFS, so I needed some idea of how qemu actually uses the filesystem 
and what the operations are, so I can look at not only the best ways to 
tune for that use but also how to build the systems around it.


At this point, I'm just collecting data. TBH, I hadn't noticed the 
interval data. That should be perfect for this. I'll poll it in XML 
and run the numbers in a few days.

Joe,
 Do let us know your feedback. It needs some real-world usage 
suggestions from users like you :-).


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] What's the impact of enabling the profiler?

2014-07-22 Thread Kaushal M
The clear you have at top (the one for clear-stats) is for 'top
clear'. Below there is a section with GF_CLI_INFO_CLEAR, which handles
'profile info clear'.

~kaushal

On Tue, Jul 22, 2014 at 11:50 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:

 On 07/22/2014 11:43 AM, Kaushal M wrote:

 'gluster volume profile VOLNAME info clear' should clear the stats as
 well.

  From 'gluster volume help'
volume profile VOLNAME {start|info [peek|incremental
 [peek]|cumulative|clear]|stop} [nfs]

 According to the code it is clearing only stats of 'top' not of the
 'profile'

 Pranith


 On Tue, Jul 22, 2014 at 11:39 AM, Joe Julian j...@julianfamily.org wrote:

 On 07/17/2014 07:30 PM, Pranith Kumar Karampuri wrote:


 On 07/18/2014 03:05 AM, Joe Julian wrote:

 What impact, if any, does starting profiling (gluster volume profile
 $vol
 start) have on performance?

 Joe,
  According to the code the only extra things it does is calling
 gettimeofday() call at the beginning and end of the FOP to calculate
 latency, increment some variables. So I guess not much?

 So far so good. Is the only way to clear the stats to restart the brick?



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Duplicate entries and other weirdness in a 3*4 volume

2014-07-22 Thread Xavier Hernandez
On Monday 21 July 2014 13:14:46 Jeff Darcy wrote:
 Perhaps it's time to revisit the idea of making assumptions about d_off
 values and twiddling them back and forth, vs. maintaining a precise
 mapping between our values and local-FS values.
 
 http://review.gluster.org/#/c/4675/
 
 That patch is old and probably incomplete, but at the time it worked
 just as well as the one that led us into the current situation.

I think directory handling has a lot of issues, not only the problem of big 
offsets. The most important one will be scalability as the number of bricks 
grows.

Maybe we should try to find a better solution to address all these problems at 
once.

One possible solution is to convert directories into files managed by 
storage/posix (some changes will probably also be required in dht and afr). We 
will have full control over the format of this file, so we'll be able to use 
the directory offsets we want and avoid interference with upper xlators in 
readdir(p) calls. This will also allow us to optimize directory accesses and 
even minimize or solve the problem of renames.
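To make that a bit more concrete, here is a purely hypothetical sketch of what such a posix-managed directory file could store: fixed-size records whose index doubles as a stable d_off, so nothing has to be inferred from the backend filesystem's offsets. None of these structures exist in the codebase; they only illustrate the "full control over the format" point:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define DIRFILE_NAME_MAX 255

/* Hypothetical fixed-size on-disk record for one directory entry. */
struct dirfile_entry {
        unsigned char gfid[16];                   /* gfid of the entry         */
        uint32_t      type;                       /* DT_REG, DT_DIR, ...       */
        char          name[DIRFILE_NAME_MAX + 1]; /* NUL-terminated entry name */
};

/* With fixed-size records, the d_off handed to upper xlators can simply be
 * record index + 1 (0 reserved for "start of directory"): stable across
 * readdir(p) calls and independent of the backend filesystem. */
static uint64_t index_to_doff (uint64_t index) { return index + 1; }
static uint64_t doff_to_index (uint64_t d_off) { return d_off - 1; }

int
main (void)
{
        struct dirfile_entry e;

        memset (&e, 0, sizeof (e));
        strncpy (e.name, "example.txt", DIRFILE_NAME_MAX);
        e.type = 8; /* DT_REG */

        uint64_t d_off = index_to_doff (41);
        printf ("entry '%s' would be resumed at d_off %llu (record %llu)\n",
                e.name, (unsigned long long) d_off,
                (unsigned long long) doff_to_index (d_off));
        return 0;
}
```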

Additionally, this will give the same reliability to directories that files 
have (replicated or dispersed).

Obviously this is an important architectural change on the brick level, but I 
think its benefits are worth it.

Xavi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] What's the impact of enabling the profiler?

2014-07-22 Thread Kaushal M
Joe,
BTW, Vipul (in CC) is also looking to quantify the impact of enabling
io-stats counters permanently.
It'd be great if you both could help each other out.

~kaushal

On Tue, Jul 22, 2014 at 12:54 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:

 On 07/22/2014 12:07 PM, Kaushal M wrote:

 The clear you have at top (the one for clear-stats) is for 'top
 clear'. Below there is a section with GF_CLI_INFO_CLEAR, which handles
 'profile info clear'.

 Cool. Thanks :-). The stuff that is present on master for 'profile' is so
 much better :-)

 Pranith


 ~kaushal

 On Tue, Jul 22, 2014 at 11:50 AM, Pranith Kumar Karampuri
 pkara...@redhat.com wrote:

 On 07/22/2014 11:43 AM, Kaushal M wrote:

 'gluster volume profile VOLNAME info clear' should clear the stats as
 well.

   From 'gluster volume help'
 volume profile VOLNAME {start|info [peek|incremental
 [peek]|cumulative|clear]|stop} [nfs]

 According to the code it is clearing only stats of 'top' not of the
 'profile'

 Pranith

 On Tue, Jul 22, 2014 at 11:39 AM, Joe Julian j...@julianfamily.org
 wrote:

 On 07/17/2014 07:30 PM, Pranith Kumar Karampuri wrote:


 On 07/18/2014 03:05 AM, Joe Julian wrote:

 What impact, if any, does starting profiling (gluster volume profile
 $vol
 start) have on performance?

 Joe,
   According to the code the only extra things it does is calling
 gettimeofday() call at the beginning and end of the FOP to calculate
 latency, increment some variables. So I guess not much?

 So far so good. Is the only way to clear the stats to restart the
 brick?




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Developer Documentation for datastructures in gluster

2014-07-22 Thread Pranith Kumar Karampuri
Here is my first draft of the mem-pool data structure documentation for review: 
http://review.gluster.org/8343

Please don't laugh at the ascii art ;-).

Pranith
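For readers without the sources at hand, a small usage sketch of the interface the document describes. It assumes the in-tree mem-pool.h API (mem_pool_new()/mem_get0()/mem_put()/mem_pool_destroy(), with mem_put() taking only the object pointer, as in the ec code quoted elsewhere in this digest), so it is not compilable outside the GlusterFS tree:

```c
/* Sketch only: builds against the in-tree libglusterfs headers, not standalone. */
#include <stdint.h>
#include "mem-pool.h"   /* libglusterfs/src/mem-pool.h */

struct demo_obj {
        int      id;
        uint64_t payload;
};

static int
demo (void)
{
        struct mem_pool *pool = NULL;
        struct demo_obj *obj = NULL;

        /* Pre-allocate a pool of 64 demo_obj slots. */
        pool = mem_pool_new (struct demo_obj, 64);
        if (!pool)
                return -1;

        /* Take a zeroed object from the pool (the pool may fall back to a
         * normal heap allocation when it is exhausted). */
        obj = mem_get0 (pool);
        if (!obj) {
                mem_pool_destroy (pool);
                return -1;
        }
        obj->id = 1;

        mem_put (obj);            /* return the object to its pool */
        mem_pool_destroy (pool);  /* tear the pool down            */
        return 0;
}
```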

On 07/17/2014 04:10 PM, Ravishankar N wrote:

On 07/15/2014 04:39 PM, Pranith Kumar Karampuri wrote:

hi,
  Please respond if you guys volunteer to add documentation for 
any of the following things that are not already taken.


client_t - pranith
integration with statedump - pranith
mempool - Pranith

event-history + circ-buff - Raghavendra Bhat
inode - Raghavendra Bhat

call-stub
fd
iobuf
graph
xlator
option-framework
rbthash
runner-framework
stack/frame
strfd
timer
store
gid-cache(source is heavily documented)
dict
event-poll




I'll take up event-poll. I have created an etherpad link with the 
components and volunteers thus far:

https://etherpad.wikimedia.org/p/glusterdoc
Feel free to update this doc with your patch details, other components 
etc.


- Ravi


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




[Gluster-devel] Compilation issue with master branch

2014-07-22 Thread Lalatendu Mohanty


I am trying to compile master for a fresh Coverity run, but 
Coverity is complaining that the source is not compiled fully. When I did 
the same manually, I saw make returning much less output than 
in the past. I have copied the make output (along with autogen and 
configure) to http://ur1.ca/ht05t .


I would appreciate any help on this.

Below is the error returned from Coverity:

Your request for analysis of GlusterFS is failed.
Analysis status: FAILURE
Please fix the error and upload the build again.

Error details:
Build uploaded has not been compiled fully. Please fix any compilation error. 
You may have to run bin/cov-configure as described in the article on Coverity 
Community. Last few lines of cov-int/build-log.txt should indicate 85% or more 
compilation units ready for analysis

For more detail explanation on the error, please check: 
https://communities.coverity.com/message/4820


Thanks,
Lala
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Compilation issue with master branch

2014-07-22 Thread Lalatendu Mohanty

On 07/22/2014 03:07 PM, Atin Mukherjee wrote:


On 07/22/2014 03:01 PM, Lalatendu Mohanty wrote:

I am trying to compile master for doing a fresh Coverity run. But
Coverity is complaining that source is not compiled fully. When I did
the same manually, I saw make is returning very less output compared
to past. I have copied the make output (along with autogen and
configure) in http://ur1.ca/ht05t .

I will appreciate any help on this.

Below is the error returned from Coverity:

/Your request for analysis of GlusterFS is failed.
Analysis status: FAILURE
Please fix the error and upload the build again.

Could it be because of the lcmockery lib missing on the machine?


Nope, the below cmockery packages were installed during the compilation. 
Otherwise it would have failed with an error.


cmockery2-1.3.7-1.fc19.x86_64
cmockery2-devel-1.3.7-1.fc19.x86_64


Error details:
Build uploaded has not been compiled fully. Please fix any compilation error. 
You may have to run bin/cov-configure as described in the article on Coverity 
Community. Last few lines of cov-int/build-log.txt should indicate 85% or more 
compilation units ready for analysis

For more detail explanation on the error, please check: 
https://communities.coverity.com/message/4820


Thanks,
Lala


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] Duplicate entries and other weirdness in a 3*4 volume

2014-07-22 Thread Jeff Darcy
 One possible solution is to convert directories into files managed by
 storage/posix (some changes will also be required in dht and afr
 probably).  We will have full control about the format of this file,
 so we'll be able to use the directory offset that we want to avoid
 interferences with upper xlators in readdir(p) calls. This will also
 allow to optimize directory accesses and even minimize or solve the
 problem of renames.

Unfortunately, most of the problems with renames involve multiple
directories and/or multiple bricks, so changing how we store directory
information within a brick won't solve those particular problems.

 Additionally, this will give the same reliability to directories that
 files have (replicated or dispersed).

If it's within storage/posix then it's well below either replication or
dispersal.  I think there's the kernel of a good idea here, but it's
going to require changes to multiple components (and how they relate to
one another).
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-22 Thread Luis Pabón
I understand that when something is new and different, it is most likely 
blamed for anything that goes wrong. I strongly propose that we do not 
do this, and instead work to learn more about the tool.


Cmockery2 is a tool that is as important as the compiler.  It provides an 
extremely easy method to determine the quality of the software after it 
has been constructed, and therefore it has been made a requirement 
of the build.  Making it optional undermines its importance, and could 
in turn make it useless.


Cmockery2 is available for all supported EPEL/Fedora versions.  For any 
other distribution or operating system, it takes about 3 mins to 
download and compile.


Please let me know if you have any other questions.

- Luis

On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote:

On 07/21/2014 10:48 PM, Harshavardhana wrote:

Cmockery2 is a hard dependency before GlusterFS can be compiled in
upstream master now - we could make it conditional
and enable if necessary? since we know we do not have the cmockery2
packages available on all systems?


+1, we need to make it conditional and enable it if necessary.  I am 
also not sure if we have cmockery2-devel in el5, el6. If not Build 
will fail.



On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon lpa...@redhat.com wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se];
gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:

On 2014-07-21 16:17, Anders Blomdell wrote:

On 2014-07-20 16:01, Niels de Vos wrote:

On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:

Hi all,
 A few months ago, the unit test framework based on 
cmockery2 was

in the repo for a little while, then removed while we improved the
packaging method.  Now support for cmockery2 (
http://review.gluster.org/#/c/7538/ ) has been merged into the repo
again.  This will most likely require you to install cmockery2 on
your development systems by doing the following:

* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation 



Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation:
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS:
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 




When starting out writing unit tests, I would suggest writing unit
tests for non-xlator interface files when you start. Once you feel
more comfortable writing unit tests, then move to writing them for
the xlators interface files.

Awesome, many thanks! I'd like to add some unittests for the RPC and
NFS
layer. Several functions (like ip-address/netmask matching for ACLs)
look very suitable.

Did you have any particular functions in mind that you would like to
see
unittests for? If so, maybe you can file some bugs for the different
tests so that we won't forget about it? Depending on the tests, 
these

bugs may get the EasyFix keyword if there is a clear description and
some pointers to examples.
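As a rough illustration of how small such a test can be, here is a self-contained cmockery2 sketch for a hypothetical netmask-matching helper; the helper itself is made up for the example and is not the actual ACL code in the RPC/NFS layer:

```c
/* Build (assuming cmockery2-devel is installed):
 *   gcc -o netmask_test netmask_test.c -lcmockery
 */
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <cmockery/cmockery.h>   /* header path as shipped by cmockery2 */

/* Hypothetical helper under test: does addr fall inside net/prefix ? */
static int
addr_matches_netmask (const char *addr, const char *net, int prefix)
{
        struct in_addr a, n;
        uint32_t mask = prefix ? htonl (0xffffffffu << (32 - prefix)) : 0;

        if (inet_pton (AF_INET, addr, &a) != 1 ||
            inet_pton (AF_INET, net, &n) != 1)
                return 0;
        return (a.s_addr & mask) == (n.s_addr & mask);
}

static void
test_match_inside_subnet (void **state)
{
        (void) state;
        assert_true (addr_matches_netmask ("192.168.1.42", "192.168.1.0", 24));
}

static void
test_no_match_outside_subnet (void **state)
{
        (void) state;
        assert_false (addr_matches_netmask ("10.0.0.1", "192.168.1.0", 24));
}

int
main (void)
{
        const UnitTest tests[] = {
                unit_test (test_match_inside_subnet),
                unit_test (test_no_match_outside_subnet),
        };
        return run_tests (tests);
}
```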

Looks like parts of cmockery was forgotten in glusterfs.spec.in:

# rpm -q -f  `which gluster`
glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
# ldd `which gluster`
  linux-vdso.so.1 =>  (0x74dfe000)
  libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
  libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
  libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
  libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
  libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
  libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
  libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
  libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
  libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
  libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
  libcmockery.so.0 => not found
  libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
  libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
  libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
  /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)

Should I file a bug report or could someone on the fast-lane fix 
this?

My bad (installation with --nodeps --force :-()

Actually, I was not expecting a dependency on cmockery2. My
understanding was that 

Re: [Gluster-devel] [ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?

2014-07-22 Thread Itamar Heim

On 07/22/2014 04:28 AM, Vijay Bellur wrote:

On 07/21/2014 05:09 AM, Pranith Kumar Karampuri wrote:


On 07/21/2014 02:08 PM, Jiri Moskovcak wrote:

On 07/19/2014 08:58 AM, Pranith Kumar Karampuri wrote:


On 07/19/2014 11:25 AM, Andrew Lau wrote:



On Sat, Jul 19, 2014 at 12:03 AM, Pranith Kumar Karampuri
pkara...@redhat.com mailto:pkara...@redhat.com wrote:


On 07/18/2014 05:43 PM, Andrew Lau wrote:


On Fri, Jul 18, 2014 at 10:06 PM, Vijay Bellur
vbel...@redhat.com mailto:vbel...@redhat.com wrote:

[Adding gluster-devel]


On 07/18/2014 05:20 PM, Andrew Lau wrote:

Hi all,

As most of you have got hints from previous messages,
hosted engine
won't work on gluster . A quote from BZ1097639

Using hosted engine with Gluster backed storage is
currently something
we really warn against.


I think this bug should be closed or re-targeted at
documentation, because there is nothing we can do here.
Hosted engine assumes that all writes are atomic and
(immediately) available for all hosts in the cluster.
Gluster violates those assumptions.

I tried going through BZ1097639 but could not find much
detail with respect to gluster there.

A few questions around the problem:

1. Can somebody please explain in detail the scenario that
causes the problem?

2. Is hosted engine performing synchronous writes to ensure
that writes are durable?

Also, if there is any documentation that details the hosted
engine architecture that would help in enhancing our
understanding of its interactions with gluster.



Now my question, does this theory prevent a scenario of
perhaps
something like a gluster replicated volume being mounted
as a glusterfs
filesystem and then re-exported as the native kernel NFS
share for the
hosted-engine to consume? It could then be possible to
chuck ctdb in
there to provide a last resort failover solution. I have
tried myself
and suggested it to two people who are running a similar
setup. Now
using the native kernel NFS server for hosted-engine and
they haven't
reported as many issues. Curious, could anyone validate
my theory on this?


If we obtain more details on the use case and obtain gluster
logs from the failed scenarios, we should be able to
understand the problem better. That could be the first step
in validating your theory or evolving further
recommendations :).


I'm not sure how useful this is, but Jiri Moskovcak tracked
this down in an off list message.

Message Quote:

==

We were able to track it down to this (thanks Andrew for
providing the testing setup):

-b686-4363-bb7e-dba99e5789b6/ha_agent service_type=hosted-engine'
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 165, in handle
    response = "success " + self._dispatch(data)
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 261, in _dispatch
    .get_all_stats_for_service_type(**options)
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 41, in get_all_stats_for_service_type
    d = self.get_raw_stats_for_service_type(storage_dir, service_type)
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 74, in get_raw_stats_for_service_type
    f = os.open(path, direct_flag | os.O_RDONLY)
OSError: [Errno 116] Stale file handle:
'/rhev/data-center/mnt/localhost:_mnt_hosted-engine/c898fd2a-b686-4363-bb7e-dba99e5789b6/ha_agent/hosted-engine.metadata'



Andrew/Jiri,
Would it be possible to post gluster logs of both the
mount and bricks on the bz? I can take a look at it once. If I
gather nothing then probably I will ask for your help in
re-creating the issue.

Pranith


Unfortunately, I don't have the logs for that setup any more. I'll
try to replicate it when I get a chance. If I understand the comment from
the BZ, I don't think it's a gluster bug per se, more just how
gluster does its replication.

hi Andrew,
  Thanks for that. I couldn't come to any conclusions
because no
logs were available. It is unlikely that self-heal is involved because
there were no bricks going down/up according to the bug description.



Hi,
I've never had such a setup. I guessed it was a problem with gluster based on
"OSError: [Errno 116] Stale file handle:", which happens when a file
opened by an application on the client gets removed on the server. I'm pretty
sure 

Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-22 Thread Lalatendu Mohanty

On 07/22/2014 04:35 PM, Luis Pabón wrote:
I understand that when something is new and different, it is most 
likely blamed for anything wrong that happens.  I highly propose that 
we do not do this, and instead work to learn more about the tool.


Cmockery2 is a tool that is important as the compiler.  It provides an 
extremely easy method to determine the quality of the software after 
it has been constructed, and therefore it has been made to be a 
requirement of the build.  Making it optional undermines its 
importance, and could in turn make it useless.




Hey Luis,

The intention was not to undermine or give less importance to Cmockery2. 
Sorry if it looked like that.


However, I was thinking from a flexibility point of view. I am assuming 
that in future it would be part of the upstream regression test suite, so each 
patch will go through full unit testing by default. So when somebody is 
creating RPMs from pristine sources, we should be able to do that 
without Cmockery2, because the tests were already run through 
Jenkins/Gerrit.


The question is: do we need Cmockery every time we compile the glusterfs 
source? If the answer is yes, then I am fine with the current code.


Cmockery2 is available for all supported EPEL/Fedora versions.  For 
any other distribution or operating system, it takes about 3 mins to 
download and compile.


Please let me know if you have any other questions.

- Luis

On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote:

On 07/21/2014 10:48 PM, Harshavardhana wrote:

Cmockery2 is a hard dependency before GlusterFS can be compiled in
upstream master now - we could make it conditional
and enable if necessary? since we know we do not have the cmockery2
packages available on all systems?


+1, we need to make it conditional and enable it if necessary. I am 
also not sure if we have cmockery2-devel in el5, el6. If not Build 
will fail.



On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon lpa...@redhat.com wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se];
gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:

On 2014-07-21 16:17, Anders Blomdell wrote:

On 2014-07-20 16:01, Niels de Vos wrote:

On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:

Hi all,
 A few months ago, the unit test framework based on 
cmockery2 was

in the repo for a little while, then removed while we improved the
packaging method.  Now support for cmockery2 (
http://review.gluster.org/#/c/7538/ ) has been merged into the 
repo

again.  This will most likely require you to install cmockery2 on
your development systems by doing the following:

* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation 



Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation:
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS:
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 




When starting out writing unit tests, I would suggest writing unit
tests for non-xlator interface files when you start. Once you feel
more comfortable writing unit tests, then move to writing them for
the xlators interface files.
Awesome, many thanks! I'd like to add some unittests for the RPC 
and

NFS
layer. Several functions (like ip-address/netmask matching for 
ACLs)

look very suitable.

Did you have any particular functions in mind that you would 
like to

see
unittests for? If so, maybe you can file some bugs for the 
different
tests so that we won't forget about it? Depending on the tests, 
these
bugs may get the EasyFix keyword if there is a clear description 
and

some pointers to examples.

Looks like parts of cmockery was forgotten in glusterfs.spec.in:

# rpm -q -f  `which gluster`
glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
# ldd `which gluster`
  linux-vdso.so.1 =>  (0x74dfe000)
  libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
  libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
  libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
  libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
  libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
  libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
  libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
  libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
  libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
  libdl.so.2 => 

Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-22 Thread Lalatendu Mohanty

On 07/22/2014 05:22 PM, Luis Pabón wrote:

Hi Lala,
No problem at all, I just want to make sure that developers 
understand the importance of the tool.  On the topic of RPMs, they 
have a really cool section called %check, which is currently being 
used to run the unit tests after the glusterfs RPM is created.  
Normally developers test only on certain systems and certain 
architectures, but by having the %check section, we can guarantee a 
level of quality when an RPM is created on an architecture or 
operating system version which is not normally used for development.  
This actually worked really well for cmockery2 when the RPM was first 
introduced to Fedora.  The %check section ran the unit tests on two 
architectures that I do not have, and both of them found issues on 
ARM32 and s390 architectures.  Without the %check section, cmockery2 
would have been released in a state where it could not be used.  This is 
why cmockery2 is listed in the BuildRequires section.




Awesome! Now it makes perfect sense to run these unit tests during RPM 
building. Thanks Luis.





On 07/22/2014 07:34 AM, Lalatendu Mohanty wrote:

On 07/22/2014 04:35 PM, Luis Pabón wrote:
I understand that when something is new and different, it is most 
likely blamed for anything wrong that happens.  I highly propose 
that we do not do this, and instead work to learn more about the tool.


Cmockery2 is a tool that is important as the compiler.  It provides 
an extremely easy method to determine the quality of the software 
after it has been constructed, and therefore it has been made to be 
a requirement of the build.  Making it optional undermines its 
importance, and could in turn make it useless.




Hey Luis,

Th intention was not to undermine or give less importance to 
Cmockery2. Sorry if it looked like that.


However I was thinking from a flexibility point of view. I am 
assuming in future, it would be part of upstream regression test 
suite. So each patch will go through full unit testing by-default. So 
when somebody is creating RPMs from pristine sources, we should be 
able to do that without Cmockery2 because the tests were already ran 
through Jenkins/gerrit.


The question is do we need Cmockery every-time we compile glusterfs 
source? if the answer is yes, then I am fine with current code.


Cmockery2 is available for all supported EPEL/Fedora versions.  For 
any other distribution or operating system, it takes about 3 mins to 
download and compile.


Please let me know if you have any other questions.

- Luis

On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote:

On 07/21/2014 10:48 PM, Harshavardhana wrote:

Cmockery2 is a hard dependency before GlusterFS can be compiled in
upstream master now - we could make it conditional
and enable if necessary? since we know we do not have the cmockery2
packages available on all systems?


+1, we need to make it conditional and enable it if necessary.  I 
am also not sure if we have cmockery2-devel in el5, el6. If not 
Build will fail.


On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon lpa...@redhat.com 
wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se];
gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:

On 2014-07-21 16:17, Anders Blomdell wrote:

On 2014-07-20 16:01, Niels de Vos wrote:

On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:

Hi all,
 A few months ago, the unit test framework based on 
cmockery2 was
in the repo for a little while, then removed while we 
improved the

packaging method.  Now support for cmockery2 (
http://review.gluster.org/#/c/7538/ ) has been merged into 
the repo
again.  This will most likely require you to install 
cmockery2 on

your development systems by doing the following:

* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation 



Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation:
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS:
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 




When starting out writing unit tests, I would suggest writing 
unit
tests for non-xlator interface files when you start. Once you 
feel
more comfortable writing unit tests, then move to writing 
them for

the xlators interface files.
Awesome, many thanks! I'd like to add some unittests for the 
RPC and

NFS
layer. Several functions (like ip-address/netmask matching for 
ACLs)

look very suitable.

Did 

Re: [Gluster-devel] spurious regression failures again! [bug-1112559.t]

2014-07-22 Thread Anders Blomdell
On 2014-07-22 15:12, Joseph Fernandes wrote:
 Hi All,
 
 As with further investigation found the following,
 
 1) Was able to reproduce the issue without running the complete 
 regression, just by running bug-1112559.t alone on slave30 (which had been 
 rebooted and given a clean gluster setup).
This rules out any involvement of previous failures from other spurious 
 errors like mgmt_v3-locks.t. 
 2) Added some messages and script (netstat and ps -ef | grep gluster) 
 execution when binding to a port fails (in 
 rpc/rpc-transport/socket/src/socket.c) and found the following:
 
 The snapshot brick on the second node (127.1.1.2) always fails to acquire 
 the port (e.g. 127.1.1.2:49155).
 
 Netstat output shows: 
 tcp0  0 127.1.1.2:49155 0.0.0.0:* 
   LISTEN  3555/glusterfsd
Could this be the time to propose that gluster understand port reservation à la 
systemd (LISTEN_FDS), and to make the test harness ensure that random ports do 
not collide with the set of expected ports? That would be beneficial when 
starting from systemd as well.


 
 and the process that is holding the port 49155 is 
 
 root  3555 1  0 12:38 ?00:00:00 
 /usr/local/sbin/glusterfsd -s 127.1.1.2 --volfile-id 
 patchy.127.1.1.2.d-backends-2-patchy_snap_mnt -p 
 /d/backends/3/glusterd/vols/patchy/run/127.1.1.2-d-backends-2-patchy_snap_mnt.pid
  -S /var/run/ff772f1ff85950660f389b0ed43ba2b7.socket --brick-name 
 /d/backends/2/patchy_snap_mnt -l 
 /usr/local/var/log/glusterfs/bricks/d-backends-2-patchy_snap_mnt.log 
 --xlator-option *-posix.glusterd-uuid=3af134ec-5552-440f-ad24-1811308ca3a8 
 --brick-port 49155 --xlator-option patchy-server.listen-port=49155
 
 Please note that even though it says 127.1.1.2, it shows the glusterd-uuid 
 of the 3rd node that was being probed when the snapshot was created: 
 3af134ec-5552-440f-ad24-1811308ca3a8
 
 To clarify, there is already a volume brick on 127.1.1.2:
 
 root  3446 1  0 12:38 ?00:00:00 
 /usr/local/sbin/glusterfsd -s 127.1.1.2 --volfile-id 
 patchy.127.1.1.2.d-backends-2-patchy_snap_mnt -p 
 /d/backends/2/glusterd/vols/patchy/run/127.1.1.2-d-backends-2-patchy_snap_mnt.pid
  -S /var/run/e667c69aa7a1481c7bd567b917cd1b05.socket --brick-name 
 /d/backends/2/patchy_snap_mnt -l 
 /usr/local/var/log/glusterfs/bricks/d-backends-2-patchy_snap_mnt.log 
 --xlator-option *-posix.glusterd-uuid=a7f461d0-5ea7-4b25-b6c5-388d8eb1893f 
 --brick-port 49153 --xlator-option patchy-server.listen-port=49153
 
 And the above brick process (3555) is not visible before the snap 
 creation or after the failure to start the snap brick on 127.1.1.2.
 This means that this process was spawned and died during the creation 
 of the snapshot and the probe of the 3rd node (which happen simultaneously).
 
 In addition to these processes, we can see multiple snap brick processes 
 for the second brick on the second node, which are not seen after the failure 
 to start the snap brick on 127.1.1.2:
 
 root  3582 1  0 12:38 ?00:00:00 
 /usr/local/sbin/glusterfsd -s 127.1.1.2 --volfile-id 
 /snaps/patchy_snap1/66ac70130be84b5c9695df8252a56a6d.127.1.1.2.var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2
  -p 
 /d/backends/2/glusterd/snaps/patchy_snap1/66ac70130be84b5c9695df8252a56a6d/run/127.1.1.2-var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2.pid
  -S /var/run/668f3d4b1c55477fd5ad1ae381de0447.socket --brick-name 
 /var/run/gluster/snaps/66ac70130be84b5c9695df8252a56a6d/brick2 -l 
 /usr/local/var/log/glusterfs/bricks/var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2.log
  --xlator-option *-posix.glusterd-uuid=a7f461d0-5ea7-4b25-b6c5-388d8eb1893f 
 --brick-port 49155 --xlator-option 
 66ac70130be84b5c9695df8252a56a6d-server.listen-port=49155
 root  3583  3582  0 12:38 ?00:00:00 
 /usr/local/sbin/glusterfsd -s 127.1.1.2 --volfile-id 
 /snaps/patchy_snap1/66ac70130be84b5c9695df8252a56a6d.127.1.1.2.var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2
  -p 
 /d/backends/2/glusterd/snaps/patchy_snap1/66ac70130be84b5c9695df8252a56a6d/run/127.1.1.2-var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2.pid
  -S /var/run/668f3d4b1c55477fd5ad1ae381de0447.socket --brick-name 
 /var/run/gluster/snaps/66ac70130be84b5c9695df8252a56a6d/brick2 -l 
 /usr/local/var/log/glusterfs/bricks/var-run-gluster-snaps-66ac70130be84b5c9695df8252a56a6d-brick2.log
  --xlator-option *-posix.glusterd-uuid=a7f461d0-5ea7-4b25-b6c5-388d8eb1893f 
 --brick-port 49155 --xlator-option 
 66ac70130be84b5c9695df8252a56a6d-server.listen-port=49155
 
 
 
 This looks like the second node tries to start the snap brick
 1) with the wrong brickinfo and peerinfo (process 3555)
 2) multiple times with the correct brickinfo (processes 3582, 3583)
3583 is a subprocess of 3582, so it's only one invocation.

 3) This issue is not seen when, 

Re: [Gluster-devel] Compilation issue with master branch

2014-07-22 Thread Lalatendu Mohanty


The issue is resolved now. make clean fixed the issue.

On 07/22/2014 03:34 PM, Lalatendu Mohanty wrote:

On 07/22/2014 03:07 PM, Atin Mukherjee wrote:


On 07/22/2014 03:01 PM, Lalatendu Mohanty wrote:

I am trying to compile master for doing a fresh Coverity run. But
Coverity is complaining that source is not compiled fully. When I did
the same manually, I saw make is returning very less output compared
to past. I have copied the make output (along with autogen and
configure) in http://ur1.ca/ht05t .

I will appreciate any help on this.

Below is the error returned from Coverity:

/Your request for analysis of GlusterFS is failed.
Analysis status: FAILURE
Please fix the error and upload the build again.

Could be because of the lcmockery lib missing at the missing?


Nope, the below cmockery packages were installed during the 
compilation. Otherwise it would have failed with error.


cmockery2-1.3.7-1.fc19.x86_64
cmockery2-devel-1.3.7-1.fc19.x86_64


Error details:
Build uploaded has not been compiled fully. Please fix any 
compilation error. You may have to run bin/cov-configure as 
described in the article on Coverity Community. Last few lines of 
cov-int/build-log.txt should indicate 85% or more compilation units 
ready for analysis


For more detail explanation on the error, please check: 
https://communities.coverity.com/message/4820


Thanks,
Lala


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] spurious regression failures again! [bug-1112559.t]

2014-07-22 Thread Joe Julian

On 07/22/2014 07:19 AM, Anders Blomdell wrote:

Could this be a time to propose that gluster understands port reservation a'la 
systemd (LISTEN_FDS),
and make the test harness make sure that random ports do not collide with the 
set of expected ports,
which will be beneficial when starting from systemd as well.

Wouldn't that only work for Fedora and RHEL7?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Better organization for code documentation [Was: Developer Documentation for datastructures in gluster]

2014-07-22 Thread Kaushal M
Hey everyone,

While I was writing the documentation for the options framework, I
thought of a way to better organize the code documentation we are
creating now. I've posted a patch for review that implements this
organization. [1]

Copying the description from the patch I've posted for review,
```
A new directory hierarchy has been created in doc/code for the code
documentation, which follows the general GlusterFS source hierarchy.
Each GlusterFS module has an entry in this tree. The source directory of
every GlusterFS module has a symlink, 'doc', to its corresponding
directory in the doc/code tree.

Taking glusterd for example: with this scheme, there will be a
doc/code/xlators/mgmt/glusterd directory which will contain the
documentation relevant to glusterd. This directory will be symlinked to
xlators/mgmt/glusterd/src/doc .

This organization should allow for easy reference by developers when
developing on GlusterFS and also allow for easy hosting of the documents
when we set it up.
```

Comments?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures again! [bug-1112559.t]

2014-07-22 Thread Anders Blomdell
On 2014-07-22 16:44, Justin Clift wrote:
 On 22/07/2014, at 3:28 PM, Joe Julian wrote:
 On 07/22/2014 07:19 AM, Anders Blomdell wrote:
 Could this be a time to propose that gluster understands port reservation 
 a'la systemd (LISTEN_FDS),
 and make the test harness make sure that random ports do not collide with 
 the set of expected ports,
 which will be beneficial when starting from systemd as well.
 Wouldn't that only work for Fedora and RHEL7?
 
 Probably depends how it's done.  Maybe make it a conditional
 thing that's compiled in or not, depending on the platform?
Don't think so, LISTEN_FDS is dead simple: if LISTEN_FDS is set 
in the environment, fd #3 up to fd #(3 + LISTEN_FDS - 1) are sockets opened
by the calling process, their function has to be deduced via 
getsockname(), and the process should not open those sockets itself. If 
LISTEN_FDS is not set, proceed to open sockets just like before.
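A minimal sketch of the consuming side of that protocol, purely illustrative and not a patch against glusterfsd (it also ignores the LISTEN_PID check that systemd additionally defines):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define LISTEN_FDS_START 3   /* first inherited fd, per the systemd convention */

/* Return an inherited listening fd bound to `port`, or -1 to fall back to
 * opening the socket ourselves (LISTEN_FDS unset or no matching fd). */
static int
find_inherited_listener (int port)
{
        const char *env = getenv ("LISTEN_FDS");
        int nfds = env ? atoi (env) : 0;

        for (int i = 0; i < nfds; i++) {
                int fd = LISTEN_FDS_START + i;
                struct sockaddr_storage ss;
                socklen_t len = sizeof (ss);

                /* Deduce what this fd is by asking for its local address. */
                if (getsockname (fd, (struct sockaddr *) &ss, &len) != 0)
                        continue;
                if (ss.ss_family == AF_INET &&
                    ntohs (((struct sockaddr_in *) &ss)->sin_port) == port)
                        return fd;
                if (ss.ss_family == AF_INET6 &&
                    ntohs (((struct sockaddr_in6 *) &ss)->sin6_port) == port)
                        return fd;
        }
        return -1;
}

int
main (void)
{
        int fd = find_inherited_listener (49155);

        if (fd >= 0)
                printf ("using inherited fd %d for port 49155\n", fd);
        else
                printf ("no inherited listener, binding the port ourselves\n");
        return 0;
}
```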

The good thing about this is that systemd can reserve the ports 
used very early during boot, and no other process can steal them
away. For testing purposes, this could be used to assure that
all ports are available before starting tests (if random port
stealing is the true problem here, that is still an unverified
shot in the dark).

 
 Unless there's a better, cross platform approach of course. :)
 
 Regards and best wishes,
 
 Justin Clift
 
/Anders


-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS

2014-07-22 Thread Lalatendu Mohanty



To fix these Coverity issues, please check the link below for guidelines:
http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks,
Lala
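A note on the two DEADCODE reports below: list_entry() is container_of()-style pointer arithmetic, so it can never return NULL and the subsequent `if (!address)` branches are unreachable; the usual fix is to check list_empty() on the list head first. A minimal standalone illustration of that pattern (simplified types, not the glusterd code itself):

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified versions of the kernel-style list macros used in GlusterFS. */
struct list_head { struct list_head *next, *prev; };

#define list_entry(ptr, type, member) \
        ((type *) ((char *) (ptr) - offsetof (type, member)))
#define list_empty(head) ((head)->next == (head))

struct peer_hostname {
        char             hostname[64];
        struct list_head hostname_list;
};

static const char *
first_hostname (struct list_head *hostnames)
{
        struct peer_hostname *address = NULL;

        /* Checking the computed pointer for NULL is dead code; check the
         * list itself instead. */
        if (list_empty (hostnames))
                return NULL;

        address = list_entry (hostnames->next, struct peer_hostname,
                              hostname_list);
        return address->hostname;
}

int
main (void)
{
        struct list_head empty = { &empty, &empty };
        const char *name = first_hostname (&empty);

        printf ("first hostname: %s\n", name ? name : "(none)");
        return 0;
}
```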

 Original Message 
Subject:New Defects reported by Coverity Scan for GlusterFS
Date:   Tue, 22 Jul 2014 07:06:56 -0700
From:   scan-ad...@coverity.com



Hi,


Please find the latest report on new defect(s) introduced to GlusterFS found 
with Coverity Scan.

Defect(s) Reported-by: Coverity Scan
Showing 7 of 7 defect(s)


** CID 1228599:  Logically dead code  (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-store.c: 4069 in 
glusterd_store_retrieve_peers()

** CID 1228598:  Logically dead code  (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-peer-utils.c: 531 in gd_add_friend_to_dict()

** CID 1228600:  Data race condition  (MISSING_LOCK)
/xlators/cluster/ec/src/ec-data.c: 155 in ec_fop_data_allocate()

** CID 1228601:  Copy into fixed size buffer  (STRING_OVERFLOW)
/xlators/features/snapview-server/src/snapview-server.c: 1660 in 
svs_add_xattrs_to_dict()

** CID 1228603:  Use of untrusted scalar value  (TAINTED_SCALAR)
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()

** CID 1228602:  Use of untrusted scalar value  (TAINTED_SCALAR)
/xlators/mount/fuse/src/fuse-bridge.c: 4805 in fuse_thread_proc()

** CID 1124682:  Dereference null return value  (NULL_RETURNS)
/rpc/rpc-lib/src/rpc-drc.c: 502 in rpcsvc_add_op_to_cache()



*** CID 1228599:  Logically dead code  (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-store.c: 4069 in 
glusterd_store_retrieve_peers()
4063 /* Set first hostname from peerinfo->hostnames to
4064  * peerinfo->hostname
4065  */
4066 address = list_entry (peerinfo->hostnames.next,
4067                       glusterd_peer_hostname_t, hostname_list);
4068 if (!address) {

CID 1228599:  Logically dead code  (DEADCODE)
Execution cannot reach this statement ret = -1;.

4069 ret = -1;
4070 goto out;
4071 }
4072 peerinfo->hostname = gf_strdup (address->hostname);
4073
4074 ret = glusterd_friend_add_from_peerinfo (peerinfo, 1, 
NULL);


*** CID 1228598:  Logically dead code  (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-peer-utils.c: 531 in gd_add_friend_to_dict()
525  */
526 memset (key, 0, sizeof (key));
527 snprintf (key, sizeof (key), "%s.hostname", prefix);
528 address = list_entry (friend->hostnames, glusterd_peer_hostname_t,
529                       hostname_list);
530 if (!address) {

CID 1228598:  Logically dead code  (DEADCODE)
Execution cannot reach this statement ret = -1;.

531 ret = -1;
532 gf_log (this->name, GF_LOG_ERROR, "Could not retrieve first "
533         "address for peer");
534 goto out;
535 }
536 ret = dict_set_dynstr_with_alloc (dict, key, address->hostname);


*** CID 1228600:  Data race condition  (MISSING_LOCK)
/xlators/cluster/ec/src/ec-data.c: 155 in ec_fop_data_allocate()
149
150 mem_put(fop);
151
152 return NULL;
153 }
154 fop->id = id;

CID 1228600:  Data race condition  (MISSING_LOCK)
Accessing fop->refs without holding lock _ec_fop_data.lock. Elsewhere, 
fop->refs is accessed with _ec_fop_data.lock held 7 out of 8 times.

155 fop->refs = 1;
156
157 fop->flags = flags;
158 fop->minimum = minimum;
159 fop->mask = target;
160


*** CID 1228601:  Copy into fixed size buffer  (STRING_OVERFLOW)
/xlators/features/snapview-server/src/snapview-server.c: 1660 in 
svs_add_xattrs_to_dict()
1654 GF_VALIDATE_OR_GOTO (this->name, dict, out);
1655 GF_VALIDATE_OR_GOTO (this->name, list, out);
1656
1657 remaining_size = size;
1658 list_offset = 0;
1659 while (remaining_size > 0) {

CID 1228601:  Copy into fixed size buffer  (STRING_OVERFLOW)
You might overrun the 4096 byte fixed-size string keybuffer by copying list + 
list_offset without 

Re: [Gluster-devel] Suggestions on implementing trash translator

2014-07-22 Thread Anoop C S


On 07/22/2014 06:58 PM, Xavier Hernandez wrote:

On Tuesday 22 July 2014 07:33:44 Jiffin Thottan wrote:

Hi,

There are some issues we are dealing with in the trash translator (see the
attachment for the design doc). In our implementation, we create the trash
directory using the trash translator, so trash directories on different bricks
will have different gfids and a gfid conflict will arise.

* To deal with the gfid issue, we tried to create the trash directory using the
posix translator and set a fixed gfid for it. That solved the gfid conflict.
Is this solution feasible?

I think that a global fixed gfid is the right solution here.
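As a tiny illustration, a globally fixed gfid just means every brick uses the same well-known UUID for the trash directory instead of generating a random one; the UUID below is made up for the example and is not the value any patch uses:

```c
#include <stdio.h>
#include <uuid/uuid.h>   /* libuuid: link with -luuid */

/* Hypothetical reserved gfid for the trash directory (example value only). */
#define TRASH_DIR_GFID "00000000-0000-0000-0000-000000000005"

int
main (void)
{
        uuid_t gfid;
        char   str[37];

        /* Every brick parses the same constant instead of generating a random
         * gfid, so no conflict can arise between bricks. */
        if (uuid_parse (TRASH_DIR_GFID, gfid) != 0) {
                fprintf (stderr, "bad gfid literal\n");
                return 1;
        }

        uuid_unparse (gfid, str);
        printf ("trash directory gfid: %s\n", str);
        return 0;
}
```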


* The trash directory is a configurable option of the trash translator from the
cli. So when we perform a volume set to change the trash directory, it becomes
available in the trash translator's dictionary. It is not passed to the posix
translator (every translator has its own dictionary). The only way is to make
the configurable option part of the posix translator. Is this the right way to
implement it?

I think that mixing options from one xlator into another is not a good idea,
especially if one xlator can be disabled, because the other will have to know
what state the former is in to react differently (for example, not showing the
trash directory if the trash xlator is disabled).


* When the trash directory is reconfigured from the cli, should we perform a
rename operation from the old trash directory to the new one, or just create a
new trash directory?

A rename would be better (all trash contents will be kept) than creating a new
directory (and moving all the data?). However, if the option is reconfigured
while the volume is stopped, such a rename won't be possible. And even worse,
when the volume starts you won't know if there has been any change in the
directory name, so you would need to validate some things on each start to
avoid duplicate trash directories.

Maybe it would be better if the directory name were fixed from the posix
point of view, while the trash xlator returned the configured name to upper
xlators on readdir(p), for example.



To summarize, we are trying to make the posix translator the owner of the trash
directory, and the trash translator will intercept fops like unlink and
truncate. What are your suggestions?

I see some other issues to this approach.

1. If the directory is physically created inside the normal namespace of
posix, it will be visible even if you disable the xlator. In this case all
users will have uncontrolled access to the trash directory. It should
disappear when trash xlator is disabled. A possible solution to this would
be to have this directory inside .glusterfs (some support from posix would be
needed).
The trash directory should be visible from the mount point, so it cannot be 
inside .glusterfs.

rmdir and mkdir calls are not permitted over the trash directory.

2. I'm not sure how this local management on each brick will affect higher
level xlators like dht, afr or ec, or complex functions like self-heal.
Couldn't this be a problem ?
Calls from dht, self-heal, etc. will be treated by the trash translator 
similarly to other fops. For example, the truncate call during a rebalance 
operation is intercepted by the trash translator.

Xavi


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Suggestions on implementing trash translator

2014-07-22 Thread Xavier Hernandez
On Tuesday 22 July 2014 22:02:53 Anoop C S wrote:
  I see some other issues to this approach.
  
  1. If the directory is physically created inside the normal namespace of
  posix, it will be visible even if you disable the xlator. In this case all
  users will have uncontrolled access to the trash directory. It should
  disappear when trash xlator is disabled. A possible solution to this
  would be to have this directory inside .glusterfs (some support from
  posix would be needed).
 
 The trash directory should be visible from mount point. So it cannot be
 inside .glusterfs.
 rmdir and mkdir calls are not permitted over trash directory.

It can be inside .glusterfs if posix offers some help and accesses to it are 
intercepted and translated by the trash xlator. 

rmdir can only be intercepted if trash xlator is enabled. If it's disabled, 
users will be able to delete the directory or even copy things inside because 
posix will return it as any other normal directory. Of course another option 
would be to move all this logic to posix, but I'm not sure if this won't mix 
both xlators too much.

 
  2. I'm not sure how this local management on each brick will affect higher
  level xlators like dht, afr or ec, or complex functions like self-heal.
  Couldn't this be a problem ?
 
 calls from dht, self-heal etc will be treated by trash translator
 similar to other fops. For example, truncate call during re-balance
 operation is intercepted by trash translator.

For example, what will happen if a file being healed (missing on some bricks) 
is deleted? How will that file be recovered?

As I see it, afr will think that the file has been deleted, so it won't try to 
heal it; however, the file won't actually have been deleted and can reappear in 
the future. When that happens, the file will be recovered from some bricks, but 
not from others. How will afr be aware of the recovered file and its state?

I think that having trash below dht, afr and ec makes it necessary to modify 
these xlators to handle some special cases. But I might be wrong.

Xavi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Compilation issue with master branch

2014-07-22 Thread Santosh Pradhan
Compilation fails if configured with --disable-xml-output 
--disable-georeplication options.


BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1122186

-Santosh

On 07/22/2014 07:56 PM, Lalatendu Mohanty wrote:


The issue is resolved now. make clean fixed the issue.

On 07/22/2014 03:34 PM, Lalatendu Mohanty wrote:

On 07/22/2014 03:07 PM, Atin Mukherjee wrote:


On 07/22/2014 03:01 PM, Lalatendu Mohanty wrote:

I am trying to compile master for doing a fresh Coverity run. But
Coverity is complaining that source is not compiled fully. When I did
the same manually, I saw make is returning very less output compared
to past. I have copied the make output (along with autogen and
configure) in http://ur1.ca/ht05t .

I will appreciate any help on this.

Below is the error returned from Coverity:

/Your request for analysis of GlusterFS is failed.
Analysis status: FAILURE
Please fix the error and upload the build again.

Could be because of the lcmockery lib missing at the missing?


Nope, the below cmockery packages were installed during the 
compilation. Otherwise it would have failed with error.


cmockery2-1.3.7-1.fc19.x86_64
cmockery2-devel-1.3.7-1.fc19.x86_64


Error details:
Build uploaded has not been compiled fully. Please fix any 
compilation error. You may have to run bin/cov-configure as 
described in the article on Coverity Community. Last few lines of 
cov-int/build-log.txt should indicate 85% or more compilation units 
ready for analysis


For more detail explanation on the error, please check: 
https://communities.coverity.com/message/4820


Thanks,
Lala


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] Better organization for code documentation [Was: Developer Documentation for datastructures in gluster]

2014-07-22 Thread Anand Avati
On Tue, Jul 22, 2014 at 7:35 AM, Kaushal M kshlms...@gmail.com wrote:

 Hey everyone,

 While I was writing the documentation for the options framework, I
 thought up of a way to better organize the code documentation we are
 creating now. I've posted a patch for review that implements this
 organization. [1]

 Copying the description from the patch I've posted for review,
 ```
 A new directory hierarchy has been created in doc/code for the code
 documentation, which follows the general GlusterFS source hierarchy.
 Each GlusterFS module has an entry in this tree. The source directory of
 every GlusterFS module has a symlink, 'doc', to its corresponding
 directory in the doc/code tree.

 Taking glusterd for example. With this scheme, there will be
 doc/code/xlators/mgmg/glusterd directory which will contain the relevant
 documentation to glusterd. This directory will be symlinked to
 xlators/mgmt/glusterd/src/doc .

 This organization should allow for easy reference by developers when
 developing on GlusterFS and also allow for easy hosting of the documents
 when we set it up.
 ```



I haven't read the previous thread, but having the doc dir co-exist with src in
each module would encourage (or at least remind people about) keeping the docs
updated along with src changes. It is generally recommended not to store
symlinks in the source repo (though git supports them, I think). You could
create the symlinks from the top-level doc/code to each module (or vice versa)
in autogen.sh.

Thanks
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel