Re: [Gluster-devel] v3.5qa2 tag name on master is annoying
On 2014-07-09 22:39, Harshavardhana wrote:
I thought pkg-version in build-aux should have fixed this properly?

Well, it does generate correct ascending names, but since 'git describe' picks up the v3.5qa2 tag, new package names are based on that (and hence yum considers 3.5.1 newer than master).

On Wed, Jul 9, 2014 at 1:33 PM, Justin Clift jus...@gluster.org wrote:
That v3.5qa2 tag name on master is annoying, due to the RPM naming it causes when building on master. Did we figure out a solution? Maybe we should do a v3.6something tag at feature freeze time or something?

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

--
Anders Blomdell                    Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                    Phone: +46 46 222 4625
P.O. Box 118                       Fax:   +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
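A rough sketch of how to see the ordering problem locally (assuming a glusterfs checkout and the rpmdevtools package; the exact 'git describe' output depends on the checkout, and the vercmp output format follows the example later in this thread):

$ cd /path/to/glusterfs        # hypothetical checkout path
$ git describe --tags          # e.g. v3.5qa2-<N>-g<hash> on current master
# Compare the tag-derived version against the 3.5.1 release the RPM way;
# the numeric "1" segment sorts after the alphabetic "qa" segment, so 3.5.1 wins:
$ rpmdev-vercmp 3.5qa2 3.5.1
0:3.5.1-None is newer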
Re: [Gluster-devel] v3.5qa2 tag name on master is annoying
On Wed, Jul 09, 2014 at 09:33:01PM +0100, Justin Clift wrote:
That v3.5qa2 tag name on master is annoying, due to the RPM naming it causes when building on master. Did we figure out a solution? Maybe we should do a v3.6something tag at feature freeze time or something?

I think we can push a v3.6dev tag to master; it should reference a commit after the release-3.5 branch-point. The first 3.6 release would be something like v3.6.0alpha, possibly adding v3.6.0alpha1, and subsequent v3.6.0beta + N.

Comparing versions can be done in the RPM way like this:

$ rpmdev-vercmp 3.6dev 3.6.0alpha1
0:3.6.0alpha1-None is newer

When branching release-3.6, we'll need a tag for the master branch again, maybe v3.7dev?

HTH, Niels

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
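To sanity-check that the proposed names sort the right way, a few more comparisons in the same style (a sketch assuming rpmdevtools is installed; the pairings are chosen purely for illustration):

# Master builds tagged v3.6dev should now sort above the 3.5.x releases:
$ rpmdev-vercmp 3.5.1 3.6dev
0:3.6dev-None is newer
# And the pre-release sequence stays ascending:
$ rpmdev-vercmp 3.6dev 3.6.0alpha1
0:3.6.0alpha1-None is newer
$ rpmdev-vercmp 3.6.0alpha1 3.6.0beta1
0:3.6.0beta1-None is newer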
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
On 07/08/2014 01:54 PM, Avra Sengupta wrote:
In the test case, we are checking gluster snap status to see if all the bricks are alive. One of the snap bricks fails to start up, and hence we see the failure. The brick fails to bind with an "Address already in use" error. But the same log also says "binding to failed", where the address is missing, so it might be trying to bind to the wrong (or empty) address. Following are the brick logs for the same:

[2014-07-07 11:20:15.662573] I [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate] 0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind] 0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to failed: Address already in use
[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind] 0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
[2014-07-07 11:20:15.662810] W [server.c:920:init] 0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init] 0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume 'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init] 0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (-- 0-: received signum (0), shutting down

Regards, Avra

On 07/08/2014 11:28 AM, Joseph Fernandes wrote:
Hi Pranith, I am looking into this issue. Will keep you posted on the progress by EOD.
Regards, ~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph rjos...@redhat.com, Sachin Pandit span...@redhat.com, aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
Joseph, Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull

One more: http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith

CC some more folks who work on snapshot.

A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
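When this hits a regression slave again, something along these lines should show whether another process really owns the brick port (a sketch; the port number 49159 comes from later in this thread, and it assumes ss and the gluster CLI are available on the slave):

# Which process is listening on the suspect port?
$ ss -tlnp | grep 49159
# List all listeners in the 491xx range for comparison:
$ ss -tlnp | grep -E ':491[0-9][0-9]'
# Cross-check the ports glusterd believes it allocated:
$ gluster volume status all detail | grep -i port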
[Gluster-devel] Slave20 & 22 are dead! Long live new slave20 & 22!
Looks like Rackspace had some issues during one of their upgrades or something, and two of the slave VMs permanently died:

 * slave20
 * slave22

Those VMs have been nuked and new VMs built to take their place (same names).

Any logs that were on the old VMs are no longer available, having also gone to meet the great god /dev/null. ;)

(Some of the other VMs had issues too, but those cleared up with a reboot.)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] documentation about statedump
hi,

I wanted to document the core data structures and debugging infra in gluster. This is the first patch in that series. Please review and provide comments.

I am not very familiar with the iobuf infra. Please feel free to provide comments in the patch for that section as well. I can amend the document with those changes and resend the patch.

http://review.gluster.org/8288

Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
On 07/08/2014 01:54 PM, Avra Sengupta wrote:
In the test case, we are checking gluster snap status to see if all the bricks are alive. One of the snap bricks fails to start up, and hence we see the failure. The brick fails to bind with an "Address already in use" error. But the same log also says "binding to failed", where the address is missing, so it might be trying to bind to the wrong (or empty) address. Following are the brick logs for the same:

[2014-07-07 11:20:15.662573] I [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate] 0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind] 0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to failed: Address already in use
[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind] 0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
[2014-07-07 11:20:15.662810] W [server.c:920:init] 0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init] 0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume 'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init] 0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (-- 0-: received signum (0), shutting down

Regards, Avra

On 07/08/2014 11:28 AM, Joseph Fernandes wrote:
Hi Pranith, I am looking into this issue. Will keep you posted on the progress by EOD.
Regards, ~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph rjos...@redhat.com, Sachin Pandit span...@redhat.com, aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
Joseph, Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull

One more: http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith

CC some more folks who work on snapshot.

A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

I don't think that is easily possible. We also need to remove the -1 verified that the Gluster Build System sets. I'm not sure how we should be doing that. Maybe it's better to disable (parts of) the test-case?

Niels

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
<snip>
A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

I don't think that is easily possible. We also need to remove the -1 verified that the Gluster Build System sets. I'm not sure how we should be doing that. Maybe it's better to disable (parts of) the test-case?

We can set results manually as the Gluster Build System by using the gerrit command from build.gluster.org. Looking at the failure here:

http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate failure:

$ ssh bu...@review.gluster.org gerrit review --message ''\'' http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull : FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org, but change the result bits to +1 and SUCCESS. And a better message:

$ sudo su - jenkins
$ ssh bu...@review.gluster.org gerrit review --message ''\'' Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs --verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work: http://review.gluster.org/#/c/8285/

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
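If this has to be repeated for several patchsets, a small wrapper along the following lines saves retyping the long gerrit invocation (a sketch; the script name is hypothetical, and it assumes it is run as the jenkins user on build.gluster.org):

#!/bin/sh
# override-spurious.sh <commit-sha>
# Re-marks a change Verified +1 as the Gluster Build System after a
# spurious regression failure.
sha="$1"
[ -n "$sha" ] || { echo "usage: $0 <commit-sha>" >&2; exit 1; }
ssh bu...@review.gluster.org gerrit review \
    --message "'Ignoring previous spurious failure : SUCCESS'" \
    --project=glusterfs --verified=+1 --code-review=0 "$sha"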
Re: [Gluster-devel] glusterfs-3.4.5beta2 released
On 07/09/2014 12:11 AM, Gluster Build System wrote:
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.5beta2.tar.gz

This release is made off jenkins-release-80

-- Gluster Build System

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

RPMs are available for el5, el6, el7, f19, f20, f21 at download.gluster.org [1] with yum repos.

[1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.5beta2/

Thanks, Lala

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] v3.5qa2 tag name on master is annoying
This sounds perfect.

On Thu, Jul 10, 2014 at 2:15 PM, Niels de Vos nde...@redhat.com wrote:
On Wed, Jul 09, 2014 at 09:33:01PM +0100, Justin Clift wrote:
That v3.5qa2 tag name on master is annoying, due to the RPM naming it causes when building on master. Did we figure out a solution? Maybe we should do a v3.6something tag at feature freeze time or something?

I think we can push a v3.6dev tag to master; it should reference a commit after the release-3.5 branch-point. The first 3.6 release would be something like v3.6.0alpha, possibly adding v3.6.0alpha1, and subsequent v3.6.0beta + N.

Comparing versions can be done in the RPM way like this:

$ rpmdev-vercmp 3.6dev 3.6.0alpha1
0:3.6.0alpha1-None is newer

When branching release-3.6, we'll need a tag for the master branch again, maybe v3.7dev?

HTH, Niels

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] v3.5qa2 tag name on master is annoying
On 07/10/2014 02:23 PM, Justin Clift wrote:
On 10/07/2014, at 9:45 AM, Niels de Vos wrote:
On Wed, Jul 09, 2014 at 09:33:01PM +0100, Justin Clift wrote:
That v3.5qa2 tag name on master is annoying, due to the RPM naming it causes when building on master. Did we figure out a solution? Maybe we should do a v3.6something tag at feature freeze time or something?

I think we can push a v3.6dev tag to master; it should reference a commit after the release-3.5 branch-point. The first 3.6 release would be something like v3.6.0alpha, possibly adding v3.6.0alpha1, and subsequent v3.6.0beta + N.

Comparing versions can be done in the RPM way like this:

$ rpmdev-vercmp 3.6dev 3.6.0alpha1
0:3.6.0alpha1-None is newer

When branching release-3.6, we'll need a tag for the master branch again, maybe v3.7dev?

Yeah, that sounds like a workable approach. Who wants to push the tag to master to make that happen? :)

Since we are very close to branching release-3.6 (over this weekend), I will do the following:

1. Create the first release from the release-3.6 branch as v3.6.0alpha1.
2. Create a new v3.7dev tag in the master branch after release-3.6 is in place.

Thanks, Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
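For reference, step 2 is just the usual annotated-tag sequence; a rough sketch (the remote name and the exact commit the tag lands on are placeholders):

# Once release-3.6 is branched, tag master so 'git describe' no longer
# reaches back to v3.5qa2 for package naming:
$ git checkout master
$ git tag -a v3.7dev -m "Start of 3.7 development on master"
$ git push origin v3.7dev
# Confirm new builds will pick it up:
$ git describe --tags        # should now report v3.7dev-<N>-g<hash>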
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
Hi All,

1) Tried reproducing the issue in a local setup by running the regression test multiple times in a for loop. But the issue never hit!
2) As Avra pointed out, the logs suggest that the port (49159) assigned by the glusterd (host1) to the snap brick is already in use by some other process.
3) For the time being I can comment out the TEST that is failing, i.e. comment out the check of the snap brick status, so the regression test doesn't block any check-in.
4) If we can get access to the Rackspace system where the regression tests are actually run, we can reproduce and point out the root cause.

Regards, ~Joe

- Original Message -
From: Justin Clift jus...@gluster.org
To: Niels de Vos nde...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 10, 2014 6:25:16 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
<snip>
A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

I don't think that is easily possible. We also need to remove the -1 verified that the Gluster Build System sets. I'm not sure how we should be doing that. Maybe it's better to disable (parts of) the test-case?

We can set results manually as the Gluster Build System by using the gerrit command from build.gluster.org. Looking at the failure here:

http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate failure:

$ ssh bu...@review.gluster.org gerrit review --message ''\'' http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull : FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org, but change the result bits to +1 and SUCCESS. And a better message:

$ sudo su - jenkins
$ ssh bu...@review.gluster.org gerrit review --message ''\'' Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs --verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work: http://review.gluster.org/#/c/8285/

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
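For anyone else attempting the reproduction, points 1 and 2 translate roughly into the following (a sketch; the checkout path, iteration count, and the use of prove and ss are assumptions about the slave environment):

# Run the failing test repeatedly, stopping at the first failure:
$ cd /path/to/glusterfs
$ for i in $(seq 1 50); do prove -v tests/bugs/bug-1112559.t || break; done
# At the point of failure, check whether anything else holds the suspect port:
$ ss -tlnp | grep 49159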
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
On 10/07/2014, at 2:27 PM, Joseph Fernandes wrote:
Hi All,
1) Tried reproducing the issue in a local setup by running the regression test multiple times in a for loop. But the issue never hit!
2) As Avra pointed out, the logs suggest that the port (49159) assigned by the glusterd (host1) to the snap brick is already in use by some other process.
3) For the time being I can comment out the TEST that is failing, i.e. comment out the check of the snap brick status, so the regression test doesn't block any check-in.
4) If we can get access to the Rackspace system where the regression tests are actually run, we can reproduce and point out the root cause.

Sure. Remote access via ssh is definitely workable. I'll email you the details. :)

+ Justin

Regards, ~Joe

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
Sent a patch that temporarily disables the failing TEST: http://review.gluster.org/#/c/8259/

- Original Message -
From: Joseph Fernandes josfe...@redhat.com
To: Justin Clift jus...@gluster.org
Cc: Niels de Vos nde...@redhat.com, Gluster Devel gluster-devel@gluster.org, Vijay Bellur vbel...@redhat.com
Sent: Thursday, July 10, 2014 6:57:34 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

Hi All,

1) Tried reproducing the issue in a local setup by running the regression test multiple times in a for loop. But the issue never hit!
2) As Avra pointed out, the logs suggest that the port (49159) assigned by the glusterd (host1) to the snap brick is already in use by some other process.
3) For the time being I can comment out the TEST that is failing, i.e. comment out the check of the snap brick status, so the regression test doesn't block any check-in.
4) If we can get access to the Rackspace system where the regression tests are actually run, we can reproduce and point out the root cause.

Regards, ~Joe

- Original Message -
From: Justin Clift jus...@gluster.org
To: Niels de Vos nde...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 10, 2014 6:25:16 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
<snip>
A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

I don't think that is easily possible. We also need to remove the -1 verified that the Gluster Build System sets. I'm not sure how we should be doing that. Maybe it's better to disable (parts of) the test-case?

We can set results manually as the Gluster Build System by using the gerrit command from build.gluster.org. Looking at the failure here:

http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate failure:

$ ssh bu...@review.gluster.org gerrit review --message ''\'' http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull : FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org, but change the result bits to +1 and SUCCESS. And a better message:

$ sudo su - jenkins
$ ssh bu...@review.gluster.org gerrit review --message ''\'' Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs --verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work: http://review.gluster.org/#/c/8285/

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
On 10/07/2014, at 12:44 PM, Vijay Bellur wrote:
<snip>
A lot of regression runs are failing because of this test unit. Given feature freeze is around the corner, shall we provide a +1 verified manually for those patchsets that fail this test?

Went through and did this manually, as the Gluster Build System. Also got Joe set up so he can debug things on a Rackspace VM to find out what's wrong.

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] regarding warnings on master
hi Harsha,
Know anything about the following warnings on latest master?

In file included from msg-nfs3.h:20:0,
                 from msg-nfs3.c:22:
nlm4-xdr.h:6:14: warning: extra tokens at end of #ifndef directive [enabled by default]
 #ifndef _NLM4-XDR_H_RPCGEN
              ^
nlm4-xdr.h:7:14: warning: missing whitespace after the macro name [enabled by default]
 #define _NLM4-XDR_H_RPCGEN

Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding warnings on master
Do not know, they do not show up locally on my laptop; can you point me to a build so that I can investigate?

On Thu, Jul 10, 2014 at 6:45 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote:
hi Harsha,
Know anything about the following warnings on latest master?

In file included from msg-nfs3.h:20:0,
                 from msg-nfs3.c:22:
nlm4-xdr.h:6:14: warning: extra tokens at end of #ifndef directive [enabled by default]
 #ifndef _NLM4-XDR_H_RPCGEN
              ^
nlm4-xdr.h:7:14: warning: missing whitespace after the macro name [enabled by default]
 #define _NLM4-XDR_H_RPCGEN

Pranith

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding warnings on master
On Thu, Jul 10, 2014 at 7:30 PM, Harshavardhana har...@harshavardhana.net wrote:
Do not know, they do not show up locally on my laptop; can you point me to a build so that I can investigate?

I think these are related to C99 standards; are you using clang? This must be an xdrgen bug, in that it doesn't produce proper #ifndef/#define guard names.

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
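If it really is the generated guard, the usual cure is to sanitize the file-name-derived macro, since '-' is not a valid character in a C identifier. A sketch of the idea (the sed step and where it would hook into the build are assumptions, not a description of what xdrgen currently does):

# The guard taken straight from "nlm4-xdr.h" contains a '-', which the
# preprocessor treats as extra tokens; rewriting it to '_' makes it a
# legal macro name and silences both warnings:
$ sed -i 's/_NLM4-XDR_H_RPCGEN/_NLM4_XDR_H_RPCGEN/g' nlm4-xdr.h
$ grep -n RPCGEN nlm4-xdr.h    # both the #ifndef and #define should now match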