Re: [Gluster-devel] release schedule for glusterfs
On 08/05/2015 07:01 PM, Humble Devassy Chirammal wrote:
> Indeed, no feedback on QA releases happens between minor versions. I do
> think there is no point in keeping alpha/beta releases between minor
> versions, but we can do that for major releases as Kaleb mentioned.
>
> --Humble

+1

> On Wed, Aug 5, 2015 at 6:32 PM, Kaleb S. KEITHLEY kkeit...@redhat.com wrote:
>> On 08/05/2015 08:45 AM, Raghavendra Bhat wrote:
>>> On 08/05/2015 05:57 PM, Humble Devassy Chirammal wrote:
>>>> Hi Raghavendra,
>>>>
>>>> How many beta releases happen for each minor release? And what is the
>>>> gap between these releases?
>>>>
>>>> --Humble
>>>
>>> I am not sure about the beta releases. As per my understanding there
>>> are no beta releases happening on the release-3.5 branch, and also not
>>> on the latest release-3.7 branch. I was doing beta releases for the
>>> release-3.6 branch, but I am also thinking of moving away from that and
>>> making 3.6.5 directly (and also future release-3.6 releases).
>>
>> We get zero feedback from alphas and betas. Nobody has complained about
>> the lack of them. IMO we should still do alpha and beta for x.y.0
>> releases though.
>>
>> --
>> Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] Cleaning some old build job on jenkins ?
On 08/05/2015 07:05 PM, Michael Scherer wrote:
> On Tuesday, 4 August 2015 at 16:22 +0200, Michael Scherer wrote:
>> Hi,
>>
>> (resending from gluster-infra)
>>
>> So while fiddling around with the Jenkins interface (autopromotion is
>> now in ssl/autopromotion), I wondered about cleaning up the following
>> jobs:
>> - cmockery2
>> - kerbauth-pre-commit-19
>> - kerbauth-builds-f19
>> - glusterfs-unittests
>> - glusterfs-rpms-el6-nsr
>> - glusterfs-rpms-nsr
>> - glusterfs-devrpms-el6-nsr
>> - GlusterFS-hadoop (this one just sends an email to 2 RH people; I am
>>   gonna ask them about it. The gluster hadoop project on GitHub seems
>>   dormant.)
>> - rh-bugid
>>
>> These have not been fixed in more than a year, or have not run in a
>> year. Would anyone be against removing them? (So far, on IRC, people
>> said go for it, but I want to make sure we do not miss anything.) It
>> would make the interface less cluttered, and help refactoring the
>> current tests.
>>
>> If no one is against, I will do it next week.
>
> So while on it:
> - Automated Covscan runs
> This one requires Coverity, so likely a license, and at least the
> binary. Lala?

This job is supposed to run Coverity scans using https://scan.coverity.com. The Jenkins job basically compiles the code and uploads it to scan.coverity.com, which does the scan. So we don't need any Coverity license. I used to do these runs through a shell script on my laptop, so it would be good if we can run it through Jenkins; we just need a slave to run this job successfully.

Thanks,
Lala
[Gluster-devel] Issue with el5 builds for 3.7.0beta1
On 05/01/2015 12:34 PM, Humble Devassy Chirammal wrote:
> Hi All,
>
> GlusterFS 3.7 beta1 RPMs for RHEL, CentOS (except el5) and Fedora are
> available at download.gluster.org [1].
>
> [1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/
>
> el5 rpms will be available soon.

The scratch build [1] for el5 is failing on the dependencies libuuid-devel and userspace-rcu-devel:

DEBUG util.py:388:  Error: No Package found for libuuid-devel
DEBUG util.py:388:  Error: No Package found for userspace-rcu-devel >= 0.7

I remember seeing some discussion around making userspace-rcu-devel available in EPEL 5, but I am not sure what we decided on it.

[1] http://koji.fedoraproject.org/koji/taskinfo?taskID=9615033

-Lala

> On Wed, Apr 29, 2015 at 4:39 PM, Vijay Bellur vbel...@redhat.com wrote:
>> Hi All,
>>
>> Just pushed tag v3.7.0beta1 to glusterfs.git. A tarball of 3.7.0beta1
>> is now available at [1]. RPM and other packages will appear on
>> download.gluster.org when the respective packages are ready.
>>
>> Important features available in beta1 include:
>> - bitrot
>> - tiering
>> - inode quotas
>> - sharding
>> - glusterfind
>> - multi-threaded epoll
>> - trash
>> - netgroups style authentication for nfs exports
>> - snapshot scheduling
>> - cli support for NFS Ganesha
>> - cloning volumes from snapshots
>>
>> I suspect that I might have missed a few from the list here. Please
>> chime in with your favorite feature if I have missed including it :).
>>
>> The list of known bugs for 3.7.0 is being tracked at [2]. Testing
>> feedback and patches would be very welcome!
>>
>> Thanks,
>> Vijay
>>
>> [1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.0
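As an aside, the missing build dependencies can be pulled straight out of the mock/koji root log. A small illustrative sketch using the exact error lines quoted above (nothing Gluster-specific is assumed):

```shell
# Extract the names of missing build dependencies from mock/koji
# "No Package found for ..." error lines, as seen in the el5 root.log.
log='DEBUG util.py:388:  Error: No Package found for libuuid-devel
DEBUG util.py:388:  Error: No Package found for userspace-rcu-devel >= 0.7'

printf '%s\n' "$log" | sed -n 's/.*No Package found for //p'
```

This prints one missing dependency per line (here: libuuid-devel and userspace-rcu-devel >= 0.7), which is handy when a scratch build fails on a long dependency list.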
Re: [Gluster-devel] [Gluster-users] Guidelines for posting in our mailing lists
On 04/17/2015 12:25 PM, Ira Cooper wrote:
> Do we have a problem? Or is this a solution looking for a problem?
>
> I'd rather have an overall code of conduct... of which e-mail
> correspondence is a small part. :)

IMO email is one of the most important mediums through which we conduct ourselves in the community, and from a communication point of view it plays the most important role. I think it is a good idea to have mailing list guidelines (whatever we agree upon), especially when the community is distributed and has lots of non-native English speakers.

> I'll use Debian as an example:
>
> https://www.debian.org/code_of_conduct
> https://www.debian.org/MailingLists/#codeofconduct

I have gone through it. It definitely addresses the overall idea. However, I feel that from a Gluster point of view we need more specific guidelines. One of the reasons is that Gluster is not as old/mature a community as Debian.

> In addition, in a quick scan of the Fedora document, I find it feels
> stiff and uninviting. Compare it to the Debian docs and you'll see what
> I mean. Not that either community will tolerate you violating their
> norms. It is merely how they express that sentiment.
>
> My thoughts,
>
> -Ira
>
> ----- Original Message -----
>> Hi All,
>>
>> Since the volume of posts on our mailing lists seems to be steadily
>> increasing, I wonder if we should evolve guidelines for posting in the
>> lists along the lines of the Fedora one [1]. It could be useful for new
>> entrants in the community to understand our mailing list etiquette.
>>
>> If we think it is a good idea to evolve this artifact and don't find
>> anything in the Fedora guidelines objectionable, I would be happy to
>> port it for Gluster :).
>>
>> Thanks,
>> Vijay
>>
>> [1] https://fedoraproject.org/wiki/Mailing_list_guidelines

___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-devel] Guidelines for posting in our mailing lists
On 04/17/2015 11:56 AM, Vijay Bellur wrote:
> Hi All,
>
> Since the volume of posts on our mailing lists seems to be steadily
> increasing, I wonder if we should evolve guidelines for posting in the
> lists along the lines of the Fedora one [1]. It could be useful for new
> entrants in the community to understand our mailing list etiquette.
>
> If we think it is a good idea to evolve this artifact and don't find
> anything in the Fedora guidelines objectionable, I would be happy to
> port it for Gluster :).
>
> Thanks,
> Vijay
>
> [1] https://fedoraproject.org/wiki/Mailing_list_guidelines

+1. I especially agree with the Fedora guideline about top posting. It is really inconvenient to go through a long mail thread when people top post [2].

The Fedora mailing list guidelines link to the following PDF [1]. It actually covers most of the things in a precise way. Just putting it here in case you missed it in the original wiki page.

[1] http://www.shakthimaan.com/downloads/glv/presentations/mailing-list-etiquette.pdf
[2] http://en.wikipedia.org/wiki/Posting_style#Top-posting

Thanks,
Lala
Re: [Gluster-devel] [Gluster-users] Got a slogan idea?
On 04/01/2015 06:59 PM, Vijay Bellur wrote:
> On 04/01/2015 05:44 PM, Tom Callaway wrote:
>> Hello Gluster Ant People!
>>
>> Right now, if you go to gluster.org, you see our current slogan in
>> giant text: "Write once, read everywhere". However, no one seems to be
>> super-excited about that slogan. It doesn't really help differentiate
>> gluster from a portable hard drive or a paperback book. I am going to
>> work with Red Hat's branding geniuses to come up with some
>> possibilities, but sometimes, the best ideas come from the people
>> directly involved with a project.
>>
>> What I am saying is that if you have a slogan idea for Gluster, I want
>> to hear it. You can reply on list or send it to me directly. I will
>> collect all the proposals (yours and the ones that Red Hat comes up
>> with) and circle back around for community discussion in about a month
>> or so.
>
> I also think that we should start calling ourselves Gluster or GlusterDS
> (Gluster Distributed Storage) instead of GlusterFS by default. We are
> certainly not file storage only; we have object, API and block
> interfaces too, and the "FS" in GlusterFS seems to imply a file storage
> connotation alone.
>
> -Vijay

+1 for Gluster. I like GlusterDS too (in the present context), but Gluster is simpler, more generic and not oriented towards any particular type of storage, which will be better for the long term IMO (in case Gluster becomes clustered storage in the distant future).

-Lala
Re: [Gluster-devel] How can we prevent GlusterFS packaging installation/update issues in future?
On 02/19/2015 02:30 PM, Niels de Vos wrote:
> Hey Pranith!
>
> Thanks for putting this topic on my radar. Uncommunicated packaging
> changes have indeed been a pain for non-RPM distributions on several
> occasions. We should try to inform other packagers better about required
> changes in the packaging scripts or upgrade/installation process.

+1

> On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1113778
>> https://bugzilla.redhat.com/show_bug.cgi?id=1191176
>>
>> How can we improve the process of providing good packages for things
>> other than RPMs?
>
> My guess is that we need to announce packaging changes very clearly.
> Maybe it makes sense to have a very low-traffic packag...@gluster.org
> mailinglist where all packagers from all distributions are subscribed?

+1 for announcing packaging changes very clearly. But I think we should keep using the gluster-devel ML for packaging discussions, as IMO it is the right platform to get all developers and packagers together. However, we need to discuss these things clearly, which we lacked before.

Thanks,
Lala

> I've added all packagers that I could track on CC, and am interested in
> their preferences and ideas.
>
> Thanks, Niels
Re: [Gluster-devel] How can we prevent GlusterFS packaging installation/update issues in future?
On 02/19/2015 04:25 PM, Niels de Vos wrote:
> On Thu, Feb 19, 2015 at 03:45:41PM +0530, Lalatendu Mohanty wrote:
>> On 02/19/2015 02:30 PM, Niels de Vos wrote:
>>> Hey Pranith!
>>>
>>> Thanks for putting this topic on my radar. Uncommunicated packaging
>>> changes have indeed been a pain for non-RPM distributions on several
>>> occasions. We should try to inform other packagers better about
>>> required changes in the packaging scripts or upgrade/installation
>>> process.
>>
>> +1
>>
>>> On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1113778
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1191176
>>>>
>>>> How can we improve the process of providing good packages for things
>>>> other than RPMs?
>>>
>>> My guess is that we need to announce packaging changes very clearly.
>>> Maybe it makes sense to have a very low-traffic packag...@gluster.org
>>> mailinglist where all packagers from all distributions are subscribed?
>>
>> +1 for announcing packaging changes very clearly. But I think we should
>> keep using the gluster-devel ML for packaging discussions, as IMO it is
>> the right platform to get all developers and packagers together.
>> However, we need to discuss these things clearly, which we lacked
>> before.
>
> My idea was to reduce the number of emails that packagers receive. Not
> all packagers are active as Gluster developers, and the -devel list has
> quite some traffic. I am afraid that important changes would get lost in
> the noise.
>
> Niels

I understand. However, my thought was that if we segregate the discussion, we might miss valuable feedback from developers. I am also not sure whether discussing packaging on gluster-devel will increase developers' understanding of packaging. I agree with you on the email traffic, though.

Thanks,
Lala

> I've added all packagers that I could track on CC, and am interested in
> their preferences and ideas.
>
> Thanks, Niels
Re: [Gluster-devel] How can we prevent GlusterFS packaging installation/update issues in future?
On 02/19/2015 05:21 PM, Niels de Vos wrote:
> On Thu, Feb 19, 2015 at 04:52:41PM +0530, Lalatendu Mohanty wrote:
>> On 02/19/2015 04:25 PM, Niels de Vos wrote:
>>> On Thu, Feb 19, 2015 at 03:45:41PM +0530, Lalatendu Mohanty wrote:
>>>> On 02/19/2015 02:30 PM, Niels de Vos wrote:
>>>>> Hey Pranith!
>>>>>
>>>>> Thanks for putting this topic on my radar. Uncommunicated packaging
>>>>> changes have indeed been a pain for non-RPM distributions on several
>>>>> occasions. We should try to inform other packagers better about
>>>>> required changes in the packaging scripts or upgrade/installation
>>>>> process.
>>>>
>>>> +1
>>>>
>>>>> On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote:
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1113778
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1191176
>>>>>>
>>>>>> How can we improve the process of providing good packages for
>>>>>> things other than RPMs?
>>>>>
>>>>> My guess is that we need to announce packaging changes very clearly.
>>>>> Maybe it makes sense to have a very low-traffic
>>>>> packag...@gluster.org mailinglist where all packagers from all
>>>>> distributions are subscribed?
>>>>
>>>> +1 for announcing packaging changes very clearly. But I think we
>>>> should keep using the gluster-devel ML for packaging discussions, as
>>>> IMO it is the right platform to get all developers and packagers
>>>> together. However, we need to discuss these things clearly, which we
>>>> lacked before.
>>>
>>> My idea was to reduce the number of emails that packagers receive. Not
>>> all packagers are active as Gluster developers, and the -devel list
>>> has quite some traffic. I am afraid that important changes would get
>>> lost in the noise.
>>>
>>> Niels
>>
>> I understand. However, my thought was that if we segregate the
>> discussion, we might miss valuable feedback from developers. I am also
>> not sure whether discussing packaging on gluster-devel will increase
>> developers' understanding of packaging. I agree with you on the email
>> traffic, though.
>
> I prefer to not bother most developers with the packaging. If they are
> interested, they can subscribe to the packagers list :-)
>
> Developers should clearly state how their components need to get
> installed/updated; they should be able to send those details to the
> packaging list. When we review changes to the .spec, we should keep in
> mind to share those details too. If we as packagers notice any issues,
> we would file a bug or request input on the gluster-devel list. Once the
> issue is settled, a description can be shared among the packagers.
>
> Do you think that should be workable?
>
> Niels

Yes, we can definitely try this. Let's do it.

Thanks,
Lala

> I've added all packagers that I could track on CC, and am interested in
> their preferences and ideas.
>
> Thanks, Niels
Re: [Gluster-devel] Updates to operating-version
On 12/17/2014 07:39 PM, Niels de Vos wrote:
> On Wed, Dec 17, 2014 at 08:40:18AM -0500, James wrote:
>> Hello,
>>
>> If you plan on updating the operating-version value of GlusterFS,
>> please either ping me (@purpleidea) or send a patch to puppet-gluster
>> [1]. Patches are 4-line yaml files, and you don't need any knowledge of
>> puppet or yaml to do so. Example:
>>
>> +# gluster/data/versions/3.6.yaml
>> +---
>> +gluster::versions::operating_version: '30600' # v3.6.0
>> +# vim: ts=8
>>
>> As seen at:
>> https://github.com/purpleidea/puppet-gluster/commit/43c60d2ddd6f57d2117585dc149de6653bdabd4b#diff-7cb3f60a533975d869ffd4a772d66cfeR1
>>
>> Thanks for your cooperation! This will ensure puppet-gluster can always
>> correctly work with new versions of GlusterFS.
>
> How about you post a patch that adds this request as a comment in the
> glusterfs sources (libglusterfs/src/globals.h)? Or maybe this should be
> noted on some wiki page, and have the comment point to the wiki instead.
> Maybe other projects will start to use the op-version in the future too,
> and they also need to get informed about a change.

IMO we should make it a practice to send a mail to gluster-devel whenever a patch is sent to increase the operating-version, similar to the practice Fedora follows for an so-version bump.

-Lala

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
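For readers unfamiliar with the numbering: the operating-version integer in the yaml snippet above is derived from the release version, with two digits each for the minor and patch levels (e.g. 30600 for v3.6.0). A small illustrative sketch of that mapping (the helper itself is hypothetical, not part of glusterfs or puppet-gluster):

```shell
# Derive the operating-version integer from a glusterfs release string,
# matching the '30600' # v3.6.0 example in the puppet-gluster yaml.
ver=3.6.0
IFS=. read -r major minor patch <<EOF
$ver
EOF
printf '%d%02d%02d\n' "$major" "$minor" "$patch"
```

For v3.6.0 this prints 30600, the value puppet-gluster expects in gluster/data/versions/3.6.yaml.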
[Gluster-devel] Organising developer docs in Gluster Source code
There are two patches about how we can organise the developer docs inside the GlusterFS source code. These patches need more sets of eyes, so please put your thoughts here or in the patches.

[1] http://review.gluster.org/#/c/8348/
[2] http://review.gluster.org/#/c/8827/

Thanks,
Lala
Re: [Gluster-devel] Help needed with Coverity - How to remove tainted_data_argument?
On 12/17/2014 12:56 PM, Krishnan Parthasarathi wrote:
> I was looking into a Coverity issue (CID 1228603) in GlusterFS. I sent a
> patch [1] before I fully understood why this was an issue. After
> searching around on the internet for explanations, I identified that the
> core issue was that a character buffer, storing parts of a file
> (external I/O), was marked tainted. This taint spread wherever the
> buffer was used. This seems acceptable in the context of static
> analysis. How do we indicate to Coverity that the 'taint' would cause no
> harm as speculated?
>
> [1] - Coverity fix attempt: http://review.gluster.org/#/c/9286/
> [2] - CID 1228603: Use of untrusted scalar value (TAINTED_SCALAR):
>       glusterd-utils.c: 2131 in glusterd_readin_file()
>
> thanks,
> kp

KP,

We can mark the CID on the Coverity Scan website as not an issue (i.e. as designed) and it would stop reporting it as a bug. Let me know if you need any help marking it as not a bug.

Thanks,
Lala
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
Guideline for fixing Coverity issues:
http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks,
Lala

-------- Forwarded Message --------
Subject: New Defects reported by Coverity Scan for GlusterFS
Date: Thu, 27 Nov 2014 12:31:06 -0800
From: scan-ad...@coverity.com
To: l...@redhat.com

Hi,

Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan.

13 new defect(s) introduced to GlusterFS found with Coverity Scan.
97 defect(s), reported by Coverity Scan earlier, were marked fixed in the recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 13 of 13 defect(s)

** CID 1256178: Logically dead code (DEADCODE)
   /api/src/glfs.c: 153 in glusterfs_ctx_defaults_init()
** CID 1256180: Logically dead code (DEADCODE)
   /api/src/glfs.c: 161 in glusterfs_ctx_defaults_init()
** CID 1256176: Logically dead code (DEADCODE)
   /glusterfsd/src/glusterfsd.c: 1426 in glusterfs_ctx_defaults_init()
** CID 1256179: Dereference after null check (FORWARD_NULL)
   /xlators/nfs/server/src/mount3.c: 1082 in mnt3_readlink_cbk()
** CID 1256177: Explicit null dereferenced (FORWARD_NULL)
   /api/src/glfs-fops.c: 702 in pub_glfs_preadv_async()
** CID 1256175: Array compared against 0 (NO_EFFECT)
   /xlators/mgmt/glusterd/src/glusterd-snapshot.c: 2433 in glusterd_lvm_snapshot_remove()
   /xlators/mgmt/glusterd/src/glusterd-snapshot.c: 2433 in glusterd_lvm_snapshot_remove()
** CID 1256173: Thread deadlock (ORDER_REVERSAL)
   /xlators/cluster/ec/src/ec-common.c: 1335 in ec_unlock_timer_add()
** CID 1256174: Copy into fixed size buffer (STRING_OVERFLOW)
   /xlators/mgmt/glusterd/src/glusterd.c: 287 in glusterd_dump_peer()
** CID 1256172: Copy into fixed size buffer (STRING_OVERFLOW)
   /xlators/mgmt/glusterd/src/glusterd.c: 330 in glusterd_dump_peer_rpcstat()
** CID 1256171: Copy into fixed size buffer (STRING_OVERFLOW)
   /xlators/mgmt/glusterd/src/glusterd-handshake.c: 279 in build_volfile_path()
** CID 1238183: Missing break in switch (MISSING_BREAK)
   /xlators/mgmt/glusterd/src/glusterd-rebalance.c: 577 in glusterd_op_stage_rebalance()
** CID 1228602: Use of untrusted scalar value (TAINTED_SCALAR)
   /xlators/mount/fuse/src/fuse-bridge.c: 4843 in fuse_thread_proc()
** CID 1228603: Use of untrusted scalar value (TAINTED_SCALAR)
   /xlators/mgmt/glusterd/src/glusterd-utils.c: 2131 in glusterd_readin_file()
   /xlators/mgmt/glusterd/src/glusterd-utils.c: 2131 in glusterd_readin_file()
   /xlators/mgmt/glusterd/src/glusterd-utils.c: 2131 in glusterd_readin_file()
   /xlators/mgmt/glusterd/src/glusterd-utils.c: 2131 in glusterd_readin_file()

*** CID 1256178: Logically dead code (DEADCODE)
/api/src/glfs.c: 153 in glusterfs_ctx_defaults_init()

    147
    148         pthread_mutex_init (&ctx->lock, NULL);
    149
    150         ret = 0;
    151 err:
    152         if (ret && pool) {
>>> CID 1256178: Logically dead code (DEADCODE)
>>> Execution cannot reach this statement "if (pool->frame_mem_pool)"
    153                 if (pool->frame_mem_pool)
    154                         mem_pool_destroy (pool->frame_mem_pool);
    155                 if (pool->stack_mem_pool)
    156                         mem_pool_destroy (pool->stack_mem_pool);
    157                 GF_FREE (pool);
    158         }

*** CID 1256180: Logically dead code (DEADCODE)
/api/src/glfs.c: 161 in glusterfs_ctx_defaults_init()

    155                 if (pool->stack_mem_pool)
    156                         mem_pool_destroy (pool->stack_mem_pool);
    157                 GF_FREE (pool);
    158         }
    159
    160         if (ret && ctx) {
>>> CID 1256180: Logically dead code (DEADCODE)
>>> Execution cannot reach this statement "if (ctx->stub_mem_pool)"
    161                 if (ctx->stub_mem_pool)
    162                         mem_pool_destroy (ctx->stub_mem_pool);
    163                 if (ctx->dict_pool)
    164                         mem_pool_destroy (ctx->dict_pool);
    165                 if (ctx->dict_data_pool)
    166                         mem_pool_destroy (ctx->dict_data_pool);

*** CID 1256176: Logically dead code (DEADCODE)
/glusterfsd/src/glusterfsd.c: 1426 in glusterfs_ctx_defaults_init()

    1420         lim.rlim_max = RLIM_INFINITY;
    1421         setrlimit (RLIMIT_CORE, &lim);
    1422
    1423         ret = 0;
    1424 out:
    1425
>>> CID 1256176: Logically dead code (DEADCODE)
>>> Execution cannot reach this expression "ctx" inside statement "if (ret && ctx) { if (ctx
    1426         if (ret && ctx) {
    1427                 if (ctx->pool) {
    1428
Re: [Gluster-devel] gluster 3.6.1 rpm for CentOS 6.6
On 11/11/2014 10:46 AM, Kiran Patil wrote:
> Hi,
>
> Please let me know where we can find the gluster v3.6.1 rpm for CentOS 6.6.
>
> Thanks,
> Kiran.

Hey Kiran,

Here it is: http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/

Thanks,
Lala
Re: [Gluster-devel] gluster 3.6.1 rpm for CentOS 6.6
On 11/11/2014 12:19 PM, Kiran Patil wrote:
> The name confused me, as I thought epel-6 means epel-6.0, since there is
> no epel-6.6.
>
> Thanks,
> Kiran.

Agreed, it might confuse. We will fix it. However, you can go ahead and use these for 6.6, as these packages are built for 6.6.

Thanks,
Lala

> On Tue, Nov 11, 2014 at 11:32 AM, Lalatendu Mohanty lmoha...@redhat.com wrote:
>> On 11/11/2014 10:46 AM, Kiran Patil wrote:
>>> Hi,
>>>
>>> Please let me know where we can find the gluster v3.6.1 rpm for
>>> CentOS 6.6.
>>>
>>> Thanks,
>>> Kiran.
>>
>> Hey Kiran,
>>
>> Here it is: http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/
>>
>> Thanks,
>> Lala
Re: [Gluster-devel] warning: bogus date in %changelog
On 10/28/2014 11:32 PM, Anders Blomdell wrote:
> Does the following warning:
>
>   warning: bogus date in %changelog: Thu Jun 29 2014 Humble Chirammal hchir...@redhat.com
>
> warrant a bug report/fix as below (based on the date of commit
> 8d8abc19)? Or should I just leave it as is?
>
> diff --git a/glusterfs.spec.in b/glusterfs.spec.in
> index f373b45..5ad4956 100644
> --- a/glusterfs.spec.in
> +++ b/glusterfs.spec.in
> @@ -1042,7 +1042,7 @@ fi
>  * Wed Sep 24 2014 Balamurugan Arumugam barum...@redhat.com
>  - remove /sbin/ldconfig as interpreter (#1145992)
>
> -* Thu Jun 29 2014 Humble Chirammal hchir...@redhat.com
> +* Thu Jun 19 2014 Humble Chirammal hchir...@redhat.com
>  - Added dynamic loading of fuse module with glusterfs-fuse package
>    installation in el5.
>
>  * Fri Jun 27 2014 Kaleb S. KEITHLEY kkeit...@redhat.com
>
> /Anders

I checked the master branch and I can see the date and day are correct in glusterfs.spec.in, as below. Where do you see the wrong changelog entry, i.e. "Thu Jun 29 2014 Humble Chirammal hchir...@redhat.com"?

* Thu Jun 19 2014 Humble Chirammal hchir...@redhat.com
- Added dynamic loading of fuse module with glusterfs-fuse package installation in el5.

Thanks,
Lala
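For context: rpmbuild emits the "bogus date" warning when the weekday in a %changelog entry does not match the calendar date. GNU date makes this easy to check, which is how the proposed fix above can be verified:

```shell
# Jun 19 2014 really was a Thursday, so the "+" line in the diff is
# consistent; Jun 29 2014 was a Sunday, which is why rpmbuild flags
# "Thu Jun 29 2014" as bogus. (LC_ALL=C keeps weekday names in English.)
LC_ALL=C date -d 'Jun 19 2014' +%a   # Thu
LC_ALL=C date -d 'Jun 29 2014' +%a   # Sun
```

Running the same one-liner on each %changelog date is a quick way to catch this class of warning before pushing a spec change.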
Re: [Gluster-devel] REMINDER: Gluster Bug Triage meeting later today (12:00 UTC)
On 09/30/2014 01:08 PM, Niels de Vos wrote:
> Hi all,
>
> please join the #gluster-meeting IRC channel on irc.freenode.net to
> participate on the following topics:
>
> * Roll call
> * Status of last weeks action items
> * What happens after a bug has been marked Triaged?
> * Add distinction between problem reports and enhancement requests
> * Group Triage
> * Open Floor
>
> More details on the above, and last minute changes to the agenda, are
> kept in the etherpad for this meeting:
> - https://public.pad.fsfe.org/p/gluster-bug-triage
>
> The meeting starts at 12:00 UTC; you can convert that to your own
> timezone with the 'date' command:
>
>   $ date -d "12:00 UTC"
>
> Cheers, Niels

Here is the meeting summary, with links to the meeting minutes and logs. Just a reminder that the next bug triage meeting is on 14th October 2014.

Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2014-09-30/gluster-meeting.2014-09-30-12.02.html
Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2014-09-30/gluster-meeting.2014-09-30-12.02.txt
Log: http://meetbot.fedoraproject.org/gluster-meeting/2014-09-30/gluster-meeting.2014-09-30-12.02.log.html

#gluster-meeting Meeting

Meeting summary
---------------
* Roll Call (ndevos, 12:03:20)
* Bugs in NEW or ASSIGNED state where a non-developer is assigned
  (ndevos, 12:09:21)
  * LINK: http://goo.gl/JxXzUR is a report that contains the users and
    NEW/ASSIGNED bugs (ndevos, 12:10:01)
  * avati and rwheeler should not be assigned to any bugs anymore
    (ndevos, 12:13:26)
  * bug report lifecycle:
    http://www.gluster.org/community/documentation/index.php/Bug_report_life_cycle
    (lalatenduM, 12:17:53)
  * AGREED: Move bugs in NEW+assignee to NEW+gluster-b...@redhat.com
    (ndevos, 12:30:43)
* What happens after a bug has been marked Triaged? (ndevos, 12:31:31)
  * ACTION: hagarth will look for somebody that can act like a "bug
    assigner manager" kind of person (ndevos, 12:46:53)
  * ACTION: ndevos to add b...@gluster.org on all existing gluster bugs
    (ndevos, 13:00:50)
  * ACTION: pranithk to report next week how his team is assigning
    triaged bugs (ndevos, 13:02:28)
* AGREED: No meeting next week (ndevos, 13:06:26)

Meeting ended at 13:06:53 UTC.

Action Items
------------
* hagarth will look for somebody that can act like a "bug assigner
  manager" kind of person
* ndevos to add b...@gluster.org on all existing gluster bugs
* pranithk to report next week how his team is assigning triaged bugs

Action Items, by person
-----------------------
* hagarth
  * hagarth will look for somebody that can act like a "bug assigner
    manager" kind of person
* ndevos
  * ndevos to add b...@gluster.org on all existing gluster bugs
* pranithk
  * pranithk to report next week how his team is assigning triaged bugs
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* ndevos (108)
* Humble (35)
* hagarth (34)
* lalatenduM (32)
* kkeithley_ (24)
* pranithk (23)
* krishnan_p (19)
* JustinClift (7)
* zodbot (6)
Re: [Gluster-devel] CentOS 7: Gluster Test Framework testcases failure
On 09/26/2014 02:59 AM, Justin Clift wrote:
> On 25/09/2014, at 9:28 PM, Lalatendu Mohanty wrote:
> <snip>
>> Have we published anywhere which distributions or OS versions we run
>> regression tests on? If not, let's compile that and publish it, as it
>> will help the community understand which OS distributions are part of
>> the regression testing.
>
> The best we have so far is probably this:
> http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
>
>> Do we have plans to run regressions on a variety of distributions? Not
>> sure how difficult or complex that is to maintain.
>
> The primary OS at the moment is CentOS 6.x (mainly due to it being the
> primary OS for GlusterFS, I think). Manu and Harsha have been going
> through the regression tests recently, making them more cross-platform
> in order to run on the BSDs. This effort has also highlighted some
> interesting Linux-specific behaviour in the main GlusterFS code base,
> and led to fixes there.
>
> In short, we're all for running the regression tests on as many
> distributions as possible. If community members want to put VMs or
> something online (medium-long term), I'd be happy to hook our Jenkins
> infrastructure up to them to automatically run tests on them. Is that
> kind of what you're asking? :)

Yup, I will try to get a CentOS 7 instance for running regression tests :)

-Lala
[Gluster-devel] RPMs for glusterfs-3.4.6beta1 are available
On 09/08/2014 11:02 PM, Gluster Build System wrote:
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.6beta1.tar.gz
>
> This release is made off jenkins-release-87
>
> -- Gluster Build System

RPMs for glusterfs-3.4.6beta1 are available at d.g.o [1].

[1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.6beta1/

Thanks,
Lala
Re: [Gluster-devel] A suggestion on naming test files.
On 08/27/2014 02:13 PM, Niels de Vos wrote: On Wed, Aug 27, 2014 at 01:50:40PM +0530, Kaushal M wrote: I'll do that, if someone could point me to it. Is it on the community wiki or in the doc/ directory in the source? Seems to be only (?) on the wiki: - http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework#Creating_your_own_tests Niels We should also update the README.md file in the source tree, i.e. glusterfs/tests. -Lala ~kaushal On Wed, Aug 27, 2014 at 1:12 PM, Niels de Vos nde...@redhat.com wrote: On Wed, Aug 27, 2014 at 12:21:56PM +0530, Kaushal M wrote: Hi all, Currently we name tests associated with a bug with just the bug-id, as 'bug-XXX.t'. This naming doesn't provide any context directly, and someone looking at just the name wouldn't know what it does. I suggest that we use slightly more descriptive names for the tests. Let's name the tests 'bug-XXX-checking-if-feature-Y-works.t'. This provides just enough context to understand what the test is trying to do from just the name. How do you all feel about this? Sounds good to me. Please make sure to update the documentation about writing test-cases in a few days, if nobody objects. Thanks, Niels
Re: [Gluster-devel] A suggestion on naming test files.
On 08/27/2014 03:29 PM, Lalatendu Mohanty wrote: On 08/27/2014 02:13 PM, Niels de Vos wrote: On Wed, Aug 27, 2014 at 01:50:40PM +0530, Kaushal M wrote: I'll do that, if someone could point me to it. Is it on the community wiki or in the doc/ directory in the source? Seems to be only (?) on the wiki: - http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework#Creating_your_own_tests Niels We should also update the README.md file in the source tree, i.e. glusterfs/tests. + Forgot to add that README.md does not have a section about writing tests; either we add a new section, or we update the wiki page and link to it from the README file. -Lala snip
Re: [Gluster-devel] [Gluster-users] Our plan to get bugs fixed quicker, and features implemented sooner
On 08/22/2014 01:20 PM, Pranith Kumar Karampuri wrote: On 08/22/2014 11:46 AM, Joe Julian wrote: adding the correct gluster-devel to this. On 8/21/2014 10:55 PM, Joe Julian wrote: There have been times that I, as a user and a front-line supporter, have marked a bug with the urgency that I think it deserves - based on my own knowledge as well as feedback from other users - only to have that urgency lowered to normal because a developer, who doesn't experience the urgency we do as users, deemed it so. Actually, developers/assignees are never allowed to change severity (something I learnt a year back; maybe some developers don't even know about it :-( ). Only priority is something that can be lowered by the assignee. In my experience this happens for severe bugs as well, when the steps to re-create the bug are not there or the logs are not enough to find the root cause, etc. Granted, there have also been many occasions where I didn't set the urgency and my bug has been treated as if it was a message from beyond the natural realm, for which I am truly grateful. Because the message _is_ from beyond the natural realm ;-) When I set the urgency, I try to consider more than just my own current needs (though often they're truly quite urgent) and look at the potential for data loss, the likelihood of encountering the bug during normal operations, and feedback from other users who have (usually) encountered the same bug. To have all that forethought discarded is disheartening. For that reason, I never set the urgency of any of my bugs any more. I guess it would be better if you could raise it on the devel mailing list rather than not setting the urgency anymore :-(. I would be willing to participate in triage, but I would expect the same rigidness in changing an urgency as there is in getting a change accepted. The developer who wants to change the urgency should be expected to argue his case, not simply change it.
+1 Pranith. It would be good if we could update the triage wiki page [1] to have the right criteria/information about changing Severity and Priority. We should treat the wiki page as our policy document. http://www.gluster.org/community/documentation/index.php/Bug_triage -Lala On 8/21/2014 12:12 AM, Lalatendu Mohanty wrote: I hope the subject line has increased your curiosity enough to go through the email :). As a community, we are looking for contributors for GlusterFS bug triage, and hopefully this mail will give you enough motivation for it. As mentioned in the subject, bug triage will help get bugs fixed quicker and features implemented sooner. If you are not sure what bug triage means, please refer to the Gluster wiki page [1] on bug triage. Here are a few questions and answers to help you decide if this is something you should do. * Q: Why do we need to do bug triage? * A: It reduces the time between reporting a bug and the availability of a fix enormously. o Many developers have bad response times for new bugs that are not pointed out to them; when well-triaged bugs get assigned to the right developers, the response time improves. o Also, developers work mainly on writing bug fixes and implementing new features, which in turn results in spending too little time on bug triaging. * Q: I am just a GlusterFS user. Why will bug triage help me? * A: It will increase your understanding of GlusterFS and the current issues in GlusterFS, and increase the interaction between developers, the community, and triagers. It will also help you file better bugs. The better the bug report, the easier it is for a developer to write a fix. * Q: Do you run GlusterFS in production? * A: If your answer is yes, then bug triage is the right platform for you to raise the importance of bugs. Bug triage will also help you know the existing issues in the version of GlusterFS you are using, and the existing issues in a new feature.
So you will be in a better position to decide if a feature is production-ready or not. * Q: I want to contribute to GlusterFS. Will bug triage help? * A: Yes, it is an awesome place to start. You will get to know about all the components in GlusterFS along with the issues in each of them. This knowledge will help you do better testing, development (bug fixing), etc. Also, you will interact with developers while triaging bugs; you can use these interactions to ask more detailed questions. * Q: How can I triage GlusterFS bugs? * A: The wiki page [1] is the right place to start. We are starting a bi-weekly/weekly triage meeting in #gluster-meeting on Freenode. There will be another mail with details about the meeting. You can join the meeting and interact with other triagers. Please don't hesitate to hit the reply button if you have any questions on this. We would love to hear your suggestions/feedback :). [1] http://www.gluster.org
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
To fix these Coverity issues, please check the below link for guidelines: http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity Thanks, Lala

Original Message Subject: New Defects reported by Coverity Scan for GlusterFS Date: Mon, 04 Aug 2014 02:31:28 -0700 From: scan-ad...@coverity.com

Hi, Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan. Defect(s) Reported-by: Coverity Scan Showing 3 of 3 defect(s)

** CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4061 in glusterd_add_brick_to_snap_volume()
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4062 in glusterd_add_brick_to_snap_volume()
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4049 in glusterd_add_brick_to_snap_volume()
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4050 in glusterd_add_brick_to_snap_volume()

** CID 1229876: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/mgmt/glusterd/src/glusterd-utils.c: 13482 in glusterd_update_mntopts()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 13481 in glusterd_update_mntopts()

** CID 1229878: Time of check time of use (TOCTOU)
/xlators/features/changelog/lib/src/gf-changelog.c: 475 in gf_changelog_register()

*** CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4061 in glusterd_add_brick_to_snap_volume()

4055
4056         snprintf (key, sizeof(key) - 1, "vol%"PRId64".mnt_opts%d", volcount,
4057                   brick_count);
4058         ret = dict_get_str (dict, key, &value);
4059         if (!ret) {
4060                 /* Update the mnt_opts in original brickinfo as well */

CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 1024 byte fixed-size string original_brickinfo->mnt_opts by copying value without checking the length.

4061                 strcpy (original_brickinfo->mnt_opts, value);
4062                 strcpy (snap_brickinfo->mnt_opts, value);
4063         } else {
4064                 if (is_origin_glusterd (dict) == _gf_true)
4065                         add_missed_snap = _gf_true;
4066         }

/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4062 in glusterd_add_brick_to_snap_volume()

4056         snprintf (key, sizeof(key) - 1, "vol%"PRId64".mnt_opts%d", volcount,
4057                   brick_count);
4058         ret = dict_get_str (dict, key, &value);
4059         if (!ret) {
4060                 /* Update the mnt_opts in original brickinfo as well */
4061                 strcpy (original_brickinfo->mnt_opts, value);

CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 1024 byte fixed-size string snap_brickinfo->mnt_opts by copying value without checking the length.

4062                 strcpy (snap_brickinfo->mnt_opts, value);
4063         } else {
4064                 if (is_origin_glusterd (dict) == _gf_true)
4065                         add_missed_snap = _gf_true;
4066         }
4067

/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4049 in glusterd_add_brick_to_snap_volume()

4043
4044         snprintf (key, sizeof(key) - 1, "vol%"PRId64".fstype%d", volcount,
4045                   brick_count);
4046         ret = dict_get_str (dict, key, &value);
4047         if (!ret) {
4048                 /* Update the fstype in original brickinfo as well */

CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 255 byte fixed-size string original_brickinfo->fstype by copying value without checking the length.

4049                 strcpy (original_brickinfo->fstype, value);
4050                 strcpy (snap_brickinfo->fstype, value);
4051         } else {
4052                 if (is_origin_glusterd (dict) == _gf_true)
4053                         add_missed_snap = _gf_true;
4054         }

/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 4050 in glusterd_add_brick_to_snap_volume()

4044         snprintf (key, sizeof(key) - 1, "vol%"PRId64".fstype%d", volcount,
4045                   brick_count);
4046         ret = dict_get_str (dict, key, &value);
4047         if (!ret) {
4048                 /* Update the fstype in original brickinfo as well */
4049                 strcpy (original_brickinfo->fstype, value);

CID 1229877: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 255 byte fixed-size string snap_brickinfo->fstype by copying value without checking the length.

4050                 strcpy (snap_brickinfo->fstype, value);
4051         } else {
4052                 if (is_origin_glusterd (dict) == _gf_true)
[Gluster-devel] glusterfs-3.5.2 RPMs are now available
On 07/31/2014 05:07 PM, Niels de Vos wrote: On Thu, Jul 31, 2014 at 04:06:46AM -0700, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.2.tar.gz Bugs that have been marked for 3.5.2 and did not get fixed with this release are being moved to the new 3.5.3 release tracker: - https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.5.3 Please add 'glusterfs-3.5.3' in the 'Blocks' field of bugs to propose inclusion in GlusterFS 3.5.3. Link to the release notes of glusterfs-3.5.2: - http://blog.nixpanic.net/2014/07/glusterfs-352-has-been-released.html - https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.2.md Remember that packages for different distributions (downstream) will get created after this (upstream) release. Thanks, Niels [RPMs for EL5, 6, 7] RPMs are available at download.gluster.org [1]. [Fedora] GlusterFS-3.5.2 RPMs for Fedora will be available from the Fedora updates-testing YUM repository. After they have passed a nominal testing period, they will be available in the Fedora updates YUM repository. The RPMs were built using Fedora Project Koji, so you can also find them at Koji [2]. [1] http://download.gluster.org/pub/gluster/glusterfs/LATEST/ [2] https://koji.fedoraproject.org/koji/packageinfo?packageID=5443 Thanks, Lala
Re: [Gluster-devel] Static analysis, cpp-check, clang-analyze, coverity, etc., of community glusterfs source
On 07/24/2014 09:47 PM, Justin Clift wrote: Where's a good place to add this to the Wiki? Main Developer page? I think it is http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity On Thu, 24 Jul 2014 18:42:11 +0530 Humble Devassy Chirammal humble.deva...@gmail.com wrote: Hi Kaleb, Nice initiative !! Thanks for this single window system. --Humble On Thu, Jul 24, 2014 at 6:27 PM, Kaleb S. KEITHLEY kkeit...@redhat.com wrote: We now have daily runs of various static source code analysis tools on the glusterfs sources. There are daily analyses of the master, release-3.6, and release-3.5 branches. Results are posted at http://download.gluster.org/ pub/gluster/glusterfs/static-analysis/ If you're interested in contributing to Gluster, but don't know where to start, look here for some easy (and hard) bugs to fix. To fix a bug, start by opening a BZ at https://bugzilla.redhat.com/ enter_bug.cgi?product=GlusterFS Submit the fix in gerrit, instructions are at http://www.gluster.org/ community/documentation/index.php/Development_Work_Flow -- Kaleb ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] glusterfs-3.5.2beta1 RPMs available
RPMs for el5-7 (RHEL, CentOS, etc.), Fedora (19,20,21,22) are available at download.gluster.org [1]. [1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.2beta1/ Thanks, Lala On 07/21/2014 09:05 PM, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.2beta1.tar.gz This release is made off jenkins-release-84 -- Gluster Build System ___ Gluster-users mailing list gluster-us...@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Compilation issue with master branch
I am trying to compile master for doing a fresh Coverity run, but Coverity is complaining that the source is not compiled fully. When I compiled manually, I saw make returning much less output than in the past. I have copied the make output (along with autogen and configure) to http://ur1.ca/ht05t . I will appreciate any help on this. Below is the error returned by Coverity: Your request for analysis of GlusterFS is failed. Analysis status: FAILURE Please fix the error and upload the build again. Error details: Build uploaded has not been compiled fully. Please fix any compilation error. You may have to run bin/cov-configure as described in the article on Coverity Community. Last few lines of cov-int/build-log.txt should indicate 85% or more compilation units ready for analysis For more detail explanation on the error, please check: https://communities.coverity.com/message/4820 Thanks, Lala
Re: [Gluster-devel] Compilation issue with master branch
On 07/22/2014 03:07 PM, Atin Mukherjee wrote: On 07/22/2014 03:01 PM, Lalatendu Mohanty wrote: I am trying to compile master for doing a fresh Coverity run, but Coverity is complaining that the source is not compiled fully. When I compiled manually, I saw make returning much less output than in the past. I have copied the make output (along with autogen and configure) to http://ur1.ca/ht05t . I will appreciate any help on this. Below is the error returned by Coverity: Your request for analysis of GlusterFS is failed. Analysis status: FAILURE Please fix the error and upload the build again. Could it be because the cmockery lib (-lcmockery) is missing? Nope, the below cmockery packages were installed during the compilation; otherwise it would have failed with an error. cmockery2-1.3.7-1.fc19.x86_64 cmockery2-devel-1.3.7-1.fc19.x86_64 Error details: Build uploaded has not been compiled fully. Please fix any compilation error. You may have to run bin/cov-configure as described in the article on Coverity Community. Last few lines of cov-int/build-log.txt should indicate 85% or more compilation units ready for analysis For more detail explanation on the error, please check: https://communities.coverity.com/message/4820 Thanks, Lala
Re: [Gluster-devel] Cmockery2 in GlusterFS
On 07/22/2014 04:35 PM, Luis Pabón wrote: I understand that when something is new and different, it is most likely blamed for anything wrong that happens. I highly propose that we do not do this, and instead work to learn more about the tool. Cmockery2 is a tool that is as important as the compiler. It provides an extremely easy method to determine the quality of the software after it has been constructed, and therefore it has been made a requirement of the build. Making it optional undermines its importance, and could in turn make it useless. Hey Luis, The intention was not to undermine or give less importance to Cmockery2. Sorry if it looked like that. However, I was thinking from a flexibility point of view. I am assuming that in future it will be part of the upstream regression test suite, so each patch will go through full unit testing by default. So when somebody is creating RPMs from pristine sources, we should be able to do that without Cmockery2, because the tests were already run through Jenkins/Gerrit. The question is: do we need Cmockery every time we compile the glusterfs source? If the answer is yes, then I am fine with the current code. Cmockery2 is available for all supported EPEL/Fedora versions. For any other distribution or operating system, it takes about 3 mins to download and compile. Please let me know if you have any other questions. - Luis On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote: On 07/21/2014 10:48 PM, Harshavardhana wrote: Cmockery2 is a hard dependency before GlusterFS can be compiled in upstream master now - we could make it conditional and enable it if necessary, since we know we do not have the cmockery2 packages available on all systems? +1, we need to make it conditional and enable it if necessary. I am also not sure if we have cmockery2-devel in el5, el6. If not, the build will fail. On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon lpa...@redhat.com wrote: Niels you are correct. Let me take a look.
Luis -Original Message- From: Niels de Vos [nde...@redhat.com] Received: Monday, 21 Jul 2014, 10:41AM To: Luis Pabon [lpa...@redhat.com] CC: Anders Blomdell [anders.blomd...@control.lth.se]; gluster-devel@gluster.org Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote: On 2014-07-21 16:17, Anders Blomdell wrote: On 2014-07-20 16:01, Niels de Vos wrote: On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote: Hi all, A few months ago, the unit test framework based on cmockery2 was in the repo for a little while, then removed while we improved the packaging method. Now support for cmockery2 ( http://review.gluster.org/#/c/7538/ ) has been merged into the repo again. This will most likely require you to install cmockery2 on your development systems by doing the following: * Fedora/EPEL: $ sudo yum -y install cmockery2-devel * All other systems please visit the following page: https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation Here is also some information about Cmockery2 and how to use it: * Introduction to Unit Tests in C Presentation: http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/ * Cmockery2 Usage Guide: https://github.com/lpabon/cmockery2/blob/master/doc/usage.md * Using Cmockery2 with GlusterFS: https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md When starting out writing unit tests, I would suggest writing unit tests for non-xlator interface files when you start. Once you feel more comfortable writing unit tests, then move to writing them for the xlators interface files. Awesome, many thanks! I'd like to add some unittests for the RPC and NFS layer. Several functions (like ip-address/netmask matching for ACLs) look very suitable. Did you have any particular functions in mind that you would like to see unittests for? If so, maybe you can file some bugs for the different tests so that we won't forget about it? 
Depending on the tests, these bugs may get the EasyFix keyword if there is a clear description and some pointers to examples. Looks like parts of cmockery was forgotten in glusterfs.spec.in: # rpm -q -f `which gluster` glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64 # ldd `which gluster` linux-vdso.so.1 = (0x74dfe000) libglusterfs.so.0 = /lib64/libglusterfs.so.0 (0x7fe034cc4000) libreadline.so.6 = /lib64/libreadline.so.6 (0x7fe034a7d000) libncurses.so.5 = /lib64/libncurses.so.5 (0x7fe034856000) libtinfo.so.5 = /lib64/libtinfo.so.5 (0x7fe03462c000) libgfxdr.so.0 = /lib64/libgfxdr.so.0 (0x7fe034414000) libgfrpc.so.0 = /lib64/libgfrpc.so.0 (0x7fe0341f8000) libxml2.so.2 = /lib64/libxml2.so.2 (0x7fe033e8f000) libz.so.1 = /lib64/libz.so.1 (0x7fe033c79000) libm.so.6 = /lib64/libm.so.6 (0x7fe033971000) libdl.so.2 = /lib64/libdl.so.2
Re: [Gluster-devel] Cmockery2 in GlusterFS
On 07/22/2014 05:22 PM, Luis Pabón wrote: Hi Lala, No problem at all, I just want to make sure that developers understand the importance of the tool. On the topic of RPMs, they have a really cool section called %check, which is currently being used to run the unit tests after the glusterfs RPM is created. Normally developers test only on certain systems and certain architectures, but by having the %check section, we can guarantee a level of quality when an RPM is created on an architecture or operating system version which is not normally used for development. This actually worked really well for cmockery2 when the RPM was first introduced to Fedora. The %check section ran the unit tests on two architectures that I do not have, and both of them found issues on the ARM32 and s390 architectures. Without the %check section, cmockery2 would have been released without ever having been usable. This is why cmockery2 is set in the BuildRequires section. Awesome! Now it makes perfect sense to run these unit tests during RPM building. Thanks Luis. On 07/22/2014 07:34 AM, Lalatendu Mohanty wrote: snip
Re: [Gluster-devel] Compilation issue with master branch
The issue is resolved now; a 'make clean' fixed it. On 07/22/2014 03:34 PM, Lalatendu Mohanty wrote: snip
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
To fix these Coverity issues, please check the below link for guidelines: http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity Thanks, Lala

Original Message Subject: New Defects reported by Coverity Scan for GlusterFS Date: Tue, 22 Jul 2014 07:06:56 -0700 From: scan-ad...@coverity.com

Hi, Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan. Defect(s) Reported-by: Coverity Scan Showing 7 of 7 defect(s)

** CID 1228599: Logically dead code (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-store.c: 4069 in glusterd_store_retrieve_peers()

** CID 1228598: Logically dead code (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-peer-utils.c: 531 in gd_add_friend_to_dict()

** CID 1228600: Data race condition (MISSING_LOCK)
/xlators/cluster/ec/src/ec-data.c: 155 in ec_fop_data_allocate()

** CID 1228601: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/features/snapview-server/src/snapview-server.c: 1660 in svs_add_xattrs_to_dict()

** CID 1228603: Use of untrusted scalar value (TAINTED_SCALAR)
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 1987 in glusterd_readin_file()

** CID 1228602: Use of untrusted scalar value (TAINTED_SCALAR)
/xlators/mount/fuse/src/fuse-bridge.c: 4805 in fuse_thread_proc()

** CID 1124682: Dereference null return value (NULL_RETURNS)
/rpc/rpc-lib/src/rpc-drc.c: 502 in rpcsvc_add_op_to_cache()

*** CID 1228599: Logically dead code (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-store.c: 4069 in glusterd_store_retrieve_peers()

4063         /* Set first hostname from peerinfo->hostnames to
4064          * peerinfo->hostname
4065          */
4066         address = list_entry (peerinfo->hostnames.next,
4067                               glusterd_peer_hostname_t, hostname_list);
4068         if (!address) {

CID 1228599: Logically dead code (DEADCODE) Execution cannot reach this statement "ret = -1;".

4069                 ret = -1;
4070                 goto out;
4071         }
4072         peerinfo->hostname = gf_strdup (address->hostname);
4073
4074         ret = glusterd_friend_add_from_peerinfo (peerinfo, 1, NULL);

*** CID 1228598: Logically dead code (DEADCODE)
/xlators/mgmt/glusterd/src/glusterd-peer-utils.c: 531 in gd_add_friend_to_dict()

525          */
526         memset (key, 0, sizeof (key));
527         snprintf (key, sizeof (key), "%s.hostname", prefix);
528         address = list_entry (&friend->hostnames, glusterd_peer_hostname_t,
529                               hostname_list);
530         if (!address) {

CID 1228598: Logically dead code (DEADCODE) Execution cannot reach this statement "ret = -1;".

531                 ret = -1;
532                 gf_log (this->name, GF_LOG_ERROR, "Could not retrieve first "
533                         "address for peer");
534                 goto out;
535         }
536         ret = dict_set_dynstr_with_alloc (dict, key, address->hostname);

*** CID 1228600: Data race condition (MISSING_LOCK)
/xlators/cluster/ec/src/ec-data.c: 155 in ec_fop_data_allocate()

149
150                 mem_put(fop);
151
152                 return NULL;
153         }
154         fop->id = id;

CID 1228600: Data race condition (MISSING_LOCK) Accessing fop->refs without holding lock _ec_fop_data.lock. Elsewhere, fop->refs is accessed with _ec_fop_data.lock held 7 out of 8 times.

155         fop->refs = 1;
156
157         fop->flags = flags;
158         fop->minimum = minimum;
159         fop->mask = target;
160

*** CID 1228601: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/features/snapview-server/src/snapview-server.c: 1660 in svs_add_xattrs_to_dict()

1654         GF_VALIDATE_OR_GOTO (this->name, dict, out);
1655         GF_VALIDATE_OR_GOTO (this->name, list, out);
1656
1657         remaining_size = size;
1658         list_offset = 0;
1659         while (remaining_size > 0) {

CID 1228601: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 4096 byte fixed-size string keybuffer by copying list + list_offset without
Re: [Gluster-devel] Duplicate entries and other weirdness in a 3*4 volume
On 07/18/2014 07:57 PM, Anders Blomdell wrote:

During testing of a 3*4 gluster (from master as of yesterday), I encountered two major weirdnesses:

1. A 'rm -rf some_dir' needed several invocations to finish, each time reporting a number of lines like these: rm: cannot remove ‘a/b/c/d/e/f’: Directory not empty

2. After having successfully deleted all files from the volume, I have a single directory that is duplicated in gluster-fuse, like this:

# ls -l /mnt/gluster
total 24
drwxr-xr-x 2 root root 12288 18 jul 16.17 work2/
drwxr-xr-x 2 root root 12288 18 jul 16.17 work2/

Any idea on how to debug this issue?
/Anders

Anders,

Check the Gluster log files present in /var/log/glusterfs, specifically the glusterd log file, i.e. /var/log/glusterfs/etc-glusterfs-glusterd.vol.log. You can also start glusterd in debug mode, i.e. `glusterd -L DEBUG`, and check the log files for more information.

Thanks, Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
To fix these Coverity issues, please check the below link for guidelines: http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks, Lala

-------- Original Message --------
Subject: New Defects reported by Coverity Scan for GlusterFS
Date: Mon, 14 Jul 2014 23:47:00 -0700
From: scan-ad...@coverity.com

Hi,

Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan.

Defect(s) Reported-by: Coverity Scan
Showing 20 of 23 defect(s)

** CID 1226162: Logically dead code (DEADCODE)
/xlators/cluster/ec/src/ec-method.c: 119 in ec_method_decode()
** CID 1226164: Division or modulo by zero (DIVIDE_BY_ZERO)
/xlators/cluster/dht/src/dht-selfheal.c: 1068 in dht_selfheal_layout_new_directory()
** CID 1226163: Division or modulo by zero (DIVIDE_BY_ZERO)
/xlators/cluster/dht/src/dht-selfheal.c: 1062 in dht_selfheal_layout_new_directory()
** CID 1226165: Null pointer dereference (FORWARD_NULL)
/libglusterfs/src/client_t.c: 294 in gf_client_get()
/libglusterfs/src/client_t.c: 294 in gf_client_get()
** CID 1226177: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-dir-write.c: 181 in ec_manager_create()
** CID 1226176: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-generic.c: 911 in ec_manager_lookup()
** CID 1226175: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-inode-read.c: 671 in ec_manager_open()
** CID 1226174: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-inode-write.c: 1366 in ec_manager_truncate()
** CID 1226173: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-inode-write.c: 2022 in ec_manager_writev()
** CID 1226172: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-locks.c: 218 in ec_manager_entrylk()
** CID 1226171: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-locks.c: 649 in ec_manager_inodelk()
** CID 1226170: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-locks.c: 1134 in ec_manager_lk()
** CID 1226169: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-inode-read.c: 1239 in ec_manager_readv()
** CID 1226168: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-dir-read.c: 366 in ec_manager_readdir()
** CID 1226167: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-heal.c: 1164 in ec_manager_heal()
** CID 1226166: Missing break in switch (MISSING_BREAK)
/xlators/cluster/ec/src/ec-heal.c: 1224 in ec_manager_heal()
** CID 1226180: Data race condition (MISSING_LOCK)
/xlators/cluster/ec/src/ec-heal.c: 945 in ec_heal_needs_data_rebuild()
** CID 1226179: Data race condition (MISSING_LOCK)
/xlators/cluster/ec/src/ec-heal.c: 94 in ec_heal_lookup_resume()
** CID 1226178: Data race condition (MISSING_LOCK)
/xlators/cluster/ec/src/ec-heal.c: 93 in ec_heal_lookup_resume()
** CID 1226181: Thread deadlock (ORDER_REVERSAL)
/xlators/cluster/ec/src/ec-heal.c: 458 in ec_heal_init()

*** CID 1226162: Logically dead code (DEADCODE)
/xlators/cluster/ec/src/ec-method.c: 119 in ec_method_decode()
113    }
114    k = 0;
115    for (i = 0; i < columns; i++)
116    {
117        while ((mask & 1) != 0)
118        {
    CID 1226162: Logically dead code (DEADCODE) Execution cannot reach this statement "k++;".
119            k++;
120            mask >>= 1;
121        }
122        mtx[k][columns - 1] = 1;
123        for (j = columns - 1; j > 0; j--)
124        {

*** CID 1226164: Division or modulo by zero (DIVIDE_BY_ZERO)
/xlators/cluster/dht/src/dht-selfheal.c: 1068 in dht_selfheal_layout_new_directory()
1062        chunk = ((unsigned long) 0xffffffff) / total_size;
1063        gf_log (this->name, GF_LOG_INFO,
1064                "chunk size = 0xffffffff / %u = 0x%x",
1065                total_size, chunk);
1066    }
1067    else {
    CID 1226164: Division or modulo by zero (DIVIDE_BY_ZERO) In expression "4294967295UL / bricks_used", division by expression "bricks_used" which may be zero has undefined behavior.
1068        chunk = ((unsigned long) 0xffffffff) / bricks_used;
1069    }
1070
1071    start_subvol = dht_selfheal_layout_alloc_start (this, loc, layout);
1072
1073    /* clear out the range, as we are re-computing here */

*** CID 1226163: Division or modulo by zero (DIVIDE_BY_ZERO)
/xlators/cluster/dht/src/dht-selfheal.c: 1062 in
Re: [Gluster-devel] glusterfs-3.4.5beta2 released
On 07/09/2014 12:11 AM, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.5beta2.tar.gz This release is made off jenkins-release-80 -- Gluster Build System

RPMs are available for el5, el6, el7, f19, f20, f21 at download.gluster.org [1] with yum repos.

[1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.5beta2/

Thanks, Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] Initiative to increase developer participation
On 05/30/2014 01:16 PM, Niels de Vos wrote:

snip

Documentation in the wiki, mainly these pages:
- http://www.gluster.org/community/documentation/index.php/Main_Page#Developers
- http://www.gluster.org/community/documentation/index.php/Developers
- Over time, turn these newcomers into experienced GlusterFS developers :-)

Maintainers, could you please come up with the initial list of bugs by next Wednesday, before the community meeting?

If we mark bugs with the EasyFix keyword (see below), all such bugs can be listed with a simple Bugzilla query:
- https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&keywords=EasyFix&product=GlusterFS

Niels, could you send out the guidelines for marking bugs as EasyFix, and also the wiki link for backports?

To mark a bug as easy to fix, open the bug and add 'EasyFix' in the 'Keywords' field. Of course, it would help any newcomer if there is a comment on where (i.e. which source file, function) the changes are needed. For example, https://bugzilla.redhat.com/1100204 contains a clear description of what is needed, and where.

For the stable branches (release-3.4 and release-3.5 in the git repository), it often is needed to 'backport' a change from the current development branch (master). Backporting is relatively easy for many changes, and we have started to document the steps here:
- http://www.gluster.org/community/documentation/index.php/Backport_Guidelines

The Backport Wishlist is a list of patches/bugs that are proposed candidates for backporting. When a backport request has been filed, a bug for this backport and the release-version should be created. Depending on the change, the EasyFix keyword could be set by the person creating the bug (or by a developer or sub-maintainer after a review).

snip

We have started a page in the Gluster wiki about EasyFix bugs [1]. It contains the initial information for developers who want to fix easy bugs in GlusterFS.

[1] http://www.gluster.org/community/documentation/index.php/EasyFix_Bugs

Happy hacking!!
Thanks, Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regarding message for '-1' on gerrit
On 07/07/2014 03:11 PM, Justin Clift wrote: On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote: On 07/06/2014 11:05 PM, Vijay Bellur wrote: On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote: hi Justin/Vijay, I always felt '-1' saying 'I prefer you didn't submit this' is a bit harsh. Most of the time all it means is 'Needs some more changes'. Do you think we can change this message? The message can be changed. What would everyone like to see as appropriate messages accompanying values '-1' and '-2'? For '-1': 'Please address the comments and resubmit.' +1 That sounds good. :) I am not sure about '-2'. Maybe something like 'I have strong doubts about this approach'? +1 (seems to reflect its usage) Thanks to Pranith for bringing it up :). -Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1 released
On 06/24/2014 03:45 PM, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1.tar.gz This release is made off jenkins-release-73 -- Gluster Build System ___ Gluster-users mailing list gluster-us...@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users RPMs for el5-7 (RHEL, CentOS, etc.) are available at download.gluster.org [1]. [1] http://download.gluster.org/pub/gluster/glusterfs/LATEST/ Thanks, Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
FYI,

To fix these Coverity issues, please check the below link for how-to and guidelines: http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks, Lala

-------- Original Message --------
Subject: New Defects reported by Coverity Scan for GlusterFS
Date: Sun, 15 Jun 2014 23:52:47 -0700
From: scan-ad...@coverity.com

Hi,

Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan.

Defect(s) Reported-by: Coverity Scan
Showing 8 of 8 defect(s)

** CID 1223039: Dereference after null check (FORWARD_NULL)
/xlators/features/changelog/src/changelog.c: 2057 in init()
** CID 1223041: Data race condition (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2768 in init()
** CID 1223040: Data race condition (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2770 in init()
** CID 1223046: Resource leak (RESOURCE_LEAK)
/xlators/features/snapview-server/src/snapview-server.c: 378 in mgmt_get_snapinfo_cbk()
** CID 1223045: Resource leak (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 3826 in glusterd_update_fstype()
** CID 1223044: Resource leak (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-snapshot.c: 5503 in glusterd_snapshot_config_commit()
** CID 1223043: Resource leak (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-geo-rep.c: 1497 in _get_slave_status()
** CID 1223042: Resource leak (RESOURCE_LEAK)
/xlators/mgmt/glusterd/src/glusterd-geo-rep.c: 1035 in _get_status_mst_slv()

*** CID 1223039: Dereference after null check (FORWARD_NULL)
/xlators/features/changelog/src/changelog.c: 2057 in init()
2051                GF_FREE (priv->changelog_brick);
2052                GF_FREE (priv->changelog_dir);
2053                if (cond_lock_init)
2054                        changelog_pthread_destroy (priv);
2055                GF_FREE (priv);
2056        }
    CID 1223039: Dereference after null check (FORWARD_NULL) Dereferencing null pointer "this".
2057                this->private = NULL;
2058        } else
2059                this->private = priv;
2060
2061        return ret;
2062 }

*** CID 1223041: Data race condition (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2768 in init()
2762                goto out;
2763
2764        this->private = priv;
2765
2766        GF_OPTION_INIT ("volname", priv->volname, str, out);
2767        pthread_mutex_init (&(priv->snaplist_lock), NULL);
    CID 1223041: Data race condition (MISSING_LOCK) Accessing "priv->is_snaplist_done" without holding lock "svs_private.snaplist_lock". Elsewhere, "priv->is_snaplist_done" is accessed with "svs_private.snaplist_lock" held 2 out of 2 times.
2768        priv->is_snaplist_done = 0;
2769        priv->num_snaps = 0;
2770        snap_worker_resume = _gf_false;
2771
2772        /* get the list of snaps first to return to client xlator */
2773        ret = svs_get_snapshot_list (this);

*** CID 1223040: Data race condition (MISSING_LOCK)
/xlators/features/snapview-server/src/snapview-server.c: 2770 in init()
2764        this->private = priv;
2765
2766        GF_OPTION_INIT ("volname", priv->volname, str, out);
2767        pthread_mutex_init (&(priv->snaplist_lock), NULL);
2768        priv->is_snaplist_done = 0;
2769        priv->num_snaps = 0;
    CID 1223040: Data race condition (MISSING_LOCK) Accessing "snap_worker_resume" without holding lock "mutex". Elsewhere, "snap_worker_resume" is accessed with "mutex" held 3 out of 3 times.
2770        snap_worker_resume = _gf_false;
2771
2772        /* get the list of snaps first to return to client xlator */
2773        ret = svs_get_snapshot_list (this);
2774        if (ret) {
2775                gf_log (this->name, GF_LOG_ERROR,

*** CID 1223046: Resource leak (RESOURCE_LEAK)
/xlators/features/snapview-server/src/snapview-server.c: 378 in mgmt_get_snapinfo_cbk()
372        free (rsp.op_errstr);
373
374        if (myframe)
375                SVS_STACK_DESTROY (myframe);
376
377 error_out:
    CID 1223046: Resource leak (RESOURCE_LEAK) Variable "dirents" going out of scope leaks the storage it points to.
378        return ret;
379 }
380
381 int
382 svs_get_snapshot_list (xlator_t *this)
383 {
[Gluster-devel] Fwd: New Defects reported by Coverity Scan for GlusterFS
FYI,

To fix these Coverity issues, please check the below link for guidelines: http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis#Coverity

Thanks, Lala

-------- Original Message --------
Subject: New Defects reported by Coverity Scan for GlusterFS
Date: Wed, 11 Jun 2014 06:36:17 -0700
From: scan-ad...@coverity.com

Hi,

Please find the latest report on new defect(s) introduced to GlusterFS found with Coverity Scan.

Defect(s) Reported-by: Coverity Scan
Showing 2 of 2 defect(s)

** CID 1222523: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/mgmt/glusterd/src/glusterd-utils.c: 3728 in gd_import_new_brick_snap_details()
/xlators/mgmt/glusterd/src/glusterd-utils.c: 3737 in gd_import_new_brick_snap_details()
** CID 1222524: Structurally dead code (UNREACHABLE)
/cli/src/cli-rpc-ops.c: 8796 in gf_cli_snapshot_for_status()

*** CID 1222523: Copy into fixed size buffer (STRING_OVERFLOW)
/xlators/mgmt/glusterd/src/glusterd-utils.c: 3728 in gd_import_new_brick_snap_details()
3722        snprintf (key, sizeof (key), "%s.device_path", prefix);
3723        ret = dict_get_str (dict, key, &snap_device);
3724        if (ret) {
3725                gf_log (this->name, GF_LOG_ERROR, "%s missing in payload", key);
3726                goto out;
3727        }
    CID 1222523: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 4096 byte fixed-size string "brickinfo->device_path" by copying "snap_device" without checking the length.
3728        strcpy (brickinfo->device_path, snap_device);
3729
3730        memset (key, 0, sizeof (key));
3731        snprintf (key, sizeof (key), "%s.mount_dir", prefix);
3732        ret = dict_get_str (dict, key, &mount_dir);
3733        if (ret) {

/xlators/mgmt/glusterd/src/glusterd-utils.c: 3737 in gd_import_new_brick_snap_details()
3731        snprintf (key, sizeof (key), "%s.mount_dir", prefix);
3732        ret = dict_get_str (dict, key, &mount_dir);
3733        if (ret) {
3734                gf_log (this->name, GF_LOG_ERROR, "%s missing in payload", key);
3735                goto out;
3736        }
    CID 1222523: Copy into fixed size buffer (STRING_OVERFLOW) You might overrun the 4096 byte fixed-size string "brickinfo->mount_dir" by copying "mount_dir" without checking the length.
3737        strcpy (brickinfo->mount_dir, mount_dir);
3738
3739 out:
3740        return ret;
3741 }
3742

*** CID 1222524: Structurally dead code (UNREACHABLE)
/cli/src/cli-rpc-ops.c: 8796 in gf_cli_snapshot_for_status()
8790                        dict_unref (snap_dict);
8791                }
8792        }
8793 out:
8794        return ret;
8795
    CID 1222524: Structurally dead code (UNREACHABLE) This code cannot be reached: "if (ret && snap_dict) dic...".
8796        if (ret && snap_dict)
8797                dict_unref (snap_dict);
8798 }
8799
8800 int32_t
8801 gf_cli_snapshot (call_frame_t *frame, xlator_t *this,

To view the defects in Coverity Scan visit, http://scan.coverity.com/projects/987?tab=overview

To unsubscribe from the email notification for new defects, http://scan5.coverity.com/cgi-bin/unsubscribe.py

___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Patch Backport Guidelines For Stable Branch
Hi All,

We realised (Humble, Niels and I) that we don't have any documentation regarding the $sub while discussing a backport in #gluster-dev. The wiki page below [1] is the result of this. Comments and feedback are welcome.

[1] http://www.gluster.org/community/documentation/index.php/Backport_Guidelines

Thanks, Lala ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] regression tests and DEBUG flags
On 05/22/2014 08:37 AM, Pranith Kumar Karampuri wrote: hi, I think we should run the regression tests with DEBUG builds so that GF_ASSERTs are caught.

Excellent idea! Let's do it ASAP.

I will work with Justin to make sure we don't see too many failures before turning it on. I also want the regression tests to catch memory corruption (invalid read/write of deallocated memory). For that I sent the following patch http://review.gluster.com/7835 to minimize the effects of mem-pool. Please let me know your comments. A review on the patch would be nice too :-).

Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel