Re: [Gluster-devel] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Prashanth Pai
On Wednesday, July 5, 2017, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> On 5 Jul 2017 11:31 AM, "Kaushal M" wrote:
>
> - Preliminary support for volume expansion has been added. (Note that
> rebalancing is not available yet)
>
>
> What do you mean by this?
> Any differences in volume expansion from the current architecture?
>

No. It's still the same.
Glusterd2 hasn't implemented volume rebalancing yet. It will be there,
eventually.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster statedumps and mallinfo

2017-07-05 Thread Vijay Bellur

On 07/03/2017 05:55 AM, Raghavendra Gowdappa wrote:

Hi,

Recently I observed one of the mallinfo fields had a negative value.

DUMP-START-TIME: 2017-06-09 10:59:43.747440

[mallinfo]
mallinfo_arena=-1517670400
mallinfo_ordblks=8008214
mallinfo_smblks=0
mallinfo_hblks=1009
mallinfo_hblkhd=863453184
mallinfo_usmblks=0
mallinfo_fsmblks=0
mallinfo_uordblks=1473090528
mallinfo_fordblks=1304206368
mallinfo_keepcost=2232208

As seen above, mallinfo_arena is negative.

On probing further, I came across posts saying that mallinfo is not the ideal 
interface for obtaining metadata about memory allocated by malloc [1]. Instead, 
two alternatives were suggested: malloc_stats and malloc_info.


Good find!



* Which of the above gives an accurate and simple picture of the memory 
consumption of glusterfs?
* Should we deprecate mallinfo and retain only the malloc_stats and malloc_info 
outputs? IOW, which of these should be retained in the statedump?


Yes, let us deprecate mallinfo() on platforms that support malloc_info().

man 3 malloc_info states:

"The malloc_info() function is designed to address deficiencies in 
malloc_stats(3) and mallinfo(3)."


Hence adding malloc_info() to statedump looks like a better option to me.
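
As a rough illustration only (this is not the actual statedump code), the
sketch below captures malloc_info() output into a memory buffer with
open_memstream(). Unlike mallinfo(3), whose struct fields are plain ints and
therefore wrap to negative values once the arena crosses ~2GB, malloc_info()
emits its sizes as text/XML, so it does not suffer from that overflow.
Requires glibc >= 2.10.

/* Hedged sketch: capture malloc_info() output for a statedump-like report.
 * Not glusterfs code; purely illustrative. */
#define _GNU_SOURCE
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = NULL;
    size_t len = 0;
    FILE *fp = open_memstream(&buf, &len);  /* collect output in memory */

    if (!fp)
        return 1;

    /* options must be 0; writes an XML description of the malloc state */
    if (malloc_info(0, fp) != 0) {
        fclose(fp);
        free(buf);
        return 1;
    }

    fclose(fp);                             /* flushes buf and len */
    printf("[malloc_info]\n%s\n", buf);     /* would go into the statedump */
    free(buf);
    return 0;
}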

Regards,
Vijay



Since I have a limited understanding of the glibc memory allocator, I am 
reaching out to the wider community for feedback.

[1] http://udrepper.livejournal.com/20948.html

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS v3.12 - Nearing deadline for branch out

2017-07-05 Thread Shyam

Further to this,

1) I cleared up the projects lane [1] and also issues marked for 3.12 [2]
  - I did this optimistically, moving everything to 3.12 (both from a 
projects and a milestones perspective), so if something is not making 
it, drop a note, and we can clear up the tags accordingly.


2) Reviews posted and open against the issues in [1] can be viewed here [3]

  - Request maintainers and contributors to take a look at these and 
accelerate the reviews, to meet the feature cut-off deadline


  - Request feature owners to ensure that their patches are listed in 
the link [3]


3) Finally, we need a status of open issues to understand how we can 
help. Requesting all feature owners to post the same (as Amar has 
requested).


Thanks,
Shyam

[1] Project lane: https://github.com/gluster/glusterfs/projects/1
[2] Issues with 3.12 milestone: 
https://github.com/gluster/glusterfs/milestone/4
[3] Reviews needing attention: 
https://review.gluster.org/#/q/starredby:srangana%2540redhat.com


"Releases are made better together"

On 07/05/2017 03:18 AM, Amar Tumballi wrote:

All,

We have around 10 working days remaining before branching out for the 3.12
release, after which we will have just 15 more days open for 'critical'
features to get in, for which there should be more detailed proposals.

If you have a few things planned but haven't taken them to completion yet,
OR you have sent some patches that are not yet reviewed, start whining
now and get them in.

Thanks,
Amar

--
Amar Tumballi (amarts)


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Compilation with gcc 7.x

2017-07-05 Thread Csaba Henk
Hi Amar,

On Wed, Jul 5, 2017 at 9:15 AM, Amar Tumballi  wrote:
> Csaba, please open a github issue for it, also attach the log there. Thanks

Please find it here: https://github.com/gluster/glusterfs/issues/259

Csaba
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusto failures with dispersed volumes + Samba

2017-07-05 Thread Amar Tumballi
On Wed, Jul 5, 2017 at 6:16 PM, Ashish Pandey  wrote:

> Hi Nigel,
>
> As Pranith has already mentioned, we are getting different gfids in loc
> and loc->inode.
> It looks like an issue with DHT. If a revalidate fails for a gfid, a fresh
> lookup should be done.
>
> I don't know if it is related or not but a similar bug was fixed by
> Pranith
> https://review.gluster.org/#/c/16986/
>
> Ashish
>
>
Thanks for this info, Ashish & Pranith. Also thanks for looking into this,
Anoop.

Nigel, let's retry these things and see if it's still the case! If not,
great; but if it is, I will help you sort this out!

Regards,
Amar


>
>
> --
> *From: *"Pranith Kumar Karampuri" 
> *To: *"Anoop C S" 
> *Cc: *"gluster-devel" 
> *Sent: *Thursday, June 29, 2017 7:36:45 PM
> *Subject: *Re: [Gluster-devel] Glusto failures with dispersed volumes +
> Samba
>
>
>
>
> On Thu, Jun 29, 2017 at 6:49 PM, Anoop C S  wrote:
>
>> On Thu, 2017-06-29 at 16:35 +0530, Nigel Babu wrote:
>> > Hi Pranith and Xavi,
>> >
>> > We seem to be running into a problem with glusto tests when we try to
>> run them against dispersed
>> > volumes over a CIFS mount[1].
>>
>> Is this a new test case? If not was it running successfully before?
>>
>> > You can find the logs attached to the job [2].
>>
>> VFS stat call failures are seen in Samba logs:
>>
>> [2017/06/29 11:01:55.959374,  0] ../source3/modules/vfs_
>> glusterfs.c:870(vfs_gluster_stat)
>>   glfs_stat(.) failed: Invalid argument
>>
>> I could also see the following errors (repeatedly) in the glusterfs client
>> logs:
>>
>> [2017-06-29 10:33:43.031198] W [MSGID: 122019]
>> [ec-helpers.c:412:ec_loc_gfid_check] 0-
>> testvol_distributed-dispersed-disperse-0: Mismatching GFID's in loc
>> [2017-06-29 10:33:43.031303] I [MSGID: 109094] 
>> [dht-common.c:1016:dht_revalidate_cbk]
>> 0-
>> testvol_distributed-dispersed-dht: Revalidate: subvolume
>> testvol_distributed-dispersed-disperse-0
>> for /user11 (gfid = 665c515b-3940-480f-af7c-6aaf37731eaa) returned -1
>> [Invalid argument]
>>
>
> This log basically says that EC received a loc which has different gfids in
> loc->inode->gfid and loc->gfid.
>
>
>>
>> > I've triggered a fresh job[3] to confirm that it only fails under these
>> > particular conditions, and that certainly seems to be the case. The job is
>> > currently ongoing, so you may want to take a look at how it went when you
>> > get some time.
>> >
>> > Let me know if you have any questions or need more debugging
>> information.
>> >
>> > [1]: https://ci.centos.org/job/gluster_glusto/325/testReport/
>> > [2]: https://ci.centos.org/job/gluster_glusto/325/artifact/
>> > [3]: https://ci.centos.org/job/gluster_glusto/326/console
>> >
>> >
>> > ___
>> > Gluster-devel mailing list
>> > Gluster-devel@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Gandalf Corvotempesta
On 5 Jul 2017 11:31 AM, "Kaushal M" wrote:

- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)


What do you mean by this?
Any differences in volume expansion from the current architecture?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Glusto failures with dispersed volumes + Samba

2017-07-05 Thread Ashish Pandey
Hi Nigel, 

As Pranith has already mentioned, we are getting different gfids in loc and 
loc->inode. 
It looks like an issue with DHT. If a revalidate fails for a gfid, a fresh 
lookup should be done. 
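
For reference, here is a minimal stand-alone sketch of the kind of consistency
check behind the "Mismatching GFID's in loc" warning seen in the client logs.
This is not the actual ec-helpers.c code; libuuid stands in for gluster's
gf_uuid_* helpers, and the two parameters stand in for loc->gfid and
loc->inode->gfid.

#include <stdio.h>
#include <uuid/uuid.h>   /* link with -luuid */

/* Sketch only: returns 1 if the two GFIDs are consistent, 0 on mismatch. */
static int gfid_check(const uuid_t loc_gfid, const uuid_t inode_gfid)
{
    if (uuid_is_null(loc_gfid) || uuid_is_null(inode_gfid))
        return 1;                          /* nothing to compare yet */

    if (uuid_compare(loc_gfid, inode_gfid) != 0) {
        fprintf(stderr, "Mismatching GFID's in loc\n");
        return 0;                          /* caller would fail with EINVAL */
    }
    return 1;
}

int main(void)
{
    uuid_t a, b;
    uuid_generate(a);
    uuid_generate(b);
    printf("same: %d, different: %d\n", gfid_check(a, a), gfid_check(a, b));
    return 0;
}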

I don't know if it is related or not but a similar bug was fixed by Pranith 
https://review.gluster.org/#/c/16986/ 

Ashish 



- Original Message -

From: "Pranith Kumar Karampuri"  
To: "Anoop C S"  
Cc: "gluster-devel"  
Sent: Thursday, June 29, 2017 7:36:45 PM 
Subject: Re: [Gluster-devel] Glusto failures with dispersed volumes + Samba 



On Thu, Jun 29, 2017 at 6:49 PM, Anoop C S < anoo...@autistici.org > wrote: 


On Thu, 2017-06-29 at 16:35 +0530, Nigel Babu wrote: 
> Hi Pranith and Xavi, 
> 
> We seem to be running into a problem with glusto tests when we try to run 
> them against dispersed 
> volumes over a CIFS mount[1]. 

Is this a new test case? If not was it running successfully before? 

> You can find the logs attached to the job [2]. 

VFS stat call failures are seen in Samba logs: 

[2017/06/29 11:01:55.959374, 0] 
../source3/modules/vfs_glusterfs.c:870(vfs_gluster_stat) 
glfs_stat(.) failed: Invalid argument 

I could also see the following errors (repeatedly) in the glusterfs client logs: 

[2017-06-29 10:33:43.031198] W [MSGID: 122019] 
[ec-helpers.c:412:ec_loc_gfid_check] 0- 
testvol_distributed-dispersed-disperse-0: Mismatching GFID's in loc 
[2017-06-29 10:33:43.031303] I [MSGID: 109094] 
[dht-common.c:1016:dht_revalidate_cbk] 0- 
testvol_distributed-dispersed-dht: Revalidate: subvolume 
testvol_distributed-dispersed-disperse-0 
for /user11 (gfid = 665c515b-3940-480f-af7c-6aaf37731eaa) returned -1 [Invalid 
argument] 




This log basically says that EC received a loc which has different gfids in 
loc->inode->gfid and loc->gfid. 



> I've triggered a fresh job[3] to confirm that it only fails under these 
> particular conditions, and that certainly seems to be the case. The job is 
> currently ongoing, so you may want to take a look at how it went when you 
> get some time. 
> 
> Let me know if you have any questions or need more debugging information. 
> 
> [1]: https://ci.centos.org/job/gluster_glusto/325/testReport/ 
> [2]: https://ci.centos.org/job/gluster_glusto/325/artifact/ 
> [3]: https://ci.centos.org/job/gluster_glusto/326/console 
> 
> 
> ___ 
> Gluster-devel mailing list 
> Gluster-devel@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-devel 






-- 
Pranith 

___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Suggest how to recognize the time when heal is triggered by using events

2017-07-05 Thread Pranith Kumar Karampuri
The patch looks good to me. Do you want to send it on Gerrit?

On Wed, Jul 5, 2017 at 6:49 AM, Taehwa Lee  wrote:

> Hello, Karampuri.
>
>
> I've been developing products using glusterfs with my co-workers for almost
> 2 years.
>
> I got a problem that the products cannot recognize the time when heal is
> triggered.
>
> I think healing definitely affects the performance of a glusterfs volume.
>
> So, we should monitor whether healing is in progress or not.
>
>
> To monitor it, the events API is one of the best ways, I guess.
>
> So I have created an issue, including a patch for it, on Bugzilla:
> https://bugzilla.redhat.com/show_bug.cgi?id=1467543
>
>
>
> Can I get some feedback?
>
> Thanks in advance.
>
>
> Best regards.
>
>
> -
> 이 태 화
> Taehwa Lee
> Gluesys Co.,Ltd.
> alghost@gmail.com
> 010-3420-6114, 070-8785-6591
> -
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-07-05 Thread Pranith Kumar Karampuri
On Tue, Jul 4, 2017 at 2:26 PM, Xavier Hernandez 
wrote:

> Hi Pranith,
>
> On 03/07/17 08:33, Pranith Kumar Karampuri wrote:
>
>> Xavi,
>>   Now that the change has been reverted, we can resume this
>> discussion and decide on the exact format that considers, tier, dht,
>> afr, ec. People working geo-rep/dht/afr/ec had an internal discussion
>> and we all agreed that this proposal would be a good way forward. I
>> think once we agree on the format and decide on the initial
>> encoding/decoding functions of the xattr and this change is merged, we
>> can send patches on afr/ec/dht and geo-rep to take it to closure.
>>
>> Could you propose the new format you have in mind that considers all of
>> the xlators?
>>
>
> My idea was to create a new xattr not bound to any particular function but
> which could give enough information to be used in many places.
>
> Currently we have another attribute called glusterfs.pathinfo that returns
> hierarchical information about the location of a file. Maybe we can extend
> this to unify all these attributes into a single feature that could be used
> for multiple purposes.
>
> Since we have time to discuss it, I would like to design it with more
> information than we have already talked about.
>
> First of all, the amount of information that this attribute can contain is
> quite big if we expect to have volumes with thousands of bricks. Even in
> the simplest case of returning only a UUID, we can easily go beyond the
> limit of 64KB.
>
> Consider also, for example, what shard should return when pathinfo is
> requested for a file. Probably it should return a list of shards, each one
> with all its associated pathinfo. We are talking about big amounts of data
> here.
>
> I think this kind of information doesn't fit very well in an extended 
> attribute. Another thing to consider is that most probably the requester of 
> the data only needs a fragment of it, so we are generating big amounts of data 
> only to be parsed and reduced later, dismissing most of it.
>
> What do you think about using a very special virtual file to manage all
> this information? It could be easily read using normal read fops, so it
> could manage big amounts of data easily. Also, by accessing only some parts
> of the file we could go directly where we want, avoiding the read of all
> the remaining data.
>
> A very basic idea could be this:
>
> Each xlator would have a reserved area of the file. We can reserve up to
> 4GB per xlator (32 bits). The remaining 32 bits of the offset would
> indicate the xlator we want to access.
>
> At offset 0 we have generic information about the volume. One of the 
> things that this information should include is a basic hierarchy of the 
> whole volume and the offset for each xlator.
>
> After reading this, the user will seek to the desired offset and read the
> information related to the xlator it is interested in.
>
> All the information should be stored in a format easily extensible that
> will be kept compatible even if new information is added in the future (for
> example doing special mappings of the 32 bits offsets reserved for the
> xlator).
>
> For example we can reserve the first megabyte of the xlator area to have a
> mapping of attributes with its respective offset.
>
> I think that using a binary format would simplify all this a lot.
>
> Do you think this is a way to explore or should I stop wasting time here?
>

I think this just became a very big feature :-). Shall we just live with it
the way it is now?
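
For illustration only, here is a hypothetical sketch of the 64-bit offset
split described in the proposal quoted above: the high 32 bits select the
xlator, the low 32 bits address up to 4GB inside that xlator's area. This is
not an agreed or implemented design, and the function name is made up.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical: pack an xlator id and an offset within its 4GB area
 * into the single 64-bit file offset used when reading the virtual file. */
static uint64_t make_offset(uint32_t xlator_id, uint32_t area_offset)
{
    return ((uint64_t)xlator_id << 32) | area_offset;
}

int main(void)
{
    /* e.g. read the attribute map at the start of xlator #5's area */
    uint64_t off = make_offset(5, 0);
    printf("seek to 0x%016llx\n", (unsigned long long)off);
    return 0;
}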


>
> Xavi
>
>
>>
>>
>> On Wed, Jun 21, 2017 at 2:08 PM, Karthik Subrahmanya
>> > wrote:
>>
>>
>>
>> On Wed, Jun 21, 2017 at 1:56 PM, Xavier Hernandez
>> > wrote:
>>
>> That's ok. I'm currently unable to write a patch for this on ec.
>>
>> Sunil is working on this patch.
>>
>> ~Karthik
>>
>> If no one can do it, I can try to do it in 6 - 7 hours...
>>
>> Xavi
>>
>>
>> On Wednesday, June 21, 2017 09:48 CEST, Pranith Kumar Karampuri
>> > wrote:
>>
>>
>>>
>>> On Wed, Jun 21, 2017 at 1:00 PM, Xavier Hernandez
>>> > wrote:
>>>
>>> I'm ok with reverting node-uuid content to the previous
>>> format and create a new xattr for the new format.
>>> Currently, only rebalance will use it.
>>>
>>> Only thing to consider is what can happen if we have a
>>> half upgraded cluster where some clients have this change
>>> and some not. Can rebalance work in this situation ? if
>>> so, could there be any issue ?
>>>
>>>
>>> I think there shouldn't be any problem, because this is
>>> in-memory xattr so layers below afr/ec will only see node-uuid
>>> xattr.
>>> This also 

Re: [Gluster-devel] Disperse volume : Sequential Writes

2017-07-05 Thread Pranith Kumar Karampuri
On Tue, Jul 4, 2017 at 1:39 PM, Xavier Hernandez 
wrote:

> Hi Pranith,
>
> On 03/07/17 05:35, Pranith Kumar Karampuri wrote:
>
>> Ashish, Xavi,
>> I think it is better to implement this change as a separate
>> read-after-write caching xlator which we can load between EC and the client
>> xlator. That way EC will not get a lot more functionality than necessary,
>> and maybe this xlator can be used somewhere else in the stack if
>> possible.
>>
>
> while this seems a good way to separate functionalities, it has a big
> problem. If we add a caching xlator between ec and *all* of its subvolumes,
> it will only be able to cache encoded data. So, when ec needs the "cached"
> data, it will need to issue a request to each of its subvolumes and compute
> the decoded data before being able to use it, so we don't avoid the
> decoding overhead.
>
> Also, if we want to make the xlator generic, it will probably cache a lot
> more data than ec really needs. Increasing memory footprint considerably
> for no real use.
>
> Additionally, this new xlator will need to guarantee that the cached data
> is current, so it will need its own locking logic (that would be another
> copy of the existing logic in one of the current xlators) which is
> slow and difficult to maintain, or it will need to intercept and reuse
> locking calls from parent xlators, which can be quite complex since we have
> multiple xlator levels where locks can be taken, not only ec.
>
> This is a relatively simple change to make inside ec, but a very complex
> change (IMO) if we want to do it as a stand-alone xlator and be generic
> enough to be reused and work safely in other places of the stack.
>
> If we want to separate functionalities I think we should create a new
> concept of xlator which is transversal to the "traditional" xlator stack.
>
> Current xlators are linear in the sense that each one operates only at one
> place (it can be moved by reconfiguration, but once instantiated, it always
> works at the same place) and passes data to the next one.
>
> A transversal xlator (or maybe a service xlator would be better) would be
> one not bound to any place of the stack, but could be used by all other
> xlators to implement some service, like caching, multithreading, locking,
> ... these are features that many xlators need but cannot use easily (nor
> efficiently) if they are implicitly implemented in some specific place of
> the stack outside its control.
>
> The transaction framework we already talked about could be thought of as one
> of these service xlators. Multithreading could also benefit from this approach
> because xlators would have more control over which things can be processed
> by a background thread and which ones cannot. Probably there are other
> features that could benefit from this approach.
>
> In the case of brick multiplexing, if some xlators are removed from each
> stack and loaded as global services, most probably the memory footprint
> will be lower and the resource usage more optimized.
>

I like the service xlator approach, but I don't think we have enough time
to make it operational in the short term. Let us go with implementing this
feature in EC for now. I didn't realize the extra cost of decoding when I
thought about the separation, so I guess we will stick to the old idea for
now.


>
> Just an idea...
>
> Xavi
>
>
>> On Fri, Jun 16, 2017 at 4:19 PM, Ashish Pandey > > wrote:
>>
>>
>> I think it should be done as we have agreement on basic design.
>>
>> 
>> 
>> *From: *"Pranith Kumar Karampuri" > >
>> *To: *"Xavier Hernandez" > >
>> *Cc: *"Ashish Pandey" > >, "Gluster Devel"
>> >
>> *Sent: *Friday, June 16, 2017 3:50:09 PM
>> *Subject: *Re: [Gluster-devel] Disperse volume : Sequential Writes
>>
>>
>>
>>
>> On Fri, Jun 16, 2017 at 3:12 PM, Xavier Hernandez
>> > wrote:
>>
>> On 16/06/17 10:51, Pranith Kumar Karampuri wrote:
>>
>>
>>
>> On Fri, Jun 16, 2017 at 12:02 PM, Xavier Hernandez
>> 
>> >
>> >> wrote:
>>
>> On 15/06/17 11:50, Pranith Kumar Karampuri wrote:
>>
>>
>>
>> On Thu, Jun 15, 2017 at 11:51 AM, Ashish Pandey
>> 
>> >
>> >  

[Gluster-devel] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Kaushal M
After nearly 3 months, we have another preview release for GlusterD-2.0.

The highlights for this release are,
- GD2 now uses an auto scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end-to-end functional testing framework is now available
- And RPMs are available for Fedora >= 25 and EL7.

This release still doesn't provide a CLI. The HTTP ReST API is the
only access method right now.

Prebuilt binaries are available from [1]. RPMs have been built in
Fedora Copr and available at [2]. A Docker image is also available
from [3].

Try this release out and let us know if you face any problems at [4].

The GD2 development team is re-organizing and kicking off development
again, so regular updates can be expected.

Cheers,
Kaushal and the GD2 developers.

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-7
[2]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[3]: https://hub.docker.com/r/gluster/glusterd2-test/
[4]: https://github.com/gluster/glusterd2/issues
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Compilation with gcc 7.x

2017-07-05 Thread Niels de Vos
On Wed, Jul 05, 2017 at 12:45:11PM +0530, Amar Tumballi wrote:
> Csaba,
> 
> Thanks for looking into this.
> 
> On Tue, Jul 4, 2017 at 5:30 PM, Csaba Henk  wrote:
> 
> > Hi list,
> >
> > I've compiled glusterfs with gcc 7.x (to be precise, with 7.1.1),
> > which is soon to get its prime time as the C compiler of
> > Fedora 26.
> >
> > The Release Notes (https://gcc.gnu.org/gcc-7/changes.html)
> > give account of a broad list of new and improved warnings...
> > and that shows. While with gcc 6.x the only warning I had
> > is "lchmod is not implemented and will always fail", with
> > gcc 7.x I got 218 warnings altogether. For reference, I
> > attach the excerpted warnings from the compilation output.
> >
> > Went through the logs, and I see it is in the project's interest to fix them.
> 
> Technically, fixing these warnings would be good to reduce our coverity
> warnings too in many cases. I am all for it.
> 
> 
> > Are you aware of this? Is there any plan what to do about it?
> >
> >
> I was not aware of it. Thanks for pointing it out. I propose we fix it
> before the 4.0 release branches out, and start having a gcc 7.x compile job
> as part of smoke.
> 
> We can keep the smoke job non-voting for some time, and then turn the knob
> ON someday, say October 15th, to start voting -1 on any warnings. Others,
> any comments?

We have a similar job that checks for certain string-format warnings [1].
I think it is enabled to vote; otherwise only very few will pay
attention to the test results.

Instead of cluttering the patch reviews with a non-voting job, could the
gcc-7.1 compile results be sent to the mailing list, similar to the Coverity
results? Maybe with some simple statistics at the beginning of the email,
showing how many warnings/errors have been detected? This could be a regular
job that just builds the RPMs within a Fedora Rawhide mock environment. Once
the next gcc version is out, it will be used automatically too (and we'll
test building with the latest headers/libraries as well).

Thanks,
Niels

1. https://build.gluster.org/job/strfmt_errors/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Compilation with gcc 7.x

2017-07-05 Thread Amar Tumballi
Csaba,

Thanks for looking into this.

On Tue, Jul 4, 2017 at 5:30 PM, Csaba Henk  wrote:

> Hi list,
>
> I've compiled glusterfs with gcc 7.x (to be precise, with 7.1.1),
> which is soon to get its prime time as the C compiler of
> Fedora 26.
>
> The Release Notes (https://gcc.gnu.org/gcc-7/changes.html)
> give account of a broad list of new and improved warnings...
> and that shows. While with gcc 6.x the only warning I had
> is "lchmod is not implemented and will always fail", with
> gcc 7.x I got 218 warnings altogether. For reference, I
> attach the excerpted warnings from the compilation output.
>
> Went through the logs, and I see it is in the project's interest to fix them.

Technically, fixing these warnings would also help reduce our Coverity
warnings in many cases. I am all for it.


> Are you aware of this? Is there any plan what to do about it?
>
>
I was not aware of it. Thanks for pointing it out. I propose we fix it
before the 4.0 release branches out, and start having a gcc 7.x compile job
as part of smoke.

We can keep the smoke job non-voting for some time, and then turn the knob
ON someday, say October 15th, to start voting -1 on any warnings. Others,
any comments?
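
For illustration, here is a hypothetical snippet (not taken from the glusterfs
sources or from the attached warning log) showing one of the diagnostics new
in GCC 7, -Wimplicit-fallthrough, and the usual way to silence it:

#include <stdio.h>

static const char *op_name(int op)
{
    switch (op) {
    case 1:
        printf("write-like op\n");
        /* Without the marker comment below, GCC 7 (-Wextra) warns here:
         * "this statement may fall through" [-Wimplicit-fallthrough=].
         * A "fall through" comment (or the fallthrough attribute)
         * documents the intent and silences the warning. */
        /* fall through */
    case 2:
        return "writev";
    default:
        return "unknown";
    }
}

int main(void)
{
    printf("%s\n", op_name(1));
    return 0;
}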

Csaba, please open a GitHub issue for it and attach the log there. Thanks.

Regards,
Amar


> Csaba
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel