[Gluster-devel] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-18 Thread Amar Tumballi
Hi all,

Over the last 12 years of Gluster, we have developed many features, and we continue to support most of them to this day. Along the way, however, we have found better ways of doing some things, and some of these features are no longer actively maintained.

We are now thinking of cleaning up some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e., to be taken out of the codebase entirely in following releases) in the next upcoming release, v5.0. The release notes will provide options for migrating smoothly to the supported configurations. If you are using any of these features, do let us know so that we can help you with the migration. We are also happy to guide new developers who would like to work on the components that are not actively maintained by the current set of developers.

List of features hitting sunset:

‘cluster/stripe’ translator:

This translator was developed very early in the evolution of GlusterFS, and addressed one of the most common questions about distributed filesystems: “What happens if one of my files is bigger than the available brick? Say I have a 2 TB hard drive exported in GlusterFS, and my file is 3 TB.” While it served that purpose, it was very hard to handle failure scenarios and to give our users a really good experience with this feature. Over time, Gluster solved the problem with its ‘Shard’ feature, which addresses the same need in a much better way, on the existing, well-supported stack. Hence the proposal for deprecation. If you are using this feature, do write to us, as it needs a proper migration from the existing volume to a fully supported volume type before you upgrade.

‘storage/bd’ translator:

This feature got into the code base 5 years back with this patch [1]. The plan was to use a block device directly as a brick, which would make it much easier to handle disk-image storage in GlusterFS. As the feature is not receiving further contributions, and we are not seeing any user traction, we would like to propose it for deprecation. If you are using the feature, plan to move to a supported Gluster volume configuration, and have your setup ‘supported’ before upgrading to your new Gluster version.

‘RDMA’ transport support:

Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that time used InfiniBand. Engineers worked with Mellanox to get the technology into GlusterFS for better data migration and data copy. Current-day kernels achieve very good speed with the IPoIB module itself, and the experts in this area have no more bandwidth to maintain the feature, so we recommend migrating your volumes to a TCP (IP-based) network. If you are successfully using the RDMA transport, do get in touch with us so we can prioritize the migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code, which needs to support only one type.

‘Tiering’ feature:

Gluster’s tiering feature was planned to provide an option to keep your ‘hot’ data in a different location than your cold data, so that one can get better performance. While we saw some users for the feature, it needs much more attention to become completely bug free. At this time we do not have any active maintainers for the feature, and hence we suggest taking it out of the ‘supported’ tag. If you are willing to take it up and maintain it, do let us know, and we will be happy to assist you. If you are already using the tiering feature, make sure to run ‘gluster volume tier detach’ on all the bricks before upgrading to the next release. We also recommend features like dm-cache on your LVM setup to get the best performance from the bricks.

‘Quota’:

This is a call-out for the ‘Quota’ feature, to let you all know that it will move to a ‘no new development’ state. While this feature is actively in use by many people, the challenges in the accounting mechanisms involved have made it hard to achieve good performance, and the number of extended-attribute get/set operations performed while using the feature is far from ideal. Hence we recommend our users to move towards setting quota on the backend bricks directly (i.e., XFS project quota), or to use different volumes for different directories, etc. As the feature will not be deprecated immediately, it does not need a migration plan when you upgrade to a newer version, but if you are a new user, we would not recommend enabling the quota feature. By the release dates, we will be publishing a guide to the best alternatives to Gluster’s current quota feature. Note that if you want to contribute to the feature, we have a project-quota based issue open [2]. We are happy to get contributions, and to help in getting a newer approach to Quota.

--

These are our initial set of features which we propose to take out of the ‘fully supported’ category. While we are in the process of making the user/developer experience of the
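For anyone evaluating the XFS project quota alternative suggested for ‘Quota’ above, a minimal sketch of what that looks like on a single brick; the mount point, directory, project ID, and limit below are all hypothetical, and the brick filesystem must be XFS mounted with the ‘prjquota’ option:

```shell
# Assumption: the brick is XFS mounted with project quotas enabled,
# e.g. in /etc/fstab:
#   /dev/sdb1  /bricks/brick1  xfs  defaults,prjquota  0 0

# Assign (hypothetical) project ID 42 to a directory tree on the brick.
xfs_quota -x -c 'project -s -p /bricks/brick1/data 42' /bricks/brick1

# Set a 10 GiB hard block limit for that project.
xfs_quota -x -c 'limit -p bhard=10g 42' /bricks/brick1

# Inspect current per-project usage.
xfs_quota -x -c 'report -p' /bricks/brick1
```

Note that, unlike Gluster’s quota translator, this accounts usage per brick, so on a distributed volume each brick’s limit has to be sized with the volume layout in mind.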

Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Shyam Ranganathan
On 07/18/2018 01:16 PM, Deepshikha Khandelwal wrote:
> Shyam,
> 
> Thank you for pointing this out. I've updated the logs for bug-990028.t test.

Yup, looked at it. The ENOSPC failure is in setxattr on the brick, as we are
attempting to set a lot of them due to the hardlinks to the file. The failure
log is as follows:

[2018-07-18 12:50:07.298478]:++ G_LOG:tests/bugs/posix/bug-990028.t: TEST: 37 ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file45 ++
[2018-07-18 12:50:07.307101] W [MSGID: 113117] [posix-metadata.c:671:posix_set_parent_ctime] 0-patchy-posix: posix parent set mdata failed on file [No such file or directory]
[2018-07-18 12:50:07.322628] W [MSGID: 113093] [posix-gfid-path.c:51:posix_set_gfid2path_xattr] 0-patchy-posix: setting gfid2path xattr failed on /d/backends/brick/file45: key = trusted.gfid2path.4434be659b4d25e4 [No space left on device]
[2018-07-18 12:50:07.322813] I [MSGID: 115062] [server-rpc-fops_v2.c:1089:server4_link_cbk] 0-patchy-server: 333: LINK /file43 (40ef3115-f818-4cc2-a5c3-64875f7a273a) -> ----0001/file45, client: CTX_ID:98c24d79-4889-4aba-bc93-91e1d5d73abe-GRAPH_ID:0-PID:4993-HOST:distributed-testing.8b445247-2057-47e7-894f-41e4a91bb536-PC_NAME:patchy-client-0-RECON_NO:-0, error-xlator: patchy-posix [No space left on device]
[2018-07-18 12:50:07.335223]:++ G_LOG:tests/bugs/posix/bug-990028.t: TEST: 37 ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file46 ++

We need to determine what differs in the backing XFS filesystem between the
instances where this works and the distributed instances (or determine what
mkfs options would create an XFS filesystem that does not run out of space
when adding extended attributes, and apply those to the distributed test
setup).
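One thing that may be worth comparing across the two environments (an assumption on my part, not something verified against the distributed setup) is the inode size the brick XFS was formatted with, since XFS keeps extended attributes inline only while they fit in the inode:

```shell
# Print the isize the brick filesystem was created with; the brick path is
# taken from the test logs above.
xfs_info /d/backends/brick | grep -o 'isize=[0-9]*'

# A larger inode size at mkfs time leaves more inline room for xattrs such
# as trusted.gfid2path.* (/dev/sdX is a placeholder device):
mkfs.xfs -f -i size=512 /dev/sdX
```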

> On Wed, Jul 18, 2018 at 8:40 PM Shyam Ranganathan  wrote:
>>
>> On 07/18/2018 10:51 AM, Shyam Ranganathan wrote:
>>> On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
>>>> Hi all,
>>>>
>>>> There are tests which have been constantly failing for distributed
>>>> regression framework[1]. I would like to draw the maintainer's
>>>> attention to look at these two bugs[2]&[3] and help us to attain the
>>>> RCA for such failures.
>>>>
>>>> Until then, we're disabling these two blocking tests.
>>>>
>>>> [1] https://build.gluster.org/job/distributed-regression/
>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
>>>
>>> Bug updated with current progress:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c1
>>>
>>> Pasting it here for others to chime in based on past experience if any.
>>>
>>> 
>>> This fails as follows,
>>> =
>>> TEST 52 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file44
>>> ln: failed to create hard link ‘/mnt/glusterfs/0/file44’: No space left
>>> on device
>>> RESULT 52: 1
>>> =
>>> (continues till the last file) IOW, file44-file50 fail creation
>>> =
>>> TEST 58 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file50
>>> ln: failed to create hard link ‘/mnt/glusterfs/0/file50’: No space left
>>> on device
>>> RESULT 58: 1
>>> =
>>>
>>> Post this the failures are due to attempts to inspect these files for
>>> metadata and attrs and such, so the failures are due to the above.
>>>
>>> At first I suspected max-hardlink setting, but this is at a default of
>>> 100, and we do not use any specific site.h or tuning when running in the
>>> distributed environment (as far as I can tell).
>>>
>>> Also, the test, when it fails, has only created 1 empty file and 42
>>> links to the same, this should not cause the bricks to run out of space.
>>>
>>> The Gluster logs till now did not throw up any surprises, or causes.
>>
>> Just realized that logs attached to the bug are not from this test
>> failure, requesting the right logs, so that we can possibly find the
>> root cause.
>>
>>> 
>>>

>>>> Thanks,
>>>> Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Deepshikha Khandelwal
Shyam,

Thank you for pointing this out. I've updated the logs for bug-990028.t test.
On Wed, Jul 18, 2018 at 8:40 PM Shyam Ranganathan  wrote:
>
> On 07/18/2018 10:51 AM, Shyam Ranganathan wrote:
> > On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
> >> Hi all,
> >>
> >> There are tests which have been constantly failing for distributed
> >> regression framework[1]. I would like to draw the maintainer's
> >> attention to look at these two bugs[2]&[3] and help us to attain the
> >> RCA for such failures.
> >>
> >> Until then, we're disabling these two blocking tests.
> >>
> >> [1] https://build.gluster.org/job/distributed-regression/
> >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
> >> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
> >
> > Bug updated with current progress:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c1
> >
> > Pasting it here for others to chime in based on past experience if any.
> >
> > 
> > This fails as follows,
> > =
> > TEST 52 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file44
> > ln: failed to create hard link ‘/mnt/glusterfs/0/file44’: No space left
> > on device
> > RESULT 52: 1
> > =
> > (continues till the last file) IOW, file44-file50 fail creation
> > =
> > TEST 58 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file50
> > ln: failed to create hard link ‘/mnt/glusterfs/0/file50’: No space left
> > on device
> > RESULT 58: 1
> > =
> >
> > Post this the failures are due to attempts to inspect these files for
> > metadata and attrs and such, so the failures are due to the above.
> >
> > At first I suspected max-hardlink setting, but this is at a default of
> > 100, and we do not use any specific site.h or tuning when running in the
> > distributed environment (as far as I can tell).
> >
> > Also, the test, when it fails, has only created 1 empty file and 42
> > links to the same, this should not cause the bricks to run out of space.
> >
> > The Gluster logs till now did not throw up any surprises, or causes.
>
> Just realized that logs attached to the bug are not from this test
> failure, requesting the right logs, so that we can possibly find the
> root cause.
>
> > 
> >
> >>
> >> Thanks,
> >> Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Documentation Hackathon - 7/19 through 7/23

2018-07-18 Thread Vijay Bellur
Hey All,

We are organizing a hackathon to improve our upstream documentation. More
details about the hackathon can be found at [1].

Please feel free to let us know if you have any questions.

Thanks,
Amar & Vijay

[1]
https://docs.google.com/document/d/11LLGA-bwuamPOrKunxojzAEpHEGQxv8VJ68L3aKdPns/edit?usp=sharing
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Shyam Ranganathan
On 07/18/2018 10:51 AM, Shyam Ranganathan wrote:
> On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
>> Hi all,
>>
>> There are tests which have been constantly failing for distributed
>> regression framework[1]. I would like to draw the maintainer's
>> attention to look at these two bugs[2]&[3] and help us to attain the
>> RCA for such failures.
>>
>> Until then, we're disabling these two blocking tests.
>>
>> [1] https://build.gluster.org/job/distributed-regression/
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
> 
> Bug updated with current progress:
> https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c1
> 
> Pasting it here for others to chime in based on past experience if any.
> 
> 
> This fails as follows,
> =
> TEST 52 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file44
> ln: failed to create hard link ‘/mnt/glusterfs/0/file44’: No space left
> on device
> RESULT 52: 1
> =
> (continues till the last file) IOW, file44-file50 fail creation
> =
> TEST 58 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file50
> ln: failed to create hard link ‘/mnt/glusterfs/0/file50’: No space left
> on device
> RESULT 58: 1
> =
> 
> Post this the failures are due to attempts to inspect these files for
> metadata and attrs and such, so the failures are due to the above.
> 
> At first I suspected max-hardlink setting, but this is at a default of
> 100, and we do not use any specific site.h or tuning when running in the
> distributed environment (as far as I can tell).
> 
> Also, the test, when it fails, has only created 1 empty file and 42
> links to the same, this should not cause the bricks to run out of space.
> 
> The Gluster logs till now did not throw up any surprises, or causes.

Just realized that logs attached to the bug are not from this test
failure, requesting the right logs, so that we can possibly find the
root cause.

> 
> 
>>
>> Thanks,
>> Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Shyam Ranganathan
On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
> Hi all,
> 
> There are tests which have been constantly failing for distributed
> regression framework[1]. I would like to draw the maintainer's
> attention to look at these two bugs[2]&[3] and help us to attain the
> RCA for such failures.
> 
> Until then, we're disabling these two blocking tests.
> 
> [1] https://build.gluster.org/job/distributed-regression/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262

Bug updated with current progress:
https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c1

Pasting it here for others to chime in based on past experience if any.


This fails as follows,
=
TEST 52 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file44
ln: failed to create hard link ‘/mnt/glusterfs/0/file44’: No space left
on device
RESULT 52: 1
=
(continues till the last file) IOW, file44-file50 fail creation
=
TEST 58 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file50
ln: failed to create hard link ‘/mnt/glusterfs/0/file50’: No space left
on device
RESULT 58: 1
=

Post this the failures are due to attempts to inspect these files for
metadata and attrs and such, so the failures are due to the above.

At first I suspected max-hardlink setting, but this is at a default of
100, and we do not use any specific site.h or tuning when running in the
distributed environment (as far as I can tell).

Also, the test, when it fails, has only created 1 empty file and 42
links to the same, this should not cause the bricks to run out of space.

The Gluster logs till now did not throw up any surprises, or causes.


> 
> Thanks,
> Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2018-07-18-e4f6d887 (master branch)

2018-07-18 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-07-18-e4f6d887/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Nithya Balachandran
Hi Mohit,

Please take a look at BZ 1602282.

Thanks,
Nithya

On 18 July 2018 at 15:12, Deepshikha Khandelwal  wrote:

> Hi all,
>
> There are tests which have been constantly failing for distributed
> regression framework[1]. I would like to draw the maintainer's
> attention to look at these two bugs[2]&[3] and help us to attain the
> RCA for such failures.
>
> Until then, we're disabling these two blocking tests.
>
> [1] https://build.gluster.org/job/distributed-regression/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
>
> Thanks,
> Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Deepshikha Khandelwal
Hi all,

There are tests which have been constantly failing for distributed
regression framework[1]. I would like to draw the maintainer's
attention to look at these two bugs[2]&[3] and help us to attain the
RCA for such failures.

Until then, we're disabling these two blocking tests.

[1] https://build.gluster.org/job/distributed-regression/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262

Thanks,
Deepshikha Khandelwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel