Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-23 Thread Amar Tumballi
On Mon, Jul 23, 2018 at 8:21 PM, Gudrun Mareike Amedick
<g.amed...@uni-luebeck.de> wrote:

> Hi,
>
> we're planning a dispersed volume with at least 50 project directories.
> Each of those has its own quota, ranging between 0.1 TB and 200 TB. Comparing
> XFS project quotas across several servers and bricks to make sure their total
> matches the desired value doesn't really sound practical. It would probably be
> possible to create and maintain 50 or more volumes, but that doesn't seem
> to be a desirable solution. The quotas aren't fixed, and resizing a volume is
> not as trivial as changing a quota.
>
> Quota was in the past and still is a very comfortable way to solve this.
>
> But what is the new recommended way for such a setup when the quota feature
> is going to be deprecated?
>
>
Thanks for the feedback. Helps us to prioritize. Will get back on this.

-Amar



> Kind regards
>
> Gudrun

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-23 Thread Gudrun Mareike Amedick
Hi,

we're planning a dispersed volume with at least 50 project directories. Each of
those has its own quota, ranging between 0.1 TB and 200 TB. Comparing XFS
project quotas across several servers and bricks to make sure their total matches
the desired value doesn't really sound practical. It would probably be
possible to create and maintain 50 or more volumes, but that doesn't seem to be
a desirable solution. The quotas aren't fixed, and resizing a volume is
not as trivial as changing a quota.

Quota was in the past and still is a very comfortable way to solve this.

But what is the new recommended way for such a setup when the quota feature is
going to be deprecated?
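
For reference, the per-directory quota workflow being relied on here is the
existing ‘Quota’ feature; a minimal sketch (the volume name "projects" and the
directory paths are placeholders) looks like:

  gluster volume quota projects enable
  gluster volume quota projects limit-usage /project01 200TB
  gluster volume quota projects limit-usage /project02 100GB
  gluster volume quota projects list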

Kind regards

Gudrun
On Thursday, 19.07.2018 at 12:26 +0530, Amar Tumballi wrote:
> Hi all,
> 
> Over the last 12 years of Gluster, we have developed many features, and we
> continue to support most of them. But along the way, we have figured out
> better methods of doing things. Also, we are not actively maintaining some of
> these features.
> 
> We are now thinking of cleaning up some of these ‘unsupported’ features and
> marking them as ‘SunSet’ (i.e., they would be removed from the codebase
> entirely in following releases) in the next release, v5.0. The release notes
> will provide options for migrating smoothly to the supported configurations.
> 
> If you are using any of these features, do let us know so that we can help
> you with the ‘migration’. Also, we are happy to guide new developers who want
> to work on those components which are not actively maintained by the current
> set of developers.
> 
> List of features hitting sunset:
> 
> ‘cluster/stripe’ translator:
> 
> This translator was developed very early in the evolution of GlusterFS and
> addressed one of the most common questions about distributed filesystems:
> “What happens if one of my files is bigger than the available brick? Say I
> have a 2 TB hard drive exported in glusterfs, and my file is 3 TB.” While it
> served its purpose, it was very hard to handle failure scenarios and to give
> our users a really good experience with this feature. Over time, Gluster
> solved the problem with its ‘Shard’ feature, which handles the problem in a
> much better way and provides a much better solution on the existing,
> well-supported stack. Hence the proposal for deprecation.
> 
> If you are using this feature, then do write to us, as it needs a proper
> migration from the existing volume to a new, fully supported volume type
> before you upgrade.
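
As a hedged illustration of the suggested replacement, sharding is enabled per
volume through the features.shard options; the volume name and block size below
are placeholders:

  gluster volume set myvolume features.shard on
  gluster volume set myvolume features.shard-block-size 64MB

With sharding, a large file is stored as fixed-size pieces spread across the
bricks of the existing distribute/replicate stack, which covers the "file
bigger than one brick" case described above.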
> 
> ‘storage/bd’ translator:
> 
> This feature got into the code base 5 years back with this patch[1]. The plan
> was to use a block device directly as a brick, which would make it much easier
> to handle disk-image storage in glusterfs.
> 
> As the feature is not receiving further contributions, and we are not seeing
> any user traction on it, we would like to propose it for deprecation.
> 
> If you are using the feature, plan to move to a supported gluster volume
> configuration, and have your setup in a ‘supported’ state before upgrading to
> your new gluster version.
> 
> ‘RDMA’ transport support:
> 
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infrastructure of that time used InfiniBand. Engineers worked
> with Mellanox and got the technology into GlusterFS for better data
> migration and data copying. Since current-day kernels achieve very good speed
> with the IPoIB module itself, and the experts in this area no longer have the
> bandwidth to maintain the feature, we recommend migrating your volume over to
> a TCP (IP-based) network.
> 
> If you are successfully using the RDMA transport, do get in touch with us to
> prioritize the migration plan for your volume. The plan is to work on this
> after the release, so that by version 6.0 we will have cleaner transport code
> which needs to support just one type.
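
As a rough sketch of that migration direction (the volume name is a placeholder,
and the exact procedure should be verified against your version's documentation),
the transport in use can be checked and, for a volume created with both
transports, switched to TCP while the volume is stopped:

  # check which transport the volume currently uses
  gluster volume info myvolume | grep Transport-type
  # switch the volume to the tcp transport via the config.transport option
  gluster volume stop myvolume
  gluster volume set myvolume config.transport tcp
  gluster volume start myvolume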
> 
> ‘Tiering’ feature
> 
> Gluster’s tiering feature was planned to provide an option to keep your ‘hot’
> data in a different location than your cold data, so one can get better
> performance. While we saw some users of the feature, it needs much more
> attention to become completely bug free. At this time, we do not have any
> active maintainers for the feature, and hence we suggest taking it out of the
> ‘supported’ tag.
> 
> If you are willing to take it up and maintain it, do let us know, and we are
> happy to assist you.
> 
> If you are already using the tiering feature, make sure to detach all the
> bricks with ‘gluster volume tier detach’ before upgrading to the next release.
> Also, we recommend using features like dm-cache on your LVM setup to get the
> best performance from the bricks.
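
A hedged sketch of that detach sequence (the volume name is a placeholder;
check the syntax shipped with your release):

  gluster volume tier myvolume detach start
  gluster volume tier myvolume detach status
  gluster volume tier myvolume detach commit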
> 
> ‘Quota’
> 
> This is a call-out for the ‘Quota’ feature, to let you all know that it will
> be in a ‘no new development’ state. While this feature is ‘actively’ in use by
> many people, the challenges we have in the accounting mechanisms involved have
> made it hard to achieve good performance with the feature. Also, the
> amount of extended 

[Gluster-devel] Coverity covscan for 2018-07-23-5fa004f3 (master branch)

2018-07-23 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-07-23-5fa004f3/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-23 Thread Lian, George (NSB - CN/Hangzhou)
Hi,

I tested both patchset 1 and patchset 2 of https://review.gluster.org/20549, and
the ctime issue is still there in both. I tested with my test C program and with
the “dd” program, and the issue is present in both cases.

But with the patch https://review.gluster.org/#/c/20410/11,
my test C program and “dd” to an existing file pass;
ONLY “dd” to a new file fails.

Best Regards,
George



From: gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Raghavendra Gowdappa
Sent: Monday, July 23, 2018 10:37 AM
To: Lian, George (NSB - CN/Hangzhou) 
Cc: Zhang, Bingxuan (NSB - CN/Hangzhou) ; 
Raghavendra G ; Gluster-devel@gluster.org
Subject: Re: [Gluster-devel] The ctime of fstat is not correct which lead to 
"tar" utility error



On Sun, Jul 22, 2018 at 1:41 PM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote:
George,
Sorry. I sent you a version of the fix which was stale. Can you try with:
https://review.gluster.org/20549
This patch passes the test case you've given.
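
For anyone reproducing this, a Gerrit patchset such as the review above can
usually be fetched directly. This is a sketch only: the project path "glusterfs"
and the trailing patchset number are assumptions, and the middle path component
is the last two digits of the change number:

  git fetch https://review.gluster.org/glusterfs refs/changes/49/20549/2
  git checkout FETCH_HEAD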

Patchset 1 solves this problem. However, it ran into dbench failures because
md-cache was slow to update its cache. Once I fixed that, I am seeing failures
again. With performance.stat-prefetch off, the error goes away. But I can see
only ctime changes, so I am wondering whether this is related to the ctime
translator or to an issue in md-cache. Note that md-cache also caches stats from
codepaths which don't result in a stat update in the kernel. So, it could be
either
* a bug in md-cache,
* or a bug where those codepaths sent a wrong/changed stat.
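
For isolating the md-cache hypothesis, the client-side caching involved can be
toggled on the test volume used in the transcript below, for example:

  # drop md-cache's caching window entirely
  gluster volume set test md-cache-timeout 0
  # or disable the stat-prefetch (md-cache) translator
  gluster volume set test performance.stat-prefetch off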

I'll probe the first hypothesis. @Pranith/@Ravi,

What do you think about second hypothesis?

regards,
Raghavendra

regards,
Raghavendra

On Fri, Jul 20, 2018 at 2:59 PM, Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com> wrote:
Hi,

Sorry, there still seems to be an issue.

We used the “dd” program from the linux tools instead of my demo program, and if
the file does not exist before dd runs, the issue is still there.

The test command is:
rm -rf /mnt/test/file.txt ; dd if=/dev/zero of=/mnt/test/file.txt bs=512 count=1 oflag=sync;stat /mnt/test/file.txt;tar -czvf /tmp/abc.gz

1) If we set md-cache-timeout to 0, the issue does not happen.

2) If we set md-cache-timeout to 1, the issue is 100% reproducible! (with the
new patch you mentioned in the mail)


Please see the detailed test results below:

bash-4.4# gluster v set export md-cache-timeout 0
volume set: failed: Volume export does not exist
bash-4.4# gluster v set test md-cache-timeout 0
volume set: success
bash-4.4# dd if=/dev/zero of=/mnt/test/file.txt bs=512 count=1 oflag=sync;stat 
/mnt/test/file.txt;tar -czvf /tmp/abc.gz /mnt/test/file.txt;stat 
/mnt/test/file.txt^C
bash-4.4# rm /mnt/test/file.txt
bash-4.4# dd if=/dev/zero of=/mnt/test/file.txt bs=512 count=1 oflag=sync;stat 
/mnt/test/file.txt;tar -czvf /tmp/abc.gz /mnt/test/file.txt;stat 
/mnt/test/file.txt
1+0 records in
1+0 records out
512 bytes copied, 0.00932571 s, 54.9 kB/s
  File: /mnt/test/file.txt
  Size: 512 Blocks: 1  IO Block: 131072 regular file
Device: 33h/51d Inode: 9949244856126716752  Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2018-07-13 17:55:02.75600 +
Modify: 2018-07-13 17:55:02.76400 +
Change: 2018-07-13 17:55:02.76800 +
Birth: -
tar: Removing leading `/' from member names
/mnt/test/file.txt
  File: /mnt/test/file.txt
  Size: 512 Blocks: 1  IO Block: 131072 regular file
Device: 33h/51d Inode: 9949244856126716752  Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2018-07-13 17:55:02.77600 +
Modify: 2018-07-13 17:55:02.76400 +
Change: 2018-07-13 17:55:02.76800 +
Birth: -
bash-4.4# gluster v set test md-cache-timeout 1
volume set: success
bash-4.4# rm /mnt/test/file.txt
bash-4.4# dd if=/dev/zero of=/mnt/test/file.txt bs=512 count=1 oflag=sync;stat 
/mnt/test/file.txt;tar -czvf /tmp/abc.gz /mnt/test/file.txt;stat 
/mnt/test/file.txt
1+0 records in
1+0 records out
512 bytes copied, 0.0107589 s, 47.6 kB/s
  File: /mnt/test/file.txt
  Size: 512 Blocks: 1  IO Block: 131072 regular file
Device: 33h/51d Inode: 13569976446871695205  Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2018-07-13 17:55:11.54800 +
Modify: 2018-07-13 17:55:11.56000 +
Change: 2018-07-13 17:55:11.56000 +
Birth: -
tar: Removing leading `/' from member names
/mnt/test/file.txt
tar: /mnt/test/file.txt: file changed as we read it
  File: /mnt/test/file.txt
  Size: 512 Blocks: 1  IO Block: 131072 regular file
Device: 33h/51d Inode: 13569976446871695205  Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2018-07-13 17:55:11.58000 +
Modify: 2018-07-13 17:55:11.56000 +
Change: 2018-07-13 17:55:11.56400 +
Birth: -


Best Regards,
George
From: