Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-11 Thread Manikandan Selvaganesh
Yes Sanoj, that's the issue. It will write the latest header, which has conf
version 1.2, but it does not always update all the individual GFIDs properly.
We need to run it many times, and sometimes the limits are still not set
correctly.

--
Thanks & Regards,
Manikandan Selvaganesan.
(@Manikandan Selvaganesh on Web)


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-11 Thread Manikandan Selvaganesh
Thanks Sanoj for the work and pasting the work in detail.

--
Thanks & Regards,
Manikandan Selvaganesan.
(@Manikandan Selvaganesh on Web)


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Sanoj Unnikrishnan
 Pasting Testing Logs
==

3.6

[root@dhcp-0-112 rpms]# /sbin/gluster v create v1 $tm1:/export/sdb/br1
volume create: v1: success: please start the volume to access data

[root@dhcp-0-112 rpms]# gluster v start v1
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]#  mount -t glusterfs $tm1:v1 /gluster_vols/vol
[root@dhcp-0-112 rpms]# gluster v quota v1 enable
volume quota : success
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir1;  gluster v quota
v1 limit-usage /dir1 5MB 10
volume quota : success
[root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir2;  gluster v quota
v1 limit-usage /dir2 16MB 10
volume quota : success
[root@dhcp-0-112 rpms]# gluster v quota v1 list
  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------
/dir1                         5.0MB        10%     0Bytes      5.0MB                   No                   No
/dir2                        16.0MB        10%     0Bytes     16.0MB                   No                   No
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# rpm -qa | grep glusterfs-ser
glusterfs-server-3.6.9-0.1.gitcaccd6c.fc24.x86_64

[root@dhcp-0-112 rpms]# umount /gluster_vols/vol
[root@dhcp-0-112 rpms]#

[root@dhcp-0-112 rpms]# cat /var/lib/glusterd/vols/v1/quota.conf
[root@dhcp-0-112 rpms]# hexdump /var/lib/glusterd/vols/v1/quota.conf

[root@dhcp-0-112 rpms]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
000   G   l   u   s   t   e   r   F   S   Q   u   o   t   a
010   c   o   n   f   |   v   e   r   s   i   o   n   :
020   v   1   .   1  \n   U  \t 213   I 252 251   C 337 262   x  \b
030   i   y   r   5 021 312 335   w 366   X   5   B   H 210 260 227
040   ^ 251   X 237   G
045
[root@dhcp-0-112 rpms]#

[root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir1/ | grep
gfid
getfattr: Removing leading '/' from absolute path names
trusted.gfid=0x55098b49aaa943dfb278086979723511
[root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir2/ | grep
gfid
getfattr: Removing leading '/' from absolute path names
trusted.gfid=0xcadd77f65835424888b0975ea9589f47
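As a side note, the trusted.gfid values above are the same 16 bytes that appear as records in the quota.conf hexdumps (0x55 = 'U', 0x09 = '\t', and so on). A small standard-library sketch for converting the xattr hex form to the canonical dashed UUID form:

```python
import uuid

def gfid_hex_to_uuid(xattr_value: str) -> str:
    """Convert a trusted.gfid value (as printed by getfattr -e hex)
    to its canonical dashed UUID form."""
    h = xattr_value[2:] if xattr_value.startswith("0x") else xattr_value
    return str(uuid.UUID(hex=h))

# The two GFIDs from the getfattr output above:
print(gfid_hex_to_uuid("0x55098b49aaa943dfb278086979723511"))
# -> 55098b49-aaa9-43df-b278-086979723511
print(gfid_hex_to_uuid("0xcadd77f65835424888b0975ea9589f47"))
# -> cadd77f6-5835-4248-88b0-975ea9589f47
```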

[root@dhcp-0-112 rpms]# gluster v stop v1
Stopping volume will make its data inaccessible. Do you want to continue?
(y/n) y
volume stop: v1: success

[root@dhcp-0-112 rpms]# pkill glusterd

+++ Replace with 3.9 build without patch +++

[root@dhcp-0-112 3.9]# systemctl start glusterd
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]# rpm -qa | grep glusterfs-ser
glusterfs-server-3.9.0rc2-0.13.gita3bade0.fc24.x86_64
[root@dhcp-0-112 3.9]# gluster v set all cluster.op-version 30700
volume set: success

[root@dhcp-0-112 3.9]# gluster v start v1
volume start: v1: success

[root@dhcp-0-112 3.9]#  mount -t glusterfs $tm1:v1 /gluster_vols/vol

>> Not sure why we see this; the second attempt succeeds.
[root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir1 12MB 10
quota command failed : Failed to start aux mount
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir2 12MB 10
volume quota : success

[root@dhcp-0-112 3.9]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
000   G   l   u   s   t   e   r   F   S   Q   u   o   t   a
010   c   o   n   f   |   v   e   r   s   i   o   n   :
020   v   1   .   2  \n   U  \t 213   I 252 251   C 337 262   x  \b
030   i   y   r   5 021 001 312 335   w 366   X   5   B   H 210 260
040 227   ^ 251   X 237   G 001
047
[root@dhcp-0-112 3.9]# gluster v quota v1 list
  Path                   Hard-limit      Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
--------------------------------------------------------------------------------------------------------------------
/dir1                         5.0MB    10%(512.0KB)    0Bytes      5.0MB                   No                   No
/dir2                        12.0MB      10%(1.2MB)    0Bytes     12.0MB                   No                   No
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir1 12MB 10

[root@dhcp-0-112 3.9]# cksum /var/lib/glusterd/vols/v1/quota.conf
496616948 71 /var/lib/glusterd/vols/v1/quota.conf
[root@dhcp-0-112 3.9]#

>> Now we disable, then enable, and set the same limits to check whether we
get the same quota.conf contents

[root@dhcp-0-112 3.9]# /sbin/gluster v quota v1 disable
Disabling quota will delete all the quota configuration. Do you want to
continue? (y/n) y
quota command failed : Volume quota failed. The cluster is operating at
version 30700. Quota command disable is unavailable in this version.
[root@dhcp-0-112 3.9]#

>> We need to upgrade the cluster op-version to 30712 (3.7.12) to use disable

[root@dhcp-0-112 3.9]# gluster v set all cluster.op-version 30712
v
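Reading the two hexdumps above: the v1.1 file is 0x45 = 69 bytes (a 37-byte header plus two bare 16-byte GFIDs), while the v1.2 file is 0x47 = 71 bytes (the same header with the version bumped, plus two 17-byte records, where the extra byte appears as 0x01). A hedged sketch of a parser for both layouts; the header strings are reconstructed from the dumps, and treating the trailing byte as a limit-type byte is an assumption:

```python
import uuid

# Header strings as reconstructed from the hexdumps above (37 bytes each).
HEADER_1_1 = b"GlusterFS Quota conf | version: v1.1\n"
HEADER_1_2 = b"GlusterFS Quota conf | version: v1.2\n"

def parse_quota_conf(data: bytes):
    """Return (version, records), each record being (gfid-uuid, type-byte or None).

    v1.1 stores bare 16-byte GFIDs; v1.2 stores 17-byte records whose last
    byte is assumed to be the limit type (the dumps above show 0x01)."""
    if data.startswith(HEADER_1_1):
        version, rec_len = "1.1", 16
        body = data[len(HEADER_1_1):]
    elif data.startswith(HEADER_1_2):
        version, rec_len = "1.2", 17
        body = data[len(HEADER_1_2):]
    else:
        raise ValueError("unrecognized quota.conf header")
    if len(body) % rec_len:
        raise ValueError("truncated quota.conf record")
    records = []
    for off in range(0, len(body), rec_len):
        rec = body[off:off + rec_len]
        gfid = str(uuid.UUID(bytes=rec[:16]))
        records.append((gfid, rec[16] if rec_len == 17 else None))
    return version, records

# The 69-byte v1.1 file from the first hexdump: header + the two dir GFIDs.
v11 = (HEADER_1_1
       + bytes.fromhex("55098b49aaa943dfb278086979723511")
       + bytes.fromhex("cadd77f65835424888b0975ea9589f47"))
print(parse_quota_conf(v11))
```

This matches the byte counts reported by hexdump and cksum above (69 bytes for v1.1, 71 bytes for v1.2).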

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Vijay Bellur
On Thu, Nov 10, 2016 at 11:56 AM, Niels de Vos  wrote:
> The packages from the CentOS Storage SIG will by default provide the
> latest LTM release. The STM release is provided in addition, and needs
> an extra step to enable.
>
> I am not sure how we can handle this in other distributions (or also
> with the packages on d.g.o.).

Maybe we should not flip LATEST for non-RPM distributions on d.g.o? Or
should we introduce LTM/LATEST and encourage users to change their
repository files to point to this?

Packaging in distributions would be handled by package maintainers and
I presume they can decide the appropriateness of a release for
packaging?

Thanks,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Sanoj Unnikrishnan
On Thu, Nov 10, 2016 at 9:33 PM, Manikandan Selvaganesh <
manikandancs...@gmail.com> wrote:

>
> No problem. As you said, glusterd_quota_limit_usage invokes the function
> which regenerates the conf file. Though I do not remember exactly, to my
> understanding, when I tried it did not work properly in my setup. That is
> apparently because in the later function, where we regenerate quota.conf
> for versions greater than or equal to 3.7, when setting a limit (or ideally
> when resetting a limit) it searches for the gfid on which it needs to
> set/reset the limit and modifies only that record to 17 bytes, leaving the
> remaining ones untouched, which again would result in unexpected behavior.
> In the case of enable or disable, the entire file gets newly generated.
> With this patch, we have done that during an upgrade as well.
>

The code seems to handle this. The first thing done (before parsing for the
gfids) is to bring the conf file to version 1.2 in glusterd_store_quota_config.

> Even I am not completely sure. Anyway, it's better to test and confirm. I
> can test the same over the weekend if that's fine.
>

I ran into an unrelated setup issue at my end while testing this; I will
test this before noon.

Thanks and Regards,
Sanoj

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Shyam

On 11/10/2016 11:56 AM, Niels de Vos wrote:


I do not think users on 3.6 are the right consumers for a STM release.
These users are conservative and did not upgrade earlier. I doubt they
are interested in new features *now*. Users that did not upgrade before
are unlikely to be the ones that will upgrade in three months when 3.9
is EOL.


Valid and useful point.

But, just to be clear, if we introduce a non-backward-compatible upgrade
process (say) for a feature in an STM, we need to smooth this out by the
LTM, and not use the STM as the gate that lets the upgrade process
through and accepts it as final.





In this specific case, quota has not undergone any significant changes
in 3.9 and letting such a relatively unchanged feature affect users
upgrading from 3.6 does not seem right to me. Also note that since
LATEST in d.g.o would point to 3.9.0 after the release, users
performing package upgrades on their systems could end up with 3.9.0
inadvertently.


The packages from the CentOS Storage SIG will by default provide the
latest LTM release. The STM release is provided in addition, and needs
an extra step to enable.


Perfect! This is along the lines of what I had in mind as well, i.e., the
LTM is provided as the default, and the STM is used *by choice*.




I am not sure how we can handle this in other distributions (or also
with the packages on d.g.o.).

Niels




Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Niels de Vos
On Thu, Nov 10, 2016 at 02:13:32PM +0530, Pranith Kumar Karampuri wrote:
> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:
> 
> >
> >
> > On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> > pkara...@redhat.com> wrote:
> >
> >> I am trying to understand the criticality of these patches. Raghavendra's
> >> patch is crucial because gfapi workloads (for samba and qemu) are affected
> >> severely. I waited for Krutika's patch because the VM use case can lead to
> >> disk corruption on replace-brick. If you could let us know the criticality
> >> and we are in agreement that they are this severe, we can definitely take
> >> them in. Otherwise the next release is better IMO. Thoughts?
> >>
> >
> > If you are asking about how critical they are, then the first two are
> > definitely not, but the third one actually is critical: if a user upgrades
> > from 3.6 to latest with quota enabled, further peer probes get rejected,
> > and the only workaround is to disable quota and re-enable it.
> >
> > On a different note, 3.9 head is not static and moving forward. So if you
> > are really looking at only critical patches need to go in, that's not
> > happening, just a word of caution!
> >
> 
> Yes, this is one more workflow problem. There is no way to stop others from
> merging it in the tool. I once screwed up Kaushal's release process by
> merging a patch because I didn't see his mail about pausing merges or
> something. I will send out a post-mortem about our experiences and the pain
> points we felt after the 3.9.0 release.

All bugfix updates have defined dates for releases. I expect that all
maintainers are aware of those. At least the maintainers that merge
patches in the stable branches. A couple of days before the release is
planned, patch merging should be coordinated with the release
engineer(s).
  https://www.gluster.org/community/release-schedule/

This is not the case for 3.8 yet, but because it is in RC state, none
but the release engineers are supposed to merge patches. That is what we
followed for other releases; I do not assume it has changed.

We should probably document this better, possibly in the maintainers'
responsibilities document (which I fail to find at the moment).

Niels


> 
> 
> >
> >
> >> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
> >> wrote:
> >>
> >>> Pranith,
> >>>
> >>> I'd like to see the following patches getting in:
> >>>
> >>> http://review.gluster.org/#/c/15722/
> >>> http://review.gluster.org/#/c/15714/
> >>> http://review.gluster.org/#/c/15792/
> >>>
> >>
> >>>
> >>>
> >>>
> >>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
> >>> pkara...@redhat.com> wrote:
> >>>
>  hi,
>    The only problem left was EC taking more time. This should affect
>  small files a lot more. Best way to solve it is using compound-fops. So 
>  for
>  now I think going ahead with the release is best.
> 
>  We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>  15778 before going ahead with the release. If we missed any other
>  crucial patch please let us know.
> 
>  Will make the release as soon as this patch is merged.
> 
>  --
>  Pranith & Aravinda
> 
>  ___
>  maintainers mailing list
>  maintain...@gluster.org
>  http://www.gluster.org/mailman/listinfo/maintainers
> 
> 
> >>>
> >>>
> >>> --
> >>>
> >>> ~ Atin (atinm)
> >>>
> >>
> >>
> >>
> >> --
> >> Pranith
> >>
> >
> >
> >
> > --
> >
> > ~ Atin (atinm)
> >
> 
> 
> 
> -- 
> Pranith


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Niels de Vos
On Thu, Nov 10, 2016 at 11:44:21AM -0500, Vijay Bellur wrote:
> 
> In my view, STM releases are for getting new features out early. This would
> enable early adopters to try and provide feedback about new features.
> Existing features and upgrades should work smoothly. IOW, we do not
> want to have known regressions for existing features in STM releases.
> New features might have rough edges and this should be amply
> advertised.

I do not think users on 3.6 are the right consumers for a STM release.
These users are conservative and did not upgrade earlier. I doubt they
are interested in new features *now*. Users that did not upgrade before
are unlikely to be the ones that will upgrade in three months when 3.9
is EOL.

> In this specific case, quota has not undergone any significant changes
> in 3.9 and letting such a relatively unchanged feature affect users
> upgrading from 3.6 does not seem right to me. Also note that since
> LATEST in d.g.o would point to 3.9.0 after the release, users
> performing package upgrades on their systems could end up with 3.9.0
> inadvertently.

The packages from the CentOS Storage SIG will by default provide the
latest LTM release. The STM release is provided in addition, and needs
an extra step to enable.

I am not sure how we can handle this in other distributions (or also
with the packages on d.g.o.).

Niels



Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Vijay Bellur
On Thu, Nov 10, 2016 at 11:14 AM, Shyam  wrote:
>> 3.9 is a STM release as per [1].
>
>
> Sorry, I meant STM.
>
>>
>> Irrespective of a release being LTM or not, being able to upgrade to a
>> release without operational disruptions is a requirement.
>
>
> I would say upgrade to an STM *may be* painful, as it is an STM and hence
> may contain changes that are yet to be announced stable or changed workflows
> that are not easy to upgrade to. We do need to document them though, even
> for the STM.
>
> Along these lines, the next LTM should be as stated, i.e "without
> operational disruptions". The STM is for adventurous folks, no?
>

In my view, STM releases are for getting new features out early. This would
enable early adopters to try and provide feedback about new features.
Existing features and upgrades should work smoothly. IOW, we do not
want to have known regressions for existing features in STM releases.
New features might have rough edges and this should be amply
advertised.

In this specific case, quota has not undergone any significant changes
in 3.9 and letting such a relatively unchanged feature affect users
upgrading from 3.6 does not seem right to me. Also note that since
LATEST in d.g.o would point to 3.9.0 after the release, users
performing package upgrades on their systems could end up with 3.9.0
inadvertently.

-Vijay


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Shyam

On 11/10/2016 11:01 AM, Vijay Bellur wrote:

On Thu, Nov 10, 2016 at 10:49 AM, Shyam  wrote:



On 11/10/2016 10:21 AM, Vijay Bellur wrote:


On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
 wrote:
Given that we are done with the last release in 3.6.x, I think there
would be users looking to upgrade.  My vote is to include the
necessary patches in 3.9 and not let users go through unnatural
workflows to get quota working again in 3.9.0.





Consider this a curiosity question ATM,

3.9 is an LTM, right? So we are not stating workflows here are set in stone?
Can this not be an projected workflow?




3.9 is a STM release as per [1].


Sorry, I meant STM.



Irrespective of a release being LTM or not, being able to upgrade to a
release without operational disruptions is a requirement.


I would say an upgrade to an STM *may be* painful, as it is an STM and 
hence may contain changes that are yet to be announced stable, or changed 
workflows that are not easy to upgrade to. We do need to document them 
though, even for the STM.


Along these lines, the next LTM should be as stated, i.e., "without 
operational disruptions". The STM is for adventurous folks, no?




I was referring to the upgrade workflow in my previous email. I seem
to be having a dense moment and am unable to comprehend your question
about workflows. Can you please re-phrase that for me?


No, I guess there were a few confusing remarks in my response. I hope 
the additional responses above make this clearer, or at least make the 
intent I see for an STM clearer.




Thanks!
Vijay

[1] https://www.gluster.org/community/release-schedule/




Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Manikandan Selvaganesh
Raghavendra,

No problem. As you said, glusterd_quota_limit_usage invokes the function
which regenerates the conf file. Though I do not remember exactly, to my
understanding it did not work properly in my setup when I tried. That is
apparently because in the latter function, where we regenerate quota.conf
for versions greater than or equal to 3.7, setting a limit (or, ideally,
resetting one) searches for the gfid on which the limit needs to be
set/reset and modifies only that entry to 17 bytes, leaving the remaining
ones untouched, which again results in unexpected behavior. In the case
of enable or disable, the entire file gets newly generated. With this
patch, we have done that during an upgrade as well.
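To make that failure mode concrete, here is a small simulation (not
GlusterFS code; the header strings and the type-byte value are assumptions
inferred from this thread) of what happens when only one record is widened
to 17 bytes in a file that otherwise still holds 16-byte records:

```python
import uuid

# Assumed layouts, inferred from this thread: v1.1 records are a bare
# 16-byte gfid; v1.2 records are a 16-byte gfid plus 1 type byte.
HDR_V11 = b"GlusterFS Quota conf | version: v1.1\n"
HDR_V12 = b"GlusterFS Quota conf | version: v1.2\n"

def records(data: bytes):
    """Split the body into fixed-size records based on the header version."""
    if data.startswith(HDR_V12):
        body, size = data[len(HDR_V12):], 17
    elif data.startswith(HDR_V11):
        body, size = data[len(HDR_V11):], 16
    else:
        raise ValueError("unknown quota.conf header")
    if len(body) % size:
        raise ValueError("body length %d is not a multiple of %d" % (len(body), size))
    return [body[i:i + size] for i in range(0, len(body), size)]

# A v1.1 file with two limits configured.
g1, g2 = uuid.uuid4().bytes, uuid.uuid4().bytes
old = HDR_V11 + g1 + g2

# Buggy "upgrade": bump the header and widen only the record being
# modified (as the limit-usage path did), leaving the other one alone.
buggy = HDR_V12 + g1 + b"\x01" + g2

# Correct "upgrade": regenerate the whole file, as enable/disable does.
good = HDR_V12 + g1 + b"\x01" + g2 + b"\x01"

print(len(records(old)), len(records(good)))   # both files parse cleanly
try:
    records(buggy)   # 33 bytes of body cannot be split into 17-byte records
except ValueError as e:
    print("buggy file rejected:", e)
```

The mixed 16/17-byte stream is exactly the "unexpected behavior" described
above: once the header claims the new version, every record must carry the
type byte, so a single in-place edit leaves the file unparseable.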

I am not completely sure either. Anyway, it's better to test and confirm.
I can test this over the weekend if that's fine.

On Nov 10, 2016 9:00 PM, "Raghavendra G"  wrote:

>
>
> On Thu, Nov 10, 2016 at 8:46 PM, Manikandan Selvaganesh <
> manikandancs...@gmail.com> wrote:
>
>> Enabling/disabling quota or removing limits are the ways in which
>> quota.conf is regenerated to the later version. It works properly. And as
>> Pranith said, both enabling/disabling takes a lot of time to crawl(though
>> now much faster with enhanced quota enable/disable process) which we cannot
>> suggest the users with a lot of quota configuration. Resetting the limit
>> using limit-usage does not work properly. I have tested the same. The
>> workaround is based on the user setup here. I mean the steps he exactly
>> used in order matters here. The workaround is not so generic.
>>
>
> Thanks Manikandan for the reply :). I've not tested this, but went through
> the code. If I am not wrong, function glusterd_store_quota_config  would
> write a quota.conf which is compatible for versions >= 3.7. This function
> is invoked by glusterd_quota_limit_usage unconditionally in success path.
> What am I missing here?
>
> @Pranith,
>
> Since Manikandan says his tests didn't succeed always, probably we should
> do any of the following
> 1. hold back the release till we successfully test limit-usage to rewrite
> quota.conf (I can do this tomorrow)
> 2. get the patch in question for 3.9
> 3. If 1 is failing, debug why 1 is not working and fix that.
>
> regards,
> Raghavendra
>
>> However, quota enable/disable would regenerate the file on any case.
>>
>> IMO, this bug is critical. I am not sure though how often users would hit
>> this - Updating from 3.6 to latest versions. From 3.7 to latest, its fine,
>> this has nothing to do with this patch.
>>
>> On Nov 10, 2016 8:03 PM, "Pranith Kumar Karampuri" 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G 
>>> wrote:
>>>


 On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

>
>
> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> I am trying to understand the criticality of these patches.
>>> Raghavendra's patch is crucial because gfapi workloads(for samba and 
>>> qemu)
>>> are affected severely. I waited for Krutika's patch because VM usecase 
>>> can
>>> lead to disk corruption on replace-brick. If you could let us know the
>>> criticality and we are in agreement that they are this severe, we can
>>> definitely take them in. Otherwise next release is better IMO. Thoughts?
>>>
>>
>> If you are asking about how critical they are, then the first two are
>> definitely not but third one is actually a critical one as if user 
>> upgrades
>> from 3.6 to latest with quota enable, further peer probes get rejected 
>> and
>> the only work around is to disable quota and re-enable it back.
>>
>
> Let me take Raghavendra G's input also here.
>
> Raghavendra, what do you think we should do? Merge it or live with it
> till 3.9.1?
>

 The commit says quota.conf is rewritten to compatible version during
 three operations:
 1. enable/disable quota

>>>
>>> This will involve crawling the whole FS doesn't it?
>>>
>>> 2. limit usage

>>>
>>> This is a good way IMO. Could Sanoj/you confirm that this works once by
>>> testing it.
>>>
>>>
 3. remove quota limit

>>>
>>> I guess you added this for completeness. We can't really suggest this to
>>> users as a work around.
>>>
>>>

 I checked the code and it works as stated in commit msg. Probably we
 can list the above three operations as work around and take this patch in
 for 3.9.1

>>>

>
>>
>> On a different note, 3.9 head is not static and moving forward. So if
>> you are really looking at only critical patches need to go in, that's not
>> happening, just a word of caution!
>>
>>
>>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Muk

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Vijay Bellur
On Thu, Nov 10, 2016 at 10:49 AM, Shyam  wrote:
>
>
> On 11/10/2016 10:21 AM, Vijay Bellur wrote:
>>
>> On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
>>  wrote:
>>>
>>> Enabling/disabling quota or removing limits are the ways in which
>>> quota.conf
>>> is regenerated to the later version. It works properly. And as Pranith
>>> said,
>>> both enabling/disabling takes a lot of time to crawl(though now much
>>> faster
>>> with enhanced quota enable/disable process) which we cannot suggest the
>>> users with a lot of quota configuration. Resetting the limit using
>>> limit-usage does not work properly. I have tested the same. The
>>> workaround
>>> is based on the user setup here. I mean the steps he exactly used in
>>> order
>>> matters here. The workaround is not so generic. However, quota
>>> enable/disable would regenerate the file on any case.
>>>
>>> IMO, this bug is critical. I am not sure though how often users would hit
>>> this - Updating from 3.6 to latest versions. From 3.7 to latest, its
>>> fine,
>>> this has nothing to do with this patch.
>>>
>>
>> Given that we are done with the last release in 3.6.x, I think there
>> would be users looking to upgrade.  My vote is to include the
>> necessary patches in 3.9 and not let users go through unnatural
>> workflows to get quota working again in 3.9.0.
>
>
> 
>
> Consider this a curiosity question ATM,
>
> 3.9 is an LTM, right? So we are not stating workflows here are set in stone?
> Can this not be a projected workflow?
>


3.9 is an STM release as per [1].

Irrespective of a release being LTM or not, being able to upgrade to a
release without operational disruptions is a requirement.

I was referring to the upgrade workflow in my previous email. I seem
to be having a dense moment and am unable to comprehend your question
about workflows. Can you please re-phrase that for me?

Thanks!
Vijay

[1] https://www.gluster.org/community/release-schedule/


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Shyam



On 11/10/2016 10:21 AM, Vijay Bellur wrote:

On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
 wrote:

Enabling/disabling quota or removing limits are the ways in which quota.conf
is regenerated to the later version. It works properly. And as Pranith said,
both enabling/disabling takes a lot of time to crawl(though now much faster
with enhanced quota enable/disable process) which we cannot suggest the
users with a lot of quota configuration. Resetting the limit using
limit-usage does not work properly. I have tested the same. The workaround
is based on the user setup here. I mean the steps he exactly used in order
matters here. The workaround is not so generic. However, quota
enable/disable would regenerate the file on any case.

IMO, this bug is critical. I am not sure though how often users would hit
this - Updating from 3.6 to latest versions. From 3.7 to latest, its fine,
this has nothing to do with this patch.



Given that we are done with the last release in 3.6.x, I think there
would be users looking to upgrade.  My vote is to include the
necessary patches in 3.9 and not let users go through unnatural
workflows to get quota working again in 3.9.0.




Consider this a curiosity question ATM,

3.9 is an LTM, right? So we are not stating workflows here are set in 
stone? Can this not be a projected workflow?




Thanks,
Vijay


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Raghavendra Talur
On 10-Nov-2016 20:52, "Vijay Bellur"  wrote:
>
> On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
>  wrote:
> > Enabling/disabling quota or removing limits are the ways in which
quota.conf
> > is regenerated to the later version. It works properly. And as Pranith
said,
> > both enabling/disabling takes a lot of time to crawl(though now much
faster
> > with enhanced quota enable/disable process) which we cannot suggest the
> > users with a lot of quota configuration. Resetting the limit using
> > limit-usage does not work properly. I have tested the same. The
workaround
> > is based on the user setup here. I mean the steps he exactly used in
order
> > matters here. The workaround is not so generic. However, quota
> > enable/disable would regenerate the file on any case.
> >
> > IMO, this bug is critical. I am not sure though how often users would
hit
> > this - Updating from 3.6 to latest versions. From 3.7 to latest, its
fine,
> > this has nothing to do with this patch.
> >
>
> Given that we are done with the last release in 3.6.x, I think there
> would be users looking to upgrade.  My vote is to include the
> necessary patches in 3.9 and not let users go through unnatural
> workflows to get quota working again in 3.9.0.

+1, especially considering this is a ".0" release.

>
> Thanks,
> Vijay
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Raghavendra G
On Thu, Nov 10, 2016 at 8:46 PM, Manikandan Selvaganesh <
manikandancs...@gmail.com> wrote:

> Enabling/disabling quota or removing limits are the ways in which
> quota.conf is regenerated to the later version. It works properly. And as
> Pranith said, both enabling/disabling takes a lot of time to crawl(though
> now much faster with enhanced quota enable/disable process) which we cannot
> suggest the users with a lot of quota configuration. Resetting the limit
> using limit-usage does not work properly. I have tested the same. The
> workaround is based on the user setup here. I mean the steps he exactly
> used in order matters here. The workaround is not so generic.
>

Thanks Manikandan for the reply :). I've not tested this, but went through
the code. If I am not wrong, the function glusterd_store_quota_config would
write a quota.conf which is compatible with versions >= 3.7. This function
is invoked by glusterd_quota_limit_usage unconditionally in the success
path. What am I missing here?
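Conceptually, the full-rewrite path being discussed does something like
the following (a Python sketch, not the actual glusterd C code; the
header strings and the default type byte are assumptions inferred from
this thread):

```python
import uuid

# Assumed on-disk layouts: v1.1 records are bare 16-byte gfids,
# v1.2 adds one type byte per gfid record.
HDR_V11 = b"GlusterFS Quota conf | version: v1.1\n"
HDR_V12 = b"GlusterFS Quota conf | version: v1.2\n"
USAGE_TYPE = b"\x01"  # hypothetical value for the "usage limit" type byte

def rewrite_compatible(data: bytes) -> bytes:
    """Regenerate the whole quota.conf in the v1.2 format, widening every
    record. Rewriting the entire file is what makes this path safe, while
    a one-record in-place edit is not."""
    if data.startswith(HDR_V12):
        return data  # already in the new format, nothing to do
    if not data.startswith(HDR_V11):
        raise ValueError("unknown quota.conf header")
    body = data[len(HDR_V11):]
    if len(body) % 16:
        raise ValueError("corrupt v1.1 record stream")
    out = [HDR_V12]
    for i in range(0, len(body), 16):
        out.append(body[i:i + 16] + USAGE_TYPE)  # widen every gfid record
    return b"".join(out)

gfids = [uuid.uuid4().bytes for _ in range(3)]
old = HDR_V11 + b"".join(gfids)
new = rewrite_compatible(old)
print(new.startswith(HDR_V12), len(new) - len(HDR_V12))  # True 51
```

Since every record is rewritten in one pass, invoking this on the success
path of limit-usage would indeed upgrade the whole file, which is why
testing that path (as proposed below) matters.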

@Pranith,

Since Manikandan says his tests didn't always succeed, we should probably
do one of the following:
1. hold back the release till we successfully test that limit-usage
rewrites quota.conf (I can do this tomorrow)
2. get the patch in question into 3.9
3. if 1 fails, debug why it is not working and fix that.

regards,
Raghavendra

> However, quota enable/disable would regenerate the file on any case.
>
> IMO, this bug is critical. I am not sure though how often users would hit
> this - Updating from 3.6 to latest versions. From 3.7 to latest, its fine,
> this has nothing to do with this patch.
>
> On Nov 10, 2016 8:03 PM, "Pranith Kumar Karampuri" 
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>


 On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
 wrote:

>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> I am trying to understand the criticality of these patches.
>> Raghavendra's patch is crucial because gfapi workloads(for samba and 
>> qemu)
>> are affected severely. I waited for Krutika's patch because VM usecase 
>> can
>> lead to disk corruption on replace-brick. If you could let us know the
>> criticality and we are in agreement that they are this severe, we can
>> definitely take them in. Otherwise next release is better IMO. Thoughts?
>>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user 
> upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>

 Let me take Raghavendra G's input also here.

 Raghavendra, what do you think we should do? Merge it or live with it
 till 3.9.1?

>>>
>>> The commit says quota.conf is rewritten to compatible version during
>>> three operations:
>>> 1. enable/disable quota
>>>
>>
>> This will involve crawling the whole FS doesn't it?
>>
>> 2. limit usage
>>>
>>
>> This is a good way IMO. Could Sanoj/you confirm that this works once by
>> testing it.
>>
>>
>>> 3. remove quota limit
>>>
>>
>> I guess you added this for completeness. We can't really suggest this to
>> users as a work around.
>>
>>
>>>
>>> I checked the code and it works as stated in commit msg. Probably we can
>>> list the above three operations as work around and take this patch in for
>>> 3.9.1
>>>
>>
>>>

>
> On a different note, 3.9 head is not static and moving forward. So if
> you are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>
>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee > > wrote:
>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 hi,
   The only problem left was EC taking more time. This should
 affect small files a lot more. Best way to solve it is using 
 compound-fops.
 So for now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's
 http://review.gluster.org/#/c/15778 before going ahead with the
 release. If we missed any other crucial patch please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing 

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Vijay Bellur
On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
 wrote:
> Enabling/disabling quota or removing limits are the ways in which quota.conf
> is regenerated to the later version. It works properly. And as Pranith said,
> both enabling/disabling takes a lot of time to crawl(though now much faster
> with enhanced quota enable/disable process) which we cannot suggest the
> users with a lot of quota configuration. Resetting the limit using
> limit-usage does not work properly. I have tested the same. The workaround
> is based on the user setup here. I mean the steps he exactly used in order
> matters here. The workaround is not so generic. However, quota
> enable/disable would regenerate the file on any case.
>
> IMO, this bug is critical. I am not sure though how often users would hit
> this - Updating from 3.6 to latest versions. From 3.7 to latest, its fine,
> this has nothing to do with this patch.
>

Given that we are done with the last release in 3.6.x, I think there
would be users looking to upgrade.  My vote is to include the
necessary patches in 3.9 and not let users go through unnatural
workflows to get quota working again in 3.9.0.

Thanks,
Vijay


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Manikandan Selvaganesh
Enabling/disabling quota or removing limits are the ways in which
quota.conf is regenerated to the later version, and that works properly.
As Pranith said, both enabling and disabling take a lot of time to crawl
(though now much faster with the enhanced quota enable/disable process),
which we cannot suggest to users with a lot of quota configuration.
Resetting the limit using limit-usage does not work properly; I have
tested the same. The workaround depends on the user setup here, that is,
the exact order of steps used matters, so the workaround is not very
generic. However, quota enable/disable would regenerate the file in any
case.

IMO, this bug is critical. I am not sure, though, how often users would
hit this, i.e., updating from 3.6 to the latest versions. From 3.7 to the
latest it is fine; that has nothing to do with this patch.

On Nov 10, 2016 8:03 PM, "Pranith Kumar Karampuri" 
wrote:

>
>
> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G 
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
>>> wrote:
>>>


 On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

> I am trying to understand the criticality of these patches.
> Raghavendra's patch is crucial because gfapi workloads(for samba and qemu)
> are affected severely. I waited for Krutika's patch because VM usecase can
> lead to disk corruption on replace-brick. If you could let us know the
> criticality and we are in agreement that they are this severe, we can
> definitely take them in. Otherwise next release is better IMO. Thoughts?
>

 If you are asking about how critical they are, then the first two are
 definitely not but third one is actually a critical one as if user upgrades
 from 3.6 to latest with quota enable, further peer probes get rejected and
 the only work around is to disable quota and re-enable it back.

>>>
>>> Let me take Raghavendra G's input also here.
>>>
>>> Raghavendra, what do you think we should do? Merge it or live with it
>>> till 3.9.1?
>>>
>>
>> The commit says quota.conf is rewritten to compatible version during
>> three operations:
>> 1. enable/disable quota
>>
>
> This will involve crawling the whole FS doesn't it?
>
> 2. limit usage
>>
>
> This is a good way IMO. Could Sanoj/you confirm that this works once by
> testing it.
>
>
>> 3. remove quota limit
>>
>
> I guess you added this for completeness. We can't really suggest this to
> users as a work around.
>
>
>>
>> I checked the code and it works as stated in commit msg. Probably we can
>> list the above three operations as work around and take this patch in for
>> 3.9.1
>>
>
>>
>>>

 On a different note, 3.9 head is not static and moving forward. So if
 you are really looking at only critical patches need to go in, that's not
 happening, just a word of caution!


> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
> wrote:
>
>> Pranith,
>>
>> I'd like to see following patches getting in:
>>
>> http://review.gluster.org/#/c/15722/
>> http://review.gluster.org/#/c/15714/
>> http://review.gluster.org/#/c/15792/
>>
>
>>
>>
>>
>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> hi,
>>>   The only problem left was EC taking more time. This should
>>> affect small files a lot more. Best way to solve it is using 
>>> compound-fops.
>>> So for now I think going ahead with the release is best.
>>>
>>> We are waiting for Raghavendra Talur's
>>> http://review.gluster.org/#/c/15778 before going ahead with the
>>> release. If we missed any other crucial patch please let us know.
>>>
>>> Will make the release as soon as this patch is merged.
>>>
>>> --
>>> Pranith & Aravinda
>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>
>>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
> Pranith
>



 --

 ~ Atin (atinm)

>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>>
>> --
>> Raghavendra G
>>
>
>
>
> --
> Pranith
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Raghavendra G
On Thu, Nov 10, 2016 at 8:03 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G 
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
>>> wrote:
>>>


 On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

> I am trying to understand the criticality of these patches.
> Raghavendra's patch is crucial because gfapi workloads(for samba and qemu)
> are affected severely. I waited for Krutika's patch because VM usecase can
> lead to disk corruption on replace-brick. If you could let us know the
> criticality and we are in agreement that they are this severe, we can
> definitely take them in. Otherwise next release is better IMO. Thoughts?
>

 If you are asking about how critical they are, then the first two are
 definitely not but third one is actually a critical one as if user upgrades
 from 3.6 to latest with quota enable, further peer probes get rejected and
 the only work around is to disable quota and re-enable it back.

>>>
>>> Let me take Raghavendra G's input also here.
>>>
>>> Raghavendra, what do you think we should do? Merge it or live with it
>>> till 3.9.1?
>>>
>>
>> The commit says quota.conf is rewritten to compatible version during
>> three operations:
>> 1. enable/disable quota
>>
>
> This will involve crawling the whole FS doesn't it?
>

Yes. As you suggested, this is not the best workaround.


>
> 2. limit usage
>>
>
> This is a good way IMO. Could Sanoj/you confirm that this works once by
> testing it.
>

We can do that by sometime tomorrow afternoon.


>
>> 3. remove quota limit
>>
>
> I guess you added this for completeness. We can't really suggest this to
> users as a work around.
>

Yes. I mentioned it for completeness' sake.


>
>
>>
>> I checked the code and it works as stated in commit msg. Probably we can
>> list the above three operations as work around and take this patch in for
>> 3.9.1
>>
>
>>
>>>

 On a different note, 3.9 head is not static and moving forward. So if
 you are really looking at only critical patches need to go in, that's not
 happening, just a word of caution!


> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
> wrote:
>
>> Pranith,
>>
>> I'd like to see following patches getting in:
>>
>> http://review.gluster.org/#/c/15722/
>> http://review.gluster.org/#/c/15714/
>> http://review.gluster.org/#/c/15792/
>>
>
>>
>>
>>
>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> hi,
>>>   The only problem left was EC taking more time. This should
>>> affect small files a lot more. Best way to solve it is using 
>>> compound-fops.
>>> So for now I think going ahead with the release is best.
>>>
>>> We are waiting for Raghavendra Talur's
>>> http://review.gluster.org/#/c/15778 before going ahead with the
>>> release. If we missed any other crucial patch please let us know.
>>>
>>> Will make the release as soon as this patch is merged.
>>>
>>> --
>>> Pranith & Aravinda
>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>
>>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
> Pranith
>



 --

 ~ Atin (atinm)

>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>>
>> --
>> Raghavendra G
>>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Pranith Kumar Karampuri
On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G 
wrote:

>
>
> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 I am trying to understand the criticality of these patches.
 Raghavendra's patch is crucial because gfapi workloads(for samba and qemu)
 are affected severely. I waited for Krutika's patch because VM usecase can
 lead to disk corruption on replace-brick. If you could let us know the
 criticality and we are in agreement that they are this severe, we can
 definitely take them in. Otherwise next release is better IMO. Thoughts?

>>>
>>> If you are asking about how critical they are, then the first two are
>>> definitely not but third one is actually a critical one as if user upgrades
>>> from 3.6 to latest with quota enable, further peer probes get rejected and
>>> the only work around is to disable quota and re-enable it back.
>>>
>>
>> Let me take Raghavendra G's input also here.
>>
>> Raghavendra, what do you think we should do? Merge it or live with it
>> till 3.9.1?
>>
>
> The commit says quota.conf is rewritten to compatible version during three
> operations:
> 1. enable/disable quota
>

This will involve crawling the whole FS, won't it?

2. limit usage
>

This is a good way IMO. Could Sanoj or you confirm that this works by
testing it once?


> 3. remove quota limit
>

I guess you added this for completeness. We can't really suggest this to
users as a workaround.


>
> I checked the code and it works as stated in commit msg. Probably we can
> list the above three operations as work around and take this patch in for
> 3.9.1
>

>
>>
>>>
>>> On a different note, 3.9 head is not static and moving forward. So if
>>> you are really looking at only critical patches need to go in, that's not
>>> happening, just a word of caution!
>>>
>>>
 On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
 wrote:

> Pranith,
>
> I'd like to see following patches getting in:
>
> http://review.gluster.org/#/c/15722/
> http://review.gluster.org/#/c/15714/
> http://review.gluster.org/#/c/15792/
>

>
>
>
> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>>   The only problem left was EC taking more time. This should
>> affect small files a lot more. Best way to solve it is using 
>> compound-fops.
>> So for now I think going ahead with the release is best.
>>
>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>> 15778 before going ahead with the release. If we missed any other
>> crucial patch please let us know.
>>
>> Will make the release as soon as this patch is merged.
>>
>> --
>> Pranith & Aravinda
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>
>
> --
>
> ~ Atin (atinm)
>



 --
 Pranith

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>> Pranith
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Raghavendra G
>



-- 
Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Raghavendra G
On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee 
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> I am trying to understand the criticality of these patches.
>>> Raghavendra's patch is crucial because gfapi workloads(for samba and qemu)
>>> are affected severely. I waited for Krutika's patch because VM usecase can
>>> lead to disk corruption on replace-brick. If you could let us know the
>>> criticality and we are in agreement that they are this severe, we can
>>> definitely take them in. Otherwise next release is better IMO. Thoughts?
>>>
>>
>> If you are asking about how critical they are, then the first two are
>> definitely not but third one is actually a critical one as if user upgrades
>> from 3.6 to latest with quota enable, further peer probes get rejected and
>> the only work around is to disable quota and re-enable it back.
>>
>
> Let me take Raghavendra G's input also here.
>
> Raghavendra, what do you think we should do? Merge it or live with it till
> 3.9.1?
>

The commit says quota.conf is rewritten to the compatible version during
three operations:
1. enable/disable quota
2. limit usage
3. remove quota limit

I checked the code and it works as stated in the commit message. Probably
we can list the above three operations as workarounds and take this patch
in for 3.9.1.


>
>>
>> On a different note, 3.9 head is not static and moving forward. So if you
>> are really looking at only critical patches need to go in, that's not
>> happening, just a word of caution!
>>
>>
>>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>>> wrote:
>>>
 Pranith,

 I'd like to see following patches getting in:

 http://review.gluster.org/#/c/15722/
 http://review.gluster.org/#/c/15714/
 http://review.gluster.org/#/c/15792/

>>>



 On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

> hi,
>   The only problem left was EC taking more time. This should
> affect small files a lot more. Best way to solve it is using 
> compound-fops.
> So for now I think going ahead with the release is best.
>
> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
> 15778 before going ahead with the release. If we missed any other
> crucial patch please let us know.
>
> Will make the release as soon as this patch is merged.
>
> --
> Pranith & Aravinda
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>


 --

 ~ Atin (atinm)

>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Pranith Kumar Karampuri
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:

>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and
>> we are in agreement that they are this severe, we can definitely take them
>> in. Otherwise next release is better IMO. Thoughts?
>>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>

Let me take Raghavendra G's input also here.

Raghavendra, what do you think we should do? Merge it or live with it till
3.9.1?


>
> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>
>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. Best way to solve it is using compound-fops. So for
 now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
 before going ahead with the release. If we missed any other crucial patch
 please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintain...@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers


>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Pranith Kumar Karampuri
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:

>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and
>> we are in agreement that they are this severe, we can definitely take them
>> in. Otherwise next release is better IMO. Thoughts?
>>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>
> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>

Yes, this is one more workflow problem. There is no way in the tool to stop
others from merging. I once screwed up Kaushal's release process by merging a
patch because I didn't see his mail about pausing merges. I will send out a
post-mortem about our experiences and the pain points we felt after the 3.9.0
release.


>
>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. Best way to solve it is using compound-fops. So for
 now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
 before going ahead with the release. If we missed any other crucial patch
 please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintain...@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers


>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Raghavendra Talur
On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri
 wrote:
> hi,
>   The only problem left was EC taking more time. This should affect
> small files a lot more. Best way to solve it is using compound-fops. So for
> now I think going ahead with the release is best.
>
> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
> before going ahead with the release. If we missed any other crucial patch
> please let us know.

This patch is now merged.

>
> Will make the release as soon as this patch is merged.
>
> --
> Pranith & Aravinda
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Kaushal M
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:
>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri
>  wrote:
>>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and we
>> are in agreement that they are this severe, we can definitely take them in.
>> Otherwise next release is better IMO. Thoughts?
>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>

If a workaround is present, I don't consider it a blocker for the release.

> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>
>>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri
>>>  wrote:

 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. Best way to solve it is using compound-fops. So for
 now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's
 http://review.gluster.org/#/c/15778 before going ahead with the release. If
 we missed any other crucial patch please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintain...@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
>
> ~ Atin (atinm)
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Atin Mukherjee
On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> I am trying to understand the criticality of these patches. Raghavendra's
> patch is crucial because gfapi workloads(for samba and qemu) are affected
> severely. I waited for Krutika's patch because VM usecase can lead to disk
> corruption on replace-brick. If you could let us know the criticality and
> we are in agreement that they are this severe, we can definitely take them
> in. Otherwise next release is better IMO. Thoughts?
>

If you are asking about how critical they are, then the first two are
definitely not, but the third one actually is: if a user upgrades from 3.6 to
the latest release with quota enabled, further peer probes get rejected, and
the only workaround is to disable quota and re-enable it.
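A hedged sketch of that failure mode and workaround. The assumption here is
that the probe is rejected because peers disagree on quota.conf contents (and
hence its checksum); the gluster commands and the conf path are shown only as
illustrative comments, and the header layout is assumed from the hexdump
earlier in the thread.

```shell
# Comparing the file across nodes makes the drift visible, e.g.:
#   md5sum /var/lib/glusterd/vols/<VOLNAME>/quota.conf   # run on each peer
# The workaround itself (illustrative, not executed here):
#   gluster volume quota <VOLNAME> disable
#   gluster volume quota <VOLNAME> enable

# Two sample files stand in for two peers' copies of quota.conf:
peer1=$(mktemp); peer2=$(mktemp)
printf 'GlusterFS Quota conf | version: v1.1\n' > "$peer1"
printf 'GlusterFS Quota conf | version: v1.2\n' > "$peer2"

if cmp -s "$peer1" "$peer2"; then
  status="in sync"
else
  status="mismatch"   # differing conf versions; the probe would be rejected
fi
echo "quota.conf: $status"
rm -f "$peer1" "$peer2"
```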

On a different note, the 3.9 head is not static and keeps moving forward. So
if you are expecting that only critical patches go in, that's not happening,
just a word of caution!


> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
> wrote:
>
>> Pranith,
>>
>> I'd like to see following patches getting in:
>>
>> http://review.gluster.org/#/c/15722/
>> http://review.gluster.org/#/c/15714/
>> http://review.gluster.org/#/c/15792/
>>
>
>>
>>
>>
>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> hi,
>>>   The only problem left was EC taking more time. This should affect
>>> small files a lot more. Best way to solve it is using compound-fops. So for
>>> now I think going ahead with the release is best.
>>>
>>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
>>> before going ahead with the release. If we missed any other crucial patch
>>> please let us know.
>>>
>>> Will make the release as soon as this patch is merged.
>>>
>>> --
>>> Pranith & Aravinda
>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>
>>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
> Pranith
>



-- 

~ Atin (atinm)

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Pranith Kumar Karampuri
I am trying to understand the criticality of these patches. Raghavendra's
patch is crucial because gfapi workloads (for Samba and QEMU) are severely
affected. I waited for Krutika's patch because the VM use case can lead to
disk corruption on replace-brick. If you could let us know the criticality,
and we agree that they are this severe, we can definitely take them in.
Otherwise the next release is better, IMO. Thoughts?

On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
wrote:

> Pranith,
>
> I'd like to see following patches getting in:
>
> http://review.gluster.org/#/c/15722/
> http://review.gluster.org/#/c/15714/
> http://review.gluster.org/#/c/15792/
>
>
>
> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>>   The only problem left was EC taking more time. This should affect
>> small files a lot more. Best way to solve it is using compound-fops. So for
>> now I think going ahead with the release is best.
>>
>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
>> before going ahead with the release. If we missed any other crucial patch
>> please let us know.
>>
>> Will make the release as soon as this patch is merged.
>>
>> --
>> Pranith & Aravinda
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Atin Mukherjee
Pranith,

I'd like to see the following patches getting in:

http://review.gluster.org/#/c/15722/
http://review.gluster.org/#/c/15714/
http://review.gluster.org/#/c/15792/



On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi,
>   The only problem left was EC taking more time. This should affect
> small files a lot more. Best way to solve it is using compound-fops. So for
> now I think going ahead with the release is best.
>
> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
> before going ahead with the release. If we missed any other crucial patch
> please let us know.
>
> Will make the release as soon as this patch is merged.
>
> --
> Pranith & Aravinda
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>


-- 

~ Atin (atinm)