Re: [Gluster-devel] fstest for fuse mount gluster

2017-04-24 Thread Raghavendra Gowdappa


- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Raghavendra Gowdappa" 
> Cc: "qingwei wei" , "Gluster Devel" 
> , "Nithya Balachandran"
> 
> Sent: Monday, April 24, 2017 11:45:43 AM
> Subject: Re: [Gluster-devel] fstest for fuse mount gluster
> 
> On Mon, Apr 24, 2017 at 11:28 AM, Raghavendra Gowdappa 
> wrote:
> 
> > It's a known issue in DHT. Since DHT relies on some bits (S+T) in the stat
> > to identify that a file is being migrated by a rebalance process, it
> > strips them off. The problem arises when the same bits are used by the
> > application (as in this case). The fix (if we can come up with one) might
> > require some time for validation as the impact can be large. Possible
> > solutions are:
> >
> > 1. Pranith pointed out that rebalance process always holds an inodelk
> > during file migration. Probably we can leverage this information and decide
> > whether to strip the flags or not.
> > 2. There is also a linkto xattr stored on the data file during migration.
> > Instead of unconditionally stripping the Phase1(2) flags, the lookup/stat
> > codepaths can check for the presence of this xattr too and strip the flags
> > only if it is present. Note that with this solution the application won't
> > see the (S+T) bits in stat during rebalance.
> >
> 
> Innocent question. If we have a linkto-xattr and the lock, do we need
> anything more to detect that migration is in progress? I am wondering if we
> can remove the reliance on S+T bits. Worst case we will have to store the
> info on the xattr.

We can use locks, but lock information is not persistent: if a brick goes down 
and comes back up, that information is lost. Also, if we use locks as metadata 
for identifying migration, it shouldn't require an explicit call, since all 
write fops check the migration status. Instead, bricks can return lock 
information in various fops like writev, setattr, setxattr etc. Note that the 
change where the rebalance process acquires a lock on a file before migration 
was introduced much later, so that information was not always available. We 
also cannot rely on linkto xattrs alone, as there are two phases in migration:

1. phase 1 - migration is in progress (clients have to replay write operations 
on dst subvol too after they do on src)
2. phase 2 - migration is complete (clients have to switch to dst subvol)

So, rebalance needs two pieces of metadata - the (S+T) bits and the linkto xattr.

Also, not all write fops (writev, setattr, setxattr etc.) return xattrs like 
linkto. I assume the initial designers wanted to use as few xattrs as possible.

However, using an xattr to store migration metadata looks simple and doesn't 
affect anything visible to the application. Of course, there is an added 
getxattr overhead on the brick during _all_ write operations (writev, setxattr, 
setattr, fxattrop etc.).
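
To make the two-marker idea above concrete, here is a minimal C sketch (this is
not dht code; the struct and the assumed mapping of markers to phases are
invented for illustration) of how a lookup/stat path could combine the (S+T)
bits with the linkto xattr before deciding to strip the flags:

#include <stdbool.h>

typedef enum {
    MIG_NONE,          /* no migration markers on the file          */
    MIG_IN_PROGRESS,   /* phase 1: replay writes on the dst subvol  */
    MIG_COMPLETE       /* phase 2: switch I/O to the dst subvol     */
} mig_state_t;

struct file_markers {
    bool suid;          /* S bit from the stat                      */
    bool sticky;        /* T bit from the stat                      */
    bool linkto_xattr;  /* linkto xattr present on the data file    */
};

/* Assumed mapping: S+T together with the linkto xattr => phase 1;
 * linkto xattr alone => phase 2; anything else is not a migration. */
static mig_state_t
classify_migration(const struct file_markers *m)
{
    if (m->suid && m->sticky && m->linkto_xattr)
        return MIG_IN_PROGRESS;
    if (m->linkto_xattr)
        return MIG_COMPLETE;
    return MIG_NONE;
}

/* Proposal 2 above: strip S+T from the stat only when the linkto xattr is
 * also present, so an application's own suid+sticky mode bits survive
 * lookup/stat. */
static bool
should_strip_phase1_flags(const struct file_markers *m)
{
    return classify_migration(m) == MIG_IN_PROGRESS;
}

Whether phase 2 can really be keyed off the linkto xattr alone is exactly the
kind of detail that would need the validation mentioned above.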

> 
> 
> >
> > regards,
> > Raghavendra.
> >
> > - Original Message -
> > > From: "Pranith Kumar Karampuri" 
> > > To: "qingwei wei" 
> > > Cc: "Gluster Devel" , "Raghavendra Gowdappa"
> > , "Nithya Balachandran"
> > > 
> > > Sent: Monday, April 24, 2017 11:03:29 AM
> > > Subject: Re: [Gluster-devel] fstest for fuse mount gluster
> > >
> > > This is a bug in dht it seems like. It is stripping PHASE1 flags
> > > unconditionally.
> > >
> > > (gdb)
> > > 1212DHT_STRIP_PHASE1_FLAGS (&local->stbuf);
> > > (gdb) p local->stbuf.ia_prot
> > > $18 = {
> > >   suid = 1 '\001',
> > >   sgid = 1 '\001', <---
> > >   sticky = 1 '\001', <---
> > > .
> > > }
> > > (gdb) n
> > > 1213dht_set_fixed_dir_stat (&local->postparent);
> > > (gdb) p local->stbuf.ia_prot
> > > $19 = {
> > >   suid = 1 '\001',
> > >   sgid = 0 '\000', <---
> > >   sticky = 0 '\000', <---
> > > ...
> > >
> > > This is leading to -->4777
> > >
> > > Will update bug with same info
> > >
> > >
> > > On Thu, Apr 20, 2017 at 8:58 PM, qingwei wei 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > Posted this in the gluster-users mailing list but got no response so
> > > > far, so I am posting it in gluster-devel.
> > > >
> > > > I found this test suite (https://github.com/Hnasar/pjdfstest) to test
> > > > fuse-mounted gluster, and it reported some issues. One of the errors is
> > > > as follows.
> > > >
> > > > When I chmod a file in a fuse-mounted gluster volume, the returned
> > > > stat value for the file is not the mode I set but 4777.
> > > >
> > > > root@ubuntu16d:/mnt/g310mp# touch test
> > > > root@ubuntu16d:/mnt/g310mp# chmod  test
> > > > root@ubuntu16d:/mnt/g310mp# stat test
> > > >   File: 'test'
> > > >   Size: 0   Blocks: 0  IO Block: 131072 regular
> > empty
> > > > file
> > > > Device: 29h/41d Inode: 9618589997017543511  Links: 1
> > > > Access: (4777/-rwsrwxrwx)  Uid: (0/root)   Gid: (0/
> > root)
> > > > Access: 2017-11-30 14:21:23.374871207 +0800
> > > > Modify: 2017-11-30 14:21:16.974871000 +0800
> > > > Change: 2017-11-30 14:21:23.374871207 

Re: [Gluster-devel] [Gluster-users] CANNOT Install gluster on aarch64: ./configure: syntax error near unexpected token 'UUID, ' PKG_CHECK_MODULES(UUID, '

2017-04-24 Thread Niels de Vos
On Sat, Apr 22, 2017 at 01:47:56PM +, Zhitao Li wrote:
> Hello, everyone,
> 
> 
> I am installing glusterfs release 3.8.11 on my aarch64 computer, and it
> fails while running configure.
> 
> [inline screenshot of the configure error omitted]
> 
> The configure file is shown in a second screenshot:
> [inline screenshot omitted]
> 
> I think the architecture causes this error, because on my x86_64 computer
> the generated configure file does not perform this package check.
> 
> Does anyone know how to fix this? Thanks!

You need pkg-config installed. On Fedora and CentOS it is provided by
the "pkgconfig" package, which contains /usr/share/aclocal/pkg.m4 that
provides the macros needed by configure.ac.

HTH,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-24 Thread Kotresh Hiremath Ravishankar
Hi Abhishek,

Bitrot requires versioning of files to be done on writes.
This was being done irrespective of whether bitrot is
enabled or not, and it takes considerable CPU. With the
fix https://review.gluster.org/#/c/14442/, it is made
optional and enabled only when bitrot is on. If bitrot
is not enabled, you won't see any setxattr/getxattr calls
related to bitrot.

The fix would be available in 3.11. 
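
As a rough illustration of the change (this is a sketch, not the code in the
patch; brick_ctx and the function names are invented):

#include <stdbool.h>

struct brick_ctx {
    bool bitrot_enabled;   /* the volume-level bitrot switch */
};

/* stand-in for the setxattr that bumps the object version on the brick */
static void bump_object_version(const char *path) { (void)path; }

static void
on_write_complete(struct brick_ctx *ctx, const char *path)
{
    /* Before the fix, this versioning work happened on every write,
     * regardless of whether bitrot was enabled on the volume. */
    if (!ctx->bitrot_enabled)
        return;
    bump_object_version(path);
}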


Thanks and Regards,
Kotresh H R

- Original Message -
> From: "ABHISHEK PALIWAL" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Gluster Devel" , "gluster-users" 
> , "Kotresh Hiremath
> Ravishankar" 
> Sent: Monday, April 24, 2017 11:30:57 AM
> Subject: Re: [Gluster-users] High load on glusterfsd process
> 
> Hi Kotresh,
> 
> Could you please update me on this?
> 
> Regards,
> Abhishek
> 
> On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
> 
> > +Kotresh who seems to have worked on the bug you mentioned.
> >
> > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> > abhishpali...@gmail.com> wrote:
> >
> >>
> >> If the patch provided in that case will resolve my bug as well then
> >> please provide the patch so that I will backport it on 3.7.6
> >>
> >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> >> abhishpali...@gmail.com> wrote:
> >>
> >>> Hi Team,
> >>>
> >>> I have noticed that there are so many glusterfsd threads are running in
> >>> my system and we observed some of those thread consuming more cpu. I
> >>> did “strace” on two such threads (before the problem disappeared by
> >>> itself)
> >>> and found that there is a continuous activity like below:
> >>>
> >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
> >>> available)
> >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
> >>> available)
> >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
> >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
> >>> available)
> >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
> >>> available)
> >>>
> >>> I have found the below existing issue which is very similar to my
> >>> scenario.
> >>>
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
> >>>
> >>> We are using the gluster-3.7.6 and it seems that the issue is fixed in
> >>> 3.8.4 version.
> >>>
> >>> Could you please let me know why it showing the number of above logs and
> >>> reason behind it as it is not explained in the above bug.
> >>>
> >>> Regards,
> >>> Abhishek
> >>>
> >>> --
> >>>
> >>>
> >>>
> >>>
> >>> Regards
> >>> Abhishek Paliwal
> >>>
> >>
> >>
> >>
> >> --
> >>
> >>
> >>
> >>
> >> Regards
> >> Abhishek Paliwal
> >>
> >> ___
> >> Gluster-users mailing list
> >> gluster-us...@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >
> >
> >
> > --
> > Pranith
> >
> 
> 
> 
> --
> 
> 
> 
> 
> Regards
> Abhishek Paliwal
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
Hi Kotresh,

I have seen the patch at the link you shared. It seems we don't have some
of the files in gluster 3.7.6 that you modified in the patch.

Is there any possibility of providing the patch for Gluster 3.7.6?

Regards,
Abhishek

On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi Abhishek,
>
> Bitrot requires versioning of files to be done on writes.
> This was being done irrespective of whether bitrot is
> enabled or not. This takes considerable CPU. With the
> fix https://review.gluster.org/#/c/14442/, it is made
> optional and is enabled only with bitrot. If bitrot
> is not enabled, then you won't see any setxattr/getxattrs
> related to bitrot.
>
> The fix would be available in 3.11.
>
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "ABHISHEK PALIWAL" 
> > To: "Pranith Kumar Karampuri" 
> > Cc: "Gluster Devel" , "gluster-users" <
> gluster-us...@gluster.org>, "Kotresh Hiremath
> > Ravishankar" 
> > Sent: Monday, April 24, 2017 11:30:57 AM
> > Subject: Re: [Gluster-users] High load on glusterfsd process
> >
> > Hi Kotresh,
> >
> > Could you please update me on this?
> >
> > Regards,
> > Abhishek
> >
> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> > pkara...@redhat.com> wrote:
> >
> > > +Kotresh who seems to have worked on the bug you mentioned.
> > >
> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> > > abhishpali...@gmail.com> wrote:
> > >
> > >>
> > >> If the patch provided in that case will resolve my bug as well then
> > >> please provide the patch so that I will backport it on 3.7.6
> > >>
> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> > >> abhishpali...@gmail.com> wrote:
> > >>
> > >>> Hi Team,
> > >>>
> > >>> I have noticed that there are so many glusterfsd threads are running
> in
> > >>> my system and we observed some of those thread consuming more cpu. I
> > >>> did “strace” on two such threads (before the problem disappeared by
> > >>> itself)
> > >>> and found that there is a continuous activity like below:
> > >>>
> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
> 20170126T113552+.log.gz",
> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-
> 425_20170126T113552+.log.gz",
> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-
> 425_20170126T113552+.log.gz",
> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> > >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>>
> > >>> I have found the below existing issue which is very similar to my
> > >>> scenario.
> > >>>
> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
> > >>>
> > >>> We are using the gluster-3.7.6 and it seems that the issue is fixed
> in
> > >>> 3.8.4 version.
> > >>>
> > >>> Could you please let me know why it showing the number of above logs
> and
> > >>> reason behind it as it is not explained in the above bug.
> > >>>
> > >>> Regards,
> > >>> Abhishek
> > >>>
> > >>> --
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> Regards
> > >>> Abhishek Paliwal
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >>
> > >>
> > >>
> > >>
> > >> Regards
> > >> Abhishek Paliwal
> > >>
> > >> ___
> > >> Gluster-users mailing list
> > >> gluster-us...@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-users
> > >>
> > >
> > >
> > >
> > > --
> > > Pranith
> > >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> >
>



-- 




Regards
Abhishek Paliwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2017-04-24-f071d2a2 (master branch)

2017-04-24 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-24-f071d2a2
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
What is the way to apply this patch on Gluster 3.7.6, or is upgrading the
version the only way?

On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL 
wrote:

> Hi Kotresh,
>
> I have seen the patch available on the link which you shared. It seems we
> don't have some files in gluster 3.7.6 which you modified in the patch.
>
> Is there any possibility to provide the patch for Gluster 3.7.6?
>
> Regards,
> Abhishek
>
> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> Bitrot requires versioning of files to be done on writes.
>> This was being done irrespective of whether bitrot is
>> enabled or not. This takes considerable CPU. With the
>> fix https://review.gluster.org/#/c/14442/, it is made
>> optional and is enabled only with bitrot. If bitrot
>> is not enabled, then you won't see any setxattr/getxattrs
>> related to bitrot.
>>
>> The fix would be available in 3.11.
>>
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> - Original Message -
>> > From: "ABHISHEK PALIWAL" 
>> > To: "Pranith Kumar Karampuri" 
>> > Cc: "Gluster Devel" , "gluster-users" <
>> gluster-us...@gluster.org>, "Kotresh Hiremath
>> > Ravishankar" 
>> > Sent: Monday, April 24, 2017 11:30:57 AM
>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>> >
>> > Hi Kotresh,
>> >
>> > Could you please update me on this?
>> >
>> > Regards,
>> > Abhishek
>> >
>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
>> > pkara...@redhat.com> wrote:
>> >
>> > > +Kotresh who seems to have worked on the bug you mentioned.
>> > >
>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
>> > > abhishpali...@gmail.com> wrote:
>> > >
>> > >>
>> > >> If the patch provided in that case will resolve my bug as well then
>> > >> please provide the patch so that I will backport it on 3.7.6
>> > >>
>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>> > >> abhishpali...@gmail.com> wrote:
>> > >>
>> > >>> Hi Team,
>> > >>>
>> > >>> I have noticed that there are so many glusterfsd threads are
>> running in
>> > >>> my system and we observed some of those thread consuming more cpu. I
>> > >>> did “strace” on two such threads (before the problem disappeared by
>> > >>> itself)
>> > >>> and found that there is a continuous activity like below:
>> > >>>
>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170
>> 126T113552+.log.gz",
>> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
>> 20170126T113552+.log.gz",
>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
>> 20170126T113552+.log.gz",
>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>> > >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T
>> 180550+.log.gz",
>> > >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
>> 123T180550+.log.gz",
>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
>> 123T180550+.log.gz",
>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>>
>> > >>> I have found the below existing issue which is very similar to my
>> > >>> scenario.
>> > >>>
>> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
>> > >>>
>> > >>> We are using the gluster-3.7.6 and it seems that the issue is fixed
>> in
>> > >>> 3.8.4 version.
>> > >>>
>> > >>> Could you please let me know why it showing the number of above
>> logs and
>> > >>> reason behind it as it is not explained in the above bug.
>> > >>>
>> > >>> Regards,
>> > >>> Abhishek
>> > >>>
>> > >>> --
>> > >>>
>> > >>>
>> > >>>
>> > >>>
>> > >>> Regards
>> > >>> Abhishek Paliwal
>> > >>>
>> > >>
>> > >>
>> > >>
>> > >> --
>> > >>
>> > >>
>> > >>
>> > >>
>> > >> Regards
>> > >> Abhishek Paliwal
>> > >>
>> > >> ___
>> > >> Gluster-users mailing list
>> > >> gluster-us...@gluster.org
>> > >> http://lists.gluster.org/mailman/listinfo/gluster-users
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > > Pranith
>> > >
>> >
>> >
>> >
>> > --
>> >
>> >
>> >
>> >
>> > Regards
>> > Abhishek Paliwal
>> >
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal

[Gluster-devel] Revisiting Quota functionality in GlusterFS

2017-04-24 Thread Sanoj Unnikrishnan
Hi All,

Considering that we are coming out with a major release plan, we would like to
revisit the quota feature and decide its path forward.
I have been working on the quota feature for a couple of months now and have
come across various issues from performance, usability and correctness
perspectives.
We do have some initial thoughts on how these can be solved. But, to ensure
we work on the most important things first, we would like to get a pulse of
this from the users of the quota feature.

Below is a questionnaire for that. To put the questions in perspective, I have
provided the rationale and external links to alternative design thoughts.
The focus of this mail thread, though, is to get a user pulse rather than to
be a design review. Please comment on the designs in the GitHub issues
themselves.
We can bring those discussions to gluster-devel as they gain traction. We
would like the design discussions to be driven by the user feedback generated
here.

1) On how many directories do you generally have quota limits configured?
How often are quota limits added/removed/changed? [numbers would help
here more than qualitative answers]
Any use cases with a large number of quota limits (say > 100 on a single
volume)?
Is a filesystem crawl acceptable each time a new quota limit is set?
(a crawl may take time equivalent to a du command; this would essentially
introduce a delay between setting a limit and it taking effect)

Rationale:
  Currently, we account the usage of all directories. A performance issue
with this approach is that the usage of a file/directory has to be accounted
recursively on all of its ancestors along the path.
  If only a few directories have limits configured, we could explore
alternatives where accounting information is maintained only in directories
that have limits set [RFC1], as sketched below.
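
As an illustration of the cost being described (this is not marker/quota code;
the types and names are invented), compare propagating a size delta to every
ancestor with the RFC1-style alternative of updating only ancestors that carry
a limit:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct dir {
    struct dir *parent;
    bool        has_limit;    /* quota limit configured on this directory */
    int64_t     used_bytes;   /* accounted usage                          */
};

/* Current scheme: every size change walks all the way to the root. */
static void account_all_ancestors(struct dir *d, int64_t delta)
{
    for (; d != NULL; d = d->parent)
        d->used_bytes += delta;
}

/* RFC1-style alternative: update only directories that have a limit set,
 * so unrelated ancestors take no accounting writes at all. */
static void account_limited_ancestors(struct dir *d, int64_t delta)
{
    for (; d != NULL; d = d->parent)
        if (d->has_limit)
            d->used_bytes += delta;
}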

2) How strict does the accounting generally need to be? Is it acceptable if
there is an overshoot of, say, 100MB?
What are the typical values of the limits configured? Does anybody set
limits on the order of MBs?

Rationale:
Relaxing the accounting could be another way to gain performance. We can
batch/cache xattr updates [RFC2], as in the sketch below.
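
A minimal sketch of what such batching could look like (the names and the
threshold are invented for illustration; RFC2 has the actual design):

#include <stdint.h>

#define FLUSH_THRESHOLD (1 << 20)   /* flush once ~1MB of deltas accumulate */

struct quota_cache {
    int64_t pending;   /* deltas not yet written to the on-disk size xattr */
};

/* stand-in for the expensive setxattr on the directory */
static void flush_size_xattr(struct quota_cache *qc)
{
    qc->pending = 0;   /* pretend the pending delta was just persisted */
}

static void account_delta(struct quota_cache *qc, int64_t delta)
{
    qc->pending += delta;
    /* Only hit the backend when the cached delta grows large enough. */
    if (qc->pending >= FLUSH_THRESHOLD || qc->pending <= -FLUSH_THRESHOLD)
        flush_size_xattr(qc);
}

The flush threshold is exactly the knob behind question 2: it bounds how far
the on-disk accounting can lag, and therefore how large an overshoot can be.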

3) Does directory quota suit your needs? Would you prefer it to work like
XFS directory quota?
What are the back-end filesystems you expect to use quota with?
Is it acceptable to take the route of leveraging the backend FS quota, with
support limited to a few filesystems (or just XFS)?

Rationale:
The behavior of directory quota in GlusterFS largely differs from the XFS
way; both have their pros and cons.
GlusterFS will allow you to set a limit on /home and /home/user1, so if
you write to /home/user1/file1 the limits on both ancestors are checked
and honored.
An admin who has to give storage to 50 users can configure /home to 1 TB
and each user's home to, say, 25GB (hoping that not all users will
simultaneously use up their 25GB). This may not make sense for cloud
storage, but it does make sense in a university-lab kind of setting.
The same cannot be done with XFS because XFS directory quota relies on
project ids. Only one project id is associated with a directory, so only one
limit can be honored at any time.
XFS directory quota has its advantages though. Please find details in
[RFC3].

4) Do you use quota with a large number of bricks in a volume? (How many?)
Do you have quota with a large number of volumes hosted on a node? (How
many?)
Have you seen any performance issues in such a setting?

Rationale:
   We have a single quotad process (an aggregator of accounting info) on each
node in the trusted storage pool.
   All the bricks (even from different volumes) hosted on a node share that
quotad.
   Another issue in this design is that a large number of bricks within a
volume increases IPC latency during aggregation.
   One way of mitigating this is to change quotad and the quota layer to work
on a lease-based approach [RFC4].

5) If you set directory-level quota, do you expect to have hard links across
the directories, or want rename() supported across directories?

Rationale:
Supporting these operations consistently does add complexity to the
design.
XFS itself doesn't support these two operations when quota is set on it.

6) Do you use the inode-quota feature?
Is any user looking for the user/group quota feature?

7) Are you using the current quota feature?
  If yes, are you happy with it?
  If yes but not happy, what are the things you would like to see
  improved?

RFC1 - Alternative Accounting method in marker (
https://github.com/gluster/glusterfs/issues/182)
RFC2 - Batched updates (https://github.com/gluster/glusterfs/issues/183)
RFC3 - XFS based Quota (https://github.com/gluster/glusterfs/issues/184)
RFC4 - Lease based quotad (https://github.com/gluster/glusterfs/issues/181)

Note: These RFC are not interdependent.


Thanks and Regards,
Sanoj
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
Hi Kotresh,

Could you please let me know whether it is possible to get the patch or to
backport it to the Gluster 3.7.6 version?

Regards,
Abhishek

On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL 
wrote:

> What is the way to apply this patch on Gluster 3.7.6, or is upgrading the
> version the only way?
>
> On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL  > wrote:
>
>> Hi Kotresh,
>>
>> I have seen the patch available on the link which you shared. It seems we
>> don't have some files in gluster 3.7.6 which you modified in the patch.
>>
>> Is there any possibility to provide the patch for Gluster 3.7.6?
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>>> Bitrot requires versioning of files to be done on writes.
>>> This was being done irrespective of whether bitrot is
>>> enabled or not. This takes considerable CPU. With the
>>> fix https://review.gluster.org/#/c/14442/, it is made
>>> optional and is enabled only with bitrot. If bitrot
>>> is not enabled, then you won't see any setxattr/getxattrs
>>> related to bitrot.
>>>
>>> The fix would be available in 3.11.
>>>
>>>
>>> Thanks and Regards,
>>> Kotresh H R
>>>
>>> - Original Message -
>>> > From: "ABHISHEK PALIWAL" 
>>> > To: "Pranith Kumar Karampuri" 
>>> > Cc: "Gluster Devel" , "gluster-users" <
>>> gluster-us...@gluster.org>, "Kotresh Hiremath
>>> > Ravishankar" 
>>> > Sent: Monday, April 24, 2017 11:30:57 AM
>>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>>> >
>>> > Hi Kotresh,
>>> >
>>> > Could you please update me on this?
>>> >
>>> > Regards,
>>> > Abhishek
>>> >
>>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
>>> > pkara...@redhat.com> wrote:
>>> >
>>> > > +Kotresh who seems to have worked on the bug you mentioned.
>>> > >
>>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
>>> > > abhishpali...@gmail.com> wrote:
>>> > >
>>> > >>
>>> > >> If the patch provided in that case will resolve my bug as well then
>>> > >> please provide the patch so that I will backport it on 3.7.6
>>> > >>
>>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>>> > >> abhishpali...@gmail.com> wrote:
>>> > >>
>>> > >>> Hi Team,
>>> > >>>
>>> > >>> I have noticed that there are so many glusterfsd threads are
>>> running in
>>> > >>> my system and we observed some of those thread consuming more cpu.
>>> I
>>> > >>> did “strace” on two such threads (before the problem disappeared by
>>> > >>> itself)
>>> > >>> and found that there is a continuous activity like below:
>>> > >>>
>>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170
>>> 126T113552+.log.gz",
>>> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_2
>>> 0170126T113552+.log.gz",
>>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_2
>>> 0170126T113552+.log.gz",
>>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>> > >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T
>>> 180550+.log.gz",
>>> > >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
>>> 123T180550+.log.gz",
>>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
>>> 123T180550+.log.gz",
>>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>>
>>> > >>> I have found the below existing issue which is very similar to my
>>> > >>> scenario.
>>> > >>>
>>> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
>>> > >>>
>>> > >>> We are using the gluster-3.7.6 and it seems that the issue is
>>> fixed in
>>> > >>> 3.8.4 version.
>>> > >>>
>>> > >>> Could you please let me know why it showing the number of above
>>> logs and
>>> > >>> reason behind it as it is not explained in the above bug.
>>> > >>>
>>> > >>> Regards,
>>> > >>> Abhishek
>>> > >>>
>>> > >>> --
>>> > >>>
>>> > >>>
>>> > >>>
>>> > >>>
>>> > >>> Regards
>>> > >>> Abhishek Paliwal
>>> > >>>
>>> > >>
>>> > >>
>>> > >>
>>> > >> --
>>> > >>
>>> > >>
>>> > >>
>>> > >>
>>> > >> Regards
>>> > >> Abhishek Paliwal
>>> > >>
>>> > >> ___
>>> 

Re: [Gluster-devel] Priority based ping packet for 3.10

2017-04-24 Thread Raghavendra G
On Fri, Apr 21, 2017 at 11:43 AM, Raghavendra G 
wrote:

> Summing up various discussions I had on this,
>
> 1. The current ping framework should measure just the responsiveness of the
> network and rpc layers. This means poller threads shouldn't be winding the
> individual fops at all (as it might add delay in reading the ping
> requests). Instead, they can queue the requests to a common work queue and
> other threads should pick up the requests.
>

Patch can be found at:
https://review.gluster.org/17105
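
A rough sketch of the idea in point 1 (this is illustrative pthread code, not
the code in the patch; the request structure and the function names are
invented):

#include <pthread.h>
#include <stdlib.h>

struct req { struct req *next; int is_ping; };

static struct req      *queue_head;
static pthread_mutex_t  queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   queue_cond = PTHREAD_COND_INITIALIZER;

static void reply_ping(struct req *r)  { free(r); }   /* cheap, done inline */
static void execute_fop(struct req *r) { free(r); }   /* potentially slow   */

/* Called from the poller thread: it never blocks on fop execution, so ping
 * latency reflects only the network and rpc layers. */
static void poller_on_request(struct req *r)
{
    if (r->is_ping) {
        reply_ping(r);
        return;
    }
    pthread_mutex_lock(&queue_lock);
    r->next = queue_head;
    queue_head = r;
    pthread_mutex_unlock(&queue_lock);
    pthread_cond_signal(&queue_cond);
}

/* Worker threads drain the common queue and wind the actual fops. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_lock);
        struct req *r = queue_head;
        queue_head = r->next;
        pthread_mutex_unlock(&queue_lock);
        execute_fop(r);
    }
    return NULL;
}

With pings answered directly in the poller and everything else handed off, a
slow brick fop no longer inflates ping latency, which is what the framework is
supposed to measure.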


> 4. We've fixed some lock contention issues on the brick stack due to high
> latency on backend fs. However, this is on-going work as contentions can be
> found in various codepaths (mem-pool etc).
>

These patches were contributed by "Krutika Dhananjay" <
kdhananj.at.redhat.com>.
https://review.gluster.org/16869
https://review.gluster.org/16785
https://review.gluster.org/16462

Thanks Krutika for all those patches :).

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel