Re: [Gluster-devel] [Gluster-users] Gluster documentation search

2017-08-28 Thread Aravinda

On 08/28/2017 04:44 PM, Nigel Babu wrote:

Hello folks,

I spent some time today mucking about, trying to figure out how to make 
our documentation search a better experience. The short answer is: 
search kind of works now.


Long answer: mkdocs creates a client-side file which is used for 
search. RTD overrides this by referring people to Elasticsearch. 
However, that doesn't clear out stale entries, and we're plagued with a 
whole lot of them. I've made the same changes that other consumers of 
RTD have made, to override our search to use the JS file rather than 
Elasticsearch.


--
nigelb


___
Gluster-users mailing list
gluster-us...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Nice.

Please version the generated search_index.json file so that it will be 
easy to invalidate the browser's cache when it changes.
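
For illustration, here is a minimal post-build sketch of that idea. It assumes 
a conventional mkdocs layout (site/search/search_index.json) and that the 
theme's JS refers to the index by that literal name; the script and the paths 
in it are hypothetical, not an existing hook.

    #!/usr/bin/env python3
    # Sketch: give the mkdocs search index a content-hashed name so browsers
    # cannot keep serving a stale copy after the docs are rebuilt.
    import hashlib
    import shutil
    from pathlib import Path

    SITE = Path("site")                            # assumed mkdocs output dir
    INDEX = SITE / "search" / "search_index.json"  # assumed index location

    def version_search_index():
        digest = hashlib.sha1(INDEX.read_bytes()).hexdigest()[:12]
        versioned = INDEX.with_name("search_index.%s.json" % digest)
        shutil.copyfile(INDEX, versioned)
        # Point any generated JS that loads the plain name at the hashed copy.
        for js in SITE.rglob("*.js"):
            text = js.read_text()
            if "search_index.json" in text:
                js.write_text(text.replace("search_index.json", versioned.name))
        print("search index published as", versioned.name)

    if __name__ == "__main__":
        version_search_index()

Run once after "mkdocs build"; the hash changes only when the index content 
changes, so browsers fetch a fresh copy exactly when they need to.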


--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-28 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra G" 
> To: "Nithya Balachandran" 
> Cc: "Raghavendra Gowdappa" , anoo...@redhat.com, 
> "Gluster Devel" ,
> raghaven...@redhat.com
> Sent: Tuesday, August 29, 2017 8:52:28 AM
> Subject: Re: [Gluster-devel] Need inputs on patch #17985
> 
> On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran 
> wrote:
> 
> > It has been a while, but IIRC snapview-client (loaded above dht/tier etc.) had
> > some issues when we ran tiering tests. Rafi might have more info on this -
> > basically it was expecting to find the inode_ctx populated, but it was not.
> >
> 
> Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible for you to take
> ownership of:
> 
> * Identifying whether the patch in question causes the issue?

gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I quickly checked 
whether it's the same state stored by gf_svc_lookup_cbk, and it looks like the 
same state. So, I guess readdirp is handled correctly by snapview-client and an 
explicit lookup is not required. But I will wait for inputs from rabhat and rafi.

[1] 
https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962

> * Sending a fix, or at least evaluating whether a fix is possible.
> 
> @Others,
> 
> With the motivation of getting some traction on this, is it OK if we:
> * Set a deadline of around 15 days to complete the review (or testing with
> the patch in question) of respective components and to come up with issues
> (if any).
> * Post the deadline, if there are no open issues, go ahead and merge the
> patch?
> 
> If that is not enough time, let us know and we can come up with a reasonable
> timeline.
> 
> regards,
> Raghavendra
> 
> 
> > On 24 August 2017 at 10:13, Raghavendra G 
> > wrote:
> >
> >> Note that we need to consider xlators on brick stack too. I've added
> >> maintainers/peers of xlators on brick stack. Please explicitly ack/nack
> >> whether this patch affects your component.
> >>
> >> For reference, following are the xlators loaded in brick stack
> >>
> >> storage/posix
> >> features/trash
> >> features/changetimerecorder
> >> features/changelog
> >> features/bitrot-stub
> >> features/access-control
> >> features/locks
> >> features/worm
> >> features/read-only
> >> features/leases
> >> features/upcall
> >> performance/io-threads
> >> features/selinux
> >> features/marker
> >> features/barrier
> >> features/index
> >> features/quota
> >> debug/io-stats
> >> performance/decompounder
> >> protocol/server
> >>
> >>
> >> For those not following this thread, the question we need to answer is,
> >> "whether the xlator you are associated with works fine if a non-lookup
> >> fop (like open, setattr, stat etc) hits it without a lookup ever being
> >> done
> >> on that inode"
> >>
> >> regards,
> >> Raghavendra
> >>
> >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> >> rgowd...@redhat.com> wrote:
> >>
> >>> Thanks Pranith and Ashish for your inputs.
> >>>
> >>> - Original Message -
> >>> > From: "Pranith Kumar Karampuri" 
> >>> > To: "Ashish Pandey" 
> >>> > Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" <
> >>> xhernan...@datalab.es>, "Gluster Devel"
> >>> > 
> >>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> >>> > Subject: Re: Need inputs on patch #17985
> >>> >
> >>> > Raghavendra,
> >>> > As Ashish mentioned, there aren't any known problems if upper
> >>> xlators
> >>> > don't send lookups in EC at the moment.
> >>> >
> >>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey 
> >>> wrote:
> >>> >
> >>> > > Raghavendra,
> >>> > >
> >>> > > I have provided my comment on this patch.
> >>> > > I think EC will not have any issue with this approach.
> >>> > > However, I would welcome comments from Xavi and Pranith too for any
> >>> side
> >>> > > effects which I may not be able to foresee.
> >>> > >
> >>> > > Ashish
> >>> > >
> >>> > > --
> >>> > > *From: *"Raghavendra Gowdappa" 
> >>> > > *To: *"Ashish Pandey" 
> >>> > > *Cc: *"Pranith Kumar Karampuri" , "Xavier
> >>> Hernandez"
> >>> > > , "Gluster Devel" 
> >>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> >>> > > *Subject: *Need inputs on patch #17985
> >>> > >
> >>> > >
> >>> > > Hi Ashish,
> >>> > >
> >>> > > Following are the blockers for making a decision on whether patch
> >>> [1] can
> >>> > > be merged or not:
> >>> > > * Evaluation of dentry operations (like rename etc) in dht
> >>> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
> >>> chmod
> >>> > > etc) hits EC without a single lookup performed on file/inode
> >>> > >
> >>> > > Can you please 

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-28 Thread Raghavendra G
On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran 
wrote:

> It has been a while, but IIRC snapview-client (loaded above dht/tier etc.) had
> some issues when we ran tiering tests. Rafi might have more info on this -
> basically it was expecting to find the inode_ctx populated, but it was not.
>

Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible for you to take
ownership of:

* Identifying whether the patch in question causes the issue?
* Sending a fix, or at least evaluating whether a fix is possible.

@Others,

With the motivation of getting some traction on this, is it OK if we:
* Set a deadline of around 15 days to complete the review (or testing with
the patch in question) of respective components and to come up with issues
(if any).
* Post the deadline, if there are no open issues, go ahead and merge the
patch?

If that is not enough time, let us know and we can come up with a reasonable
timeline.

regards,
Raghavendra


> On 24 August 2017 at 10:13, Raghavendra G 
> wrote:
>
>> Note that we need to consider xlators on brick stack too. I've added
>> maintainers/peers of xlators on brick stack. Please explicitly ack/nack
>> whether this patch affects your component.
>>
>> For reference, following are the xlators loaded in brick stack
>>
>> storage/posix
>> features/trash
>> features/changetimerecorder
>> features/changelog
>> features/bitrot-stub
>> features/access-control
>> features/locks
>> features/worm
>> features/read-only
>> features/leases
>> features/upcall
>> performance/io-threads
>> features/selinux
>> features/marker
>> features/barrier
>> features/index
>> features/quota
>> debug/io-stats
>> performance/decompounder
>> protocol/server
>>
>>
>> For those not following this thread, the question we need to answer is,
>> "whether the xlator you are associated with works fine if a non-lookup
>> fop (like open, setattr, stat etc) hits it without a lookup ever being done
>> on that inode"
>>
>> regards,
>> Raghavendra
>>
>> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> Thanks Pranith and Ashish for your inputs.
>>>
>>> - Original Message -
>>> > From: "Pranith Kumar Karampuri" 
>>> > To: "Ashish Pandey" 
>>> > Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" <
>>> xhernan...@datalab.es>, "Gluster Devel"
>>> > 
>>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
>>> > Subject: Re: Need inputs on patch #17985
>>> >
>>> > Raghavendra,
>>> > As Ashish mentioned, there aren't any known problems if upper
>>> xlators
>>> > don't send lookups in EC at the moment.
>>> >
>>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey 
>>> wrote:
>>> >
> >>> > > Raghavendra,
>>> > >
>>> > > I have provided my comment on this patch.
>>> > > I think EC will not have any issue with this approach.
>>> > > However, I would welcome comments from Xavi and Pranith too for any
>>> side
>>> > > effects which I may not be able to foresee.
>>> > >
>>> > > Ashish
>>> > >
>>> > > --
>>> > > *From: *"Raghavendra Gowdappa" 
>>> > > *To: *"Ashish Pandey" 
>>> > > *Cc: *"Pranith Kumar Karampuri" , "Xavier
>>> Hernandez"
>>> > > , "Gluster Devel" 
>>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
>>> > > *Subject: *Need inputs on patch #17985
>>> > >
>>> > >
>>> > > Hi Ashish,
>>> > >
>>> > > Following are the blockers for making a decision on whether patch
>>> [1] can
>>> > > be merged or not:
>>> > > * Evaluation of dentry operations (like rename etc) in dht
>>> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
>>> chmod
>>> > > etc) hits EC without a single lookup performed on file/inode
>>> > >
>>> > > Can you please comment on the patch? I'll take care of dht part.
>>> > >
>>> > > [1] https://review.gluster.org/#/c/17985/
>>> > >
>>> > > regards,
>>> > > Raghavendra
>>> > >
>>> > >
>>> >
>>> >
>>> > --
>>> > Pranith
>>> >
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> --
>>> Raghavendra G
>>>
>>> 
>>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>


-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade

2017-08-28 Thread Raghavendra Gowdappa


- Original Message -
> From: "Matthew B" 
> To: "Sanoj Unnikrishnan" 
> Cc: "Raghavendra Gowdappa" , "Gluster Devel" 
> 
> Sent: Monday, August 28, 2017 9:33:25 PM
> Subject: Re: [Gluster-devel] Quota Used Value Incorrect - Fix now or after 
> upgrade
> 
> Hi Sanoj,
> 
> Thank you for the information - I have applied the changes you specified
> above - but I haven't seen any changes in the xattrs on the directory after
> about 15 minutes:

I think the stat is served from cache - either gluster's md-cache or the kernel 
attribute cache. For healing to happen we need to force a lookup (which we had 
hoped would be issued as part of the stat cmd), and this lookup has to reach the 
marker xlator loaded on the bricks. To make sure a lookup on the directory 
reaches marker, we need to:

1. Turn off the kernel attribute and entry caches (using --entry-timeout=0 and 
--attribute-timeout=0 as options to glusterfs while mounting)
2. Turn off md-cache using the gluster cli (gluster volume set performance.md-cache off)
3. Turn off readdirplus in the entire stack [1]

Once the above steps are done I guess doing a stat results in a lookup on the 
directory witnessed by marker. Once the issue is fixed you can undo the above 
three steps so that performance is not affected in your setup.

[1] 
http://nongnu.13855.n7.nabble.com/Turning-off-readdirp-in-the-entire-stack-on-fuse-mount-td220297.html
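
If it helps to confirm that the heal actually ran, a watcher along the lines of 
the sketch below can be run on the brick host after setting the dirty flag. It 
assumes only what the getfattr output in this thread shows: the dirty xattr is 
ASCII '1' plus a trailing NUL (0x3100) while re-accounting is pending, and goes 
back to '0' (0x3000) once marker is done. The helper name and timings are made up.

    #!/usr/bin/env python3
    # Sketch: poll trusted.glusterfs.quota.dirty on a brick directory until
    # marker clears it (0x3100 -> 0x3000), instead of guessing how long to wait.
    import os
    import sys
    import time

    def wait_for_quota_heal(brick_dir, timeout=600, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            dirty = os.getxattr(brick_dir, "trusted.glusterfs.quota.dirty")
            if dirty.rstrip(b"\x00") == b"0":
                print("dirty flag cleared; marker has re-accounted", brick_dir)
                return True
            time.sleep(interval)
        print("dirty flag still set after %ss" % timeout, file=sys.stderr)
        return False

    if __name__ == "__main__":
        wait_for_quota_heal(sys.argv[1])

For example, run it on gluster07 against 
/mnt/raid6-storage/storage/data/projects/MEOPAR/ after the setfattr and stat.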

> 
> [root@gluster07 ~]# setfattr -n trusted.glusterfs.quota.dirty -v 0x3100
> /mnt/raid6-storage/storage/data/projects/MEOPAR/
> 
> [root@gluster07 ~]# stat /mnt/raid6-storage/storage/data/projects/MEOPAR
> 
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
> /mnt/raid6-storage/storage/data/projects/MEOPAR
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x7209b677f4b94d82a3820733620e6929
> trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x599f228800088654
> trusted.glusterfs.dht=0x0001b6db6d41db6db6ee
> trusted.glusterfs.quota.d5a5ecda-7511-4bbb-9b4c-4fcc84e3e1da.contri=0xfa3d7c1ba60a9ccb0005fd2f
> trusted.glusterfs.quota.dirty=0x3100
> trusted.glusterfs.quota.limit-set=0x0880
> trusted.glusterfs.quota.size=0xfa3d7c1ba60a9ccb0005fd2f
> 
> [root@gluster07 ~]# gluster volume status storage
> Status of volume: storage
> Gluster process                              TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.0.231.50:/mnt/raid6-storage/storage 49159     0          Y       2160
> Brick 10.0.231.51:/mnt/raid6-storage/storage 49153     0          Y       16037
> Brick 10.0.231.52:/mnt/raid6-storage/storage 49159     0          Y       2298
> Brick 10.0.231.53:/mnt/raid6-storage/storage 49154     0          Y       9038
> Brick 10.0.231.54:/mnt/raid6-storage/storage 49153     0          Y       32284
> Brick 10.0.231.55:/mnt/raid6-storage/storage 49153     0          Y       14840
> Brick 10.0.231.56:/mnt/raid6-storage/storage 49152     0          Y       29389
> NFS Server on localhost                      2049      0          Y       29421
> Quota Daemon on localhost                    N/A       N/A        Y       29438
> NFS Server on 10.0.231.51                    2049      0          Y       18249
> Quota Daemon on 10.0.231.51                  N/A       N/A        Y       18260
> NFS Server on 10.0.231.55                    2049      0          Y       24128
> Quota Daemon on 10.0.231.55                  N/A       N/A        Y       24147
> NFS Server on 10.0.231.54                    2049      0          Y       9397
> Quota Daemon on 10.0.231.54                  N/A       N/A        Y       9406
> NFS Server on 10.0.231.53                    2049      0          Y       18387
> Quota Daemon on 10.0.231.53                  N/A       N/A        Y       18397
> NFS Server on 10.0.231.52                    2049      0          Y       2230
> Quota Daemon on 10.0.231.52                  N/A       N/A        Y       2262
> NFS Server on 10.0.231.50                    2049      0          Y       2113
> Quota Daemon on 10.0.231.50                  N/A       N/A        Y       2154
> 
> Task Status of Volume storage
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> [root@gluster07 ~]# gluster volume quota storage list | egrep "MEOPAR "
> /data/projects/MEOPAR  8.5TB 80%(6.8TB) 16384.0PB  17.4TB  No   No
> 
> 
> 
> 
> 

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-28 Thread Shyam Ranganathan

Nigel, Shwetha,

The latest Glusto run [a], started by Nigel after fixing the prior 
timeout issue, failed again (much later in the run, though).


I took a look at the logs and my analysis is here [b]

@atin, @kaushal, @ppai can you take a look and see if the analysis is 
correct?


In short, glusterd got an error when checking rebalance stats from one 
of the nodes:

"Received commit RJT from uuid: 6f9524e6-9f9e-44aa-b2f4-393404adfd9d"

and the rebalance daemon on the node with that UUID was not really ready 
to serve requests when this was called, hence I am assuming this is what 
caused the error. But it needs a once-over by one of you folks.


@Shwetha, can we add a further timeout between rebalance start and 
checking the status, just so that we avoid this timing issue on these nodes?
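
One way to do that without guessing a single sleep value is to poll until the 
status command succeeds or a deadline passes. A rough sketch (plain subprocess 
against the gluster CLI, not a call into the glusto libraries, and the volume 
name below is a placeholder):

    # Sketch: retry a gluster CLI check until it succeeds or a deadline passes,
    # instead of relying on one fixed sleep after "rebalance start".
    import subprocess
    import time

    def wait_for_command(cmd, timeout=60, interval=5):
        """Run cmd repeatedly until it exits 0 or timeout (seconds) expires."""
        deadline = time.time() + timeout
        while True:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout
            if time.time() >= deadline:
                raise TimeoutError("%r still failing after %ss: %s"
                                   % (cmd, timeout, result.stderr.strip()))
            time.sleep(interval)

    # e.g. give the rebalance daemons time to come up before asserting on status:
    # wait_for_command(["gluster", "volume", "rebalance", "testvol", "status"])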


Thanks,
Shyam

[a] glusto run: https://ci.centos.org/view/Gluster/job/gluster_glusto/377/

[b] analysis of the failure: 
https://paste.fedoraproject.org/paste/mk6ynJ0B9AH6H9ncbyru5w

On 08/25/2017 04:29 PM, Shyam Ranganathan wrote:
Nigel was kind enough to kick off a glusto run on 3.12 head a couple of 
days back. The status can be seen here [1].


The run failed, but managed to get past what Glusto does on master (see 
[2]). Not that this is a consolation, but just stating the fact.


The run [1] failed at,
17:05:57 
functional/bvt/test_cvt.py::TestGlusterHealSanity_dispersed_glusterfs::test_self_heal_when_io_in_progress 
FAILED


The test case failed due to,
17:10:28 E   AssertionError: ('Volume %s : All process are not 
online', 'testvol_dispersed')


The test case can be seen here [3], and the reason for the failure is that 
Glusto did not wait long enough for the down brick to come up (it waited 
for 10 seconds, but the brick came up after 12 seconds, i.e. within the 
same second as the check for it being up). The log snippets pointing to 
this problem are here [4]. In short, no real bug or issue has been found 
to have caused the failure as yet.


Glusto as a gating factor for this release was desirable, but having got 
this far on 3.12 does help.


@nigel, we could try another run after increasing the timeout between 
bringing the brick up and checking if it is up; let me know if that 
works, and what is needed from me to get this going.


Shyam

[1] Glusto 3.12 run: 
https://ci.centos.org/view/Gluster/job/gluster_glusto/365/


[2] Glusto on master: 
https://ci.centos.org/view/Gluster/job/gluster_glusto/360/testReport/functional.bvt.test_cvt/ 



[3] Failed test case: 
https://ci.centos.org/view/Gluster/job/gluster_glusto/365/testReport/functional.bvt.test_cvt/TestGlusterHealSanity_dispersed_glusterfs/test_self_heal_when_io_in_progress/ 



[4] Log analysis pointing to the failed check: 
https://paste.fedoraproject.org/paste/znTPiFLrc2~vsWuoYRToZA


"Releases are made better together"
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Current master branch fails to build

2017-08-28 Thread Niels de Vos
On Mon, Aug 28, 2017 at 08:18:50PM +, jenk...@build.gluster.org wrote:
> See 
> 
> 
> Changes:
> 
> [Jeff Darcy] statedump: add support for dumping basic mem-pool info
> 
> [Jeff Darcy] mem-pool: add tracking of mem_pool that requested the allocation
> 
> [Jeff Darcy] cluster/ec: coverity, fix for BAD_SHIFT
> 
> --
> Started by timer

... snip!

>   CC libglusterfs_la-stack.lo
> :
>  In function ‘gf_proc_dump_mempool_info’:
> :401:
>  error: ‘glusterfs_ctx_t’ has no member named ‘mempool_list’
> :401:
>  error: ‘struct mem_pool’ has no member named ‘owner’
> :401:
>  error: ‘struct mem_pool’ has no member named ‘owner’
> :401:
>  error: ‘glusterfs_ctx_t’ has no member named ‘mempool_list’
> :401:
>  error: ‘struct mem_pool’ has no member named ‘owner’
> :401:
>  error: ‘struct mem_pool’ has no member named ‘owner’
> :402:
>  error: ‘struct mem_pool’ has no member named ‘active’
> make[3]: *** [libglusterfs_la-statedump.lo] Error 1

This is happening because the mem-pool changes that got merged depend on
https://review.gluster.org/18075 . A revised version to address review
comments is currently undergoing regression testing.

Please review and merge these two patches in the following order:

1. mem-pool: track glusterfs_ctx_t in struct mem_pool
   https://review.gluster.org/18075

2. mem-pool: count allocations done per user-pool
   https://review.gluster.org/18074

In case (1) needs more changes, feel free to modify the change. It is
easiest to use git-review for that:

   $ git review -r origin -d 18075
   [ edit files ]
   $ git commit -a --fixup=HEAD
   [ test build ]
   $ git rebase -i --autosquash HEAD~2
   $ git review -r origin -R -t rfc master

Thanks!
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.12: Readiness for release (request attention)

2017-08-28 Thread Shyam Ranganathan

On 08/25/2017 04:48 PM, Shyam Ranganathan wrote:

Hi,

Here is a status of 3.12 from a release readiness perspective.

Most importantly, if there are any known blockers for the release, 
please shout out ASAP. Final release tagging is slated for (around) 
30th Aug, 2017.


1) YELLOW: Release notes are in, but need a review by feature owners 
(and others based on interest).


See [1] for release notes, and if there are edits submit the same for 
review, using the issue # for the feature as the github issue reference, 
or the release tracker bug as the BUG ID [2].


Calling this out again: please review the release notes.



2) YELLOW: Glusto status: better than master! See [3]. If we get more 
runs in, then we can decide whether there are blockers or not; this is 
not a release-gating run yet.


After the fix for the issue found in [3], glusto tests went further and 
failed elsewhere. The run details are here: 
https://ci.centos.org/view/Gluster/job/gluster_glusto/377/console




6) RED: Upgrade from 3.8 and 3.10 to 3.12 as stated in the (older) 
upgrade guides [6]


Tested the upgrade from 3.10.5 to 3.12 RC0, and we are good here. This 
task is done.




- Are there any new upgrade-related issues (geo-rep, snapshots, anything 
to consider)? I would like maintainers' attention on this if there are 
any further things to call out.


- Any help here is appreciated! This is not at risk, as I should be able 
to complete it myself, but help is welcome.


7) RED: At least get a couple of gbench [7] (Gluster benchmarking) runs 
in for replicated and dispersed volumes (action on me)


Still pending



Shyam

[1] Release notes: 
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md 



[2] Release tracker: 
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.0


[3] glusto 3.12 status: 
http://lists.gluster.org/pipermail/gluster-devel/2017-August/053529.html


[4] fstat status since branching: 
https://fstat.gluster.org/summary?start_date=2017-07-20_date=2017-08-26=release-3.12 



[5] 3.12 gerrit dashboard: 
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



[6] Older upgrade guide: 
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/


[7] gbench test: 
https://github.com/gluster/gbench/blob/master/bench-tests/bt--0001/GlusterBench.py 



"Releases are made better together"
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2017-08-28-414d3e92 (master branch)

2017-08-28 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-08-28-414d3e92
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster documentation search

2017-08-28 Thread Nigel Babu
Hello folks,

I spent some time today mucking about, trying to figure out how to make our
documentation search a better experience. The short answer is: search kind
of works now.

Long answer: mkdocs creates a client-side file which is used for search.
RTD overrides this by referring people to Elasticsearch. However, that
doesn't clear out stale entries, and we're plagued with a whole lot of
them. I've made the same changes that other consumers of RTD have made, to
override our search to use the JS file rather than Elasticsearch.

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade

2017-08-28 Thread Sanoj Unnikrishnan
Hi Matthew,

If you are sure that "/mnt/raid6-storage/storage/data/projects/MEOPAR/"
is the only directory with wrong accounting and its immediate
subdirectories have correct xattr values, setting the dirty xattr and
doing a stat after that should resolve the issue.

1) setfattr -n trusted.glusterfs.quota.dirty -v 0x3100
/mnt/raid6-storage/storage/data/projects/MEOPAR/

2) stat /mnt/raid6-storage/storage/data/projects/MEOPAR/

Could you please share what kind of operations happen on this
directory, to help RCA the issue?

If you think this can be true elsewhere in the filesystem as well, use the
following scripts to identify the same:

1)
https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py
2)
https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh
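
For a quick manual check alongside those scripts, the
trusted.glusterfs.quota.size values printed by getfattr can also be decoded by
hand. The sketch below assumes the newer (3.7+) layout of three big-endian
64-bit fields (bytes used, file count, directory count); on older formats that
store only the size, the unpack will raise an error rather than return wrong
numbers. The example value is made up, not one of the values from this thread.

    # Sketch: decode a trusted.glusterfs.quota.size value as printed by
    # "getfattr -e hex". Assumes three big-endian 64-bit fields (24 bytes):
    # bytes used, file count, directory count.
    import struct

    def decode_quota_size(hex_value):
        raw = bytes.fromhex(hex_value[2:] if hex_value.startswith("0x") else hex_value)
        size, files, dirs = struct.unpack(">qqq", raw)
        return {"bytes_used": size, "file_count": files, "dir_count": dirs}

    # Hypothetical 24-byte value:
    example = ("0x0000000000100000"   # 1 MiB used
               "0000000000000010"     # 16 files
               "0000000000000002")    # 2 directories
    print(decode_quota_size(example))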

Regards,
Sanoj




On Mon, Aug 28, 2017 at 12:39 PM, Raghavendra Gowdappa 
wrote:

> +sanoj
>
> - Original Message -
> > From: "Matthew B" 
> > To: gluster-devel@gluster.org
> > Sent: Saturday, August 26, 2017 12:45:19 AM
> > Subject: [Gluster-devel] Quota Used Value Incorrect - Fix now or after
>   upgrade
> >
> > Hello,
> >
> > I need some advice on fixing an issue with quota on my gluster volume.
> It's
> > running version 3.7, distributed volume, with 7 nodes.
> >
> > # gluster --version
> > glusterfs 3.7.13 built on Jul 8 2016 15:26:18
> > Repository revision: git:// git.gluster.com/glusterfs.git
> > Copyright (c) 2006-2011 Gluster Inc. < http://www.gluster.com >
> > GlusterFS comes with ABSOLUTELY NO WARRANTY.
> > You may redistribute copies of GlusterFS under the terms of the GNU
> General
> > Public License.
> >
> > # gluster volume info storage
> >
> > Volume Name: storage
> > Type: Distribute
> > Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
> > Status: Started
> > Number of Bricks: 7
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.0.231.50:/mnt/raid6-storage/storage
> > Brick2: 10.0.231.51:/mnt/raid6-storage/storage
> > Brick3: 10.0.231.52:/mnt/raid6-storage/storage
> > Brick4: 10.0.231.53:/mnt/raid6-storage/storage
> > Brick5: 10.0.231.54:/mnt/raid6-storage/storage
> > Brick6: 10.0.231.55:/mnt/raid6-storage/storage
> > Brick7: 10.0.231.56:/mnt/raid6-storage/storage
> > Options Reconfigured:
> > changelog.changelog: on
> > geo-replication.ignore-pid-check: on
> > geo-replication.indexing: on
> > nfs.disable: no
> > performance.readdir-ahead: on
> > features.quota: on
> > features.inode-quota: on
> > features.quota-deem-statfs: on
> > features.read-only: off
> >
> > # df -h /storage/
> > Filesystem Size Used Avail Use% Mounted on
> > 10.0.231.50:/storage 255T 172T 83T 68% /storage
> >
> >
> > I am planning to upgrade to 3.10 (or 3.12 when it's available) but I
> have a
> > number of quotas configured, and one of them (below) has a very wrong
> "Used"
> > value:
> >
> > # gluster volume quota storage list | egrep "MEOPAR "
> > /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No
> >
> >
> > I have confirmed the bad value appears in one of the bricks current
> xattrs,
> > and it looks like the issue has been encountered previously on bricks 04,
> > 03, and 06: (gluster07 does not have a trusted.glusterfs.quota.size.1 as
> it
> > was recently added)
> >
> > $ ansible -i hosts gluster-servers[0:6] -u  --ask-pass -m shell -b
> > --become-method=sudo --ask-become-pass -a "getfattr --absolute-names -m
> . -d
> > -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR | egrep
> > '^trusted.glusterfs.quota.size'"
> > SSH password:
> > SUDO password[defaults to SSH password]:
> >
> > gluster02 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0x011ecfa56c05cd6d0006d478
> > trusted.glusterfs.quota.size.1=0x010ad4a452012a03000150fa
> >
> > gluster05 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0x0033b8e92204cde80006b1a4
> > trusted.glusterfs.quota.size.1=0x010dca277c01297d00015005
> >
> > gluster01 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0x003d4d43480576160006afd2
> > trusted.glusterfs.quota.size.1=0x0133fe211e05d1610006cfd4
> >
> > gluster04 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0xff396f3e9404d7ea00068c62
> > trusted.glusterfs.quota.size.1=0x0106e6724801138f00012fb2
> >
> > gluster03 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0xfd02acabf003599643e2
> > trusted.glusterfs.quota.size.1=0x0114e20f5e0113b300012fb2
> >
> > gluster06 | SUCCESS | rc=0 >>
> > trusted.glusterfs.quota.size=0xff0c98de440536e400068cf2
> > trusted.glusterfs.quota.size.1=0x013532664e05e73f0006cfd4
> >
> > gluster07 | SUCCESS | rc=0 >>
> > 

Re: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade

2017-08-28 Thread Raghavendra Gowdappa
+sanoj

- Original Message -
> From: "Matthew B" 
> To: gluster-devel@gluster.org
> Sent: Saturday, August 26, 2017 12:45:19 AM
> Subject: [Gluster-devel] Quota Used Value Incorrect - Fix now or after
> upgrade
> 
> Hello,
> 
> I need some advice on fixing an issue with quota on my gluster volume. It's
> running version 3.7, distributed volume, with 7 nodes.
> 
> # gluster --version
> glusterfs 3.7.13 built on Jul 8 2016 15:26:18
> Repository revision: git:// git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. < http://www.gluster.com >
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU General
> Public License.
> 
> # gluster volume info storage
> 
> Volume Name: storage
> Type: Distribute
> Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
> Status: Started
> Number of Bricks: 7
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.231.50:/mnt/raid6-storage/storage
> Brick2: 10.0.231.51:/mnt/raid6-storage/storage
> Brick3: 10.0.231.52:/mnt/raid6-storage/storage
> Brick4: 10.0.231.53:/mnt/raid6-storage/storage
> Brick5: 10.0.231.54:/mnt/raid6-storage/storage
> Brick6: 10.0.231.55:/mnt/raid6-storage/storage
> Brick7: 10.0.231.56:/mnt/raid6-storage/storage
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> nfs.disable: no
> performance.readdir-ahead: on
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
> features.read-only: off
> 
> # df -h /storage/
> Filesystem Size Used Avail Use% Mounted on
> 10.0.231.50:/storage 255T 172T 83T 68% /storage
> 
> 
> I am planning to upgrade to 3.10 (or 3.12 when it's available) but I have a
> number of quotas configured, and one of them (below) has a very wrong "Used"
> value:
> 
> # gluster volume quota storage list | egrep "MEOPAR "
> /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No
> 
> 
> I have confirmed the bad value appears in one of the bricks current xattrs,
> and it looks like the issue has been encountered previously on bricks 04,
> 03, and 06: (gluster07 does not have a trusted.glusterfs.quota.size.1 as it
> was recently added)
> 
> $ ansible -i hosts gluster-servers[0:6] -u  --ask-pass -m shell -b
> --become-method=sudo --ask-become-pass -a "getfattr --absolute-names -m . -d
> -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR | egrep
> '^trusted.glusterfs.quota.size'"
> SSH password:
> SUDO password[defaults to SSH password]:
> 
> gluster02 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x011ecfa56c05cd6d0006d478
> trusted.glusterfs.quota.size.1=0x010ad4a452012a03000150fa
> 
> gluster05 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x0033b8e92204cde80006b1a4
> trusted.glusterfs.quota.size.1=0x010dca277c01297d00015005
> 
> gluster01 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x003d4d43480576160006afd2
> trusted.glusterfs.quota.size.1=0x0133fe211e05d1610006cfd4
> 
> gluster04 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xff396f3e9404d7ea00068c62
> trusted.glusterfs.quota.size.1=0x0106e6724801138f00012fb2
> 
> gluster03 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xfd02acabf003599643e2
> trusted.glusterfs.quota.size.1=0x0114e20f5e0113b300012fb2
> 
> gluster06 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xff0c98de440536e400068cf2
> trusted.glusterfs.quota.size.1=0x013532664e05e73f0006cfd4
> 
> gluster07 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xfa3d7c1ba60a9ccb0005fd2f
> 
> And reviewing the subdirectories of that folder on the impacted server you
> can see that none of the direct children have such incorrect values:
> 
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
> /mnt/raid6-storage/storage/data/projects/MEOPAR/*
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/
> ...
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x00fb6841820074730dae
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.size=0x00fb6841820074730dae
> 
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/
> ...
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000416d5f4000baa0441
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.limit-set=0x0100
> trusted.glusterfs.quota.size=0x000416d5f4000baa0441
> 
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/
> ...
>