[Gluster-devel] Fwd: Gerrit downtime on Aug 8, 2018

2018-08-06 Thread Nigel Babu
Reminder, this upgrade is tomorrow.

-- Forwarded message -
From: Nigel Babu 
Date: Fri, Jul 27, 2018 at 5:28 PM
Subject: Gerrit downtime on Aug 8, 2018
To: gluster-devel 
Cc: gluster-infra, automated-test...@gluster.org


Hello,

It's been a while since we upgraded Gerrit. We plan to do a full upgrade
and move to 2.15.3. Among other changes, this brings in the new PolyGerrit
interface, which introduces significant frontend changes. You can take a
look at how this will look on the staging site[1].

## Outage Window
0330 EDT to 0730 EDT
0730 UTC to 1130 UTC
1300 IST to 1700 IST
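
If you want to double-check the conversion for your own timezone, here is a
quick sanity check (assumes GNU date; times taken from the window above):

    TZ=America/New_York date -d '2018-08-08 07:30 UTC' '+%H%M %Z'   # 0330 EDT
    TZ=Asia/Kolkata date -d '2018-08-08 07:30 UTC' '+%H%M %Z'       # 1300 IST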

The actual time needed for the upgrade is about an hour, but we want to
keep a larger window open so we can roll back in the event of any problems
during the upgrade.

-- 
nigelb


-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2018-08-06-b982e09f (master branch)

2018-08-06 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-06-b982e09f/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/
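
If you want to script around these reports, the directory listings are served
over plain HTTP, so something like this (a hedged sketch, assuming the
listings stay enabled) enumerates the per-branch result directories:

    curl -s http://download.gluster.org/pub/gluster/glusterfs/static-analysis/ \
        | grep -oE 'href="[^"]+/"' | sed -e 's/^href="//' -e 's/"$//'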



Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 2 August 2018 at 05:46, Shyam Ranganathan  wrote:

> Below is a summary of failures over the last 7 days on the nightly
> health check jobs. This is one test per line, sorted in descending order
> of occurrence (IOW, most frequent failure is on top).
>
> The list includes spurious failures as well, IOW passed on a retry. This
> is because if we do not weed out the spurious errors, failures may
> persist and make it difficult to gauge the health of the branch.
>
> The numbers at the end of each test line are the Jenkins job numbers where
> it failed. The job ID ranges correspond to jobs as follows:
> - https://build.gluster.org/job/regression-test-burn-in/ ID: 4048 - 4053
> - https://build.gluster.org/job/line-coverage/ ID: 392 - 407
> - https://build.gluster.org/job/regression-test-with-multiplex/ ID: 811
> - 817
>
> So to get to job 4051 (say), use the link
> https://build.gluster.org/job/regression-test-burn-in/4051
>
> Atin has called out some folks for attention to specific tests; consider
> this a call-out to the rest of you as well: if you see a test against your
> component, help with root-causing and fixing it is needed.
>
> tests/bugs/core/bug-1432542-mpx-restart-crash.t, 4049, 4051, 4052, 405,
> 404, 403, 396, 392
>
> tests/00-geo-rep/georep-basic-dr-tarssh.t, 811, 814, 817, 4050, 4053
>
> tests/bugs/bug-1368312.t, 815, 816, 811, 813, 403
>
> tests/bugs/distribute/bug-1122443.t, 4050, 407, 403, 815, 816
>
> tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t,
> 814, 816, 817, 812, 815
>
> tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t,
> 4049, 812, 814, 405, 392
>
> tests/bitrot/bug-1373520.t, 811, 816, 817, 813
>
> tests/bugs/ec/bug-1236065.t, 812, 813, 815
>
> tests/00-geo-rep/georep-basic-dr-rsync.t, 813, 4046
>
> tests/basic/ec/ec-1468261.t, 817, 812
>
> tests/bugs/glusterd/quorum-validation.t, 4049, 407
>
> tests/bugs/quota/bug-1293601.t, 811, 812
>
> tests/basic/afr/add-brick-self-heal.t, 407
>
> tests/basic/afr/granular-esh/replace-brick.t, 392
>
> tests/bugs/core/multiplex-limit-issue-151.t, 405
>
> tests/bugs/distribute/bug-1042725.t, 405
>

I think this was caused by a failure to clean up the mounts from the
previous test. It succeeds on retry.

[16:59:12] Running tests in file ./tests/bugs/distribute/bug-1042725.t
./tests/bugs/distribute/bug-1042725.t ..
1..16
Aborting.

/mnt/nfs/1 could not be deleted, here are the left over items
drwxr-xr-x. 2 root root    6 Jul 31 16:59 /d/backends
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/glusterfs/0
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/glusterfs/1
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/glusterfs/2
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/glusterfs/3
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/nfs/0
drwxr-xr-x. 2 root root 4096 Jul 31 16:59 /mnt/nfs/1

Please correct the problem and try again.
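
For reference, the abort above is the harness refusing to start a new .t
while old mounts linger. A rough equivalent of the manual cleanup (this is
only a sketch, not the actual cleanup code from tests/include.rc):

    for m in /mnt/glusterfs/* /mnt/nfs/*; do
        # force a lazy unmount of anything still mounted, then try to remove it
        mountpoint -q "$m" && umount -f -l "$m"
        rmdir "$m" 2>/dev/null || ls -ld "$m"   # anything left over prints, as above
    done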


I don't think there is anything to be done for this one.



>
> tests/bugs/distribute/bug-1117851.t, 405
>
> tests/bugs/glusterd/rebalance-operations-in-single-node.t, 405
>
> tests/bugs/index/bug-1559004-EMLINK-handling.t, 405
>
> tests/bugs/replicate/bug-1386188-sbrain-fav-child.t, 4048
>
> tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t, 813
>
>
>
> Thanks,
> Shyam
>
>
> On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
> > On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
> >> 1) master branch health checks (weekly, till branching)
> >>   - Expect every Monday a status update on various tests runs
> >
> > See https://build.gluster.org/job/nightly-master/ for a report on
> > various nightly and periodic jobs on master.
> >
> > RED:
> > 1. Nightly regression (3/6 failed)
> > - Tests that reported failure:
> > ./tests/00-geo-rep/georep-basic-dr-rsync.t
> > ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
> > ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-
> txn-on-quorum-failure.t
> > ./tests/bugs/distribute/bug-1122443.t
> >
> > - Tests that needed a retry:
> > ./tests/00-geo-rep/georep-basic-dr-tarssh.t
> > ./tests/bugs/glusterd/quorum-validation.t
> >
> > 2. Regression with multiplex (cores and test failures)
> >
> > 3. line-coverage (cores and test failures)
> > - Tests that failed:
> > ./tests/bugs/core/bug-1432542-mpx-restart-crash.t (patch
> > https://review.gluster.org/20568 does not fix the timeout entirely, as
> > can be seen in this run,
> > https://build.gluster.org/job/line-coverage/401/consoleFull )
> >
> > Calling out to contributors to take a look at various failures, and post
> > the same as bugs AND to the lists (so that duplication is avoided) to
> > get this to a GREEN status.
> >
> > GREEN:
> > 1. cpp-check
> > 2. RPM builds
> >
> > IGNORE (for now):
> > 1. clang scan (@nigel, this job requires clang warnings to be 

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 6 August 2018 at 18:03, Nithya Balachandran  wrote:

>
>
> On 2 August 2018 at 05:46, Shyam Ranganathan  wrote:
>
>> Below is a summary of failures over the last 7 days on the nightly
>> health check jobs. This is one test per line, sorted in descending order
>> of occurrence (IOW, most frequent failure is on top).
>>
>> The list includes spurious failures as well, IOW passed on a retry. This
>> is because if we do not weed out the spurious errors, failures may
>> persist and make it difficult to gauge the health of the branch.
>>
>> The numbers at the end of each test line are the Jenkins job numbers where
>> it failed. The job ID ranges correspond to jobs as follows:
>> - https://build.gluster.org/job/regression-test-burn-in/ ID: 4048 - 4053
>> - https://build.gluster.org/job/line-coverage/ ID: 392 - 407
>> - https://build.gluster.org/job/regression-test-with-multiplex/ ID: 811
>> - 817
>>
>> So to get to job 4051 (say), use the link
>> https://build.gluster.org/job/regression-test-burn-in/4051
>>
>> Atin has called out some folks for attention to specific tests; consider
>> this a call-out to the rest of you as well: if you see a test against your
>> component, help with root-causing and fixing it is needed.
>>
>> tests/bugs/core/bug-1432542-mpx-restart-crash.t, 4049, 4051, 4052, 405,
>> 404, 403, 396, 392
>>
>> tests/00-geo-rep/georep-basic-dr-tarssh.t, 811, 814, 817, 4050, 4053
>>
>> tests/bugs/bug-1368312.t, 815, 816, 811, 813, 403
>>
>> tests/bugs/distribute/bug-1122443.t, 4050, 407, 403, 815, 816
>>
>> tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t,
>> 814, 816, 817, 812, 815
>>
>> tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t,
>> 4049, 812, 814, 405, 392
>>
>> tests/bitrot/bug-1373520.t, 811, 816, 817, 813
>>
>> tests/bugs/ec/bug-1236065.t, 812, 813, 815
>>
>> tests/00-geo-rep/georep-basic-dr-rsync.t, 813, 4046
>>
>> tests/basic/ec/ec-1468261.t, 817, 812
>>
>> tests/bugs/glusterd/quorum-validation.t, 4049, 407
>>
>> tests/bugs/quota/bug-1293601.t, 811, 812
>>
>> tests/basic/afr/add-brick-self-heal.t, 407
>>
>> tests/basic/afr/granular-esh/replace-brick.t, 392
>>
>> tests/bugs/core/multiplex-limit-issue-151.t, 405
>>
>> tests/bugs/distribute/bug-1042725.t, 405
>>
>> tests/bugs/distribute/bug-1117851.t, 405
>>
>
> From the non-lcov vs lcov runs:
>
> Non-lcov:
>
> [nbalacha@myserver glusterfs]$ grep TEST mnt-glusterfs-0.log
> [2018-07-31 16:30:36.930726]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 72 create_files /mnt/glusterfs/0 ++
> [2018-07-31 16:31:47.649022]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 75 glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder104.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/1 ++
> [2018-07-31 16:31:47.746734]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 77 move_files /mnt/glusterfs/0 ++
> [2018-07-31 16:31:47.783606]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 78 move_files /mnt/glusterfs/1 ++
> [2018-07-31 16:31:47.842878]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 85 done cat /mnt/glusterfs/0/status_0 ++
> [2018-07-31 16:33:14.849807]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 86 done cat /mnt/glusterfs/1/status_1 ++
> [2018-07-31 16:33:14.872184]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 88 Y force_umount /mnt/glusterfs/0 ++
> [2018-07-31 16:33:14.900334]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 89 Y force_umount /mnt/glusterfs/1 ++
> [2018-07-31 16:33:14.929238]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 90 glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder104.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/0 ++
> [2018-07-31 16:33:15.027094]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 91 check_files /mnt/glusterfs/0 ++
> [2018-07-31 16:33:20.268030]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 93 gluster --mode=script --wignore volume stop patchy ++
> [2018-07-31 16:33:22.392247]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 94 Stopped volinfo_field patchy Status ++
> [2018-07-31 16:33:22.492175]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 96 gluster --mode=script --wignore volume delete patchy ++
> [2018-07-31 16:33:25.475566]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 97 ! gluster --mode=script --wignore volume info patchy ++
>
>
> Total time for the tests: 169 seconds
>
>
> Lcov:
>
> [nbalacha@myserver glusterfs]$ grep TEST mnt-glusterfs-0.log
> [2018-08-06 08:33:05.737012]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 72 create_files /mnt/glusterfs/0 ++
> [2018-08-06 08:34:29.133045]:++ G_LOG:./tests/bugs/distribute/bug-11

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 2 August 2018 at 05:46, Shyam Ranganathan  wrote:

> Below is a summary of failures over the last 7 days on the nightly
> health check jobs. This is one test per line, sorted in descending order
> of occurrence (IOW, most frequent failure is on top).
>
> The list includes spurious failures as well, IOW passed on a retry. This
> is because if we do not weed out the spurious errors, failures may
> persist and make it difficult to gauge the health of the branch.
>
> The numbers at the end of each test line are the Jenkins job numbers where
> it failed. The job ID ranges correspond to jobs as follows:
> - https://build.gluster.org/job/regression-test-burn-in/ ID: 4048 - 4053
> - https://build.gluster.org/job/line-coverage/ ID: 392 - 407
> - https://build.gluster.org/job/regression-test-with-multiplex/ ID: 811
> - 817
>
> So to get to job 4051 (say), use the link
> https://build.gluster.org/job/regression-test-burn-in/4051
>
> Atin has called out some folks for attention to specific tests; consider
> this a call-out to the rest of you as well: if you see a test against your
> component, help with root-causing and fixing it is needed.
>
> tests/bugs/core/bug-1432542-mpx-restart-crash.t, 4049, 4051, 4052, 405,
> 404, 403, 396, 392
>
> tests/00-geo-rep/georep-basic-dr-tarssh.t, 811, 814, 817, 4050, 4053
>
> tests/bugs/bug-1368312.t, 815, 816, 811, 813, 403
>
> tests/bugs/distribute/bug-1122443.t, 4050, 407, 403, 815, 816
>
> tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t,
> 814, 816, 817, 812, 815
>
> tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t,
> 4049, 812, 814, 405, 392
>
> tests/bitrot/bug-1373520.t, 811, 816, 817, 813
>
> tests/bugs/ec/bug-1236065.t, 812, 813, 815
>
> tests/00-geo-rep/georep-basic-dr-rsync.t, 813, 4046
>
> tests/basic/ec/ec-1468261.t, 817, 812
>
> tests/bugs/glusterd/quorum-validation.t, 4049, 407
>
> tests/bugs/quota/bug-1293601.t, 811, 812
>
> tests/basic/afr/add-brick-self-heal.t, 407
>
> tests/basic/afr/granular-esh/replace-brick.t, 392
>
> tests/bugs/core/multiplex-limit-issue-151.t, 405
>
> tests/bugs/distribute/bug-1042725.t, 405
>
> tests/bugs/distribute/bug-1117851.t, 405
>

From the non-lcov vs lcov runs:

Non-lcov:

[nbalacha@myserver glusterfs]$ grep TEST mnt-glusterfs-0.log
[2018-07-31 16:30:36.930726]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 72 create_files /mnt/glusterfs/0 ++
[2018-07-31 16:31:47.649022]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 75 glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder104.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/1 ++
[2018-07-31 16:31:47.746734]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 77 move_files /mnt/glusterfs/0 ++
[2018-07-31 16:31:47.783606]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 78 move_files /mnt/glusterfs/1 ++
[2018-07-31 16:31:47.842878]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 85 done cat /mnt/glusterfs/0/status_0 ++
[2018-07-31 16:33:14.849807]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 86 done cat /mnt/glusterfs/1/status_1 ++
[2018-07-31 16:33:14.872184]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 88 Y force_umount /mnt/glusterfs/0 ++
[2018-07-31 16:33:14.900334]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 89 Y force_umount /mnt/glusterfs/1 ++
[2018-07-31 16:33:14.929238]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 90 glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder104.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/0 ++
[2018-07-31 16:33:15.027094]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 91 check_files /mnt/glusterfs/0 ++
[2018-07-31 16:33:20.268030]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 93 gluster --mode=script --wignore volume stop patchy ++
[2018-07-31 16:33:22.392247]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 94 Stopped volinfo_field patchy Status ++
[2018-07-31 16:33:22.492175]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 96 gluster --mode=script --wignore volume delete patchy ++
[2018-07-31 16:33:25.475566]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 97 ! gluster --mode=script --wignore volume info patchy ++


Total time for the tests: 169 seconds
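
That figure is just the gap between the first and last G_LOG timestamps. A
quick way to recompute it from the log with GNU date (a sketch; it assumes
the bracketed UTC timestamps shown above and strips the fractional seconds):

    first=$(grep -o '^\[[^]]*\]' mnt-glusterfs-0.log | head -1 | tr -d '[]')
    last=$(grep -o '^\[[^]]*\]' mnt-glusterfs-0.log | tail -1 | tr -d '[]')
    echo "$(( $(date -u -d "${last%.*}" +%s) - $(date -u -d "${first%.*}" +%s) )) seconds"   # -> 169 seconds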


Lcov:

[nbalacha@myserver glusterfs]$ grep TEST mnt-glusterfs-0.log
[2018-08-06 08:33:05.737012]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 72 create_files /mnt/glusterfs/0 ++
[2018-08-06 08:34:29.133045]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 75 glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder100.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/1 ++
[2018-08-06 08:34:29.257888]:++ G_LOG:./tests/bugs/distribute/bug-1117851.t: TEST: 77 move_files /mnt/gl

Re: [Gluster-devel] [Gluster-users] Gluster Documentation Hackathon - 7/19 through 7/23

2018-08-06 Thread Sankarshan Mukhopadhyay
Was a round-up/summary about this published to the lists?

On Wed, Jul 18, 2018 at 10:27 PM, Vijay Bellur  wrote:
> Hey All,
>
> We are organizing a hackathon to improve our upstream documentation. More
> details about the hackathon can be found at [1].
>
> Please feel free to let us know if you have any questions.
>
> Thanks,
> Amar & Vijay
>
> [1]
> https://docs.google.com/document/d/11LLGA-bwuamPOrKunxojzAEpHEGQxv8VJ68L3aKdPns/edit?usp=sharing
>



-- 
sankarshan mukhopadhyay



[Gluster-devel] New Coverity Scan

2018-08-06 Thread Nigel Babu
Hello folks,

We've completed a new Coverity run that was entirely automated. The current
split of Coverity issues:
High: 132
Medium: 241
Low: 83
Total: 456

We will be pushing a nightly build to scan.coverity.com via Jenkins, so you
should see these numbers update as fixes are merged.
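
For the curious, the upload side of a nightly push to scan.coverity.com looks
roughly like this (only a sketch; the project slug, token variable, and email
variable are assumptions, not what the Jenkins job literally runs):

    # build under Coverity's capture tool, then submit the result
    cov-build --dir cov-int make -j4
    tar czf glusterfs-cov.tgz cov-int
    curl --form token="$COVERITY_TOKEN" \
         --form email="$NOTIFY_EMAIL" \
         --form file=@glusterfs-cov.tgz \
         --form version="$(git rev-parse --short HEAD)" \
         'https://scan.coverity.com/builds?project=gluster%2Fglusterfs'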

-- 
nigelb