See <http://build.gluster.org/job/regression-test-burn-in/1264/>
------------------------------------------
[...truncated 8398 lines...]
ok 38, LINENUM:105
ok 39, LINENUM:107
ok 40, LINENUM:108
volume delete: patchy: success
volume delete: patchy2: success
ok 41, LINENUM:111
ok 42, LINENUM:112
ok
All tests successful.
Files=1, Tests=42, 56 wallclock secs ( 0.03 usr 0.01 sys + 11.10 cusr 4.25 csys = 15.39 CPU)
Result: PASS
End of test ./tests/basic/volume-snapshot-clone.t
================================================================================

================================================================================
[21:16:22] Running tests in file ./tests/basic/volume-snapshot.t
allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support. Falling back to native LVM signature detection.
allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support. Falling back to native LVM signature detection.
allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support. Falling back to native LVM signature detection.
snapshot delete: failed: Commit failed on 127.1.1.2. Snapshot patchy2_snap1_GMT-2016.07.02-21.16.49 might not be in an usable state.
volume delete: patchy2: failed: Cannot delete Volume patchy2 ,as it has 1 snapshots. To delete the volume, first delete all the snapshots under it.
tar: Removing leading `/' from member names
./tests/basic/volume-snapshot.t ..
1..33
ok 1, LINENUM:82
ok 2, LINENUM:84
ok 3, LINENUM:85
ok 4, LINENUM:87
ok 5, LINENUM:88
ok 6, LINENUM:89
volume create: patchy: success: please start the volume to access data
volume create: patchy2: success: please start the volume to access data
ok 7, LINENUM:92
ok 8, LINENUM:93
volume start: patchy: success
volume start: patchy2: success
ok 9, LINENUM:96
ok 10, LINENUM:97
ok 11, LINENUM:99
snapshot create: success: Snap patchy_snap created successfully
snapshot create: success: Snap patchy2_snap created successfully
ok 12, LINENUM:104
ok 13, LINENUM:105
Snapshot deactivate: patchy2_snap: Snap deactivated successfully
Snapshot deactivate: patchy_snap: Snap deactivated successfully
ok 14, LINENUM:109
ok 15, LINENUM:110
Snapshot activate: patchy2_snap: Snap activated successfully
Snapshot activate: patchy_snap: Snap activated successfully
ok 16, LINENUM:114
ok 17, LINENUM:115
Snapshot deactivate: patchy_snap: Snap deactivated successfully
Snapshot deactivate: patchy2_snap: Snap deactivated successfully
Snapshot activate: patchy_snap: Snap activated successfully
Snapshot activate: patchy2_snap: Snap activated successfully
ok 18, LINENUM:120
ok 19, LINENUM:121
ok 20, LINENUM:122
ok 21, LINENUM:123
ok 22, LINENUM:125
ok 23, LINENUM:126
ok 24, LINENUM:127
ok 25, LINENUM:128
snapshot create: success: Snap patchy_snap1_GMT-2016.07.02-21.16.49 created successfully
snapshot create: success: Snap patchy2_snap1_GMT-2016.07.02-21.16.49 created successfully
ok 26, LINENUM:135
ok 27, LINENUM:136
Snapshot command failed
snapshot delete: patchy_snap1_GMT-2016.07.02-21.16.49: snap removed successfully
volume stop: patchy: success
volume stop: patchy2: success
ok 28, LINENUM:141
ok 29, LINENUM:142
Snapshot restore: patchy_snap: Snap restored successfully
Snapshot restore: patchy2_snap: Snap restored successfully
ok 30, LINENUM:145
ok 31, LINENUM:146
volume delete: patchy: success
ok 32, LINENUM:149
not ok 33 Got "Y" instead of "N", LINENUM:150
FAILED COMMAND: N volume_exists patchy2
Failed 1/33 subtests
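For context, the failure chain is visible in the log itself: the commit phase of "snapshot delete" failed on peer 127.1.1.2, the leftover snapshot patchy2_snap1_GMT-2016.07.02-21.16.49 then blocked "volume delete patchy2" (per the CLI's own message, the snapshots must be deleted first), and test 33's check that the volume is gone therefore saw "Y" instead of "N". A minimal sketch of how that final check is shaped, assuming the usual tests/volume.rc-style helper (the helper body and CLI alias below are illustrative, not taken from this log):

    #!/bin/bash
    CLI="gluster --mode=script"

    # Prints "Y" if glusterd still knows the volume, "N" otherwise.
    volume_exists () {
        if $CLI volume info "$1" >/dev/null 2>&1; then
            echo "Y"
        else
            echo "N"
        fi
    }

    # Test 33 expects "N" (patchy2 deleted). Because "volume delete patchy2"
    # failed on the leftover snapshot, the helper still prints "Y".
    expected="N"
    actual=$(volume_exists patchy2)
    if [ "$actual" != "$expected" ]; then
        echo "not ok 33 Got \"$actual\" instead of \"$expected\""
    fi

So the real defect is the earlier "Commit failed on 127.1.1.2" / "Snapshot command failed" step; test 33 only reports its consequence.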
Test Summary Report
-------------------
./tests/basic/volume-snapshot.t (Wstat: 0 Tests: 33 Failed: 1)
  Failed test:  33
Files=1, Tests=33, 44 wallclock secs ( 0.03 usr 0.00 sys + 12.07 cusr 4.10 csys = 16.20 CPU)
Result: FAIL
End of test ./tests/basic/volume-snapshot.t
================================================================================

Run complete
================================================================================
Number of tests found: 115
Number of tests selected for run based on pattern: 115
Number of tests skipped as they were marked bad: 6
Number of tests skipped because of known_issues: 1
Number of tests that were run: 108

1 test(s) failed
./tests/basic/volume-snapshot.t

0 test(s) generated core
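To chase this locally, the one failing script can be rerun on its own from a built GlusterFS source tree; the harness drives each .t file through prove, so roughly (assuming a prepared tree; run-tests.sh accepting test paths as arguments is an assumption about this version of the wrapper, not something shown in the log):

    # Run just the failing test, verbosely, from the source tree root.
    prove -v ./tests/basic/volume-snapshot.t

    # Or via the project's wrapper script.
    ./run-tests.sh ./tests/basic/volume-snapshot.t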
Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/afr/split-brain-favorite-child-policy.t - 565 second
./tests/basic/ec/ec-12-4.t - 338 second
./tests/basic/ec/ec-7-3.t - 190 second
./tests/basic/ec/ec-6-2.t - 172 second
./tests/basic/afr/entry-self-heal.t - 169 second
./tests/basic/tier/tier-heald.t - 168 second
./tests/basic/ec/ec-5-1.t - 151 second
./tests/basic/ec/ec-5-2.t - 144 second
./tests/basic/glusterd/heald.t - 140 second
./tests/basic/afr/self-heal.t - 131 second
./tests/basic/tier/tier.t - 127 second
./tests/basic/ec/ec-4-1.t - 113 second
./tests/basic/ec/ec-root-heal.t - 111 second
./tests/basic/afr/granular-esh/conservative-merge.t - 111 second
./tests/basic/afr/granular-esh/granular-esh.t - 98 second
./tests/basic/afr/granular-esh/add-brick.t - 97 second
./tests/basic/afr/add-brick-self-heal.t - 97 second
./tests/basic/ec/ec-3-1.t - 96 second
./tests/basic/tier/legacy-many.t - 95 second
./tests/basic/ec/ec-new-entry.t - 91 second
./tests/basic/afr/split-brain-heal-info.t - 83 second
./tests/basic/afr/self-heald.t - 77 second
./tests/basic/afr/split-brain-healing.t - 75 second
./tests/basic/quota.t - 70 second
./tests/basic/ec/self-heal.t - 66 second
./tests/basic/ec/ec-background-heals.t - 66 second
./tests/basic/tier/new-tier-cmds.t - 65 second
./tests/basic/tier/tierd_check.t - 64 second
./tests/basic/afr/metadata-self-heal.t - 64 second
./tests/basic/tier/frequency-counters.t - 61 second
./tests/basic/afr/sparse-file-self-heal.t - 59 second
./tests/basic/volume-snapshot-clone.t - 56 second
./tests/basic/tier/fops-during-migration-pause.t - 53 second
./tests/basic/uss.t - 49 second
./tests/basic/volume-snapshot.t - 44 second
./tests/basic/ec/ec-notify.t - 41 second
./tests/basic/mount-nfs-auth.t - 40 second
./tests/basic/tier/locked_file_migration.t - 39 second
./tests/basic/ec/ec-anonymous-fd.t - 38 second
./tests/basic/afr/arbiter.t - 37 second
./tests/basic/tier/unlink-during-migration.t - 35 second
./tests/basic/jbr/jbr.t - 35 second
./tests/basic/ec/ec.t - 35 second
./tests/basic/mgmt_v3-locks.t - 34 second
./tests/basic/afr/data-self-heal.t - 32 second
./tests/basic/ec/ec-readdir.t - 30 second
./tests/basic/afr/quorum.t - 28 second
./tests/basic/quota-ancestry-building.t - 26 second
./tests/basic/tier/file_with_spaces.t - 24 second
./tests/basic/afr/durability-off.t - 24 second
./tests/basic/afr/heal-quota.t - 23 second
./tests/basic/afr/gfid-self-heal.t - 23 second
./tests/basic/geo-replication/marker-xattrs.t - 22 second
./tests/basic/afr/arbiter-add-brick.t - 22 second
./tests/basic/tier/readdir-during-migration.t - 21 second
./tests/basic/op_errnos.t - 21 second
./tests/basic/glusterd/volfile_server_switch.t - 21 second
./tests/basic/ec/quota.t - 21 second
./tests/basic/0symbol-check.t - 19 second
./tests/basic/afr/split-brain-resolution.t - 17 second
./tests/basic/afr/replace-brick-self-heal.t - 17 second
./tests/basic/afr/granular-esh/replace-brick.t - 17 second
./tests/basic/ec/statedump.t - 16 second
./tests/basic/glusterd/disperse-create.t - 15 second
./tests/basic/glusterd/arbiter-volume-probe.t - 15 second
./tests/basic/afr/resolve.t - 15 second
./tests/basic/cdc.t - 14 second
./tests/basic/bd.t - 14 second
./tests/basic/afr/stale-file-lookup.t - 14 second
./tests/basic/afr/root-squash-self-heal.t - 14 second
./tests/basic/afr/client-side-heal.t - 14 second
./tests/basic/tier/ctr-rename-overwrite.t - 13 second
./tests/basic/rpc-coverage.t - 13 second
./tests/basic/pump.t - 13 second
./tests/basic/nufa.t - 13 second
./tests/basic/quota-nfs.t - 12 second
./tests/basic/quota-anon-fd-nfs.t - 12 second
./tests/basic/afr/read-subvol-data.t - 12 second
./tests/basic/stats-dump.t - 11 second
./tests/basic/inode-quota-enforcing.t - 11 second
./tests/basic/glusterd/arbiter-volume.t - 11 second
./tests/basic/ec/ec-read-policy.t - 11 second
./tests/basic/mount.t - 10 second
./tests/basic/fop-sampling.t - 10 second
./tests/basic/afr/arbiter-mount.t - 10 second
./tests/basic/meta.t - 9 second
./tests/basic/distribute/bug-1265677-use-readdirp.t - 9 second
./tests/basic/afr/read-subvol-entry.t - 9 second
./tests/basic/afr/gfid-heal.t - 9 second
./tests/basic/pgfid-feat.t - 8 second
./tests/basic/ec/ec-internal-xattrs.t - 8 second
./tests/basic/ec/dht-rename.t - 8 second
./tests/basic/afr/heal-info.t - 8 second
./tests/basic/afr/arbiter-statfs.t - 8 second
./tests/basic/afr/arbiter-remove-brick.t - 8 second
./tests/basic/quota-rename.t - 7 second
./tests/basic/fops-sanity.t - 7 second
./tests/basic/ec/nfs.t - 7 second
./tests/basic/distribute/throttle-rebal.t - 7 second
./tests/basic/afr/gfid-mismatch.t - 7 second
./tests/basic/afr/arbiter-cli.t - 7 second
./tests/basic/jbr/jbr-volgen.t - 6 second
./tests/basic/gfid-access.t - 6 second
./tests/basic/rpm.t - 1 second
./tests/basic/posixonly.t - 1 second
./tests/basic/netgroup_parsing.t - 1 second
./tests/basic/exports_parsing.t - 1 second
./tests/basic/first-test.t - 0 second

Result is 1

+ RET=1
++ ls -l '/*.core'
++ wc -l
+ cur_count=0
++ ls '/*.core'
+ cur_cores=
+ '[' 0 '!=' 0 ']'
+ '[' 1 -ne 0 ']'
+ filename=logs/glusterfs-logs-20160702:19:48:11.tgz
+ tar -czf /archives/logs/glusterfs-logs-20160702:19:48:11.tgz /var/log/glusterfs /var/log/messages /var/log/messages-20160607 /var/log/messages-20160612 /var/log/messages-20160619 /var/log/messages-20160626
tar: Removing leading `/' from member names
+ echo Logs archived in http://slave20.cloud.gluster.org/logs/glusterfs-logs-20160702:19:48:11.tgz
Logs archived in http://slave20.cloud.gluster.org/logs/glusterfs-logs-20160702:19:48:11.tgz
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
+ RET=1
+ '[' 1 = 0 ']'
+ V=-1
+ VERDICT=FAILED
+ '[' 0 -eq 1 ']'
+ exit 1
Build step 'Execute shell' marked build as failure
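The trailing "set -x" trace is the job's post-run bookkeeping. A condensed sketch of that step, reconstructed from the trace above (variable names as in the trace; the surrounding script is assumed, and the real one may differ in detail):

    #!/bin/bash
    RET=1    # exit status of the test run; 1 here because one test failed

    # Count core files dropped at / (kernel.core_pattern=/%e-%p.core).
    # With no matching files the glob stays literal, ls writes only to
    # stderr, and the count comes out 0 -- exactly what the trace shows.
    cur_count=$(ls -l /*.core 2>/dev/null | wc -l)
    cur_cores=$(ls /*.core 2>/dev/null)

    # Archive the logs when cores appeared or the run failed, so the
    # "Logs archived in ..." URL printed for the report has content.
    if [ "$cur_count" != 0 ] || [ "$RET" -ne 0 ]; then
        filename="logs/glusterfs-logs-$(date +%Y%m%d:%H:%M:%S).tgz"
        tar -czf "/archives/$filename" /var/log/glusterfs /var/log/messages*
        echo "Logs archived in http://$(hostname)/$filename"
    fi

    # Reset the core pattern for the next run and fail the build.
    /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
    exit $RET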
_______________________________________________
maintainers mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/maintainers