Build failed in Jenkins: hbase-0.95 #34
See https://builds.apache.org/job/hbase-0.95/34/changes

Changes:

[stack] HBASE-7996 Clean up resource leak in MultiTableInputFormat
[stack] HBASE-8021 TestSplitTransactionOnCluster.testShouldThrowIOExceptionIfStoreFileSizeIsEmptyAndShouldSuccessfullyExecuteRollback() fails consistently
[stack] HBASE-7982 TestReplicationQueueFailover* runs for a minute, spews 3/4million lines complaining 'Filesystem closed', has an NPE, and still passes?

--
[...truncated 3948 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.572 sec
Running org.apache.hadoop.hbase.master.TestMasterMetricsWrapper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.678 sec
Running org.apache.hadoop.hbase.master.TestZKBasedOpenCloseRegion
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.341 sec
Running org.apache.hadoop.hbase.master.TestSplitLogManager
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.128 sec

Results :

Tests run: 1468, Failures: 0, Errors: 0, Skipped: 13

[INFO]
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ hbase-server ---
[INFO] Building jar: https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-server/target/hbase-server-0.95-SNAPSHOT-tests.jar
[INFO]
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ hbase-server ---
[INFO] Building jar: https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-server/target/hbase-server-0.95-SNAPSHOT-sources.jar
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ hbase-server ---
[INFO] Building jar: https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-server/target/hbase-server-0.95-SNAPSHOT.jar
[INFO]
[INFO] --- apache-rat-plugin:0.8:check (default) @ hbase-server ---
[INFO] Exclude: **/*.log
[INFO] Exclude: **/.*
[INFO] Exclude: **/*.tgz
[INFO] Exclude: **/*.orig
[INFO] Exclude: **/test/**
[INFO] Exclude: **/8e8ab58dcf39412da19833fcd8f687ac
[INFO] Exclude: **/.git/**
[INFO] Exclude: **/.idea/**
[INFO] Exclude: **/*.iml
[INFO] Exclude: **/target/**
[INFO] Exclude: **/CHANGES.txt
[INFO] Exclude: **/generated/**
[INFO] Exclude: **/gen-*/**
[INFO] Exclude: **/conf/*
[INFO] Exclude: **/*.avpr
[INFO] Exclude: **/*.svg
[INFO] Exclude: **/META-INF/services/**
[INFO] Exclude: **/html5shiv.js
[INFO] Exclude: **/jquery.min.js
[INFO] Exclude: **/*.vm
[INFO] Exclude: **/control
[INFO] Exclude: **/conffile
[INFO] Exclude: docs/*
[INFO] Exclude: **/src/site/resources/css/freebsd_docbook.css
[INFO] Exclude: .git/**
[INFO] Exclude: .svn/**
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building HBase - Integration Tests 0.95-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-remote-resources-plugin:1.4:process (default) @ hbase-it ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hbase-it ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-it/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hbase-it ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hbase-it ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hbase-it ---
[INFO] Compiling 12 source files to https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-it/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (default-test) @ hbase-it ---
[INFO] Surefire report directory: https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-it/target/surefire-reports
[INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- maven-dependency-plugin:2.1:build-classpath (create-hbase-generated-classpath) @ hbase-it ---
[INFO] Wrote classpath file 'https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-it/target/../../target/cached_classpath.txt'.
[INFO]
[INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (secondPartTestsExecution) @ hbase-it ---
[INFO] Surefire report directory: https://builds.apache.org/job/hbase-0.95/ws/0.95/hbase-it/target/surefire-reports
[INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Build failed in Jenkins: HBase-0.94 #886
See https://builds.apache.org/job/HBase-0.94/886/

--
[...truncated 2404 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.272 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRollAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.314 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestCloseRegionHandler
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.291 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestOpenRegionHandler
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.891 sec
Running org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.893 sec
Running org.apache.hadoop.hbase.filter.TestColumnRangeFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.586 sec
Running org.apache.hadoop.hbase.coprocessor.TestClassLoading
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.584 sec
Running org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.752 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.431 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.702 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 321.265 sec
Running org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 40.865 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 311.381 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplitCompressed
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 321.52 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterObserver
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.207 sec
Running org.apache.hadoop.hbase.coprocessor.TestWALObserver
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.072 sec
Running org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.205 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.459 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.108 sec
Running org.apache.hadoop.hbase.coprocessor.example.TestBulkDeleteProtocol
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.159 sec
Running org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.249 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.873 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.595 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedureControllers
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.182 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.024 sec
Running org.apache.hadoop.hbase.TestGlobalMemStoreSize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.323 sec
Running org.apache.hadoop.hbase.mapred.TestTableInputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.011 sec
Running org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.784 sec
Running org.apache.hadoop.hbase.mapred.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.592 sec
Running org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.944 sec
Running org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.487 sec
Running org.apache.hadoop.hbase.mapreduce.TestWALPlayer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.367 sec
Running org.apache.hadoop.hbase.mapreduce.TestImportTsv
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 266.41 sec
Running org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 438.476 sec
Running
Build failed in Jenkins: hbase-0.95 #35
See https://builds.apache.org/job/hbase-0.95/35/changes

Changes:

[nkeywal] HBASE-8002 Make TimeOut Management for Assignment optional in master and regionservers

--
[...truncated 3717 lines...]
Running org.apache.hadoop.hbase.regionserver.TestMemStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.027 sec
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.871 sec
Tests run: 32, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 393.253 sec
Running org.apache.hadoop.hbase.regionserver.TestGetClosestAtOrBefore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.7 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestCloseRegionHandler
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.278 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestOpenRegionHandler
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.714 sec
Running org.apache.hadoop.hbase.regionserver.TestMultiColumnScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.271 sec
Running org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad
Running org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.92 sec
Running org.apache.hadoop.hbase.regionserver.TestMasterAddressManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.605 sec
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.311 sec
Running org.apache.hadoop.hbase.regionserver.TestSplitLogWorker
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.03 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplitCompressed
Running org.apache.hadoop.hbase.regionserver.TestSeekOptimizations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.919 sec
Running org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 1.688 sec
Running org.apache.hadoop.hbase.regionserver.TestPriorityRpc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.848 sec
Tests run: 32, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 507.699 sec
Running org.apache.hadoop.hbase.regionserver.TestRSKilledWhenMasterInitializing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 186.18 sec
Running org.apache.hadoop.hbase.filter.TestColumnRangeFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.686 sec
Running org.apache.hadoop.hbase.filter.TestFilterWithScanLimits
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.131 sec
Running org.apache.hadoop.hbase.filter.TestFilterWrapper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.99 sec
Running org.apache.hadoop.hbase.regionserver.TestCompactionState
Running org.apache.hadoop.hbase.TestGlobalMemStoreSize
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 170.021 sec
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.272 sec
Running org.apache.hadoop.hbase.security.token.TestTokenAuthentication
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.314 sec
Running org.apache.hadoop.hbase.security.token.TestZKSecretWatcher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.002 sec
Running org.apache.hadoop.hbase.security.access.TestZKPermissionsWatcher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.928 sec
Running org.apache.hadoop.hbase.security.access.TestAccessControlFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.869 sec
Running org.apache.hadoop.hbase.security.access.TestTablePermissions
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.94 sec
Running org.apache.hadoop.hbase.TestHBaseTestingUtility
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 158.062 sec
Running org.apache.hadoop.hbase.TestMultiVersions
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.561 sec
Running org.apache.hadoop.hbase.security.access.TestAccessController
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.873 sec
Running org.apache.hadoop.hbase.replication.TestReplicationZookeeper
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.751 sec
Running org.apache.hadoop.hbase.replication.regionserver.TestReplicationSourceManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.122 sec
Running org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.637 sec
Running org.apache.hadoop.hbase.replication.regionserver.TestReplicationSink
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.832 sec
Running org.apache.hadoop.hbase.TestRegionRebalancing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 257.362 sec
Running
Build failed in Jenkins: HBase-TRUNK-on-Hadoop-2.0.0 #434
See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/434/changes

Changes:

[nkeywal] HBASE-8002 Make TimeOut Management for Assignment optional in master and regionservers
[stack] HBASE-7996 Clean up resource leak in MultiTableInputFormat
[stack] HBASE-8021 TestSplitTransactionOnCluster.testShouldThrowIOExceptionIfStoreFileSizeIsEmptyAndShouldSuccessfullyExecuteRollback() fails consistently
[stack] HBASE-7982 TestReplicationQueueFailover* runs for a minute, spews 3/4million lines complaining 'Filesystem closed', has an NPE, and still passes?
[larsh] HBASE-7153 print gc option in hbase-env.sh affects hbase zkcli (Dave Latham and LarsH)

--
[...truncated 22929 lines...]
Running org.apache.hadoop.hbase.replication.TestReplicationSmallTests
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.615 sec
Forking command line: /bin/sh -c cd https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server && /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions -Xmx1900m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter8111861675893443696.jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire794397227727597838tmp https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_1852476464061432899646tmp
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.342 sec
Forking command line: /bin/sh -c cd https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server && /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions -Xmx1900m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter3672040705443441157.jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire8217120908014207838tmp https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_1863212896418047618961tmp
Running org.apache.hadoop.hbase.replication.TestReplicationQueueFailover
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.699 sec
Forking command line: /bin/sh -c cd https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server && /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions -Xmx1900m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter601179664442257362.jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire6589651175273743154tmp https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_1874381964136510435223tmp
Running org.apache.hadoop.hbase.coprocessor.TestMasterObserver
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.672 sec
Forking command line: /bin/sh -c cd https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server && /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions -Xmx1900m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter1174858258430823943.jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire8700234735456726164tmp https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_188850972752838557233tmp
Running org.apache.hadoop.hbase.coprocessor.TestWALObserver
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.202 sec
Forking command line: /bin/sh -c cd https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server && /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions -Xmx1900m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter6145016412564494507.jar https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire5766283746384364692tmp https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_1895030769768107214839tmp
Running org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.867 sec
Running
[jira] [Created] (HBASE-8022) Site target fails
Andrew Purtell created HBASE-8022:
-------------------------------------

Summary: Site target fails
Key: HBASE-8022
URL: https://issues.apache.org/jira/browse/HBASE-8022
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.0, 0.96.0
Reporter: Andrew Purtell

{noformat}
mvn -DskipTests -Dhadoop.profile=2.0 clean install site assembly:assembly
[...]
Recoverable error
org.xml.sax.SAXParseException: Include operation failed, reverting to fallback. Resource error reading file as XML (href='../../target/site/hbase-default.xml'). Reason: /usr/src/Hadoop/hbase/target/site/hbase-default.xml (No such file or directory)
Error on line 672 column 52 of file:///usr/src/Hadoop/hbase/src/docbkx/configuration.xml:
  Error reported by XML parser: An 'include' failed, and no 'fallback' element was found.
[INFO] ------------------------------------------------------------------------
[INFO] Skipping HBase
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] HBase ............................................. FAILURE [5:34.980s]
[INFO] HBase - Common ..................................... SKIPPED
[INFO] HBase - Protocol ................................... SKIPPED
[INFO] HBase - Client ..................................... SKIPPED
[INFO] HBase - Prefix Tree ................................ SKIPPED
[INFO] HBase - Hadoop Compatibility ....................... SKIPPED
[INFO] HBase - Hadoop Two Compatibility ................... SKIPPED
[INFO] HBase - Server ..................................... SKIPPED
[INFO] HBase - Integration Tests .......................... SKIPPED
[INFO] HBase - Examples ................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:36.029s
[INFO] Finished at: Thu Mar 07 21:59:14 CST 2013
[INFO] Final Memory: 29M/297M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.agilejava.docbkx:docbkx-maven-plugin:2.0.14:generate-html (multipage) on project hbase: Failed to transform configuration.xml. org.xml.sax.SAXParseException: An 'include' failed, and no 'fallback' element was found. -> [Help 1]
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
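The failure above boils down to an ordering problem: the docbkx transform XIncludes target/site/hbase-default.xml, which only exists after an earlier build step has generated it. A minimal shell sketch of that precondition check (the function name and message are hypothetical, not part of the actual HBase build):

```shell
# Sketch only: fail fast with a clearer message than the SAXParseException
# when the XIncluded file has not been generated yet.
xinclude_source_present() {
  [ -r "$1" ]  # succeed only if the included file exists and is readable
}

if ! xinclude_source_present "target/site/hbase-default.xml"; then
  echo "target/site/hbase-default.xml missing; generate it before the site transform" >&2
fi
```

Alternatively, a `<xi:fallback>` element in configuration.xml would let the transform degrade instead of aborting, which is what the "no 'fallback' element was found" message is hinting at.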
[jira] [Created] (HBASE-8023) Assembly target fails
Andrew Purtell created HBASE-8023:
-------------------------------------

Summary: Assembly target fails
Key: HBASE-8023
URL: https://issues.apache.org/jira/browse/HBASE-8023
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.0, 0.96.0
Reporter: Andrew Purtell

The assembly target fails when using the 2.0 Hadoop profile (at least).

{noformat}
mvn -DskipTests -Dhadoop.profile=2.0 clean install site assembly:assembly
[...]
[INFO] --- maven-assembly-plugin:2.3:assembly (default-cli) @ hbase ---
[INFO] Reading assembly descriptor: src/assembly/hadoop-two-compat.xml
[WARNING] [DEPRECATION] moduleSet/binaries section detected in root-project assembly.
MODULE BINARIES MAY NOT BE AVAILABLE FOR THIS ASSEMBLY!
To refactor, move this assembly into a child project and use the flag <useAllReactorProjects>true</useAllReactorProjects> in each moduleSet.
[INFO] Processing sources for module project: org.apache.hbase:hbase-common:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-protocol:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-client:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-prefix-tree:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-hadoop-compat:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-hadoop2-compat:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-server:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-it:jar:0.97-SNAPSHOT
[INFO] Processing sources for module project: org.apache.hbase:hbase-examples:jar:0.97-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] HBase ............................................. FAILURE [15.877s]
[INFO] HBase - Common ..................................... SUCCESS [4.633s]
[INFO] HBase - Protocol ................................... SUCCESS [2.629s]
[INFO] HBase - Client ..................................... SUCCESS [2.901s]
[INFO] HBase - Prefix Tree ................................ SUCCESS [3.085s]
[INFO] HBase - Hadoop Compatibility ....................... SUCCESS [2.647s]
[INFO] HBase - Hadoop Two Compatibility ................... SUCCESS [2.005s]
[INFO] HBase - Server ..................................... SUCCESS [1.888s]
[INFO] HBase - Integration Tests .......................... SUCCESS [6.917s]
[INFO] HBase - Examples ................................... SUCCESS [2.815s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6:41.503s
[INFO] Finished at: Thu Mar 07 22:14:08 CST 2013
[INFO] Final Memory: 67M/448M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on project hbase: Failed to create assembly: Artifact: org.apache.hbase:hbase-common:jar:0.97-SNAPSHOT (included by module) does not have an artifact with a file. Please ensure the package phase is run before the assembly is generated. -> [Help 1]
{noformat}
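The [ERROR] line names the root cause: the root-project assembly bundles module jars that only exist once the package phase has run. A shell sketch of that precondition (the function name, module names, and layout are illustrative, not the actual build logic):

```shell
# Succeed only if every listed module already has a packaged jar,
# mirroring the plugin's "does not have an artifact with a file" check.
packaged_artifacts_present() {
  version="$1"; shift
  for m in "$@"; do
    [ -f "$m/target/$m-$version.jar" ] || return 1
  done
}

# Such a guard could wrap an assembly invocation, e.g. (illustrative):
#   packaged_artifacts_present 0.97-SNAPSHOT hbase-common hbase-client &&
#     mvn -DskipTests -Dhadoop.profile=2.0 assembly:assembly
```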
[jira] [Created] (HBASE-8024) Make Store flush algorithm pluggable
Maryann Xue created HBASE-8024:
-------------------------------------

Summary: Make Store flush algorithm pluggable
Key: HBASE-8024
URL: https://issues.apache.org/jira/browse/HBASE-8024
Project: HBase
Issue Type: Sub-task
Components: regionserver
Affects Versions: 0.94.5
Reporter: Maryann Xue

The idea is to make StoreFlusher an interface instead of an implementation class, and have the original StoreFlusher as the default store flush impl.
Build failed in Jenkins: HBase-0.94 #887
See https://builds.apache.org/job/HBase-0.94/887/

--
[...truncated 2404 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 162.795 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogBench
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.337 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestCloseRegionHandler
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.692 sec
Running org.apache.hadoop.hbase.regionserver.handler.TestOpenRegionHandler
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.714 sec
Running org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.713 sec
Running org.apache.hadoop.hbase.filter.TestColumnRangeFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.207 sec
Running org.apache.hadoop.hbase.coprocessor.TestClassLoading
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.62 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.909 sec
Running org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.161 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.883 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.796 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 277.165 sec
Running org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 37.398 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterObserver
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.808 sec
Running org.apache.hadoop.hbase.coprocessor.TestWALObserver
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.452 sec
Running org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.83 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.238 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplitCompressed
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 285.056 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.736 sec
Running org.apache.hadoop.hbase.coprocessor.example.TestBulkDeleteProtocol
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.524 sec
Running org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.08 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.27 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.409 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedureControllers
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.971 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.842 sec
Running org.apache.hadoop.hbase.TestGlobalMemStoreSize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.387 sec
Running org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.278 sec
Running org.apache.hadoop.hbase.mapred.TestTableInputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.536 sec
Running org.apache.hadoop.hbase.mapred.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 118.738 sec
Running org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 145.289 sec
Running org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.175 sec
Running org.apache.hadoop.hbase.mapreduce.TestWALPlayer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.271 sec
Running org.apache.hadoop.hbase.mapreduce.TestImportTsv
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 287.012 sec
Running org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 398.408 sec
Running
[jira] [Created] (HBASE-8025) zkcli fails when SERVER_GC_OPTS is enabled
Dave Latham created HBASE-8025:
-------------------------------------

Summary: zkcli fails when SERVER_GC_OPTS is enabled
Key: HBASE-8025
URL: https://issues.apache.org/jira/browse/HBASE-8025
Project: HBase
Issue Type: Bug
Affects Versions: 0.94.4
Reporter: Dave Latham
Fix For: 0.95.0, 0.98.0, 0.94.7

HBASE-7091 added logic to separate GC logging options for some client commands versus server commands. It uses a list of known client commands (shell, hbck, hlog, hfile, zkcli) and uses the server GC logging options for all other invocations of bin/hbase. When zkcli is invoked, it in turn invokes hbase org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServerArg to gather the server command line arguments, but because org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServerArg is not on the white list, it enables server GC logging, which produces extra output that breaks the zkcli invocation.

HBASE-1753 addressed this, but the fix only solved the array syntax, not the white list. There are many other tools you can invoke that are more likely to want client options than server options, for example bin/hbase org.jruby.Main region_mover.rb, bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable, bin/hbase version, or bin/hbase org.apache.hadoop.hbase.mapreduce.Export. A whitelist of server commands is shorter and easier to maintain than a whitelist of client commands.
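The inversion the report suggests, whitelisting server commands instead of client commands, can be sketched in bash. The variable names follow bin/hbase conventions, but the command list and function are illustrative, not the actual script:

```shell
# Return the GC options for a given bin/hbase subcommand: only a short,
# stable list of server daemons gets SERVER_GC_OPTS; every other
# invocation (shell, hbck, zkcli, org.jruby.Main, CopyTable, ...)
# defaults to CLIENT_GC_OPTS. The server list here is hypothetical.
gc_opts_for() {
  case "$1" in
    master|regionserver|zookeeper|rest|thrift)
      echo "$SERVER_GC_OPTS"
      ;;
    *)
      echo "$CLIENT_GC_OPTS"
      ;;
  esac
}
```

With this shape, a newly added client-side tool falls through to the quiet client options instead of picking up GC logging by default, which is the failure mode zkcli hit.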
[jira] [Reopened] (HBASE-8019) Port HBASE-7779 '[snapshot 130201 merge] Fix TestMultiParallel' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl reopened HBASE-8019:
----------------------------------

Port HBASE-7779 '[snapshot 130201 merge] Fix TestMultiParallel' to 0.94
-----------------------------------------------------------------------

Key: HBASE-8019
URL: https://issues.apache.org/jira/browse/HBASE-8019
Project: HBase
Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Fix For: 0.94.6
Attachments: 8019-94.txt

Richard Ding reported long delay in shutting down RegionServerSnapshotManager.
Looks like HBASE-7779 wasn't included in the backport.
[jira] [Created] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
Jonathan Natkins created HBASE-8026:
---
Summary: HBase Shell docs for scan command don't reference VERSIONS
Key: HBASE-8026
URL: https://issues.apache.org/jira/browse/HBASE-8026
Project: HBase
Issue Type: Bug
Reporter: Jonathan Natkins

hbase(main):046:0> help 'scan'
Scan a table; pass table name and optionally a dictionary of scanner specifications. Scanner specifications may include one or more of: TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH, or COLUMNS, CACHE

VERSIONS should be mentioned somewhere here.
[jira] [Created] (HBASE-8027) hbase-7994 redux; shutdown hbase-example unit tests
stack created HBASE-8027:
--
Summary: hbase-7994 redux; shutdown hbase-example unit tests
Key: HBASE-8027
URL: https://issues.apache.org/jira/browse/HBASE-8027
Project: HBase
Issue Type: Bug
Reporter: stack

My patch on hbase-7994 did not stop clusters starting even though there were no tests to run (the @Ignore was added in front of the @BeforeClass and @AfterClass methods). All tests passed on build #34 except for the failed hbase-examples cluster startups: https://builds.apache.org/job/hbase-0.95/34/
Build failed in Jenkins: HBase-0.94 #888
See https://builds.apache.org/job/HBase-0.94/888/
--
[...truncated 2000 lines...]
Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.47 sec
Running org.apache.hadoop.hbase.io.hfile.TestSeekTo
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.hbase.io.hfile.TestCachedBlockQueue
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileWriterV2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.7 sec
Running org.apache.hadoop.hbase.io.hfile.slab.TestSlab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
Running org.apache.hadoop.hbase.io.hfile.TestBlockCacheColumnFamilySummary
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.03 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileInlineToRootChunkConversion
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec
Running org.apache.hadoop.hbase.io.TestHbaseObjectWritable
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.228 sec
Running org.apache.hadoop.hbase.io.TestImmutableBytesWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running org.apache.hadoop.hbase.io.TestHalfStoreFileReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec
Running org.apache.hadoop.hbase.io.TestHeapSize
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec
Running org.apache.hadoop.hbase.zookeeper.TestZooKeeperMainServerArg
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.064 sec
Running org.apache.hadoop.hbase.zookeeper.TestHQuorumPeer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.215 sec
Running org.apache.hadoop.hbase.rest.model.TestScannerModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec
Running org.apache.hadoop.hbase.rest.model.TestTableInfoModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec
Running org.apache.hadoop.hbase.rest.model.TestCellModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.025 sec
Running org.apache.hadoop.hbase.rest.model.TestStorageClusterStatusModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec
Running org.apache.hadoop.hbase.rest.model.TestColumnSchemaModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running org.apache.hadoop.hbase.rest.model.TestRowModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.063 sec
Running org.apache.hadoop.hbase.rest.model.TestTableSchemaModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec
Running org.apache.hadoop.hbase.rest.model.TestStorageClusterVersionModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running org.apache.hadoop.hbase.rest.model.TestTableListModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec
Running org.apache.hadoop.hbase.rest.model.TestCellSetModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec
Running org.apache.hadoop.hbase.rest.model.TestTableRegionModel
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running org.apache.hadoop.hbase.rest.model.TestVersionModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.028 sec
Running org.apache.hadoop.hbase.TestHRegionLocation
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.023 sec
Running org.apache.hadoop.hbase.metrics.TestExponentiallyDecayingSample
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec
Running org.apache.hadoop.hbase.metrics.TestMetricsHistogram
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec
Running org.apache.hadoop.hbase.metrics.TestExactCounterMetric
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec
Running org.apache.hadoop.hbase.regionserver.TestBlocksScanned
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.234 sec
Running org.apache.hadoop.hbase.regionserver.TestScanWildcardColumnTracker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec
Running org.apache.hadoop.hbase.regionserver.TestCompaction
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.672 sec
Running org.apache.hadoop.hbase.regionserver.TestRSStatusServlet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec
Running org.apache.hadoop.hbase.regionserver.TestColumnSeeking
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.055 sec
Running org.apache.hadoop.hbase.regionserver.TestHBase7051
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time
[jira] [Resolved] (HBASE-8027) hbase-7994 redux; shutdown hbase-example unit tests
[ https://issues.apache.org/jira/browse/HBASE-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-8027.
--
Resolution: Fixed

Resolving.

hbase-7994 redux; shutdown hbase-example unit tests
---
Key: HBASE-8027
URL: https://issues.apache.org/jira/browse/HBASE-8027
Project: HBase
Issue Type: Bug
Reporter: stack
Attachments: 8027.txt

My patch on hbase-7994 did not stop clusters starting even though there were no tests to run (the @Ignore was added in front of the @BeforeClass and @AfterClass methods). All tests passed on build #34 except for the failed hbase-examples cluster startups: https://builds.apache.org/job/hbase-0.95/34/
[jira] [Created] (HBASE-8028) Append, Increment don't handle wal-sync exceptions correctly
Himanshu Vashishtha created HBASE-8028:
--
Summary: Append, Increment don't handle wal-sync exceptions correctly
Key: HBASE-8028
URL: https://issues.apache.org/jira/browse/HBASE-8028
Project: HBase
Issue Type: Bug
Components: regionserver
Affects Versions: 0.94.5
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
Fix For: 0.95.0

In case there is an exception while doing the log sync, the memstore is not rolled back, while the mvcc is _always_ forwarded to the WriteEntry created at the beginning of the operation. This may lead to scanners seeing results which are not synced to the fs.
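The failure mode can be illustrated with a toy model in plain Python (this is not HBase code; class and method names are made up for the illustration): if the mvcc read point advances even when the WAL sync fails and the memstore entry is not rolled back, a scanner observes a cell that never reached the filesystem.

```python
# Toy model of the bug described above (illustrative only, not HBase code):
# a write bumps the mvcc read point even when the "wal sync" fails.
class ToyRegion:
    def __init__(self, sync_fails=False):
        self.memstore = []        # list of (seq, cell) pairs
        self.read_point = 0       # mvcc read point
        self.sync_fails = sync_fails

    def append(self, cell, rollback_on_failure):
        seq = self.read_point + 1
        self.memstore.append((seq, cell))
        try:
            if self.sync_fails:
                raise IOError("wal sync failed")
        except IOError:
            if rollback_on_failure:
                self.memstore.pop()   # undo the unsynced write
                return
            # buggy path: fall through and advance mvcc anyway
        self.read_point = seq

    def scan(self):
        # a scanner sees everything at or below the read point
        return [c for s, c in self.memstore if s <= self.read_point]

buggy = ToyRegion(sync_fails=True)
buggy.append("v1", rollback_on_failure=False)     # scan() now shows "v1"
correct = ToyRegion(sync_fails=True)
correct.append("v1", rollback_on_failure=True)    # scan() stays empty
```

In the buggy variant the scanner returns the unsynced cell, which is exactly the "results which are not synced to the fs" the report describes.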
[jira] [Created] (HBASE-8029) delete with TS should only delete that cell, not all cells after.
Kevin Odell created HBASE-8029:
--
Summary: delete with TS should only delete that cell, not all cells after.
Key: HBASE-8029
URL: https://issues.apache.org/jira/browse/HBASE-8029
Project: HBase
Issue Type: Bug
Components: Client, shell
Reporter: Kevin Odell

delete with TS specified will delete all older cells. I know overloading the cells is not a great model, but sometimes it is useful and you don't want to delete all old cells.

hbase(main):028:0> truncate 'tre'
Truncating 'tre' table (it may take a while):
- Disabling table...
- Dropping table...
- Creating table...
0 row(s) in 4.6060 seconds
hbase(main):029:0> put 'tre', 'row1', 'cf1:c1', 'abc', 111
0 row(s) in 0.0220 seconds
hbase(main):030:0> put 'tre', 'row1', 'cf1:c1', 'abcd', 112
0 row(s) in 0.0060 seconds
hbase(main):031:0> put 'tre', 'row1', 'cf1:c1', 'abce', 113
0 row(s) in 0.0120 seconds
hbase(main):032:0> scan 'tre', {NAME => 'cf1:c1', VERSIONS => 4}
ROW COLUMN+CELL
 row1 column=cf1:c1, timestamp=113, value=abce
 row1 column=cf1:c1, timestamp=112, value=abcd
 row1 column=cf1:c1, timestamp=111, value=abc
hbase(main):033:0> delete 'tre', 'row1', 'cf1:c1', 112
0 row(s) in 0.0110 seconds
hbase(main):034:0> scan 'tre', {NAME => 'cf1:c1', VERSIONS => 4}
ROW COLUMN+CELL
 row1 column=cf1:c1, timestamp=113, value=abce
1 row(s) in 0.0290 seconds
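The transcript matches HBase's column-delete tombstone semantics: a delete at timestamp T masks every version at or below T, not just the exact cell (in the 0.94 Java client this is roughly the Delete.deleteColumns vs. Delete.deleteColumn distinction). A toy simulation in plain Python of the two semantics — this models behavior only, it is not HBase code:

```python
# Toy model: a column's cells as {timestamp: value}.
# delete_versions mimics the shell's behavior (tombstone masks ts <= T);
# delete_exact mimics deleting only the one specified version.
def delete_versions(cells, ts):
    return {t: v for t, v in cells.items() if t > ts}

def delete_exact(cells, ts):
    return {t: v for t, v in cells.items() if t != ts}

cells = {111: "abc", 112: "abcd", 113: "abce"}
after_shell = delete_versions(cells, 112)   # only ts=113 survives
after_exact = delete_exact(cells, 112)      # ts=111 and ts=113 survive
```

With delete_versions, deleting at ts=112 wipes ts=111 as well, reproducing the surprise in the transcript; delete_exact is the behavior the report asks for.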
reason to do major compaction after split
Hi. Is there a reason to do major compaction after split, instead of allowing the reference files to go away gradually as the normal compactions happen? I could think up two reasons - region with reference files currently cannot be split again (not clear why not though, could just create more references); and avoiding load on the same datanodes from both new regions. Are there some other reasons?
Re: reason to do major compaction after split
Clean the parent would be another one. J-D On Thu, Mar 7, 2013 at 10:50 AM, Sergey Shelukhin ser...@hortonworks.com wrote: Hi. Is there a reason to do major compaction after split, instead of allowing the reference files to go away gradually as the normal compactions happen? I could think up two reasons - region with reference files currently cannot be split again (not clear why not though, could just create more references); and avoiding load on the same datanodes from both new regions. Are there some other reasons?
Re: reason to do major compaction after split
I was thinking of allowing regions with refs to split again, but the cleaning parent logic will get messy a lot. Enis On Thu, Mar 7, 2013 at 10:58 AM, Stack st...@duboce.net wrote: On Thu, Mar 7, 2013 at 10:50 AM, Sergey Shelukhin ser...@hortonworks.com wrote: Hi. Is there a reason to do major compaction after split, instead of allowing the reference files to go away gradually as the normal compactions happen? I could think up two reasons - region with reference files currently cannot be split again (not clear why not though, could just create more references); and avoiding load on the same datanodes from both new regions. Are there some other reasons? We could do references to references but was afraid the linkage would be too fragile and would break in hard-to-trace ways. St.Ack
Build failed in Jenkins: hbase-0.95 #36
See https://builds.apache.org/job/hbase-0.95/36/changes
Changes:
[stack] HBASE-8027 hbase-7994 redux; shutdown hbase-example unit tests
--
[...truncated 3250 lines...]
Running org.apache.hadoop.hbase.rest.model.TestColumnSchemaModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec
Running org.apache.hadoop.hbase.rest.model.TestRowModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
Running org.apache.hadoop.hbase.rest.model.TestTableSchemaModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec
Running org.apache.hadoop.hbase.rest.model.TestStorageClusterVersionModel
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec
Running org.apache.hadoop.hbase.rest.model.TestTableListModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.043 sec
Running org.apache.hadoop.hbase.rest.model.TestCellSetModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec
Running org.apache.hadoop.hbase.rest.model.TestTableRegionModel
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec
Running org.apache.hadoop.hbase.rest.model.TestVersionModel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.097 sec
Running org.apache.hadoop.hbase.monitoring.TestTaskMonitor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.779 sec
Running org.apache.hadoop.hbase.monitoring.TestMemoryBoundedLogMessageBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec
Running org.apache.hadoop.hbase.TestHRegionLocation
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec
Running org.apache.hadoop.hbase.TestHColumnDescriptor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec
Running org.apache.hadoop.hbase.metrics.TestMetricsMBeanBase
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.172 sec
Running org.apache.hadoop.hbase.metrics.TestExponentiallyDecayingSample
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec
Running org.apache.hadoop.hbase.metrics.TestMetricsHistogram
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.042 sec
Running org.apache.hadoop.hbase.metrics.TestExactCounterMetric
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec
Running org.apache.hadoop.hbase.regionserver.TestBlocksScanned
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.283 sec
Running org.apache.hadoop.hbase.regionserver.TestScanWildcardColumnTracker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec
Running org.apache.hadoop.hbase.regionserver.TestRSStatusServlet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.532 sec
Running org.apache.hadoop.hbase.regionserver.TestColumnSeeking
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.899 sec
Running org.apache.hadoop.hbase.regionserver.TestHBase7051
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.672 sec
Running org.apache.hadoop.hbase.regionserver.TestMetricsRegion
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec
Running org.apache.hadoop.hbase.regionserver.TestOffPeakCompactions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestKeyValueCompression
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestCompressor
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRollingNoCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.903 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestWALActionsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogMethods
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLRUDictionary
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.303 sec
Running org.apache.hadoop.hbase.regionserver.TestKeyValueScanFixture
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec
Running org.apache.hadoop.hbase.regionserver.TestExplicitColumnTracker
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.44 sec
Running org.apache.hadoop.hbase.regionserver.TestMultiVersionConsistencyControl
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.088 sec
Running org.apache.hadoop.hbase.regionserver.TestStoreFile
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.122 sec
Running org.apache.hadoop.hbase.regionserver.TestWideScanner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.093 sec
Running
[jira] [Created] (HBASE-8030) znode path of online region servers is hard coded in rolling_restart.sh
rajeshbabu created HBASE-8030:
-
Summary: znode path of online region servers is hard coded in rolling_restart.sh
Key: HBASE-8030
URL: https://issues.apache.org/jira/browse/HBASE-8030
Project: HBase
Issue Type: Bug
Components: shell
Reporter: rajeshbabu
Assignee: rajeshbabu
Fix For: 0.98.0, 0.94.7

The znode path of online region servers ($zparent/rs) is hard coded. We need to use the configured value of zookeeper.znode.rs as the child path.
{code}
# gracefully restart all online regionservers
online_regionservers=`$bin/hbase zkcli ls $zparent/rs 2>&1 | tail -1 | sed 's/\[//' | sed 's/\]//'`
{code}
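A sketch of the proposed fix in bash (illustrative only — the real patch would read zookeeper.znode.rs through the HBase configuration rather than an environment variable, and the sample zkcli output below is invented): parameterize the child path instead of hard-coding "rs", keeping the existing bracket-stripping pipeline unchanged.

```shell
# Sketch: derive the region-server znode path from configuration instead of
# hard-coding "$zparent/rs". ZK_RS_CHILD stands in for the configured
# zookeeper.znode.rs value; its default matches today's behavior.
zparent="/hbase"
zk_rs_child="${ZK_RS_CHILD:-rs}"
rs_znode="$zparent/$zk_rs_child"

strip_brackets() {
  # zkcli "ls" prints e.g. "[host1,60020,1, host2,60020,2]"
  echo "$1" | sed 's/\[//' | sed 's/\]//'
}

# Simulated zkcli output, since no cluster is available in this sketch:
online_regionservers=$(strip_brackets "[rs1,60020,1, rs2,60020,2]")
```

A deployment that overrides zookeeper.znode.rs would then be restartable by the same script with no edits.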
[jira] [Created] (HBASE-8031) Adopt goraci as an Integration test
Enis Soztutar created HBASE-8031:
--
Summary: Adopt goraci as an Integration test
Key: HBASE-8031
URL: https://issues.apache.org/jira/browse/HBASE-8031
Project: HBase
Issue Type: Improvement
Components: test
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Fix For: 0.95.0, 0.98.0, 0.94.7

As you might know, I am a big fan of the goraci test that Keith Turner has developed, which in turn is inspired by the Accumulo test called Continuous Ingest. As much as I hate to say it, having to rely on gora, an external github library, makes using this test cumbersome. And lately we had to use this for testing against secure clusters and with Hadoop2, which gora does not support for now. So I am proposing we add this test as an IT in the HBase code base so that all HBase devs can benefit from it.

The original source code can be found here:
* https://github.com/keith-turner/goraci
* https://github.com/enis/goraci/

From the javadoc:
{code}
Apache Accumulo [0] has a simple test suite that verifies that data is not lost at scale. This test suite is called continuous ingest. This test runs many ingest clients that continually create linked lists containing 25 million nodes. At some point the clients are stopped and a map reduce job is run to ensure no linked list has a hole. A hole indicates data was lost.

The nodes in the linked list are random. This causes each linked list to spread across the table. Therefore if one part of a table loses data, then it will be detected by references in another part of the table.

Below is a rough sketch of how data is written. For specific details look at the Generator code.

1 Write out 1 million nodes
2 Flush the client
3 Write out 1 million that reference previous million
4 If this is the 25th set of 1 million nodes, then update 1st set of million to point to last
5 goto 1

The key is that nodes only reference flushed nodes. Therefore a node should never reference a missing node, even if the ingest client is killed at any point in time.

[ASCII-art diagram elided: rows labeled first, prev, current, and last, each a batch of random longs of length WIDTH, with arrows linking each batch to the previously flushed one and the last batch pointing back to the first.]
{code}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
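The invariant the javadoc describes is small enough to sketch in plain Python (a toy model of the verification idea, not the actual goraci/MapReduce code; WIDTH and BATCHES are scaled down from the real 1-million-node batches): each node references only a node from the previously flushed batch, so after ingest stops, no reference may point at a missing key.

```python
import random

# Toy sketch of continuous-ingest verification: write batches where each
# node references a node from the previously *flushed* batch, then check
# that no reference points at a missing key (a hole would mean data loss).
WIDTH, BATCHES = 100, 5
table = {}                    # key -> referenced key (None for first batch)
prev_batch = [None] * WIDTH   # the last flushed batch of keys

for b in range(BATCHES):
    batch = [random.getrandbits(32) for _ in range(WIDTH)]
    for key, ref in zip(batch, prev_batch):
        table[key] = ref      # only flushed nodes are referenced
    prev_batch = batch        # "flush" this batch

# Verification pass: every non-None reference must resolve to a stored key.
holes = [k for k, ref in table.items()
         if ref is not None and ref not in table]
```

Because references only ever point at already-flushed keys, `holes` stays empty unless writes are lost; deleting any stored key from `table` would surface as a hole, which is exactly the detection property the test relies on.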
Re: reason to do major compaction after split
Can you create same-level references instead of references to references? On Thu, Mar 7, 2013 at 11:03 AM, Enis Söztutar enis@gmail.com wrote: I was thinking of allowing regions with refs to split again, but the cleaning parent logic will get messy a lot. Enis On Thu, Mar 7, 2013 at 10:58 AM, Stack st...@duboce.net wrote: On Thu, Mar 7, 2013 at 10:50 AM, Sergey Shelukhin ser...@hortonworks.com wrote: Hi. Is there a reason to do major compaction after split, instead of allowing the reference files to go away gradually as the normal compactions happen? I could think up two reasons - region with reference files currently cannot be split again (not clear why not though, could just create more references); and avoiding load on the same datanodes from both new regions. Are there some other reasons? We could do references to references but was afraid the linkage would be too fragile and would break in hard-to-trace ways. St.Ack
Re: reason to do major compaction after split
We do not have to create references to references. We can find the original file, and directly create a ref at the grand-daughters. The messy part is in the cleanup for the parent region, where we have to recursively search all successors to decide whether we can delete this region and delete the hfile. Enis On Thu, Mar 7, 2013 at 12:58 PM, Sergey Shelukhin ser...@hortonworks.com wrote: Can you create same-level references instead of references to references? On Thu, Mar 7, 2013 at 11:03 AM, Enis Söztutar enis@gmail.com wrote: I was thinking of allowing regions with refs to split again, but the cleaning parent logic will get messy a lot. Enis On Thu, Mar 7, 2013 at 10:58 AM, Stack st...@duboce.net wrote: On Thu, Mar 7, 2013 at 10:50 AM, Sergey Shelukhin ser...@hortonworks.com wrote: Hi. Is there a reason to do major compaction after split, instead of allowing the reference files to go away gradually as the normal compactions happen? I could think up two reasons - region with reference files currently cannot be split again (not clear why not though, could just create more references); and avoiding load on the same datanodes from both new regions. Are there some other reasons? We could do references to references but was afraid the linkage would be too fragile and would break in hard-to-trace ways. St.Ack
Re: reason to do major compaction after split
On Thu, Mar 7, 2013 at 1:14 PM, Enis Söztutar e...@hortonworks.com wrote: We do not have to created references to references. We can find the original file, and directly create a ref at the grand daughters. The messy part, is in the cleanup for parent region, where we have to recursively search for all successors to decide whether we can delete this region, and delete the hfile. Yes. That is a few trips to the NN listing directory contents and then some edits/reading of .META. We would have to introduce a QuarterHFile to go with our HalfHFile (or rename HalfHFile as PieceO'HFile). St.Ack
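The recursive cleanup check discussed in this thread can be sketched in a few lines of plain Python (a toy model, not HBase code; class and function names are invented): the parent's files may be deleted only once no region anywhere in the split tree still references them, which is why allowing re-splits of referencing regions forces a walk of all successors rather than just the two daughters.

```python
# Toy model of the recursive parent-cleanup check (illustrative only):
# regions form a split tree; each holds the set of parent hfiles it
# still reads through reference files.
class Region:
    def __init__(self, refs=(), children=()):
        self.refs = set(refs)         # parent hfiles this region still reads
        self.children = list(children)

def parent_deletable(parent_files, daughters):
    """Parent's files are removable only when no descendant references them."""
    def clean(region):
        if region.refs & parent_files:
            return False
        return all(clean(c) for c in region.children)
    return all(clean(d) for d in daughters)

parent_files = {"hfile1"}
# A daughter that already compacted its own refs away, but whose daughters
# (grand-daughters) hold direct same-level references to the grandparent's
# file, as proposed in the thread.
grandkids = [Region(refs={"hfile1"}), Region()]
daughter = Region(children=grandkids)
```

Here `parent_deletable` is False until the grand-daughter compacts its reference away, illustrating why the cleanup has to recurse through the whole tree of successors.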
[jira] [Created] (HBASE-8034) record on-disk data size for store file and make it available during writing
Sergey Shelukhin created HBASE-8034:
---
Summary: record on-disk data size for store file and make it available during writing
Key: HBASE-8034
URL: https://issues.apache.org/jira/browse/HBASE-8034
Project: HBase
Issue Type: Task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor

This would let us better estimate the size of data in the file, and split files intelligently in any multi-file compactor such as stripe or level.
Re: reason to do major compaction after split
/hbase/data/ [file1, file 2, file 3, file N]
table 1/region 1: [file 2]
table 1/region 2: [file 1 (from 0 to 50)]
table 1/region 3: [file 1 (from 50 to 100)]
table 2/region 1: [file 1, file 2]

We do not necessarily have to have a separate dir for files. We can just keep the files in the region dir, until no more references. The problem comes from the fact that we rely on hdfs ls for regions rather than META being the one and only authoritative source.

Enis

On Thu, Mar 7, 2013 at 3:36 PM, Matteo Bertozzi theo.berto...@gmail.com wrote: sure having the hardlink support (HDFS-3370 https://issues.apache.org/jira/browse/HDFS-3370) solves the HFileLink hack, but you still need to add extra metadata for splits (reference files). also, if instead of files you think about handling blocks directly you can end up doing more stuff, like a proper compaction that requires less I/O if N blocks are not changed, some crazy deduplication on tables with similar content...

On Thu, Mar 7, 2013 at 11:22 PM, Sergey Shelukhin ser...@hortonworks.com wrote: Hmm... ranges sound good, but for files, it would be nice if there were a hardlink mechanism. It should be trivial to do in HDFS if blocks could belong to several files. Then we don't have to have private cleanup code.

On Thu, Mar 7, 2013 at 2:28 PM, Matteo Bertozzi theo.berto...@gmail.com wrote: This seems to be going in a super messy direction. With HBASE-7806 the idea was to clean up all this crazy stuff (HFileLink, References, ...); unfortunately the initial decision of tying together the fs layout and the tables/regions/families is bringing all these workarounds to have something cool. If you put the files in one place, and the association in another, you can avoid all this complexity.

/hbase/data/ [file1, file 2, file 3, file N]
table 1/region 1: [file 2]
table 1/region 2: [file 1 (from 0 to 50)]
table 1/region 3: [file 1 (from 50 to 100)]
table 2/region 1: [file 1, file 2]

On Thu, Mar 7, 2013 at 10:13 PM, Stack st...@duboce.net wrote: Yes. That is a few trips to the NN listing directory contents and then some edits/reading of .META. We would have to introduce a QuarterHFile to go with our HalfHFile (or rename HalfHFile as PieceO'HFile). St.Ack
Re: no 0.94 commits please
Still waiting for a jenkins vm for the security build. From: lars hofhansl la...@apache.org To: hbase-dev dev@hbase.apache.org Sent: Wednesday, March 6, 2013 9:58 PM Subject: no 0.94 commits please cutting 0.94.6rc1 this time
Re: no 0.94 commits please
Already testing the non-secured one ;) On Mar 7, 2013 at 19:31, lars hofhansl la...@apache.org wrote: Still waiting for a jenkins vm for the security build. From: lars hofhansl la...@apache.org To: hbase-dev dev@hbase.apache.org Sent: Wednesday, March 6, 2013 9:58 PM Subject: no 0.94 commits please cutting 0.94.6rc1 this time
Re: reason to do major compaction after split
also, if instead of files you think about handling blocks directly you can end up doing more stuff, like a proper compaction that require less I/O if N blocks are not changed, some crazy deduplication on tables with same content similar...

Sounds like a step toward using a block pool directly and avoiding the filesystem layer (Hadoop 2+).

On Fri, Mar 8, 2013 at 7:36 AM, Matteo Bertozzi theo.berto...@gmail.com wrote: sure having the hardlink support (HDFS-3370 https://issues.apache.org/jira/browse/HDFS-3370) solve the HFileLink hack but you still need to add extra metadata for splits (reference files) also, if instead of files you think about handling blocks directly you can end up doing more stuff, like a proper compaction that require less I/O if N blocks are not changed, some crazy deduplication on tables with same content similar...

On Thu, Mar 7, 2013 at 11:22 PM, Sergey Shelukhin ser...@hortonworks.com wrote: Hmm... ranges sounds good, but for files, it would be nice if there were a hardlink mechanism. It should be trivial to do in HDFS if blocks could belong to several files. Then we don't have to have private cleanup code.

On Thu, Mar 7, 2013 at 2:28 PM, Matteo Bertozzi theo.berto...@gmail.com wrote: This is seems to going in a super messy direction. With HBASE-7806 the ideas was to cleanup all this crazy stuff (HFileLink, References, ...) unfortunately the initial decision of tight together the fs layout and the tables/regions/families is bringing to all this workaround to have something cool. If you put the files in one place, and the association in another you can avoid all this complexity.

/hbase/data/ [file1, file 2, file 3, file N]
table 1/region 1: [file 2]
table 1/region 2: [file 1 (from 0 to 50)]
table 1/region 3: [file 1 (from 50 to 100)]
table 2/region 1: [file 1, file 2]

On Thu, Mar 7, 2013 at 10:13 PM, Stack st...@duboce.net wrote: Yes. That is a few trips to the NN listing directory contents and then some edits/reading of .META. We would have to introduce a QuarterHFile to go with our HalfHFile (or rename HalfHFile as PieceO'HFile). St.Ack

--
Best regards,
- Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
Build failed in Jenkins: HBase-TRUNK #3925
See https://builds.apache.org/job/HBase-TRUNK/3925/changes Changes: [stack] HBASE-8032 TestNodeHealthCheckChore.testHealthChecker failed 0.95 build #36 [stack] HBASE-8027 hbase-7994 redux; shutdown hbase-example unit tests -- [...truncated 3893 lines...] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:403) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:243) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:133) at java.lang.Thread.run(Thread.java:722) java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:133) at org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1157) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:993) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:151) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:103) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:135) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1118) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:403) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:243) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:133) at java.lang.Thread.run(Thread.java:722) java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:133) at org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1157) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:993) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:151) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:103) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:135) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1118) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:403) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:243) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:133) 
	at java.lang.Thread.run(Thread.java:722)
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
Running org.apache.hadoop.hbase.replication.TestMasterReplication
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 323.433 sec  FAILURE!
Running org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 575.567 sec  FAILURE!
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 294.785 sec
Running
Build failing in Hbase branch 0.95
Hi,

I am trying to test and package just the hbase-server module for branch-0.95, and I am getting the error pasted below. Please let me know what I am missing here. If I use the option -am (also make) it succeeds fine; does that mean I cannot build hbase-server alone?

Thanks,
Shivendra

mvn package -pl hbase-server -amd

[WARNING] Some problems were encountered while building the effective settings
[WARNING] Expected root element 'settings' but found 'proxies' (position: START_TAG seen proxies... @1:9) @ /home/shivends/.m2/settings.xml, line 1, column 9
[INFO] Scanning for projects...
[INFO] Reactor Build Order:
[INFO] HBase - Server
[INFO] HBase - Integration Tests
[INFO] HBase - Examples
[INFO] Building HBase - Server 0.95-SNAPSHOT
[INFO] Reactor Summary:
[INFO] HBase - Server .................................. FAILURE [4.452s]
[INFO] HBase - Integration Tests ....................... SKIPPED
[INFO] HBase - Examples ................................ SKIPPED
[INFO] BUILD FAILURE
[INFO] Total time: 5.194s
[INFO] Finished at: Thu Mar 07 18:16:40 PST 2013
[INFO] Final Memory: 10M/149M
[ERROR] Failed to execute goal on project hbase-server: Could not resolve dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT: The following artifacts could not be resolved: org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-protocol:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-client:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-prefix-tree:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-common:jar:tests:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop-compat:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop-compat:jar:tests:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop1-compat:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop1-compat:jar:tests:0.95-SNAPSHOT: Failure to find org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in http://repository-netty.forge.cloudbees.com/snapshot/ was cached in the local repository, resolution will not be reattempted until the update interval of cloudbees netty has elapsed or updates are forced.
Re: Build failing in Hbase branch 0.95
/home/shivends/.m2/settings.xml

I don't have such a file under my ~/.m2. What's the reason for building the server module alone?

Cheers

On Thu, Mar 7, 2013 at 6:36 PM, Shivendra Singh shivendra.p.si...@oracle.com wrote:

Hi, I am trying to test and package just the hbase-server module for branch-0.95 and I am getting the following error. Please let me know what I am missing here. If I use the option -am (also make) it succeeds fine, does it mean that I cannot build hbase-server alone? Thanks, Shivendra

mvn package -pl hbase-server -amd

[WARNING] Some problems were encountered while building the effective settings
[WARNING] Expected root element 'settings' but found 'proxies' (position: START_TAG seen proxies... @1:9) @ /home/shivends/.m2/settings.xml, line 1, column 9
[INFO] Scanning for projects...
[INFO] Reactor Build Order:
[INFO] HBase - Server
[INFO] HBase - Integration Tests
[INFO] HBase - Examples
[INFO] Building HBase - Server 0.95-SNAPSHOT
[INFO] Reactor Summary:
[INFO] HBase - Server .................................. FAILURE [4.452s]
[INFO] HBase - Integration Tests ....................... SKIPPED
[INFO] HBase - Examples ................................ SKIPPED
[INFO] BUILD FAILURE
[INFO] Total time: 5.194s
[INFO] Finished at: Thu Mar 07 18:16:40 PST 2013
[INFO] Final Memory: 10M/149M
[ERROR] Failed to execute goal on project hbase-server: Could not resolve dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT: The following artifacts could not be resolved: org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-protocol:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-client:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-prefix-tree:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-common:jar:tests:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop-compat:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop-compat:jar:tests:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop1-compat:jar:0.95-SNAPSHOT, org.apache.hbase:hbase-hadoop1-compat:jar:tests:0.95-SNAPSHOT: Failure to find org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in http://repository-netty.forge.cloudbees.com/snapshot/ was cached in the local repository, resolution will not be reattempted until the update interval of cloudbees netty has elapsed or updates are forced.
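The error above is the usual single-module resolution problem in Maven: with -pl hbase-server alone, the reactor does not build the sibling modules, so Maven tries to download their SNAPSHOT jars from remote repositories instead. A common way around it (standard Maven usage, not an HBase-specific recipe) is:

```shell
# Install all modules' SNAPSHOT artifacts into the local repository once:
mvn clean install -DskipTests
# After that, the single module can resolve its siblings from ~/.m2:
mvn package -pl hbase-server
# Or build the module together with its in-reactor dependencies in one pass:
mvn package -pl hbase-server -am
```

This also explains why -am "succeeds fine": it pulls the dependency modules into the same reactor run, so they never need to be resolved from a repository.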
[jira] [Resolved] (HBASE-8004) Creating an existing table from Shell does not throw TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell resolved HBASE-8004.
Resolution: Fixed
Fix Version/s: 0.96.0, 0.95.0
Hadoop Flags: Reviewed

Committed to trunk and 0.95 branch. Thanks for the patch!

Creating an existing table from Shell does not throw TableExistsException

Key: HBASE-8004
URL: https://issues.apache.org/jira/browse/HBASE-8004
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.0
Reporter: ramkrishna.s.vasudevan
Assignee: Jeffrey Zhong
Fix For: 0.95.0, 0.96.0
Attachments: hbase-8004_1.patch, hbase-8004.patch

When I try to create the same table again from the shell I don't get TableExistsException; instead I get:
{code}
ERROR: cannot load Java class org.apache.hadoop.hbase.TableNotFoundException

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family specifications (at least one), and, optionally, table configuration. Column specification can be a simple string (name), or a dictionary (dictionaries are described below in main help output), necessarily including NAME attribute.
Examples:

hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
{code}

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-8036) ProtobufUtil.multi behavior is inconsistent in case of errors
Sergey Shelukhin created HBASE-8036:

Summary: ProtobufUtil.multi behavior is inconsistent in case of errors
Key: HBASE-8036
URL: https://issues.apache.org/jira/browse/HBASE-8036
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.0
Reporter: Sergey Shelukhin
Fix For: 0.95.0

ProtobufUtil splits operations by region and performs multiple client.multi calls. If certain errors occur inside the region server, HRegionServer adds the corresponding exceptions to MultiResponse, and ProtobufUtil continues the batch and returns a partial failure. For other errors (for example, a region-not-served exception), the entire batch stops executing, and previous successes and partial results are discarded. ProtobufUtil should probably catch ServiceException separately for multi-region batches and turn it into a partial result for all actions on that region (and also continue the batch), to make the behavior consistent. Alternatively, the server should do that for region-specific errors (add the exception to the results for each action), if we want to avoid continuing the batch in the case of server-wide errors, connection problems, etc.
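The consistent behavior the report asks for can be sketched as follows. This is a hypothetical, simplified illustration (plain strings standing in for actions and results, not the actual ProtobufUtil or MultiResponse types): when the call for one region fails, the failure is recorded as a partial result for that region's actions only, and the remaining regions are still attempted.

```java
import java.util.*;

// Hypothetical sketch of per-region partial-failure handling: a failed region
// contributes an error marker for each of its actions instead of aborting the
// whole multi-request. Not the real HBase code; names are illustrative.
class MultiSketch {

    // Simulated per-region RPC: regions whose name starts with "bad" fail.
    static List<String> callRegion(String region, List<String> actions) throws Exception {
        if (region.startsWith("bad")) {
            throw new Exception("NotServingRegionException for " + region);
        }
        List<String> results = new ArrayList<>();
        for (String a : actions) results.add("OK:" + a);
        return results;
    }

    // Returns action -> result; a region-level failure becomes a per-action
    // error result, and the loop continues with the other regions.
    static Map<String, String> multi(Map<String, List<String>> actionsByRegion) {
        Map<String, String> results = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> e : actionsByRegion.entrySet()) {
            try {
                List<String> ok = callRegion(e.getKey(), e.getValue());
                for (int i = 0; i < ok.size(); i++) {
                    results.put(e.getValue().get(i), ok.get(i));
                }
            } catch (Exception ex) {
                // Partial failure: blame only this region's actions, keep going.
                for (String a : e.getValue()) {
                    results.put(a, "ERROR:" + ex.getMessage());
                }
            }
        }
        return results;
    }

    // Small demo batch: one healthy region, one failing region.
    static Map<String, String> demo() {
        Map<String, List<String>> batch = new LinkedHashMap<>();
        batch.put("good-region", Arrays.asList("put1", "put2"));
        batch.put("bad-region", Collections.singletonList("put3"));
        return multi(batch);
    }
}
```

With this shape, the caller always gets one result per action, successful or not, which is the consistency property the issue is after.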
[jira] [Created] (HBASE-8037) RegionMovedException is handled incorrectly for multi-region requests that fail completely
Sergey Shelukhin created HBASE-8037:

Summary: RegionMovedException is handled incorrectly for multi-region requests that fail completely
Key: HBASE-8037
URL: https://issues.apache.org/jira/browse/HBASE-8037
Project: HBase
Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Minor

RegionMovedException is currently thrown at the global level and, due to how ProtobufUtil does things, it fails the entire multi-request; see HBASE-8036. RME also doesn't specify the region. Thus, if it is thrown for one region and there are multiple regions in the request, HCM applies it to all of them, which causes clients to become confused temporarily. We should either fix HBASE-8036 or add the region's encoded name to the exception description.
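The second option suggested above can be sketched like this: an exception that carries the encoded region name, so a client receiving it during a multi-region request can invalidate only the one affected cache entry. A hypothetical illustration, not the actual HBase classes:

```java
// Sketch of a "region moved" exception that names the region it applies to,
// so the client-side location cache can be invalidated selectively instead of
// for every region in the request. Illustrative names only.
class RegionMovedSketch {

    static class RegionMovedException extends Exception {
        private final String encodedRegionName;
        private final String newServer;

        RegionMovedException(String encodedRegionName, String newServer) {
            super("Region " + encodedRegionName + " moved to " + newServer);
            this.encodedRegionName = encodedRegionName;
            this.newServer = newServer;
        }

        String getEncodedRegionName() { return encodedRegionName; }
        String getNewServer() { return newServer; }
    }

    // Client-side handling: clear only the named region's cached location.
    static String handle(RegionMovedException e) {
        return "invalidate:" + e.getEncodedRegionName();
    }
}
```

With the region name in the exception, the "applies it to all of them" confusion described above cannot happen, even if the whole batch is still failed.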
Re: no 0.94 commits please
Yep.

From: Ted Yu yuzhih...@gmail.com
To: dev@hbase.apache.org; lars hofhansl la...@apache.org
Sent: Thursday, March 7, 2013 9:05 PM
Subject: Re: no 0.94 commits please

Is 0.94 open for checkin now?

Thanks

On Thu, Mar 7, 2013 at 4:30 PM, lars hofhansl la...@apache.org wrote:

Still waiting for a jenkins vm for the security build.

From: lars hofhansl la...@apache.org
To: hbase-dev dev@hbase.apache.org
Sent: Wednesday, March 6, 2013 9:58 PM
Subject: no 0.94 commits please

cutting 0.94.6rc1 this time
Expected behavior when Snappy is not installed
I was just testing 0.94.6RC1 and I forgot to install Snappy. Then I created a table with a CF that had Snappy enabled. Now it is not possible to disable/drop this table; the master aborts when I try, with the following:

2013-03-07 22:45:50,274 FATAL org.apache.hadoop.hbase.master.HMaster: Unexpected state : myTable,,1362725053267.784373d454f50d28427b1b5ff872c49f. state=PENDING_OPEN, ts=1362725150273, server=bunnypig,60020,1362725132107 .. Cannot transit it to OFFLINE.
java.lang.IllegalStateException: Unexpected state : myTable,,1362725053267.784373d454f50d28427b1b5ff872c49f. state=PENDING_OPEN, ts=1362725150273, server=bunnypig,60020,1362725132107 .. Cannot transit it to OFFLINE.
	at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:1813)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1658)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1423)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1398)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1393)
	at org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:105)

I can swear that I disabled/dropped tables before without a problem when that (forgetting to install Snappy) happened. Is anybody aware of a change in the recent RC that could cause this? (Not enough to sink the RC, but something - potentially - to track down.)

-- Lars
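One way to surface this earlier (a sketch based on the hbase.regionserver.codecs setting available in this era; verify the exact name against your version, and note the org.apache.hadoop.hbase.util.CompressionTest tool can likewise check a codec before table creation) is to declare required codecs in hbase-site.xml, so a region server missing Snappy refuses to start rather than the master aborting during assignment:

```xml
<!-- hbase-site.xml: region servers fail fast at startup unless the listed
     codecs can be loaded, exposing a missing Snappy install immediately. -->
<property>
  <name>hbase.regionserver.codecs</name>
  <value>snappy</value>
</property>
```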
Re: Heads-up: Say your goodbyes to -ROOT-
On Thu, Mar 7, 2013 at 9:30 PM, James Taylor jtay...@salesforce.com wrote:

Awesome. Who's doing the eulogy, Stack?

The same folks as those who are doing Hugo's.

St.Ack