[3/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1f2dbe14
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1f2dbe14
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1f2dbe14

Branch: refs/heads/branch-1.3
Commit: 1f2dbe14e3b16363ea63351f308260529f7fb8d0
Parents: 1c8246c
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:19:50 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/hbase/blob/1f2dbe14/dev-support/Jenkinsfile
----------------------------------------------------------------------
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 443dd20..2adaeba 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -150,6 +150,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
           rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}"
           rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine"
           "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine"
+          echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'"
+          ls -lh "${OUTPUT_DIR_RELATIVE}/machine"
         '''
         // TODO roll this into the hbase_nightly_yetus script
         sh '''#!/usr/bin/env bash
@@ -210,6 +212,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
           rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}"
           rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine"
           "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine"
+          echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'"
+          ls -lh "${OUTPUT_DIR_RELATIVE}/machine"
         '''
         sh '''#!/usr/bin/env bash
         set -e
@@ -283,6 +287,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
           rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}"
           rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine"
           "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine"
+          echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'"
+          ls -lh "${OUTPUT_DIR_RELATIVE}/machine"
         '''
         sh '''#!/usr/bin/env bash
         set -e
@@ -363,6 +369,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
           rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}"
           rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine"
           "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine"
+          echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'"
+          ls -lh "${OUTPUT_DIR_RELATIVE}/machine"
         '''
         sh '''#!/usr/bin/env bash
         set -e
@@ -437,6 +445,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
           set -e
           rm -rf "output-srctarball/machine" && mkdir "output-srctarball/machine"
           "${BASEDIR}/dev-support/gather_machine_environment.sh" "output-srctarball/machine"
+          echo "got the following saved stats in 'output-srctarball/machine'"
+          ls -lh "output-srctarball/machine"
         '''
         sh """#!/bin/bash -e
           if "${env.BASEDIR}/dev-support/hbase_nightly_source-artifact.sh" \
@@ -456,6 +466,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
       always {
         stash name: 'srctarball-result', includes: "output-srctarball/commentfile"
         archive 'output-srctarball/*'
+        archive 'output-srctarball/**/*'
       }
     }
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/1f2dbe14/dev-support/hbase_nightly_yetus.sh
----------------------------------------------------------------------
diff --git a/dev-support/hbase_nightly_yetus.sh b/dev-support/hbase_nightly_yetus.sh
index 4e0200d..bba5f4d 100755
--- a/dev-support/hbase_nightly_yetus.sh
+++ b/dev-support/hbase_nightly_yetus.sh
@@ -91,8 +91,11 @@ if [[ true == "${DEBUG}" ]]; then
   YETUS_ARGS=("--debug" "${YETUS_ARGS[@]}")
 fi
-rm -rf "${OUTPUT_DIR}"
-mkdir -p "${OUTPUT_DIR}"
+if [[ ! -d "${OUTPUT_DIR}" ]]; then
+  echo "[ERROR] the specified output directory
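The hbase_nightly_yetus.sh hunk is cut off mid-line in the archive. As a sketch of the guard it introduces — the error text after the truncation point and the exit behavior are assumptions, not part of the archived patch:

```shell
#!/usr/bin/env bash
# Sketch of the new behavior: instead of running
#   rm -rf "${OUTPUT_DIR}"; mkdir -p "${OUTPUT_DIR}"
# (which destroyed anything the caller had already placed there, such as the
# machine/ stats), the wrapper refuses to run when the directory is missing.
require_output_dir() {
  local dir="$1"
  if [[ ! -d "${dir}" ]]; then
    # assumed wording; the archived diff truncates inside this message
    echo "[ERROR] the specified output directory '${dir}' does not exist." >&2
    return 1
  fi
}
```

The Jenkinsfile stages shown above create OUTPUT_DIR (and OUTPUT_DIR/machine) before invoking the wrapper, so verifying the directory is all the wrapper needs to do.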
[1/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
Repository: hbase
Updated Branches:
  refs/heads/branch-1   f6413d559 -> f70f47a03
  refs/heads/branch-1.2 e78037107 -> 00a70c2c8
  refs/heads/branch-1.3 1c8246c6a -> 1f2dbe14e
  refs/heads/branch-1.4 e1ab70d51 -> 8a3e1c30b

HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f70f47a0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f70f47a0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f70f47a0

Branch: refs/heads/branch-1
Commit: f70f47a032ad3c900ad8e1f080410facffda9c23
Parents: f6413d5
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:18:50 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
[4/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/00a70c2c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/00a70c2c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/00a70c2c

Branch: refs/heads/branch-1.2
Commit: 00a70c2c827e6717f3c2d73e387d05fc4adc3ba6
Parents: e780371
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:20:13 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
[2/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8a3e1c30
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8a3e1c30
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8a3e1c30

Branch: refs/heads/branch-1.4
Commit: 8a3e1c30b4d6ec07d7a3e396be49e7f507632ce0
Parents: e1ab70d
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:19:22 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
[2/4] hbase git commit: HBASE-20389 Move website building flags into a profile.
HBASE-20389 Move website building flags into a profile.

Signed-off-by: Mike Drob

Conflicts:
  hbase-spark/pom.xml
  pom.xml

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1d133c00
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1d133c00
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1d133c00

Branch: refs/heads/branch-2
Commit: 1d133c005c26c01628fcc622c3095ddccb80bd6b
Parents: dcd20e9
Author: Sean Busbey
Authored: Thu Apr 12 21:55:27 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 22:50:55 2018 -0500

----------------------------------------------------------------------
 .../jenkins-scripts/generate-hbase-website.sh | 11 ++-
 pom.xml                                       | 31
 2 files changed, 33 insertions(+), 9 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/hbase/blob/1d133c00/dev-support/jenkins-scripts/generate-hbase-website.sh
----------------------------------------------------------------------
diff --git a/dev-support/jenkins-scripts/generate-hbase-website.sh b/dev-support/jenkins-scripts/generate-hbase-website.sh
index b6277d0..0ef9b2d 100644
--- a/dev-support/jenkins-scripts/generate-hbase-website.sh
+++ b/dev-support/jenkins-scripts/generate-hbase-website.sh
@@ -173,20 +173,13 @@ echo "Building HBase"
 # But! some sunshine: because we're doing a full install before running site, we can skip all the
 # compiling in the forked executions. We have to do it awkwardly because MJAVADOC-444.
 if mvn \
-  -DskipTests \
-  -Dmaven.javadoc.skip=true \
-  --batch-mode \
-  -Denforcer.skip=true \
-  -Dcheckstyle.skip=true \
-  -Dfindbugs.skip=true \
+  -Psite-install-step \
   --log-file="${working_dir}/hbase-install-log-${CURRENT_HBASE_COMMIT}.txt" \
   clean install \
   && mvn site \
-  --batch-mode \
-  -Denforcer.skip=true \
-  -Dmaven.main.skip=true \
-  -Dmaven.test.skip=true \
-  -DskipTests \
+  -Psite-build-step \
   --log-file="${working_dir}/hbase-site-log-${CURRENT_HBASE_COMMIT}.txt"; then
   echo "Successfully built site."
 else

http://git-wip-us.apache.org/repos/asf/hbase/blob/1d133c00/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 4709004..9c78ef9 100755
--- a/pom.xml
+++ b/pom.xml
@@ -3376,6 +3376,37 @@
+    site-install-step
+      true
+      true
+      true
+      true
+      true
+      true
+
+    site-build-step
+      true
+      true
+      true
+      true
+      true
+      true
+      true
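Taken together, the script hunk above replaces a bundle of skip flags with a single profile activation. The sketch below is purely illustrative (the helper functions are hypothetical, not part of the patch); it only contrasts the mvn argument lists for the `clean install` leg so the swap can be eyeballed:

```shell
#!/usr/bin/env bash
# Illustrative only: the flag list comes from the removed lines of the patch;
# the function names are hypothetical.
install_args_before() {
  printf '%s\n' \
    -DskipTests -Dmaven.javadoc.skip=true --batch-mode \
    -Denforcer.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
}
install_args_after() {
  # the same skips now live in the site-install-step profile in pom.xml
  printf '%s\n' -Psite-install-step
}
```

Under the patch the short form relies on pom.xml defining the site-install-step profile; an older checkout without that profile would still need the explicit flags.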
[1/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
Repository: hbase
Updated Branches:
  refs/heads/branch-2   8cd1201af -> 1d133c005
  refs/heads/branch-2.0 7704d3bb1 -> 4de3d7dee

HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dcd20e9c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dcd20e9c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dcd20e9c

Branch: refs/heads/branch-2
Commit: dcd20e9cefa5d0a70728b22d259747b795091ae5
Parents: 8cd1201
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 22:45:19 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
[4/4] hbase git commit: HBASE-20389 Move website building flags into a profile.
HBASE-20389 Move website building flags into a profile.

Signed-off-by: Mike Drob

Conflicts:
  hbase-spark/pom.xml
  pom.xml

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4de3d7de
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4de3d7de
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4de3d7de

Branch: refs/heads/branch-2.0
Commit: 4de3d7deefd536025abf2ee81517e8035082274c
Parents: 4176237
Author: Sean Busbey
Authored: Thu Apr 12 21:55:27 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:16:37 2018 -0500

----------------------------------------------------------------------
 .../jenkins-scripts/generate-hbase-website.sh | 11 ++-
 pom.xml                                       | 31
 2 files changed, 33 insertions(+), 9 deletions(-)
----------------------------------------------------------------------
[3/4] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
HBASE-20335 ensure each stage of the nightly job gathers machine information.

* fix archiving for src tarball stage's machine info
* stop nightly wrapper destroying the output dir.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/41762373
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/41762373
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/41762373

Branch: refs/heads/branch-2.0
Commit: 417623736ff676b151a2b37f4a8107d1ce4872df
Parents: 7704d3b
Author: Sean Busbey
Authored: Wed Apr 11 10:38:12 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 23:16:29 2018 -0500

----------------------------------------------------------------------
 dev-support/Jenkinsfile            | 11 +++
 dev-support/hbase_nightly_yetus.sh |  7 +--
 2 files changed, 16 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
[1/2] hbase git commit: HBASE-20335 ensure each stage of the nightly job gathers machine information.
Repository: hbase Updated Branches: refs/heads/master 73275f177 -> 7b7ab HBASE-20335 ensure each stage of the nightly job gathers machine information. * fix archiving for src tarball stage's machine info * stop nightly wrapper desroying the output dir. Signed-off-by: Michael StackProject: http://git-wip-us.apache.org/repos/asf/hbase/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f695ecb2 Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f695ecb2 Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f695ecb2 Branch: refs/heads/master Commit: f695ecb2db2ec1a816613225a365b787175e2462 Parents: 73275f1 Author: Sean Busbey Authored: Wed Apr 11 10:38:12 2018 -0500 Committer: Sean Busbey Committed: Fri Apr 13 22:42:37 2018 -0500 -- dev-support/Jenkinsfile| 11 +++ dev-support/hbase_nightly_yetus.sh | 7 +-- 2 files changed, 16 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/hbase/blob/f695ecb2/dev-support/Jenkinsfile -- diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile index b289eaf..f9e1d72 100644 --- a/dev-support/Jenkinsfile +++ b/dev-support/Jenkinsfile @@ -150,6 +150,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine" "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine" + echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'" + ls -lh "${OUTPUT_DIR_RELATIVE}/machine" ''' // TODO roll this into the hbase_nightly_yetus script sh '''#!/usr/bin/env bash @@ -210,6 +212,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine" "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine" + echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'" + ls -lh "${OUTPUT_DIR_RELATIVE}/machine" ''' 
sh '''#!/usr/bin/env bash set -e @@ -283,6 +287,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine" "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine" + echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'" + ls -lh "${OUTPUT_DIR_RELATIVE}/machine" ''' sh '''#!/usr/bin/env bash set -e @@ -363,6 +369,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" rm -rf "${OUTPUT_DIR}/machine" && mkdir "${OUTPUT_DIR}/machine" "${BASEDIR}/dev-support/gather_machine_environment.sh" "${OUTPUT_DIR_RELATIVE}/machine" + echo "got the following saved stats in '${OUTPUT_DIR_RELATIVE}/machine'" + ls -lh "${OUTPUT_DIR_RELATIVE}/machine" ''' sh '''#!/usr/bin/env bash set -e @@ -437,6 +445,8 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" set -e rm -rf "output-srctarball/machine" && mkdir "output-srctarball/machine" "${BASEDIR}/dev-support/gather_machine_environment.sh" "output-srctarball/machine" + echo "got the following saved stats in 'output-srctarball/machine'" + ls -lh "output-srctarball/machine" ''' sh """#!/bin/bash -e if "${env.BASEDIR}/dev-support/hbase_nightly_source-artifact.sh" \ @@ -456,6 +466,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}" always { stash name: 'srctarball-result', includes: "output-srctarball/commentfile" archive 'output-srctarball/*' + archive 'output-srctarball/**/*' } } } http://git-wip-us.apache.org/repos/asf/hbase/blob/f695ecb2/dev-support/hbase_nightly_yetus.sh -- diff --git a/dev-support/hbase_nightly_yetus.sh b/dev-support/hbase_nightly_yetus.sh index 4e0200d..bba5f4d 100755 --- a/dev-support/hbase_nightly_yetus.sh +++ b/dev-support/hbase_nightly_yetus.sh @@ -91,8 +91,11 @@ if [[ true == "${DEBUG}" ]]; then YETUS_ARGS=("--debug" "${YETUS_ARGS[@]}") fi -rm -rf "${OUTPUT_DIR}" -mkdir -p "${OUTPUT_DIR}" 
+if [[ !
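The per-stage pattern in the patch above recreates the machine-info directory, runs the gatherer, and then reports what was captured so it shows up in the build log. A minimal standalone sketch of that pattern, with a stand-in for `dev-support/gather_machine_environment.sh` (the stand-in and the `output-demo` directory name are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the per-stage machine-info pattern from the Jenkinsfile.
# The real job calls dev-support/gather_machine_environment.sh; here a
# stand-in just captures `uname` so the echo/ls reporting is visible.
set -e
OUTPUT_DIR="output-demo"

# Recreate a clean machine-info directory for this stage.
rm -rf "${OUTPUT_DIR}/machine" && mkdir -p "${OUTPUT_DIR}/machine"

# Stand-in for gather_machine_environment.sh (assumption, not the real script).
uname -a > "${OUTPUT_DIR}/machine/uname.txt"

# The two lines the patch adds: surface what was captured in the build log.
echo "got the following saved stats in '${OUTPUT_DIR}/machine'"
ls -lh "${OUTPUT_DIR}/machine"
```

Echoing the directory listing costs nothing and makes it obvious from the console log whether the later `archive` step has anything to pick up.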
[2/2] hbase git commit: HBASE-20389 Move website building flags into a profile.
HBASE-20389 Move website building flags into a profile.

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7b7a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7b7a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7b7a
Branch: refs/heads/master
Commit: 7b7ab6ed842735b47016ba9a6fb68960177f
Parents: f695ecb
Author: Sean Busbey
Authored: Thu Apr 12 21:55:27 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 22:43:19 2018 -0500
--
 .../jenkins-scripts/generate-hbase-website.sh |  12 +-
 hbase-spark/pom.xml                           | 125 +++
 pom.xml                                       |  38 ++
 3 files changed, 113 insertions(+), 62 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/7b7a/dev-support/jenkins-scripts/generate-hbase-website.sh
--
diff --git a/dev-support/jenkins-scripts/generate-hbase-website.sh b/dev-support/jenkins-scripts/generate-hbase-website.sh
index b6277d0..24b708b 100644
--- a/dev-support/jenkins-scripts/generate-hbase-website.sh
+++ b/dev-support/jenkins-scripts/generate-hbase-website.sh
@@ -173,20 +173,14 @@ echo "Building HBase"
 # But! some sunshine: because we're doing a full install before running site, we can skip all the
 # compiling in the forked executions. We have to do it awkwardly because MJAVADOC-444.
 if mvn \
--DskipTests \
--Dmaven.javadoc.skip=true \
 --batch-mode \
--Denforcer.skip=true \
--Dcheckstyle.skip=true \
--Dfindbugs.skip=true \
+-Psite-install-step \
 --log-file="${working_dir}/hbase-install-log-${CURRENT_HBASE_COMMIT}.txt" \
 clean install \
 && mvn site \
 --batch-mode \
--Denforcer.skip=true \
--Dmaven.main.skip=true \
--Dmaven.test.skip=true \
--DskipTests \
+-Dscala.skip=true \
+-Psite-build-step \
 --log-file="${working_dir}/hbase-site-log-${CURRENT_HBASE_COMMIT}.txt"; then
   echo "Successfully built site."
 else

http://git-wip-us.apache.org/repos/asf/hbase/blob/7b7a/hbase-spark/pom.xml
--
diff --git a/hbase-spark/pom.xml b/hbase-spark/pom.xml
index 05fd779..7654be4 100644
--- a/hbase-spark/pom.xml
+++ b/hbase-spark/pom.xml
@@ -430,59 +430,6 @@
 org.apache.maven.plugins
 maven-compiler-plugin
-
-net.alchim31.maven
-scala-maven-plugin
-3.2.0
-
- ${project.build.sourceEncoding}
- ${scala.version}
- --feature
-
-
-scala-compile-first
-process-resources
-
- add-source
- compile
-
-
-scala-test-compile
-process-test-resources
-
- testCompile
-
-
-
-org.scalatest
-scalatest-maven-plugin
-1.0
-
- ${project.build.directory}/surefire-reports
- .
- WDF TestSuite.txt
- false
-
-
-test
-test
-
- test
-
-
- -Xmx1536m -XX:ReservedCodeCacheSize=512m
- false
-
-
+
+ build-scala-sources
+
+ scala.skip
+ !true
+
+
+net.alchim31.maven
+scala-maven-plugin
+3.2.0
+
+ ${project.build.sourceEncoding}
+ ${scala.version}
+ -feature
+
+
+scala-compile-first
+process-resources
+
+ add-source
+ compile
+
+
+scala-test-compile
+process-test-resources
+
+ testCompile
+
+
+
+org.scalatest
+scalatest-maven-plugin
+1.0
+
+ ${project.build.directory}/surefire-reports
+ .
+ WDF TestSuite.txt
+ false
+
+
+test
+test
+
+ test
+
+
+ -Xmx1536m -XX:ReservedCodeCacheSize=512m
+ false
+
+

http://git-wip-us.apache.org/repos/asf/hbase/blob/7b7a/pom.xml
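With the flag lists collapsed into the `site-install-step` and `site-build-step` profiles, the website script's two Maven invocations reduce to naming a profile per step. A hedged sketch of the simplified invocation (the log-file names here are illustrative; the real script writes them under `${working_dir}` keyed by commit):

```shell
# Two-step website build after HBASE-20389: the skip flags live in the
# profiles, so the script only names the profile for each step.
mvn --batch-mode \
    -Psite-install-step \
    --log-file="install-log.txt" \
    clean install

mvn --batch-mode site \
    -Dscala.skip=true \
    -Psite-build-step \
    --log-file="site-log.txt"
```

Keeping the flag bundles in `pom.xml` profiles means a developer reproducing the website build locally gets the same skips as the Jenkins script without copying a long command line.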
hbase git commit: HBASE-20291 Fix The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT missing with hadoop 3 profile - revert since it is marked invalid
Repository: hbase
Updated Branches:
  refs/heads/master da7776d42 -> 73275f177

HBASE-20291 Fix The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT missing with hadoop 3 profile - revert since it is marked invalid

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/73275f17
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/73275f17
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/73275f17
Branch: refs/heads/master
Commit: 73275f177497048a65abf79ada0d83b2d03f6017
Parents: da7776d
Author: tedyu
Authored: Fri Apr 13 13:39:04 2018 -0700
Committer: tedyu
Committed: Fri Apr 13 13:39:04 2018 -0700
--
 hbase-common/pom.xml | 4
 1 file changed, 4 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/73275f17/hbase-common/pom.xml
--
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 6ac590f..5ae8e0b 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -371,10 +371,6 @@
 org.apache.htrace
 htrace-core
-
-net.minidev
-json-smart
-
[1/3] hbase git commit: HBASE-20410 update protoc to 3.5.1-1 for rhel6
Repository: hbase
Updated Branches:
  refs/heads/branch-2 bd2dddae6 -> 8cd1201af
  refs/heads/branch-2.0 8ca2d0ff6 -> 7704d3bb1
  refs/heads/master 2f74afd6f -> da7776d42

HBASE-20410 update protoc to 3.5.1-1 for rhel6

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/da7776d4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/da7776d4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/da7776d4
Branch: refs/heads/master
Commit: da7776d428fa8590ce9bd3654080434963dcc0ba
Parents: 2f74afd
Author: Mike Drob
Authored: Fri Apr 13 09:44:59 2018 -0500
Committer: Mike Drob
Committed: Fri Apr 13 13:09:20 2018 -0500
--
 hbase-protocol-shaded/pom.xml              |  4 +++-
 src/main/asciidoc/_chapters/developer.adoc | 12
 2 files changed, 3 insertions(+), 13 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/da7776d4/hbase-protocol-shaded/pom.xml
--
diff --git a/hbase-protocol-shaded/pom.xml b/hbase-protocol-shaded/pom.xml
index 25443e1..ba4fa2d 100644
--- a/hbase-protocol-shaded/pom.xml
+++ b/hbase-protocol-shaded/pom.xml
@@ -33,8 +33,10 @@
 true
-3.5.1
+3.5.1-1

http://git-wip-us.apache.org/repos/asf/hbase/blob/da7776d4/src/main/asciidoc/_chapters/developer.adoc
--
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 7cbf404..92b4e65 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -433,18 +433,6 @@ convenience; however, the plugin may not be able to retrieve appropriate binarie
 on a platform where protoc fails, you will have to compile protoc from source, and run it independent of our maven build. You can disable the inline code generation by specifying `-Dprotoc.skip` in your maven arguments, allowing your build to proceed further.
-A similar failure relates to the stock CentOS 6 docker image providing a too old version of glibc for the version of protoc that we use.
-In this case, you would have to install glibc 2.14 and protoc 3.5.1 manually, then execute something like:
-
-[source,bourne]
-
-cd hbase-protocol-shaded
-LD_LIBRARY_PATH=/opt/glibc-2.14/lib protoc \
-  --proto_path=src/main/protobuf \
-  --java_out=target/generated-sources/protobuf/java \
-  src/main/protobuf/*.proto
-
 [NOTE]
 If you need to manually generate your protobuf files, you should not use `clean` in subsequent maven calls, as that will delete the newly generated files.
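The paragraph retained by this patch points at `-Dprotoc.skip` as the escape hatch when the bundled protoc cannot run on a platform. A hedged sketch of that usage (the `package` goal is illustrative; any lifecycle phase works, and previously generated sources must already be on disk):

```shell
# Skip inline protobuf generation when the bundled protoc fails on this
# platform (e.g. glibc too old). Previously generated sources are reused,
# so avoid `clean` in subsequent invocations.
mvn package -Dprotoc.skip
```

This is why the patch could drop the per-release glibc workaround from the book: with protoc 3.5.1-1 the rhel6/CentOS 6 case no longer needs manual generation, and the generic skip flag covers any remaining platform.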
[2/3] hbase git commit: HBASE-20410 update protoc to 3.5.1-1 for rhel6
HBASE-20410 update protoc to 3.5.1-1 for rhel6

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8cd1201a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8cd1201a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8cd1201a
Branch: refs/heads/branch-2
Commit: 8cd1201afe7be865f68506bb8de0a40c99853dbc
Parents: bd2ddda
Author: Mike Drob
Authored: Fri Apr 13 09:44:59 2018 -0500
Committer: Mike Drob
Committed: Fri Apr 13 13:09:31 2018 -0500
--
 hbase-protocol-shaded/pom.xml              |  4 +++-
 src/main/asciidoc/_chapters/developer.adoc | 12
 2 files changed, 3 insertions(+), 13 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/8cd1201a/hbase-protocol-shaded/pom.xml
--
diff --git a/hbase-protocol-shaded/pom.xml b/hbase-protocol-shaded/pom.xml
index ee7d54b..3598afb 100644
--- a/hbase-protocol-shaded/pom.xml
+++ b/hbase-protocol-shaded/pom.xml
@@ -33,8 +33,10 @@
 true
-3.5.1
+3.5.1-1

http://git-wip-us.apache.org/repos/asf/hbase/blob/8cd1201a/src/main/asciidoc/_chapters/developer.adoc
--
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 40701e9..8505ba1 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -433,18 +433,6 @@ convenience; however, the plugin may not be able to retrieve appropriate binarie
 on a platform where protoc fails, you will have to compile protoc from source, and run it independent of our maven build. You can disable the inline code generation by specifying `-Dprotoc.skip` in your maven arguments, allowing your build to proceed further.
-A similar failure relates to the stock CentOS 6 docker image providing a too old version of glibc for the version of protoc that we use.
-In this case, you would have to install glibc 2.14 and protoc 3.5.1 manually, then execute something like:
-
-[source,bourne]
-
-cd hbase-protocol-shaded
-LD_LIBRARY_PATH=/opt/glibc-2.14/lib protoc \
-  --proto_path=src/main/protobuf \
-  --java_out=target/generated-sources/protobuf/java \
-  src/main/protobuf/*.proto
-
 [NOTE]
 If you need to manually generate your protobuf files, you should not use `clean` in subsequent maven calls, as that will delete the newly generated files.
[3/3] hbase git commit: HBASE-20410 update protoc to 3.5.1-1 for rhel6
HBASE-20410 update protoc to 3.5.1-1 for rhel6

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7704d3bb
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7704d3bb
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7704d3bb
Branch: refs/heads/branch-2.0
Commit: 7704d3bb1cdd1e3c1cb5958e0ce91c0024c80321
Parents: 8ca2d0f
Author: Mike Drob
Authored: Fri Apr 13 09:44:59 2018 -0500
Committer: Mike Drob
Committed: Fri Apr 13 13:10:41 2018 -0500
--
 hbase-protocol-shaded/pom.xml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/7704d3bb/hbase-protocol-shaded/pom.xml
--
diff --git a/hbase-protocol-shaded/pom.xml b/hbase-protocol-shaded/pom.xml
index ba73dc1..d43047d 100644
--- a/hbase-protocol-shaded/pom.xml
+++ b/hbase-protocol-shaded/pom.xml
@@ -33,8 +33,10 @@
 true
-3.5.1
+3.5.1-1
hbase git commit: HBASE-20394 HBase over rides the value of HBASE_OPTS (if any) set by client
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 ed3a6564c -> 8ca2d0ff6

HBASE-20394 HBase over rides the value of HBASE_OPTS (if any) set by client

Signed-off-by: Josh Elser

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8ca2d0ff
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8ca2d0ff
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8ca2d0ff
Branch: refs/heads/branch-2.0
Commit: 8ca2d0ff6a194b7522ada8fd867993ef4f632647
Parents: ed3a656
Author: Nihal Jain
Authored: Thu Apr 12 12:38:45 2018 +0530
Committer: Michael Stack
Committed: Fri Apr 13 10:42:45 2018 -0700
--
 conf/hbase-env.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/8ca2d0ff/conf/hbase-env.sh
--
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
index d9879c6..1ac93cc 100644
--- a/conf/hbase-env.sh
+++ b/conf/hbase-env.sh
@@ -41,7 +41,7 @@
 # Below are what we set by default. May only work with SUN JVM.
 # For more on why as well as other possible settings,
 # see http://hbase.apache.org/book.html#performance
-export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
+export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

 # Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.
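The one-line change above matters because `hbase-env.sh` is sourced by the launcher scripts after the client's environment is set. A small sketch demonstrating the difference (the `-Xmx4g` value is illustrative, not an HBase default):

```shell
#!/usr/bin/env bash
# Demonstrates the HBASE-20394 fix: the default GC flag is appended to
# whatever HBASE_OPTS the client already exported, instead of replacing it.
export HBASE_OPTS="-Xmx4g"   # pretend the client set this before launch

# Old line in conf/hbase-env.sh: the client's value is clobbered.
old_opts="-XX:+UseConcMarkSweepGC"

# New line: the client's value is preserved and the default appended.
new_opts="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

echo "old: ${old_opts}"
echo "new: ${new_opts}"
```

With the old form, anything a client exported in `HBASE_OPTS` before invoking `bin/hbase` was silently discarded as soon as `hbase-env.sh` ran.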
[2/2] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0d8262f9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0d8262f9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0d8262f9
Branch: refs/heads/branch-2.0
Commit: 0d8262f9e48caf7475178472e07ff26c13ce069f
Parents: cba88d1
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:51:50 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d8262f9/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 238d0e6..6acd4b2 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[1/2] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 cba88d15e -> ed3a6564c

HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ed3a6564
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ed3a6564
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ed3a6564
Branch: refs/heads/branch-2.0
Commit: ed3a6564c5da625209753dc45af67988209922f3
Parents: 0d8262f
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:51:50 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/ed3a6564/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 90786f2..e047a5a 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -365,6 +365,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
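The one-line addition hangs off an existing grep-count check in the personality's `shadedjars_rebuild`. A standalone sketch of that control flow, with the Yetus table helpers stubbed so the effect of the new footer line is visible (the log contents and stub bodies below are made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the error check in shadedjars_rebuild. add_vote_table and
# add_footer_table are real Yetus helpers; they are stubbed here so the
# snippet runs standalone. The logfile contents are fabricated.
set -e
GREP="grep"
repostatus="patch"
logfile="demo-shadedjars.txt"
printf '%s\n' "[INFO] building shaded jars" "[ERROR] shade plugin failed" > "${logfile}"

add_vote_table()   { echo "vote: $*"; }
add_footer_table() { echo "footer: $*"; }

# Count maven ERROR lines in the build log.
count=$(${GREP} -c '\[ERROR\]' "${logfile}")
if [[ ${count} -gt 0 ]]; then
  add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
  # The line HBASE-20379 adds: link the full build log from the report footer.
  add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
fi
```

Without the footer entry, a `-1` shadedjars vote told the reader a build failed but gave no link to the log that explains why.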
[06/10] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e1ab70d5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e1ab70d5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e1ab70d5
Branch: refs/heads/branch-1.4
Commit: e1ab70d51a843159f1615306d8baad86767d6388
Parents: e757125
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:36 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/e1ab70d5/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 819beee..5d2e2a0 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -365,6 +365,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
[09/10] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/90ad8c64
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/90ad8c64
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/90ad8c64
Branch: refs/heads/branch-1.2
Commit: 90ad8c64f2944b0582c98c15c4520d0e752beb96
Parents: 2954aea
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:49 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/90ad8c64/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index ffd8c18..bc613a3 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[04/10] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f6413d55
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f6413d55
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f6413d55
Branch: refs/heads/branch-1
Commit: f6413d5594db415a4a336d4ba38310d86607417b
Parents: cad65df
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:49:35 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/f6413d55/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 819beee..5d2e2a0 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -365,6 +365,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
[01/10] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
Repository: hbase
Updated Branches:
  refs/heads/branch-1 68726b0ee -> f6413d559
  refs/heads/branch-1.2 2954aeae2 -> e78037107
  refs/heads/branch-1.3 e0536bfc5 -> 1c8246c6a
  refs/heads/branch-1.4 e6022f455 -> e1ab70d51
  refs/heads/branch-2 ae8a21204 -> bd2dddae6

HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bd2dddae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bd2dddae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bd2dddae
Branch: refs/heads/branch-2
Commit: bd2dddae60146965cdeae016f0652a700176a33c
Parents: d5d5c78
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:48:05 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/bd2dddae/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 3507a1d..2198913 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -365,6 +365,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
[02/10] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d5d5c788
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d5d5c788
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d5d5c788
Branch: refs/heads/branch-2
Commit: d5d5c7884927d59a0eeab523e9ccfadacc4b90a8
Parents: ae8a212
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:48:05 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/d5d5c788/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 238d0e6..6acd4b2 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[10/10] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e7803710
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e7803710
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e7803710
Branch: refs/heads/branch-1.2
Commit: e78037107185e55cf796ef9373e918c698152f56
Parents: 90ad8c6
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:52 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/e7803710/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 19e7e97..059d7c2 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -361,6 +361,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
[08/10] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1c8246c6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1c8246c6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1c8246c6
Branch: refs/heads/branch-1.3
Commit: 1c8246c6aa42746045681a89b01d4b0703ffd504
Parents: 64a2097
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:44 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/1c8246c6/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 19e7e97..059d7c2 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -361,6 +361,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
[03/10] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cad65df8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cad65df8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cad65df8
Branch: refs/heads/branch-1
Commit: cad65df87e6918acaf36e20e96cabaffe256f440
Parents: 68726b0
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:49:30 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/cad65df8/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index a8356b4..443dd20 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[05/10] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e7571251
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e7571251
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e7571251
Branch: refs/heads/branch-1.4
Commit: e757125180b7cb7cba5efa0864ecf96ac87906aa
Parents: e6022f4
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:32 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/e7571251/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index a8356b4..443dd20 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[07/10] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/64a20979
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/64a20979
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/64a20979
Branch: refs/heads/branch-1.3
Commit: 64a209795f18fce6fe4cbcd0b40ec0e0dd3de6c3
Parents: e0536bf
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:50:41 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/64a20979/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index a8356b4..443dd20 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
     post {
       always {
         stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-        // Not sure how two junit test reports will work. Disabling this for now.
-        // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+        junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
         // zip surefire reports.
         sh '''#!/bin/bash -e
           if [ -d "${OUTPUT_DIR}/archiver" ]; then
[2/3] hbase git commit: HBASE-20391 close out stale or finished PRs on github.
HBASE-20391 close out stale or finished PRs on github.

* closes #51 - > 1 month since notification
* closes #61 - HBASE-18928 has already closed
* closes #62 - HBASE-18929 has already closed
* closes #67 - HBASE-19386 has already closed
* closes #68 - HBASE-19387 has already closed

Also adds a section to the committer guide in the reference guide about closing PRs.

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a5408820
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a5408820
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a5408820
Branch: refs/heads/master
Commit: a5408820b58dd5b726e8f756b4fe66c101acd8f6
Parents: 8f1ac01
Author: Sean Busbey
Authored: Wed Apr 11 16:30:56 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:39:13 2018 -0500
--
 src/main/asciidoc/_chapters/developer.adoc | 6 ++
 1 file changed, 6 insertions(+)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/a5408820/src/main/asciidoc/_chapters/developer.adoc
--
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 9d9f564..7cbf404 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -2177,6 +2177,12 @@ When the amending author is different from the original committer, add notice of
 - [DISCUSSION] Best practice when amending commits cherry picked from master to branch].
+== Close related GitHub PRs
+
+As a project we work to ensure there's a JIRA associated with each change, but we don't mandate any particular tool be used for reviews. Due to implementation details of the ASF's integration between hosted git repositories and GitHub, the PMC has no ability to directly close PRs on our GitHub repo.
+In the event that a contributor makes a Pull Request on GitHub, either because the contributor finds that easier than attaching a patch to JIRA or because a reviewer prefers that UI for examining changes, it's important to make note of the PR in the commit that goes to the master branch so that PRs are kept up to date.
+
+To read more about the details of what kinds of commit messages will work with the GitHub "close via keyword in commit" mechanism see link:https://help.github.com/articles/closing-issues-using-keywords/[the GitHub documentation for "Closing issues using keywords"]. In summary, you should include a line with the phrase "closes #XXX", where the XXX is the pull request id. The pull request id is usually given in the GitHub UI in grey at the end of the subject heading.
+
 [[committer.tests]]
 == Committers are responsible for making sure commits do not break the build or tests
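The "closes #XXX" convention described above lives in the commit message body. A hypothetical example of such a commit, built in a throwaway repository so the message is inspectable (the JIRA id, summary, and PR number are all made up for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical commit carrying the "closes #NN" keyword; when a commit
# like this lands on master, GitHub closes the referenced PR.
# HBASE-12345 and PR #99 are fabricated example values.
set -e
repo=$(mktemp -d)
cd "${repo}"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "HBASE-12345 Example change taken from a GitHub PR

closes #99"

# Show the full commit message, including the closing keyword line.
git log -1 --format=%B
```

Because the keyword is matched when the commit reaches the default branch, putting the line in the commit that goes to master (rather than only in a review comment) is what actually closes the PR.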
[3/3] hbase git commit: HBASE-20379 shadedjars yetus plugin should add a footer link
HBASE-20379 shadedjars yetus plugin should add a footer link

Signed-off-by: Mike Drob

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2f74afd6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2f74afd6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2f74afd6
Branch: refs/heads/master
Commit: 2f74afd6f4a04907b1870479ef5246fb002849d2
Parents: a540882
Author: Sean Busbey
Authored: Thu Apr 12 16:31:24 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:40:35 2018 -0500
--
 dev-support/hbase-personality.sh | 1 +
 1 file changed, 1 insertion(+)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/2f74afd6/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 3507a1d..2198913 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -365,6 +365,7 @@ function shadedjars_rebuild
   count=$(${GREP} -c '\[ERROR\]' "${logfile}")
   if [[ ${count} -gt 0 ]]; then
     add_vote_table -1 shadedjars "${repostatus} has ${count} errors when building our shaded downstream artifacts."
+    add_footer_table shadedjars "@@BASE@@/${repostatus}-shadedjars.txt"
     return 1
   fi
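The error-count gate in `shadedjars_rebuild` can be exercised in isolation. This sketch substitutes plain `grep` for the personality's `${GREP}` variable, and the sample maven log is fabricated:

```shell
#!/usr/bin/env bash
# Sketch of the maven-log error check from hbase-personality.sh.
# The log contents are made up; real runs grep the actual build log.
logfile=$(mktemp)
cat > "${logfile}" <<'EOF'
[INFO] Building HBase - Shaded
[ERROR] Failed to execute goal on project hbase-shaded-client
[ERROR] For more information, re-run Maven with the -e switch
EOF

# grep -c prints the number of lines containing the pattern
count=$(grep -c '\[ERROR\]' "${logfile}")
if [[ ${count} -gt 0 ]]; then
  echo "build log has ${count} errors"
fi
rm -f "${logfile}"
```

In the real plugin this is where `add_vote_table` records the -1 and, after this commit, `add_footer_table` links the log so reviewers can find it from the report footer.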
[1/3] hbase git commit: HBASE-20112 register nightly junit over hadoop3 results with jenkins.
Repository: hbase
Updated Branches:
  refs/heads/master 5a633adff -> 2f74afd6f

HBASE-20112 register nightly junit over hadoop3 results with jenkins.

Signed-off-by: Michael Stack

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8f1ac01a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8f1ac01a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8f1ac01a
Branch: refs/heads/master
Commit: 8f1ac01ad8c983c22e3298505607a421a8c197f9
Parents: 5a633ad
Author: Sean Busbey
Authored: Fri Apr 13 00:08:39 2018 -0500
Committer: Sean Busbey
Committed: Fri Apr 13 10:37:50 2018 -0500
--
 dev-support/Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/8f1ac01a/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 3f3066b..b289eaf 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -381,8 +381,7 @@ curl -L -o personality.sh "${env.PROJECT_PERSONALITY}"
       post {
         always {
           stash name: 'hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile"
-          // Not sure how two junit test reports will work. Disabling this for now.
-          // junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
+          junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true
           // zip surefire reports.
           sh '''#!/bin/bash -e
             if [ -d "${OUTPUT_DIR}/archiver" ]; then
[2/2] hbase git commit: HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-endpoint and hbase-rsgroup modules
HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-endpoint and hbase-rsgroup modules Project: http://git-wip-us.apache.org/repos/asf/hbase/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cba88d15 Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cba88d15 Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cba88d15 Branch: refs/heads/branch-2.0 Commit: cba88d15eb450e49b004651f418e7e6a49666a0d Parents: ee3c430 Author: tedyuAuthored: Mon Apr 2 17:57:56 2018 -0700 Committer: Peter Somogyi Committed: Fri Apr 13 17:33:48 2018 +0200 -- .../src/test/resources/hbase-site.xml | 39 hbase-rsgroup/src/test/resources/hbase-site.xml | 39 2 files changed, 78 insertions(+) -- http://git-wip-us.apache.org/repos/asf/hbase/blob/cba88d15/hbase-endpoint/src/test/resources/hbase-site.xml -- diff --git a/hbase-endpoint/src/test/resources/hbase-site.xml b/hbase-endpoint/src/test/resources/hbase-site.xml new file mode 100644 index 000..858d428 --- /dev/null +++ b/hbase-endpoint/src/test/resources/hbase-site.xml @@ -0,0 +1,39 @@ + + + + + +hbase.defaults.for.version.skip +true + + +hbase.hconnection.threads.keepalivetime +3 + + +hbase.localcluster.assign.random.ports +true + + Assign random ports to master and RS info server (UI). + + + http://git-wip-us.apache.org/repos/asf/hbase/blob/cba88d15/hbase-rsgroup/src/test/resources/hbase-site.xml -- diff --git a/hbase-rsgroup/src/test/resources/hbase-site.xml b/hbase-rsgroup/src/test/resources/hbase-site.xml new file mode 100644 index 000..858d428 --- /dev/null +++ b/hbase-rsgroup/src/test/resources/hbase-site.xml @@ -0,0 +1,39 @@ + + + + + +hbase.defaults.for.version.skip +true + + +hbase.hconnection.threads.keepalivetime +3 + + +hbase.localcluster.assign.random.ports +true + + Assign random ports to master and RS info server (UI). + + +
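The mail archive stripped the XML markup out of the diff above, leaving only the property names and values. Based on those, the block added to each test `hbase-site.xml` presumably looks like the following reconstruction (not a verbatim copy of the committed file, whose license header is also elided here):

```xml
<configuration>
  <property>
    <name>hbase.defaults.for.version.skip</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.hconnection.threads.keepalivetime</name>
    <value>3</value>
  </property>
  <property>
    <name>hbase.localcluster.assign.random.ports</name>
    <value>true</value>
    <description>Assign random ports to master and RS info server (UI).</description>
  </property>
</configuration>
```

Setting `hbase.localcluster.assign.random.ports` is what keeps the standalone-mode UI ports from colliding when several test minicluster instances run on one host.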
[1/2] hbase git commit: HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-archetypes module
Repository: hbase Updated Branches: refs/heads/branch-2.0 0eacb3ea0 -> cba88d15e HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-archetypes module Project: http://git-wip-us.apache.org/repos/asf/hbase/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ee3c4303 Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ee3c4303 Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ee3c4303 Branch: refs/heads/branch-2.0 Commit: ee3c4303bf13e59df3e61b1c611f9c3a813ac0ac Parents: 0eacb3e Author: tedyuAuthored: Tue Mar 27 13:51:44 2018 -0700 Committer: Peter Somogyi Committed: Fri Apr 13 17:33:33 2018 +0200 -- .../src/test/resources/hbase-site.xml | 39 .../src/test/resources/hbase-site.xml | 39 2 files changed, 78 insertions(+) -- http://git-wip-us.apache.org/repos/asf/hbase/blob/ee3c4303/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml -- diff --git a/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml b/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml new file mode 100644 index 000..858d428 --- /dev/null +++ b/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml @@ -0,0 +1,39 @@ + + + + + +hbase.defaults.for.version.skip +true + + +hbase.hconnection.threads.keepalivetime +3 + + +hbase.localcluster.assign.random.ports +true + + Assign random ports to master and RS info server (UI). 
+ + + http://git-wip-us.apache.org/repos/asf/hbase/blob/ee3c4303/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml -- diff --git a/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml b/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml new file mode 100644 index 000..858d428 --- /dev/null +++ b/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml @@ -0,0 +1,39 @@ + + + + + +hbase.defaults.for.version.skip +true + + +hbase.hconnection.threads.keepalivetime +3 + + +hbase.localcluster.assign.random.ports +true + + Assign random ports to master and RS info server (UI). + + +
[14/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index 7563962..cb19179 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,16 +5,16 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.15, based on Prawn 2.2.2)
 /Producer (Apache HBase Team)
-/ModDate (D:20180412144701+00'00')
-/CreationDate (D:20180412144701+00'00')
+/ModDate (D:20180413144621+00'00')
+/CreationDate (D:20180413144621+00'00')
>> endobj
[09/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/client/TableState.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/client/TableState.html b/devapidocs/src-html/org/apache/hadoop/hbase/client/TableState.html index 17c30e7..90ef36d 100644 --- a/devapidocs/src-html/org/apache/hadoop/hbase/client/TableState.html +++ b/devapidocs/src-html/org/apache/hadoop/hbase/client/TableState.html @@ -112,145 +112,159 @@ 104 } 105 106 /** -107 * @return True if {@link State#ENABLED} or {@link State#ENABLING} +107 * @return True if table is {@link State#ENABLING}. 108 */ -109 public boolean isEnabledOrEnabling() { -110return isInStates(State.ENABLED, State.ENABLING); +109 public boolean isEnabling() { +110return isInStates(State.ENABLING); 111 } 112 113 /** -114 * @return True if table is disabled. +114 * @return True if {@link State#ENABLED} or {@link State#ENABLING} 115 */ -116 public boolean isDisabled() { -117return isInStates(State.DISABLED); +116 public boolean isEnabledOrEnabling() { +117return isInStates(State.ENABLED, State.ENABLING); 118 } 119 120 /** -121 * @return True if {@link State#DISABLED} or {@link State#DISABLED} +121 * @return True if table is disabled. 122 */ -123 public boolean isDisabledOrDisabling() { -124return isInStates(State.DISABLED, State.DISABLING); +123 public boolean isDisabled() { +124return isInStates(State.DISABLED); 125 } 126 127 /** -128 * Create instance of TableState. 
-129 * @param tableName name of the table -130 * @param state table state -131 */ -132 public TableState(TableName tableName, State state) { -133this.tableName = tableName; -134this.state = state; -135 } -136 -137 /** -138 * @return table state -139 */ -140 public State getState() { -141return state; -142 } -143 -144 /** -145 * Table name for state -146 * -147 * @return milliseconds -148 */ -149 public TableName getTableName() { -150return tableName; -151 } -152 -153 /** -154 * Check that table in given states -155 * @param state state -156 * @return true if satisfies -157 */ -158 public boolean inStates(State state) { -159return this.state.equals(state); -160 } -161 -162 /** -163 * Check that table in given states -164 * @param states state list -165 * @return true if satisfies -166 */ -167 public boolean inStates(State... states) { -168for (State s : states) { -169 if (s.equals(this.state)) -170return true; -171} -172return false; -173 } -174 +128 * @return True if table is disabling. +129 */ +130 public boolean isDisabling() { +131return isInStates(State.DISABLING); +132 } +133 +134 /** +135 * @return True if {@link State#DISABLED} or {@link State#DISABLED} +136 */ +137 public boolean isDisabledOrDisabling() { +138return isInStates(State.DISABLED, State.DISABLING); +139 } +140 +141 /** +142 * Create instance of TableState. 
+143 * @param tableName name of the table +144 * @param state table state +145 */ +146 public TableState(TableName tableName, State state) { +147this.tableName = tableName; +148this.state = state; +149 } +150 +151 /** +152 * @return table state +153 */ +154 public State getState() { +155return state; +156 } +157 +158 /** +159 * Table name for state +160 * +161 * @return milliseconds +162 */ +163 public TableName getTableName() { +164return tableName; +165 } +166 +167 /** +168 * Check that table in given states +169 * @param state state +170 * @return true if satisfies +171 */ +172 public boolean inStates(State state) { +173return this.state.equals(state); +174 } 175 176 /** -177 * Covert to PB version of TableState -178 * @return PB -179 */ -180 public HBaseProtos.TableState convert() { -181return HBaseProtos.TableState.newBuilder() -182 .setState(this.state.convert()).build(); -183 } -184 -185 /** -186 * Covert from PB version of TableState -187 * -188 * @param tableName table this state of -189 * @param tableState convert from -190 * @return POJO -191 */ -192 public static TableState convert(TableName tableName, HBaseProtos.TableState tableState) { -193TableState.State state = State.convert(tableState.getState()); -194return new TableState(tableName, state); -195 } -196 -197 public static TableState parseFrom(TableName tableName, byte[] bytes) -198 throws DeserializationException { -199try { -200 return convert(tableName, HBaseProtos.TableState.parseFrom(bytes)); -201} catch (InvalidProtocolBufferException e) { -202 throw new DeserializationException(e); -203} -204 } -205 -206 /** -207 * Static version of state checker -208 * @param target equals to any of -209 * @return true if satisfies -210 */ -211
[08/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html index c0d719f..2f83467 100644 --- a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html +++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html @@ -28,341 +28,343 @@ 020 021import java.io.IOException; 022import org.apache.hadoop.hbase.HBaseIOException; -023import org.apache.hadoop.hbase.MetaTableAccessor; -024import org.apache.hadoop.hbase.TableName; -025import org.apache.hadoop.hbase.TableNotEnabledException; -026import org.apache.hadoop.hbase.TableNotFoundException; -027import org.apache.hadoop.hbase.client.BufferedMutator; -028import org.apache.hadoop.hbase.client.RegionInfo; -029import org.apache.hadoop.hbase.client.TableState; -030import org.apache.hadoop.hbase.constraint.ConstraintException; -031import org.apache.hadoop.hbase.master.MasterCoprocessorHost; -032import org.apache.hadoop.hbase.master.MasterFileSystem; -033import org.apache.hadoop.hbase.master.TableStateManager; -034import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer; -035import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; -036import org.apache.hadoop.hbase.wal.WALSplitter; -037import org.apache.yetus.audience.InterfaceAudience; -038import org.slf4j.Logger; -039import org.slf4j.LoggerFactory; -040 -041import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; -042import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos; -043import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.DisableTableState; -044 -045@InterfaceAudience.Private -046public class DisableTableProcedure -047extends 
AbstractStateMachineTableProcedureDisableTableState { -048 private static final Logger LOG = LoggerFactory.getLogger(DisableTableProcedure.class); -049 -050 private TableName tableName; -051 private boolean skipTableStateCheck; -052 -053 private Boolean traceEnabled = null; -054 -055 public DisableTableProcedure() { -056super(); -057 } -058 -059 /** -060 * Constructor -061 * @param env MasterProcedureEnv -062 * @param tableName the table to operate on -063 * @param skipTableStateCheck whether to check table state -064 */ -065 public DisableTableProcedure(final MasterProcedureEnv env, final TableName tableName, -066 final boolean skipTableStateCheck) -067 throws HBaseIOException { -068this(env, tableName, skipTableStateCheck, null); -069 } -070 -071 /** -072 * Constructor -073 * @param env MasterProcedureEnv -074 * @param tableName the table to operate on -075 * @param skipTableStateCheck whether to check table state -076 */ -077 public DisableTableProcedure(final MasterProcedureEnv env, final TableName tableName, -078 final boolean skipTableStateCheck, final ProcedurePrepareLatch syncLatch) -079 throws HBaseIOException { -080super(env, syncLatch); -081this.tableName = tableName; -082preflightChecks(env, true); -083this.skipTableStateCheck = skipTableStateCheck; -084 } -085 -086 @Override -087 protected Flow executeFromState(final MasterProcedureEnv env, final DisableTableState state) -088 throws InterruptedException { -089LOG.trace("{} execute state={}", this, state); -090try { -091 switch (state) { -092case DISABLE_TABLE_PREPARE: -093 if (prepareDisable(env)) { -094 setNextState(DisableTableState.DISABLE_TABLE_PRE_OPERATION); -095 } else { -096assert isFailed() : "disable should have an exception here"; -097return Flow.NO_MORE_STATE; -098 } -099 break; -100case DISABLE_TABLE_PRE_OPERATION: -101 preDisable(env, state); -102 setNextState(DisableTableState.DISABLE_TABLE_SET_DISABLING_TABLE_STATE); -103 break; -104case DISABLE_TABLE_SET_DISABLING_TABLE_STATE: -105 
setTableStateToDisabling(env, tableName); -106 setNextState(DisableTableState.DISABLE_TABLE_MARK_REGIONS_OFFLINE); -107 break; -108case DISABLE_TABLE_MARK_REGIONS_OFFLINE: -109 addChildProcedure(env.getAssignmentManager().createUnassignProcedures(tableName)); -110 setNextState(DisableTableState.DISABLE_TABLE_SET_DISABLED_TABLE_STATE); -111 break; -112case DISABLE_TABLE_ADD_REPLICATION_BARRIER: -113 if (env.getMasterServices().getTableDescriptors().get(tableName) -114.hasGlobalReplicationScope()) { -115MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem(); -116try
[15/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
Published site at 5a633adffead3b979f6e1a607994409978b0ea74. Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/9808d50e Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/9808d50e Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/9808d50e Branch: refs/heads/asf-site Commit: 9808d50e6cb75f9f670bb5dd5d8142f848ca60c9 Parents: 93f2e3c Author: jenkinsAuthored: Fri Apr 13 14:48:10 2018 + Committer: jenkins Committed: Fri Apr 13 14:48:10 2018 + -- acid-semantics.html | 4 +- apache_hbase_reference_guide.pdf| 26550 - book.html | 305 +- bulk-loads.html | 4 +- checkstyle-aggregate.html |46 +- coc.html| 4 +- dependencies.html | 4 +- dependency-convergence.html | 4 +- dependency-info.html| 4 +- dependency-management.html | 4 +- devapidocs/constant-values.html |13 +- devapidocs/index-all.html |10 + .../hadoop/hbase/backup/package-tree.html | 6 +- .../hadoop/hbase/class-use/TableName.html |10 + .../apache/hadoop/hbase/client/TableState.html |74 +- .../hadoop/hbase/client/package-tree.html |22 +- .../hadoop/hbase/filter/package-tree.html | 8 +- .../hadoop/hbase/io/hfile/package-tree.html | 4 +- .../hadoop/hbase/mapreduce/package-tree.html| 2 +- .../master/class-use/TableStateManager.html |28 + .../hadoop/hbase/master/package-tree.html | 4 +- .../apache/hadoop/hbase/master/package-use.html |37 +- .../master/procedure/DisableTableProcedure.html |50 +- .../hbase/master/procedure/package-tree.html| 2 +- .../master/replication/AddPeerProcedure.html| 2 +- .../replication/DisablePeerProcedure.html | 2 +- .../master/replication/EnablePeerProcedure.html | 2 +- .../master/replication/ModifyPeerProcedure.html | 129 +- .../master/replication/RemovePeerProcedure.html | 2 +- .../replication/UpdatePeerConfigProcedure.html | 2 +- .../hadoop/hbase/monitoring/package-tree.html | 2 +- .../org/apache/hadoop/hbase/package-tree.html |18 +- .../hadoop/hbase/procedure2/package-tree.html | 6 +- 
.../store/wal/WALProcedureStore.PushType.html | 8 +- .../procedure2/store/wal/WALProcedureStore.html |90 +- .../hadoop/hbase/quotas/package-tree.html | 6 +- .../hadoop/hbase/regionserver/package-tree.html |16 +- .../regionserver/querymatcher/package-tree.html | 2 +- .../hbase/regionserver/wal/package-tree.html| 2 +- .../hadoop/hbase/security/package-tree.html | 2 +- .../hadoop/hbase/thrift/package-tree.html | 2 +- .../apache/hadoop/hbase/util/package-tree.html |10 +- .../org/apache/hadoop/hbase/Version.html| 6 +- .../hadoop/hbase/client/TableState.State.html | 252 +- .../apache/hadoop/hbase/client/TableState.html | 252 +- .../master/procedure/DisableTableProcedure.html | 672 +- .../master/replication/ModifyPeerProcedure.html | 628 +- .../wal/WALProcedureStore.LeaseRecovery.html| 1865 +- .../store/wal/WALProcedureStore.PushType.html | 1865 +- .../wal/WALProcedureStore.SyncMetrics.html | 1865 +- .../procedure2/store/wal/WALProcedureStore.html | 1865 +- export_control.html | 4 +- index.html | 4 +- integration.html| 4 +- issue-tracking.html | 4 +- license.html| 4 +- mail-lists.html | 4 +- metrics.html| 4 +- old_news.html | 4 +- plugin-management.html | 4 +- plugins.html| 4 +- poweredbyhbase.html | 4 +- project-info.html | 4 +- project-reports.html| 4 +- project-summary.html| 4 +- pseudo-distributed.html | 4 +- replication.html| 4 +- resources.html | 4 +- source-repository.html | 4 +- sponsors.html | 4 +- supportingprojects.html | 4 +- team-list.html | 4 +-
[06/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.LeaseRecovery.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.LeaseRecovery.html b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.LeaseRecovery.html index a8b77ae..f125368 100644 --- a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.LeaseRecovery.html +++ b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.LeaseRecovery.html @@ -367,957 +367,966 @@ 359lock.lock(); 360try { 361 LOG.trace("Starting WAL Procedure Store lease recovery"); -362 while (isRunning()) { -363FileStatus[] oldLogs = getLogFiles(); -364// Get Log-MaxID and recover lease on old logs -365try { -366 flushLogId = initOldLogs(oldLogs); -367} catch (FileNotFoundException e) { -368 LOG.warn("Someone else is active and deleted logs. retrying.", e); -369 continue; -370} -371 -372// Create new state-log -373if (!rollWriter(flushLogId + 1)) { -374 // someone else has already created this log -375 LOG.debug("Someone else has already created log {}. Retrying.", flushLogId); -376 continue; -377} -378 -379// We have the lease on the log -380oldLogs = getLogFiles(); -381if (getMaxLogId(oldLogs) flushLogId) { -382 if (LOG.isDebugEnabled()) { -383LOG.debug("Someone else created new logs. 
Expected maxLogId " + flushLogId); -384 } -385 logs.getLast().removeFile(this.walArchiveDir); -386 continue; -387} -388 -389LOG.trace("Lease acquired for flushLogId={}", flushLogId); -390break; -391 } -392} finally { -393 lock.unlock(); -394} -395 } -396 -397 @Override -398 public void load(final ProcedureLoader loader) throws IOException { -399lock.lock(); -400try { -401 if (logs.isEmpty()) { -402throw new RuntimeException("recoverLease() must be called before loading data"); -403 } -404 -405 // Nothing to do, If we have only the current log. -406 if (logs.size() == 1) { -407LOG.trace("No state logs to replay."); -408loader.setMaxProcId(0); -409return; -410 } -411 -412 // Load the old logs -413 final IteratorProcedureWALFile it = logs.descendingIterator(); -414 it.next(); // Skip the current log -415 -416 ProcedureWALFormat.load(it, storeTracker, new ProcedureWALFormat.Loader() { -417@Override -418public void setMaxProcId(long maxProcId) { -419 loader.setMaxProcId(maxProcId); -420} -421 -422@Override -423public void load(ProcedureIterator procIter) throws IOException { -424 loader.load(procIter); -425} -426 -427@Override -428public void handleCorrupted(ProcedureIterator procIter) throws IOException { -429 loader.handleCorrupted(procIter); -430} -431 -432@Override -433public void markCorruptedWAL(ProcedureWALFile log, IOException e) { -434 if (corruptedLogs == null) { -435corruptedLogs = new HashSet(); -436 } -437 corruptedLogs.add(log); -438 // TODO: sideline corrupted log +362 boolean afterFirstAttempt = false; +363 while (isRunning()) { +364// Don't sleep before first attempt +365if (afterFirstAttempt) { +366 LOG.trace("Sleep {} ms after first lease recovery attempt.", +367 waitBeforeRoll); +368 Threads.sleepWithoutInterrupt(waitBeforeRoll); +369} else { +370 afterFirstAttempt = true; +371} +372FileStatus[] oldLogs = getLogFiles(); +373// Get Log-MaxID and recover lease on old logs +374try { +375 flushLogId = initOldLogs(oldLogs); +376} catch 
(FileNotFoundException e) { +377 LOG.warn("Someone else is active and deleted logs. retrying.", e); +378 continue; +379} +380 +381// Create new state-log +382if (!rollWriter(flushLogId + 1)) { +383 // someone else has already created this log +384 LOG.debug("Someone else has already created log {}. Retrying.", flushLogId); +385 continue; +386} +387 +388// We have the lease on the log +389oldLogs = getLogFiles(); +390if (getMaxLogId(oldLogs) flushLogId) { +391 if (LOG.isDebugEnabled()) { +392LOG.debug("Someone else created new logs. Expected maxLogId " + flushLogId); +393 } +394 logs.getLast().removeFile(this.walArchiveDir); +395 continue; +396
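The change in this lease-recovery loop is a retry pattern: sleep before every attempt except the first, so contended restarts back off without delaying the common uncontended case. A generic sketch, with all names and the fake acquire function invented for illustration:

```shell
#!/usr/bin/env bash
# Generic sketch of "sleep before every retry except the first".
# try_acquire is a stand-in for initOldLogs + rollWriter; here it
# artificially succeeds on the third attempt.
wait_before_roll=1   # seconds; the real store reads this from config
attempts=0
after_first_attempt=false

try_acquire() {
  attempts=$((attempts + 1))
  [ "${attempts}" -ge 3 ]
}

while true; do
  if [ "${after_first_attempt}" = true ]; then
    sleep "${wait_before_roll}"
  else
    after_first_attempt=true
  fi
  if try_acquire; then
    echo "lease acquired after ${attempts} attempts"
    break
  fi
done
```

The `afterFirstAttempt` flag in the Java diff plays exactly this role, so two masters racing over the same WAL directory stop spinning at full speed while still letting a lone master acquire the lease immediately.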
[01/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
Repository: hbase-site Updated Branches: refs/heads/asf-site 93f2e3cc2 -> 9808d50e6 http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/TestAddToSerialReplicationPeer.html -- diff --git a/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/TestAddToSerialReplicationPeer.html b/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/TestAddToSerialReplicationPeer.html index 7779bd7..1d332c9 100644 --- a/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/TestAddToSerialReplicationPeer.html +++ b/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/TestAddToSerialReplicationPeer.html @@ -25,182 +25,256 @@ 017 */ 018package org.apache.hadoop.hbase.replication; 019 -020import java.io.IOException; -021import java.util.Collections; -022import org.apache.hadoop.fs.Path; -023import org.apache.hadoop.hbase.HBaseClassTestRule; -024import org.apache.hadoop.hbase.TableName; -025import org.apache.hadoop.hbase.Waiter.ExplainingPredicate; -026import org.apache.hadoop.hbase.client.Put; -027import org.apache.hadoop.hbase.client.RegionInfo; -028import org.apache.hadoop.hbase.client.Table; -029import org.apache.hadoop.hbase.regionserver.HRegionServer; -030import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL; -031import org.apache.hadoop.hbase.replication.regionserver.Replication; -032import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager; -033import org.apache.hadoop.hbase.testclassification.MediumTests; -034import org.apache.hadoop.hbase.testclassification.ReplicationTests; -035import org.apache.hadoop.hbase.util.Bytes; -036import org.apache.hadoop.hbase.util.CommonFSUtils.StreamLacksCapabilityException; -037import org.apache.hadoop.hbase.wal.AbstractFSWALProvider; -038import org.junit.Before; -039import org.junit.ClassRule; -040import org.junit.Test; -041import org.junit.experimental.categories.Category; -042 -043import 
org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap; -044 -045/** -046 * Testcase for HBASE-20147. -047 */ -048@Category({ ReplicationTests.class, MediumTests.class }) -049public class TestAddToSerialReplicationPeer extends SerialReplicationTestBase { -050 -051 @ClassRule -052 public static final HBaseClassTestRule CLASS_RULE = -053 HBaseClassTestRule.forClass(TestAddToSerialReplicationPeer.class); +020import static org.junit.Assert.assertTrue; +021 +022import java.io.IOException; +023import java.util.Collections; +024import org.apache.hadoop.fs.Path; +025import org.apache.hadoop.hbase.HBaseClassTestRule; +026import org.apache.hadoop.hbase.TableName; +027import org.apache.hadoop.hbase.Waiter.ExplainingPredicate; +028import org.apache.hadoop.hbase.client.Put; +029import org.apache.hadoop.hbase.client.RegionInfo; +030import org.apache.hadoop.hbase.client.Table; +031import org.apache.hadoop.hbase.client.TableState; +032import org.apache.hadoop.hbase.master.TableStateManager; +033import org.apache.hadoop.hbase.regionserver.HRegionServer; +034import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL; +035import org.apache.hadoop.hbase.replication.regionserver.Replication; +036import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager; +037import org.apache.hadoop.hbase.testclassification.MediumTests; +038import org.apache.hadoop.hbase.testclassification.ReplicationTests; +039import org.apache.hadoop.hbase.util.Bytes; +040import org.apache.hadoop.hbase.util.CommonFSUtils.StreamLacksCapabilityException; +041import org.apache.hadoop.hbase.wal.AbstractFSWALProvider; +042import org.junit.Before; +043import org.junit.ClassRule; +044import org.junit.Test; +045import org.junit.experimental.categories.Category; +046 +047import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap; +048 +049/** +050 * Testcase for HBASE-20147. 
+051 */ +052@Category({ ReplicationTests.class, MediumTests.class }) +053public class TestAddToSerialReplicationPeer extends SerialReplicationTestBase { 054 -055 @Before -056 public void setUp() throws IOException, StreamLacksCapabilityException { -057setupWALWriter(); -058 } -059 -060 // make sure that we will start replication for the sequence id after move, that's what we want to -061 // test here. -062 private void moveRegionAndArchiveOldWals(RegionInfo region, HRegionServer rs) throws Exception { -063moveRegion(region, rs); -064rollAllWALs(); -065 } -066 -067 private void waitUntilReplicatedToTheCurrentWALFile(HRegionServer rs) throws Exception { -068Path path = ((AbstractFSWAL<?>) rs.getWAL(null)).getCurrentFileName(); -069String logPrefix = AbstractFSWALProvider.getWALPrefixFromWALName(path.getName()); -070UTIL.waitFor(3, new ExplainingPredicate<Exception>() { -071 -072 @Override -073 public boolean evaluate() throws Exception {
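The test above blocks with `UTIL.waitFor(...)` and an `ExplainingPredicate` until replication catches up to the current WAL file. A minimal, self-contained sketch of that polling idiom (the class name, timings, and signature here are illustrative, not HBase's test-util API):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    /**
     * Polls the predicate every intervalMs until it returns true or
     * timeoutMs elapses. Returns the predicate's final value.
     */
    public static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier predicate)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (predicate.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return predicate.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true roughly 50 ms in, well inside the 1 s timeout.
        boolean ok = waitFor(1000, 10, () -> System.currentTimeMillis() - start >= 50);
        System.out.println(ok); // prints "true"
    }
}
```

HBase's real helper additionally reports *why* the wait failed (the "explaining" part); this sketch keeps only the polling skeleton.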
[05/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.PushType.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.PushType.html b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.PushType.html index a8b77ae..f125368 100644 --- a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.PushType.html +++ b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.PushType.html @@ -367,957 +367,966 @@ 359lock.lock(); 360try { 361 LOG.trace("Starting WAL Procedure Store lease recovery"); -362 while (isRunning()) { -363FileStatus[] oldLogs = getLogFiles(); -364// Get Log-MaxID and recover lease on old logs -365try { -366 flushLogId = initOldLogs(oldLogs); -367} catch (FileNotFoundException e) { -368 LOG.warn("Someone else is active and deleted logs. retrying.", e); -369 continue; -370} -371 -372// Create new state-log -373if (!rollWriter(flushLogId + 1)) { -374 // someone else has already created this log -375 LOG.debug("Someone else has already created log {}. Retrying.", flushLogId); -376 continue; -377} -378 -379// We have the lease on the log -380oldLogs = getLogFiles(); -381if (getMaxLogId(oldLogs) > flushLogId) { -382 if (LOG.isDebugEnabled()) { -383LOG.debug("Someone else created new logs. Expected maxLogId " + flushLogId); -384 } -385 logs.getLast().removeFile(this.walArchiveDir); -386 continue; -387} -388 -389LOG.trace("Lease acquired for flushLogId={}", flushLogId); -390break; -391 } -392} finally { -393 lock.unlock(); -394} -395 } -396 -397 @Override -398 public void load(final ProcedureLoader loader) throws IOException { -399lock.lock(); -400try { -401 if (logs.isEmpty()) { -402throw new RuntimeException("recoverLease() must be called before loading data"); -403 } -404 -405 // Nothing to do, If we have only the current log.
-406 if (logs.size() == 1) { -407LOG.trace("No state logs to replay."); -408loader.setMaxProcId(0); -409return; -410 } -411 -412 // Load the old logs -413 final Iterator<ProcedureWALFile> it = logs.descendingIterator(); -414 it.next(); // Skip the current log -415 -416 ProcedureWALFormat.load(it, storeTracker, new ProcedureWALFormat.Loader() { -417@Override -418public void setMaxProcId(long maxProcId) { -419 loader.setMaxProcId(maxProcId); -420} -421 -422@Override -423public void load(ProcedureIterator procIter) throws IOException { -424 loader.load(procIter); -425} -426 -427@Override -428public void handleCorrupted(ProcedureIterator procIter) throws IOException { -429 loader.handleCorrupted(procIter); -430} -431 -432@Override -433public void markCorruptedWAL(ProcedureWALFile log, IOException e) { -434 if (corruptedLogs == null) { -435corruptedLogs = new HashSet<>(); -436 } -437 corruptedLogs.add(log); -438 // TODO: sideline corrupted log +362 boolean afterFirstAttempt = false; +363 while (isRunning()) { +364// Don't sleep before first attempt +365if (afterFirstAttempt) { +366 LOG.trace("Sleep {} ms after first lease recovery attempt.", +367 waitBeforeRoll); +368 Threads.sleepWithoutInterrupt(waitBeforeRoll); +369} else { +370 afterFirstAttempt = true; +371} +372FileStatus[] oldLogs = getLogFiles(); +373// Get Log-MaxID and recover lease on old logs +374try { +375 flushLogId = initOldLogs(oldLogs); +376} catch (FileNotFoundException e) { +377 LOG.warn("Someone else is active and deleted logs. retrying.", e); +378 continue; +379} +380 +381// Create new state-log +382if (!rollWriter(flushLogId + 1)) { +383 // someone else has already created this log +384 LOG.debug("Someone else has already created log {}. Retrying.", flushLogId); +385 continue; +386} +387 +388// We have the lease on the log +389oldLogs = getLogFiles(); +390if (getMaxLogId(oldLogs) > flushLogId) { +391 if (LOG.isDebugEnabled()) { +392LOG.debug("Someone else created new logs.
Expected maxLogId " + flushLogId); +393 } +394 logs.getLast().removeFile(this.walArchiveDir); +395 continue; +396} +397 +398
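The hunk above changes lease recovery to back off before every attempt except the first. Stripped of HBase internals, the control flow reduces to the pattern below (a sketch: `attempt` stands in for the initOldLogs/rollWriter sequence, and the wait constant is illustrative):

```java
import java.util.function.IntSupplier;

public class LeaseRecoveryLoop {
    /**
     * Calls attempt() until it returns a non-negative value, sleeping only
     * between attempts -- never before the first one, so the common case
     * (uncontended recovery) pays no startup delay.
     */
    public static int retryUntilAcquired(IntSupplier attempt, long waitBeforeRollMs)
            throws InterruptedException {
        boolean afterFirstAttempt = false;
        while (true) {
            if (afterFirstAttempt) {
                // Back off so a competing store can finish rolling its log.
                Thread.sleep(waitBeforeRollMs);
            } else {
                afterFirstAttempt = true;
            }
            int flushLogId = attempt.getAsInt();
            if (flushLogId >= 0) {
                return flushLogId; // lease acquired
            }
            // Negative result: someone else created or deleted logs; retry.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Fail twice (return -1), then succeed with id 7.
        int id = retryUntilAcquired(() -> (++calls[0] < 3) ? -1 : 7, 5);
        System.out.println(id + " after " + calls[0] + " attempts"); // prints "7 after 3 attempts"
    }
}
```

The `afterFirstAttempt` flag is the whole point of the change: without it, two masters racing for the lease can spin through create/delete cycles with no pause between rounds.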
[11/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html -- diff --git a/devapidocs/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html b/devapidocs/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html index 9b51c4b..9d51c55 100644 --- a/devapidocs/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html +++ b/devapidocs/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html @@ -18,7 +18,7 @@ catch(err) { } //--> -var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":6,"i10":6,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":6}; +var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":6,"i12":6,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":6}; var tabs = {65535:["t0","All Methods"],2:["t2","Instance Methods"],4:["t3","Abstract Methods"],8:["t4","Concrete Methods"]}; var altColor = "altColor"; var rowColor = "rowColor"; @@ -133,7 +133,7 @@ var activeTableTab = "activeTableTab"; @InterfaceAudience.Private -public abstract class ModifyPeerProcedure +public abstract class ModifyPeerProcedure extends AbstractPeerProcedureorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.PeerModificationState The base class for all replication peer related procedure except sync replication state transition. @@ -190,6 +190,10 @@ extends protected static int +SLEEP_INTERVAL_MS + + +protected static int UPDATE_LAST_SEQ_ID_BATCH_SIZE @@ -292,59 +296,69 @@ extends +private boolean +needReopen(TableStateManagertsm, + TableNametn) + + +private boolean +needSetLastPushedSequenceId(TableStateManagertsm, + TableNametn) + + protected org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.PeerModificationState nextStateAfterRefresh() Implementation class can override this method. 
- + protected abstract void postPeerModification(MasterProcedureEnvenv) Called before we finish the procedure. - + protected abstract void prePeerModification(MasterProcedureEnvenv) Called before we start the actual processing. - + private void refreshPeer(MasterProcedureEnvenv, PeerProcedureInterface.PeerOperationTypetype) - + private void releaseLatch() - + private void reopenRegions(MasterProcedureEnvenv) - + protected void rollbackState(MasterProcedureEnvenv, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.PeerModificationStatestate) called to perform the rollback of the specified state - + protected void setLastPushedSequenceId(MasterProcedureEnvenv, ReplicationPeerConfigpeerConfig) - + protected void setLastPushedSequenceIdForTable(MasterProcedureEnvenv, TableNametableName, https://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true; title="class or interface in java.util">Maphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true; title="class or interface in java.lang">LonglastSeqIds) - + protected void updateLastPushedSequenceIdForSerialPeer(MasterProcedureEnvenv) - + protected abstract void updatePeerStorage(MasterProcedureEnvenv) @@ -404,22 +418,35 @@ extends LOG -private static finalorg.slf4j.Logger LOG +private static finalorg.slf4j.Logger LOG - + UPDATE_LAST_SEQ_ID_BATCH_SIZE -protected static finalint UPDATE_LAST_SEQ_ID_BATCH_SIZE +protected static finalint UPDATE_LAST_SEQ_ID_BATCH_SIZE See Also: Constant Field Values + + + + + +SLEEP_INTERVAL_MS +protected static finalint SLEEP_INTERVAL_MS + +See Also: +Constant Field Values + + + @@ -434,7 +461,7 @@ extends ModifyPeerProcedure -protectedModifyPeerProcedure() +protectedModifyPeerProcedure() @@ -443,7 +470,7 @@ extends ModifyPeerProcedure 
-protected ModifyPeerProcedure(String peerId) +protected ModifyPeerProcedure(String peerId) @@ -460,7 +487,7 @@ extends prePeerModification -protected abstract void prePeerModification(MasterProcedureEnv env) +protected abstract void prePeerModification(MasterProcedureEnv env) throws IOException,
hbase-site git commit: INFRA-10751 Empty commit
Repository: hbase-site Updated Branches: refs/heads/asf-site 9808d50e6 -> 47d64f628 INFRA-10751 Empty commit Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/47d64f62 Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/47d64f62 Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/47d64f62 Branch: refs/heads/asf-site Commit: 47d64f62874f539ecab1be0a5c69ef2b5f60ba4c Parents: 9808d50 Author: jenkinsAuthored: Fri Apr 13 14:48:30 2018 + Committer: jenkins Committed: Fri Apr 13 14:48:30 2018 + -- --
[03/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html index a8b77ae..f125368 100644
[12/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/org/apache/hadoop/hbase/client/package-tree.html -- diff --git a/devapidocs/org/apache/hadoop/hbase/client/package-tree.html b/devapidocs/org/apache/hadoop/hbase/client/package-tree.html index 0ad2430..49f613b 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/package-tree.html +++ b/devapidocs/org/apache/hadoop/hbase/client/package-tree.html @@ -550,24 +550,24 @@ java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true; title="class or interface in java.lang">EnumE (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true; title="class or interface in java.lang">ComparableT, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true; title="class or interface in java.io">Serializable) -org.apache.hadoop.hbase.client.Consistency -org.apache.hadoop.hbase.client.MobCompactPartitionPolicy org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanControllerState -org.apache.hadoop.hbase.client.Durability -org.apache.hadoop.hbase.client.RequestController.ReturnCode -org.apache.hadoop.hbase.client.CompactType -org.apache.hadoop.hbase.client.IsolationLevel -org.apache.hadoop.hbase.client.CompactionState org.apache.hadoop.hbase.client.TableState.State +org.apache.hadoop.hbase.client.CompactType +org.apache.hadoop.hbase.client.ScannerCallable.MoreResults org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanResumerState +org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.Retry org.apache.hadoop.hbase.client.Scan.ReadType -org.apache.hadoop.hbase.client.MasterSwitchType -org.apache.hadoop.hbase.client.RegionLocateType +org.apache.hadoop.hbase.client.IsolationLevel +org.apache.hadoop.hbase.client.CompactionState org.apache.hadoop.hbase.client.SnapshotType -org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.Retry 
org.apache.hadoop.hbase.client.AsyncProcessTask.SubmittedRows +org.apache.hadoop.hbase.client.RequestController.ReturnCode +org.apache.hadoop.hbase.client.Durability +org.apache.hadoop.hbase.client.MobCompactPartitionPolicy +org.apache.hadoop.hbase.client.MasterSwitchType +org.apache.hadoop.hbase.client.RegionLocateType +org.apache.hadoop.hbase.client.Consistency org.apache.hadoop.hbase.client.AbstractResponse.ResponseType -org.apache.hadoop.hbase.client.ScannerCallable.MoreResults http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/org/apache/hadoop/hbase/filter/package-tree.html -- diff --git a/devapidocs/org/apache/hadoop/hbase/filter/package-tree.html b/devapidocs/org/apache/hadoop/hbase/filter/package-tree.html index 2b8bb1c..1041e69 100644 --- a/devapidocs/org/apache/hadoop/hbase/filter/package-tree.html +++ b/devapidocs/org/apache/hadoop/hbase/filter/package-tree.html @@ -183,14 +183,14 @@ java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true; title="class or interface in java.lang">EnumE (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true; title="class or interface in java.lang">ComparableT, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true; title="class or interface in java.io">Serializable) -org.apache.hadoop.hbase.filter.CompareFilter.CompareOp org.apache.hadoop.hbase.filter.BitComparator.BitwiseOp -org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType +org.apache.hadoop.hbase.filter.CompareFilter.CompareOp org.apache.hadoop.hbase.filter.FuzzyRowFilter.Order +org.apache.hadoop.hbase.filter.Filter.ReturnCode +org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType +org.apache.hadoop.hbase.filter.FilterWrapper.FilterRowRetCode org.apache.hadoop.hbase.filter.FilterList.Operator org.apache.hadoop.hbase.filter.FuzzyRowFilter.SatisfiesCode 
-org.apache.hadoop.hbase.filter.FilterWrapper.FilterRowRetCode -org.apache.hadoop.hbase.filter.Filter.ReturnCode http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html -- diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html index 05d42a4..7a15c5d 100644 --- a/devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html +++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html @@ -273,11 +273,11 @@ java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true; title="class or interface in java.lang">EnumE (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true; title="class or
[13/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/book.html -- diff --git a/book.html b/book.html index f963820..9a71f3d 100644 --- a/book.html +++ b/book.html @@ -1230,10 +1230,8 @@ node-b.example.com: starting master, logging to /home/hbuser/hbase-0.98.3-hadoop On each node of the cluster, run the jps command and verify that the correct processes are running on each server. You may see additional Java processes running on your servers as well, if they are used for other purposes. - -Example 2. node-a jps Output - +node-a jps Output $ jps 20355 Jps @@ -1241,12 +1239,8 @@ You may see additional Java processes running on your servers as well, if they a 20137 HMaster - - - -Example 3. node-b jps Output - +node-b jps Output $ jps 15930 HRegionServer @@ -1255,12 +1249,8 @@ You may see additional Java processes running on your servers as well, if they a 16010 HMaster - - - -Example 4. node-c jps Output - +node-c jps Output $ jps 13901 Jps @@ -1268,8 +1258,6 @@ You may see additional Java processes running on your servers as well, if they a 13737 HRegionServer - - @@ -1570,7 +1558,7 @@ You must set JAVA_HOME on each node of your cluster. hbase-env. Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that users ulimit configuration, look at the first line of the HBase log for that instance. -Example 5. ulimit Settings on Ubuntu +Example 2. ulimit Settings on Ubuntu To configure ulimit settings on Ubuntu, edit /etc/security/limits.conf, which is a space-delimited file with four columns. Refer to the man page for limits.conf for details about the format of this file. 
In the following example, the first line sets both soft and hard limits for the number of open files (nofile) to 32768 for the operating system user with the username hadoop. The second line sets the number of processes to 32000 for the same user. @@ -1969,7 +1957,7 @@ All hosts listed in this file will have their RegionServer processes started and See the ZooKeeper section for ZooKeeper setup instructions for HBase. -Example 6. Example Distributed HBase Cluster +Example 3. Example Distributed HBase Cluster This is a bare-bones conf/hbase-site.xml for a distributed HBase cluster. @@ -7246,7 +7234,7 @@ Spawning HBase Shell commands in this way is slow, so keep that in mind when you -Example 7. Passing Commands to the HBase Shell +Example 4. Passing Commands to the HBase Shell You can pass commands to the HBase Shell in non-interactive mode (see hbase.shell.noninteractive) using the echo command and the | (pipe) operator. @@ -7282,7 +7270,7 @@ DESCRIPTION ENABLED -Example 8. Checking the Result of a Scripted Command +Example 5. Checking the Result of a Scripted Command Since scripts are not designed to be run interactively, you need a way to check whether your command failed or succeeded. @@ -7329,10 +7317,8 @@ For instance, if your script creates a table, but returns a non-zero exit value, You can enter HBase Shell commands into a text file, one command per line, and pass that file to the HBase Shell. - -Example 9. Example Command File - +Example Command File create 'test', 'cf' list 'test' @@ -7346,10 +7332,8 @@ disable 'test' enable 'test' - - -Example 10. Directing HBase Shell to Execute the Commands +Example 6. Directing HBase Shell to Execute the Commands Pass the path to the command file as the only argument to the hbase shell command. @@ -8014,7 +7998,7 @@ Namespace membership is determined during table creation by specifying a fully-q -Example 11. Examples +Example 7. 
Examples @@ -8059,7 +8043,7 @@ alter_namespace 'my_ns', {METHOD = 'set', 'PROPERTY_NAME' = 'PROPERTY_VA -Example 12. Examples +Example 8. Examples @@ -8234,7 +8218,7 @@ This section is basically a synopsis of this article by Bruno Dumon. Prior to HBase 0.96, the default number of versions kept was 3, but in 0.96 and newer has been changed to 1. -Example 13. Modify the Maximum Number of Versions for a Column Family +Example 9. Modify the Maximum Number of Versions for a Column Family This example uses HBase Shell to keep a maximum of 5 versions of all columns in column family f1. @@ -8248,7 +8232,7 @@ You could also use https://hbase.apache.org/apidocs/org/apache/hadoop/h -Example 14. Modify the Minimum Number of Versions for a Column Family +Example 10. Modify the Minimum Number of Versions for a Column Family You can also specify the minimum number of versions to store per column family. @@ -8746,7
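The ulimit example referenced in the book.html hunk above describes a two-line `/etc/security/limits.conf` entry: open files (`nofile`) capped at 32768 and processes (`nproc`) at 32000 for the operating-system user `hadoop`. Reconstructed from that description (the `-` type applies both soft and hard limits; column spacing in this file is free-form):

```
hadoop  -  nofile  32768
hadoop  -  nproc   32000
```

As the surrounding text notes, the limits must be set for the user that actually runs the HBase process, which you can confirm from the first line of that instance's log.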
[07/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html -- diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html b/devapidocs/src-html/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html index 6e1231c..edb4a3d 100644 --- a/devapidocs/src-html/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html +++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.html @@ -26,315 +26,339 @@ 018package org.apache.hadoop.hbase.master.replication; 019 020import java.io.IOException; -021import java.util.HashMap; -022import java.util.Map; -023import org.apache.hadoop.hbase.MetaTableAccessor; -024import org.apache.hadoop.hbase.TableName; -025import org.apache.hadoop.hbase.client.Connection; -026import org.apache.hadoop.hbase.client.RegionInfo; +021import java.io.InterruptedIOException; +022import java.util.HashMap; +023import java.util.Map; +024import org.apache.hadoop.hbase.MetaTableAccessor; +025import org.apache.hadoop.hbase.TableName; +026import org.apache.hadoop.hbase.client.Connection; 027import org.apache.hadoop.hbase.client.TableDescriptor; -028import org.apache.hadoop.hbase.master.MasterFileSystem; +028import org.apache.hadoop.hbase.client.TableState; 029import org.apache.hadoop.hbase.master.TableStateManager; 030import org.apache.hadoop.hbase.master.TableStateManager.TableStateNotFoundException; -031import org.apache.hadoop.hbase.master.assignment.RegionStates; -032import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv; -033import org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch; -034import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException; -035import org.apache.hadoop.hbase.procedure2.ProcedureYieldException; -036import org.apache.hadoop.hbase.replication.ReplicationException; -037import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig; -038import org.apache.hadoop.hbase.replication.ReplicationQueueStorage; -039import org.apache.hadoop.hbase.replication.ReplicationUtils; -040import org.apache.hadoop.hbase.util.Pair; -041import org.apache.hadoop.hbase.wal.WALSplitter; -042import org.apache.yetus.audience.InterfaceAudience; -043import org.slf4j.Logger; -044import org.slf4j.LoggerFactory; +031import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv; +032import org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch; +033import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException; +034import org.apache.hadoop.hbase.procedure2.ProcedureYieldException; +035import org.apache.hadoop.hbase.replication.ReplicationException; +036import org.apache.hadoop.hbase.replication.ReplicationPeerConfig; +037import org.apache.hadoop.hbase.replication.ReplicationQueueStorage; +038import org.apache.hadoop.hbase.replication.ReplicationUtils; +039import org.apache.hadoop.hbase.util.Pair; +040import org.apache.yetus.audience.InterfaceAudience; +041import org.slf4j.Logger; +042import org.slf4j.LoggerFactory; +043 +044import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.PeerModificationState; 045 -046import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.PeerModificationState; -047 -048/** -049 * The base class for all replication peer related procedure except sync replication state -050 * transition. -051 */ -052@InterfaceAudience.Private -053public abstract class ModifyPeerProcedure extends AbstractPeerProcedurePeerModificationState { +046/** +047 * The base class for all replication peer related procedure except sync replication state +048 * transition. 
+049 */ +050@InterfaceAudience.Private +051public abstract class ModifyPeerProcedure extends AbstractPeerProcedurePeerModificationState { +052 +053 private static final Logger LOG = LoggerFactory.getLogger(ModifyPeerProcedure.class); 054 -055 private static final Logger LOG = LoggerFactory.getLogger(ModifyPeerProcedure.class); +055 protected static final int UPDATE_LAST_SEQ_ID_BATCH_SIZE = 1000; 056 -057 protected static final int UPDATE_LAST_SEQ_ID_BATCH_SIZE = 1000; -058 -059 protected ModifyPeerProcedure() { -060 } -061 -062 protected ModifyPeerProcedure(String peerId) { -063super(peerId); -064 } -065 -066 /** -067 * Called before we start the actual processing. The implementation should call the pre CP hook, -068 * and also the pre-check for the peer modification. -069 * p -070 * If an IOException is thrown then we will give up and mark the procedure as failed directly. If -071 * all checks passes then the procedure can not be rolled back any more. -072 */ -073 protected abstract void prePeerModification(MasterProcedureEnv env) -074 throws IOException, ReplicationException; -075 -076 protected
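The Javadoc in this hunk states the contract of `prePeerModification`: if it throws an `IOException` the procedure fails outright, and once all checks pass the procedure can no longer be rolled back. A minimal, framework-free sketch of that template-method contract (class and method names mirror the docs, but everything else -- the return strings, the rollback flag, the simplified signatures -- is illustrative):

```java
import java.io.IOException;

abstract class PeerProcedureSketch {
    private boolean pastPointOfNoReturn = false;

    /** Pre-check hook: throwing IOException fails the whole procedure. */
    protected abstract void prePeerModification() throws IOException;

    /** The state-changing work; runs only if the pre-check passed. */
    protected abstract void updatePeerStorage();

    final String execute() {
        try {
            prePeerModification();
        } catch (IOException e) {
            return "FAILED: " + e.getMessage(); // give up; still rollbackable
        }
        pastPointOfNoReturn = true; // checks passed: no rollback from here on
        updatePeerStorage();
        return "SUCCESS";
    }

    final boolean canRollback() {
        return !pastPointOfNoReturn;
    }
}
```

The real procedure framework drives this through a state machine (`PeerModificationState`), but the asymmetry is the same: failures in the pre-check abort cleanly, while later failures must be retried forward rather than rolled back.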
[04/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.SyncMetrics.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.SyncMetrics.html b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.SyncMetrics.html
index a8b77ae..f125368 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.SyncMetrics.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.SyncMetrics.html
@@ -367,957 +367,966 @@
 359    lock.lock();
 360    try {
 361      LOG.trace("Starting WAL Procedure Store lease recovery");
-362      while (isRunning()) {
-363        FileStatus[] oldLogs = getLogFiles();
-364        // Get Log-MaxID and recover lease on old logs
-365        try {
-366          flushLogId = initOldLogs(oldLogs);
-367        } catch (FileNotFoundException e) {
-368          LOG.warn("Someone else is active and deleted logs. retrying.", e);
-369          continue;
-370        }
-371
-372        // Create new state-log
-373        if (!rollWriter(flushLogId + 1)) {
-374          // someone else has already created this log
-375          LOG.debug("Someone else has already created log {}. Retrying.", flushLogId);
-376          continue;
-377        }
-378
-379        // We have the lease on the log
-380        oldLogs = getLogFiles();
-381        if (getMaxLogId(oldLogs) > flushLogId) {
-382          if (LOG.isDebugEnabled()) {
-383            LOG.debug("Someone else created new logs. Expected maxLogId " + flushLogId);
-384          }
-385          logs.getLast().removeFile(this.walArchiveDir);
-386          continue;
-387        }
-388
-389        LOG.trace("Lease acquired for flushLogId={}", flushLogId);
-390        break;
-391      }
-392    } finally {
-393      lock.unlock();
-394    }
-395  }
-396
-397  @Override
-398  public void load(final ProcedureLoader loader) throws IOException {
-399    lock.lock();
-400    try {
-401      if (logs.isEmpty()) {
-402        throw new RuntimeException("recoverLease() must be called before loading data");
-403      }
-404
-405      // Nothing to do, If we have only the current log.
-406      if (logs.size() == 1) {
-407        LOG.trace("No state logs to replay.");
-408        loader.setMaxProcId(0);
-409        return;
-410      }
-411
-412      // Load the old logs
-413      final Iterator<ProcedureWALFile> it = logs.descendingIterator();
-414      it.next(); // Skip the current log
-415
-416      ProcedureWALFormat.load(it, storeTracker, new ProcedureWALFormat.Loader() {
-417        @Override
-418        public void setMaxProcId(long maxProcId) {
-419          loader.setMaxProcId(maxProcId);
-420        }
-421
-422        @Override
-423        public void load(ProcedureIterator procIter) throws IOException {
-424          loader.load(procIter);
-425        }
-426
-427        @Override
-428        public void handleCorrupted(ProcedureIterator procIter) throws IOException {
-429          loader.handleCorrupted(procIter);
-430        }
-431
-432        @Override
-433        public void markCorruptedWAL(ProcedureWALFile log, IOException e) {
-434          if (corruptedLogs == null) {
-435            corruptedLogs = new HashSet<>();
-436          }
-437          corruptedLogs.add(log);
-438          // TODO: sideline corrupted log
+362      boolean afterFirstAttempt = false;
+363      while (isRunning()) {
+364        // Don't sleep before first attempt
+365        if (afterFirstAttempt) {
+366          LOG.trace("Sleep {} ms after first lease recovery attempt.",
+367            waitBeforeRoll);
+368          Threads.sleepWithoutInterrupt(waitBeforeRoll);
+369        } else {
+370          afterFirstAttempt = true;
+371        }
+372        FileStatus[] oldLogs = getLogFiles();
+373        // Get Log-MaxID and recover lease on old logs
+374        try {
+375          flushLogId = initOldLogs(oldLogs);
+376        } catch (FileNotFoundException e) {
+377          LOG.warn("Someone else is active and deleted logs. retrying.", e);
+378          continue;
+379        }
+380
+381        // Create new state-log
+382        if (!rollWriter(flushLogId + 1)) {
+383          // someone else has already created this log
+384          LOG.debug("Someone else has already created log {}. Retrying.", flushLogId);
+385          continue;
+386        }
+387
+388        // We have the lease on the log
+389        oldLogs = getLogFiles();
+390        if (getMaxLogId(oldLogs) > flushLogId) {
+391          if (LOG.isDebugEnabled()) {
+392            LOG.debug("Someone else created new logs. Expected maxLogId " + flushLogId);
+393          }
+394          logs.getLast().removeFile(this.walArchiveDir);
+395          continue;
+396        }
+397
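The patch above changes `recoverLease()` so that every retry except the first sleeps for `waitBeforeRoll` milliseconds before contending for the lease again. As a minimal, standalone sketch of that retry shape (the names `tryAcquire` and the hardcoded `waitBeforeRoll` are illustrative stand-ins, not HBase API):

```java
import java.util.function.IntPredicate;

public class LeaseRecoverySketch {
  // HBase reads this from configuration; hardcoded here for the sketch.
  static final long waitBeforeRoll = 10; // ms

  /** Retries until tryAcquire succeeds; returns the number of attempts made. */
  static int recoverLease(IntPredicate tryAcquire) {
    boolean afterFirstAttempt = false;
    int attempts = 0;
    while (true) {
      if (afterFirstAttempt) {
        // Back off so concurrent masters fighting for the lease don't spin hot.
        try {
          Thread.sleep(waitBeforeRoll);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      } else {
        afterFirstAttempt = true; // don't sleep before the first attempt
      }
      attempts++;
      if (tryAcquire.test(attempts)) {
        return attempts;
      }
      // Lost the race (someone else rolled a new log): loop and retry.
    }
  }

  public static void main(String[] args) {
    // Succeed on the third attempt: three iterations, two sleeps.
    int attempts = recoverLease(n -> n >= 3);
    System.out.println("acquired after " + attempts + " attempts");
  }
}
```

The "skip the first sleep" flag matters in the common case: a single active master acquires the lease on its first pass with no added startup latency, while the backoff only kicks in under contention.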
[10/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/devapidocs/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html b/devapidocs/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html
index 8862e06..a60c9b6 100644
--- a/devapidocs/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html
+++ b/devapidocs/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.html
@@ -1428,7 +1428,7 @@
 WALS_PATH_FILTER
-private static final org.apache.hadoop.fs.PathFilter WALS_PATH_FILTER
+private static final org.apache.hadoop.fs.PathFilter WALS_PATH_FILTER
@@ -1437,7 +1437,7 @@
 FILE_STATUS_ID_COMPARATOR
-private static final Comparator<org.apache.hadoop.fs.FileStatus> FILE_STATUS_ID_COMPARATOR
+private static final Comparator<org.apache.hadoop.fs.FileStatus> FILE_STATUS_ID_COMPARATOR
@@ -1606,7 +1606,7 @@
 load
-public void load(ProcedureStore.ProcedureLoader loader)
+public void load(ProcedureStore.ProcedureLoader loader)
                throws IOException
 Description copied from interface: ProcedureStore
 Load the Procedures in the store.
@@ -1624,7 +1624,7 @@
 tryCleanupLogsOnLoad
-private void tryCleanupLogsOnLoad()
+private void tryCleanupLogsOnLoad()
@@ -1633,7 +1633,7 @@
 insert
-public void insert(Procedure proc,
+public void insert(Procedure proc,
                    Procedure[] subprocs)
 Description copied from interface: ProcedureStore
 When a procedure is submitted to the executor insert(proc, null) will be called.
@@ -1655,7 +1655,7 @@
 insert
-public void insert(Procedure[] procs)
+public void insert(Procedure[] procs)
 Description copied from interface: ProcedureStore
 Serialize a set of new procedures. These procedures are freshly submitted to the executor and each procedure
@@ -1672,7 +1672,7 @@
 update
-public void update(Procedure proc)
+public void update(Procedure proc)
 Description copied from interface: ProcedureStore
 The specified procedure was executed, and the new state should be written to the store.
@@ -1688,7 +1688,7 @@
 delete
-public void delete(long procId)
+public void delete(long procId)
 Description copied from interface: ProcedureStore
 The specified procId was removed from the executor, due to completion, abort or failure.
@@ -1705,7 +1705,7 @@
 delete
-public void delete(Procedure proc,
+public void delete(Procedure proc,
                    long[] subProcIds)
 Description copied from interface: ProcedureStore
 The parent procedure completed.
@@ -1723,7 +1723,7 @@
 delete
-public void delete(long[] procIds,
+public void delete(long[] procIds,
                    int offset,
                    int count)
 Description copied from interface: ProcedureStore
@@ -1744,7 +1744,7 @@
 delete
-private void delete(long[] procIds)
+private void delete(long[] procIds)
@@ -1753,7 +1753,7 @@
 acquireSlot
-private ByteSlot acquireSlot()
+private ByteSlot acquireSlot()
@@ -1762,7 +1762,7 @@
 releaseSlot
-private void releaseSlot(ByteSlot slot)
+private void releaseSlot(ByteSlot slot)
@@ -1771,7 +1771,7 @@
 pushData
-private long pushData(WALProcedureStore.PushType type,
+private long pushData(WALProcedureStore.PushType type,
                       ByteSlot slot,
                       long procId,
                       long[] subProcIds)
@@ -1783,7 +1783,7 @@
 updateStoreTracker
-private void updateStoreTracker(WALProcedureStore.PushType type,
+private void updateStoreTracker(WALProcedureStore.PushType type,
                                 long procId,
                                 long[] subProcIds)
@@ -1794,7 +1794,7 @@
 isSyncAborted
-private boolean isSyncAborted()
+private boolean isSyncAborted()
@@ -1803,7 +1803,7 @@
 syncLoop
-private void syncLoop()
+private void syncLoop()
               throws Throwable
 Throws:
@@ -1817,7 +1817,7 @@
 getSyncMetrics
-public ArrayList<WALProcedureStore.SyncMetrics> getSyncMetrics()
+public ArrayList<WALProcedureStore.SyncMetrics> getSyncMetrics()
[02/15] hbase-site git commit: Published site at 5a633adffead3b979f6e1a607994409978b0ea74.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/export_control.html
--
diff --git a/export_control.html b/export_control.html
index 84901d3..ff9c4e8 100644
--- a/export_control.html
+++ b/export_control.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase Export Control
@@ -331,7 +331,7 @@ for more details.
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/index.html
--
diff --git a/index.html b/index.html
index f285a6a..0db1dc6 100644
--- a/index.html
+++ b/index.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  Apache HBase™ Home
@@ -409,7 +409,7 @@ Apache HBase is an open-source, distributed, versioned, non-relational database
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/integration.html
--
diff --git a/integration.html b/integration.html
index 4fce7fa..b311669 100644
--- a/integration.html
+++ b/integration.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  CI Management
@@ -291,7 +291,7 @@
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/issue-tracking.html
--
diff --git a/issue-tracking.html b/issue-tracking.html
index 50c8a9a..5362f29 100644
--- a/issue-tracking.html
+++ b/issue-tracking.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  Issue Management
@@ -288,7 +288,7 @@
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/license.html
--
diff --git a/license.html b/license.html
index 71f9e43..d5b21d9 100644
--- a/license.html
+++ b/license.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  Project Licenses
@@ -491,7 +491,7 @@
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/mail-lists.html
--
diff --git a/mail-lists.html b/mail-lists.html
index 06653fa..cf25d66 100644
--- a/mail-lists.html
+++ b/mail-lists.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  Project Mailing Lists
@@ -341,7 +341,7 @@
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/metrics.html
--
diff --git a/metrics.html b/metrics.html
index 4629b1f..0881c4f 100644
--- a/metrics.html
+++ b/metrics.html
@@ -7,7 +7,7 @@
-
+
 Apache HBase  Apache HBase (TM) Metrics
@@ -459,7 +459,7 @@ export HBASE_REGIONSERVER_OPTS=$HBASE_JMX_OPTS -Dcom.sun.management.jmxrem
 The Apache Software Foundation. All rights reserved.
-  Last Published: 2018-04-12
+  Last Published: 2018-04-13
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/9808d50e/old_news.html
--
diff --git a/old_news.html b/old_news.html
index
hbase git commit: HBASE-20377 Deal with table in enabling and disabling state when modifying serial replication peer
Repository: hbase
Updated Branches: refs/heads/master 826909a59 -> 5a633adff

HBASE-20377 Deal with table in enabling and disabling state when modifying serial replication peer

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5a633adf
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5a633adf
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5a633adf
Branch: refs/heads/master
Commit: 5a633adffead3b979f6e1a607994409978b0ea74
Parents: 826909a
Author: zhangduo
Authored: Fri Apr 13 16:21:03 2018 +0800
Committer: zhangduo
Committed: Fri Apr 13 20:33:29 2018 +0800
--
 .../apache/hadoop/hbase/client/TableState.java  | 14 ++++
 .../master/procedure/DisableTableProcedure.java |  6 +-
 .../master/replication/ModifyPeerProcedure.java | 84 +++---
 .../TestAddToSerialReplicationPeer.java         | 74 ++++
 4 files changed, 146 insertions(+), 32 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/5a633adf/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
index cc3b765..40612e9 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
@@ -104,6 +104,13 @@ public class TableState {
   }

   /**
+   * @return True if table is {@link State#ENABLING}.
+   */
+  public boolean isEnabling() {
+    return isInStates(State.ENABLING);
+  }
+
+  /**
    * @return True if {@link State#ENABLED} or {@link State#ENABLING}
    */
   public boolean isEnabledOrEnabling() {
@@ -118,6 +125,13 @@ public class TableState {
   }

   /**
+   * @return True if table is disabling.
+   */
+  public boolean isDisabling() {
+    return isInStates(State.DISABLING);
+  }
+
+  /**
    * @return True if {@link State#DISABLED} or {@link State#DISABLED}
    */
   public boolean isDisabledOrDisabling() {
http://git-wip-us.apache.org/repos/asf/hbase/blob/5a633adf/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
index f5caff7..685a73e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.master.procedure;

 import java.io.IOException;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotEnabledException;
@@ -107,7 +108,7 @@ public class DisableTableProcedure
         break;
       case DISABLE_TABLE_MARK_REGIONS_OFFLINE:
         addChildProcedure(env.getAssignmentManager().createUnassignProcedures(tableName));
-        setNextState(DisableTableState.DISABLE_TABLE_SET_DISABLED_TABLE_STATE);
+        setNextState(DisableTableState.DISABLE_TABLE_ADD_REPLICATION_BARRIER);
         break;
       case DISABLE_TABLE_ADD_REPLICATION_BARRIER:
         if (env.getMasterServices().getTableDescriptors().get(tableName)
@@ -119,7 +120,8 @@ public class DisableTableProcedure
             .getRegionsOfTable(tableName)) {
               long maxSequenceId =
                 WALSplitter.getMaxRegionSequenceId(mfs.getFileSystem(), mfs.getRegionDir(region));
-              mutator.mutate(MetaTableAccessor.makePutForReplicationBarrier(region, maxSequenceId,
+              long openSeqNum = maxSequenceId > 0 ? maxSequenceId + 1 : HConstants.NO_SEQNUM;
+              mutator.mutate(MetaTableAccessor.makePutForReplicationBarrier(region, openSeqNum,
                 EnvironmentEdgeManager.currentTime()));
             }
           }
http://git-wip-us.apache.org/repos/asf/hbase/blob/5a633adf/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java
index
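The DisableTableProcedure hunk above stops writing the raw `maxSequenceId` into the replication barrier and instead writes the sequence id the region would open with next: one past the max recovered id, or `HConstants.NO_SEQNUM` when the region has no edits. A self-contained sketch of just that computation (the `NO_SEQNUM = -1` value mirrors HBase's constant; the class name is illustrative):

```java
public class OpenSeqNumSketch {
  // Mirrors org.apache.hadoop.hbase.HConstants.NO_SEQNUM, hardcoded so the
  // sketch compiles without HBase on the classpath.
  static final long NO_SEQNUM = -1L;

  /**
   * The barrier should record the next sequence id the region will use,
   * i.e. one past the max written id; a region that never wrote gets NO_SEQNUM.
   */
  static long openSeqNum(long maxSequenceId) {
    return maxSequenceId > 0 ? maxSequenceId + 1 : NO_SEQNUM;
  }

  public static void main(String[] args) {
    System.out.println(openSeqNum(41)); // 42: one past the last written id
    System.out.println(openSeqNum(0));  // -1: no edits ever written
  }
}
```

Recording the *open* sequence number rather than the last-written one keeps the barrier consistent with what a freshly opened region would report, which is what serial replication compares against.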
hbase git commit: HBASE-20377 Deal with table in enabling and disabling state when modifying serial replication peer
Repository: hbase
Updated Branches: refs/heads/branch-2 b1901c9a1 -> ae8a21204

HBASE-20377 Deal with table in enabling and disabling state when modifying serial replication peer

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ae8a2120
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ae8a2120
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ae8a2120
Branch: refs/heads/branch-2
Commit: ae8a21204d788f4af4bbcbdb10680431735f7f3d
Parents: b1901c9
Author: zhangduo
Authored: Fri Apr 13 16:21:03 2018 +0800
Committer: zhangduo
Committed: Fri Apr 13 20:33:25 2018 +0800
--
 .../apache/hadoop/hbase/client/TableState.java  | 14 ++++
 .../master/procedure/DisableTableProcedure.java |  6 +-
 .../master/replication/ModifyPeerProcedure.java | 84 +++---
 .../TestAddToSerialReplicationPeer.java         | 74 ++++
 4 files changed, 146 insertions(+), 32 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/ae8a2120/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
index cc3b765..40612e9 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java
@@ -104,6 +104,13 @@ public class TableState {
   }

   /**
+   * @return True if table is {@link State#ENABLING}.
+   */
+  public boolean isEnabling() {
+    return isInStates(State.ENABLING);
+  }
+
+  /**
    * @return True if {@link State#ENABLED} or {@link State#ENABLING}
    */
   public boolean isEnabledOrEnabling() {
@@ -118,6 +125,13 @@ public class TableState {
   }

   /**
+   * @return True if table is disabling.
+   */
+  public boolean isDisabling() {
+    return isInStates(State.DISABLING);
+  }
+
+  /**
    * @return True if {@link State#DISABLED} or {@link State#DISABLED}
    */
   public boolean isDisabledOrDisabling() {
http://git-wip-us.apache.org/repos/asf/hbase/blob/ae8a2120/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
index f5caff7..685a73e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.master.procedure;

 import java.io.IOException;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotEnabledException;
@@ -107,7 +108,7 @@ public class DisableTableProcedure
         break;
       case DISABLE_TABLE_MARK_REGIONS_OFFLINE:
         addChildProcedure(env.getAssignmentManager().createUnassignProcedures(tableName));
-        setNextState(DisableTableState.DISABLE_TABLE_SET_DISABLED_TABLE_STATE);
+        setNextState(DisableTableState.DISABLE_TABLE_ADD_REPLICATION_BARRIER);
         break;
       case DISABLE_TABLE_ADD_REPLICATION_BARRIER:
         if (env.getMasterServices().getTableDescriptors().get(tableName)
@@ -119,7 +120,8 @@ public class DisableTableProcedure
             .getRegionsOfTable(tableName)) {
               long maxSequenceId =
                 WALSplitter.getMaxRegionSequenceId(mfs.getFileSystem(), mfs.getRegionDir(region));
-              mutator.mutate(MetaTableAccessor.makePutForReplicationBarrier(region, maxSequenceId,
+              long openSeqNum = maxSequenceId > 0 ? maxSequenceId + 1 : HConstants.NO_SEQNUM;
+              mutator.mutate(MetaTableAccessor.makePutForReplicationBarrier(region, openSeqNum,
                 EnvironmentEdgeManager.currentTime()));
             }
           }
http://git-wip-us.apache.org/repos/asf/hbase/blob/ae8a2120/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java
index
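The new `isEnabling()` and `isDisabling()` predicates added to TableState above are thin wrappers over an `isInStates` membership check on the `State` enum. A minimal sketch of that pattern, assuming a stand-in class (`TableStateSketch` is illustrative, not the real `org.apache.hadoop.hbase.client.TableState`):

```java
import java.util.Arrays;

public class TableStateSketch {
  enum State { ENABLED, ENABLING, DISABLED, DISABLING }

  private final State state;

  TableStateSketch(State state) { this.state = state; }

  // Each isXxx() predicate is a membership test over one or more states.
  boolean isInStates(State... target) {
    return Arrays.asList(target).contains(state);
  }

  boolean isEnabling()  { return isInStates(State.ENABLING); }
  boolean isDisabling() { return isInStates(State.DISABLING); }
  boolean isEnabledOrEnabling() { return isInStates(State.ENABLED, State.ENABLING); }

  public static void main(String[] args) {
    TableStateSketch s = new TableStateSketch(State.ENABLING);
    System.out.println(s.isEnabling());          // true
    System.out.println(s.isDisabling());         // false
    System.out.println(s.isEnabledOrEnabling()); // true
  }
}
```

Exposing the single-state predicates lets ModifyPeerProcedure distinguish a table that is mid-transition (ENABLING/DISABLING) from one that has settled, which is exactly the case this commit handles.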
hbase git commit: HBASE-20344 Fix asciidoc warnings
Repository: hbase
Updated Branches: refs/heads/master d59a6c816 -> 826909a59

HBASE-20344 Fix asciidoc warnings

Signed-off-by: Sean Busbey

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/826909a5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/826909a5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/826909a5
Branch: refs/heads/master
Commit: 826909a59cf27adb1578c627e743233b378922a5
Parents: d59a6c8
Author: Peter Somogyi
Authored: Wed Apr 4 13:36:48 2018 +0200
Committer: Peter Somogyi
Committed: Fri Apr 13 11:32:40 2018 +0200
--
 src/main/asciidoc/_chapters/backup_restore.adoc  | 106 +--
 src/main/asciidoc/_chapters/compression.adoc     |  20
 src/main/asciidoc/_chapters/getting_started.adoc |   7 --
 src/main/asciidoc/_chapters/hbase_mob.adoc       |   4 -
 src/main/asciidoc/_chapters/ops_mgt.adoc         |   3 -
 src/main/asciidoc/_chapters/performance.adoc     |   2 -
 src/main/asciidoc/_chapters/schema_design.adoc   |   2 -
 src/main/asciidoc/_chapters/security.adoc        |  12 ---
 src/main/asciidoc/_chapters/shell.adoc           |   2 -
 9 files changed, 53 insertions(+), 105 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/826909a5/src/main/asciidoc/_chapters/backup_restore.adoc
--
diff --git a/src/main/asciidoc/_chapters/backup_restore.adoc b/src/main/asciidoc/_chapters/backup_restore.adoc
index b02af41..cb7fced 100644
--- a/src/main/asciidoc/_chapters/backup_restore.adoc
+++ b/src/main/asciidoc/_chapters/backup_restore.adoc
@@ -175,7 +175,7 @@ and its options. The below information is captured in this help message for each
 // hbase backup create
 [[br.creating.complete.backup]]
-### Creating a Backup Image
+=== Creating a Backup Image
 [NOTE]
@@ -204,7 +204,7 @@ dataset with a restore operation, having the backup ID readily available can sav
 [[br.create.positional.cli.arguments]]
- Positional Command-Line Arguments
+ Positional Command-Line Arguments
 _type_::
   The type of backup to execute: _full_ or _incremental_. As a reminder, an _incremental_ backup requires a _full_ backup to
@@ -215,7 +215,7 @@ _backup_path_::
   are _hdfs:_, _webhdfs:_, _gpfs:_, and _s3fs:_.
 [[br.create.named.cli.arguments]]
- Named Command-Line Arguments
+ Named Command-Line Arguments
 _-t _::
   A comma-separated list of tables to back up. If no tables are specified, all tables are backed up. No regular-expression or
@@ -242,7 +242,7 @@ _-q _::
   is useful to prevent backup tasks from stealing resources away from other MapReduce jobs of high importance.
 [[br.usage.examples]]
- Example usage
+ Example usage
 [source]
@@ -255,7 +255,7 @@ in the path _/data/backup_. The _-w_ option specifies that no more than three pa
 // hbase backup restore
 [[br.restoring.backup]]
-### Restoring a Backup Image
+=== Restoring a Backup Image
 Run the following command as an HBase superuser. You can only restore a backup on a running HBase cluster
 because the data must be redistributed the RegionServers for the operation to complete successfully.
@@ -266,7 +266,7 @@ hbase restore
 [[br.restore.positional.args]]
- Positional Command-Line Arguments
+ Positional Command-Line Arguments
 _backup_path_::
   The _backup_path_ argument specifies the full filesystem URI of where to store the backup image. Valid prefixes are
@@ -277,7 +277,7 @@ _backup_id_::
 [[br.restore.named.args]]
- Named Command-Line Arguments
+ Named Command-Line Arguments
 _-t _::
   A comma-separated list of tables to restore. See < > for more
@@ -304,7 +304,7 @@ _-o_::
 [[br.restore.usage]]
- Example of Usage
+ Example of Usage
 [source]
@@ -319,7 +319,7 @@ This command restores two tables of an incremental backup image. In this example
 // hbase backup merge
 [[br.merge.backup]]
-### Merging Incremental Backup Images
+=== Merging Incremental Backup Images
 This command can be used to merge two or more incremental backup images into a single incremental backup image.
 This can be used to consolidate multiple, small incremental backup images into a single
@@ -332,18 +332,18 @@
 $ hbase backup merge
 [[br.merge.backup.positional.cli.arguments]]
- Positional Command-Line Arguments
+ Positional Command-Line Arguments
 _backup_ids_::
   A comma-separated list of incremental backup image IDs that are to be combined into a single image.
 [[br.merge.backup.named.cli.arguments]]
- Named Command-Line Arguments
+ Named Command-Line Arguments
 None.
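The fix in this commit replaces markdown-style ATX headings (`###`), which asciidoctor warns about, with native asciidoc section titles. For reference, asciidoc encodes section level as the length of a run of `=` characters, so the visible `###`-to-`===` changes above follow this scheme (the chapter title below is an illustrative example, not a line from the diff):

```asciidoc
// Asciidoc section titles: level = number of '=' signs.
// Markdown-style '###'/'####' headings are not valid section titles in .adoc
// files and trigger asciidoctor warnings, which HBASE-20344 cleans up.
== Backup and Restore
=== Creating a Backup Image
==== Positional Command-Line Arguments
```

This is a config/markup fragment rather than executable code; the heading text matches the sections touched in `backup_restore.adoc` above.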