This is an automated email from the ASF dual-hosted git repository.
stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-operator-tools.git
The following commit(s) were added to refs/heads/master by this push:
new 0a21a70 HBASE-21322 Add a scheduleServerCrashProcedure() API to HbckService (#11)
0a21a70 is described below
commit 0a21a70571b7c6a2658b241a734cc51b84e9968d
Author: Michael Stack <[email protected]>
AuthorDate: Thu Jul 18 08:26:40 2019 -0700
HBASE-21322 Add a scheduleServerCrashProcedure() API to HbckService (#11)
* HBASE-21322 Add a scheduleServerCrashProcedure() API to HbckService
Adds server version check so we fail fast and complain if remote
server does not support feature.
Refactor so we test server version before we use feature/command.
Named feature scheduleRecoveries
Add a new Version class that has Version math and check in it.
Signed-off-by: Wellington Chevreuil <[email protected]>
Signed-off-by: Guanghao Zhang <[email protected]>
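
The heart of the change is the pre-flight version check: before a command runs, HBCK2 now compares the remote server's version against a set of per-release-line minimums (MINIMUM_HBCK2_VERSION = {"2.0.3", "2.1.1", "2.2.0", "3.0.0"}, with a stricter {"2.0.3", "2.1.2", "2.2.0", "3.0.0"} for scheduleRecoveries). The new Version class body is not included in this message, so the following is only a sketch of the semantics implied by the Version.check(serverVersion, thresholds) call visible in checkHBCKSupport() below; the class name and helpers here are hypothetical, not the committed code, and version components are assumed to be purely numeric.

```java
// Hypothetical sketch of the per-release-line check described above; NOT the
// committed org.apache.hbase.Version class.
public class VersionCheckSketch {
  /** Compare dotted version strings numerically, e.g. "2.1.10" > "2.1.2". */
  static int compare(String a, String b) {
    String[] as = a.split("\\.");
    String[] bs = b.split("\\.");
    int n = Math.max(as.length, bs.length);
    for (int i = 0; i < n; i++) {
      int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
      int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
      if (ai != bi) {
        return Integer.compare(ai, bi);
      }
    }
    return 0;
  }

  /**
   * True if serverVersion matches or exceeds the threshold that shares its
   * major.minor release line; a release line absent from the thresholds is
   * treated as unsupported.
   */
  static boolean check(String serverVersion, String... thresholds) {
    for (String t : thresholds) {
      String line = t.substring(0, t.lastIndexOf('.') + 1); // e.g. "2.1."
      if (serverVersion.startsWith(line)) {
        return compare(serverVersion, t) >= 0;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // 2.1.1 predates the 2.1-line scheduleRecoveries threshold 2.1.2.
    System.out.println(check("2.1.1", "2.0.3", "2.1.2", "2.2.0", "3.0.0")); // false
    // 2.2.1 exceeds the 2.2-line threshold 2.2.0.
    System.out.println(check("2.2.1", "2.0.3", "2.1.2", "2.2.0", "3.0.0")); // true
  }
}
```

Under this reading, a 2.1.1 Master passes the general HBCK2 check (threshold 2.1.1) but is refused scheduleRecoveries (threshold 2.1.2), matching the README note that the command needs 2.0.3, 2.1.2, 2.2.0 or newer.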
---
hbase-hbck2/README.md | 115 ++++----
.../src/main/java/org/apache/hbase/HBCK2.java | 301 ++++++++++++---------
.../src/main/java/org/apache/hbase/Version.java | 115 ++++++++
.../src/test/java/org/apache/hbase/TestHBCK2.java | 119 ++++----
.../apache/hbase/TestHBCKCommandLineParsing.java | 10 +-
.../org/apache/hbase/TestSchedulingRecoveries.java | 84 ++++++
.../test/java/org/apache/hbase/TestVersion.java | 76 ++++++
pom.xml | 2 +-
8 files changed, 582 insertions(+), 240 deletions(-)
diff --git a/hbase-hbck2/README.md b/hbase-hbck2/README.md
index cbca367..9e6e6f6 100644
--- a/hbase-hbck2/README.md
+++ b/hbase-hbck2/README.md
@@ -18,7 +18,8 @@
# Apache HBase HBCK2 Tool
-HBCK2 is the successor to [hbck](https://hbase.apache.org/book.html#hbck.in.depth), the hbase-1.x fixup
-tool (A.K.A _hbck1_). Use it in place of _hbck1_ making repairs against hbase-2.x installs.
+HBCK2 is the successor to [hbck](https://hbase.apache.org/book.html#hbck.in.depth),
+the hbase-1.x fixup tool (A.K.A _hbck1_). Use it in place of _hbck1_ making
+repairs against hbase-2.x installs.
## _hbck1_
The _hbck_ tool that ships with hbase-1.x (A.K.A _hbck1_) should not be run against an
@@ -52,25 +53,26 @@ _HBCK2_ to generate the _HBCK2_ jar file, running the below will dump out the _H
~~~~
```
+usage: HBCK2 [OPTIONS] COMMAND <ARGS>
Options:
- -d,--debug run with debug output
- -h,--help output this help message
- -p,--hbase.zookeeper.property.clientPort <arg> port of target hbase ensemble
- -q,--hbase.zookeeper.quorum <arg> ensemble of target hbase
- -s,--skip skip hbase version check/PleaseHoldException/Master initializing
- -v,--version this hbck2 version
- -z,--zookeeper.znode.parent <arg> parent znode of target hbase
-
-Commands:
+ -d,--debug run with debug output
+ -h,--help output this help message
+ -p,--hbase.zookeeper.property.clientPort <arg> port of hbase ensemble
+ -q,--hbase.zookeeper.quorum <arg> hbase ensemble
+ -s,--skip skip hbase version check
+ (PleaseHoldException)
+ -v,--version this hbck2 version
+ -z,--zookeeper.znode.parent <arg> parent znode of hbase
+ ensemble
+Command:
assigns [OPTIONS] <ENCODED_REGIONNAME>...
Options:
-o,--override override ownership by another procedure
- A 'raw' assign that can be used even during Master initialization
- (if the -skip flag is specified). Skirts Coprocessors. Pass one
- or more encoded region names. 1588230740 is the hard-coded name
- for the hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is
- an example of what a user-space encoded region name looks like.
- For example:
+ A 'raw' assign that can be used even during Master initialization (if
+ the -skip flag is specified). Skirts Coprocessors. Pass one or more
+ encoded region names. 1588230740 is the hard-coded name for the
+ hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an example of
+ what a user-space encoded region name looks like. For example:
$ HBCK2 assign 1588230740 de00010733901a05f5a2a3a382e27dd4
Returns the pid(s) of the created AssignProcedure(s) or -1 if none.
@@ -79,41 +81,27 @@ Commands:
-o,--override override if procedure is running/stuck
-r,--recursive bypass parent and its children. SLOW! EXPENSIVE!
-w,--lockWait milliseconds to wait before giving up; default=1
- Pass one (or more) procedure 'pid's to skip to procedure finish.
- Parent of bypassed procedure will also be skipped to the finish.
- Entities will be left in an inconsistent state and will require
- manual fixup. May need Master restart to clear locks still held.
- Bypass fails if procedure has children. Add 'recursive' if all
- you have is a parent pid to finish parent and children. This
- is SLOW, and dangerous so use selectively. Does not always work.
-
- unassigns <ENCODED_REGIONNAME>...
+ Pass one (or more) procedure 'pid's to skip to procedure finish. Parent
+ of bypassed procedure will also be skipped to the finish. Entities will
+ be left in an inconsistent state and will require manual fixup. May
+ need Master restart to clear locks still held. Bypass fails if
+ procedure has children. Add 'recursive' if all you have is a parent pid
+ to finish parent and children. This is SLOW, and dangerous so use
+ selectively. Does not always work.
+
+ filesystem [OPTIONS] [<TABLENAME>...]
Options:
- -o,--override override ownership by another procedure
- A 'raw' unassign that can be used even during Master initialization
- (if the -skip flag is specified). Skirts Coprocessors. Pass one or
- more encoded region names. 1588230740 is the hard-coded name for
- the hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an
- example of what a userspace encoded region name looks like.
- For example:
- $ HBCK2 unassign 1588230740 de00010733901a05f5a2a3a382e27dd4
- Returns the pid(s) of the created UnassignProcedure(s) or -1 if none.
-
- setTableState <TABLENAME> <STATE>
- Possible table states: ENABLED, DISABLED, DISABLING, ENABLING
- To read current table state, in the hbase shell run:
- hbase> get 'hbase:meta', '<TABLENAME>', 'table:state'
- A value of \x08\x00 == ENABLED, \x08\x01 == DISABLED, etc.
- Can also run a 'describe "<TABLENAME>"' at the shell prompt.
- An example making table name 'user' ENABLED:
- $ HBCK2 setTableState users ENABLED
- Returns whatever the previous table state was.
+ -f, --fix sideline corrupt hfiles, bad links and references.
+ Report corrupt hfiles and broken links. Pass '--fix' to sideline
+ corrupt files and links. Pass one or more tablenames to narrow the
+ checkup. Default checks all tables. Modified regions will need to be
+ reopened to pick-up changes.
setRegionState <ENCODED_REGIONNAME> <STATE>
Possible region states:
- OFFLINE, OPENING, OPEN, CLOSING, CLOSED, SPLITTING, SPLIT,
- FAILED_OPEN, FAILED_CLOSE, MERGING, MERGED, SPLITTING_NEW, MERGING_NEW,
- ABNORMALLY_CLOSED
+ OFFLINE, OPENING, OPEN, CLOSING, CLOSED, SPLITTING, SPLIT,
+ FAILED_OPEN, FAILED_CLOSE, MERGING, MERGED, SPLITTING_NEW,
+ MERGING_NEW, ABNORMALLY_CLOSED
WARNING: This is a very risky option intended for use as last resort.
Example scenarios include unassigns/assigns that can't move forward
because region is in an inconsistent state in 'hbase:meta'. For
@@ -126,6 +114,39 @@ Commands:
setting region 'de00010733901a05f5a2a3a382e27dd4' to CLOSING:
$ HBCK2 setRegionState de00010733901a05f5a2a3a382e27dd4 CLOSING
Returns "0" if region state changed and "1" otherwise.
+
+ setTableState <TABLENAME> <STATE>
+ Possible table states: ENABLED, DISABLED, DISABLING, ENABLING
+ To read current table state, in the hbase shell run:
+ hbase> get 'hbase:meta', '<TABLENAME>', 'table:state'
+ A value of \x08\x00 == ENABLED, \x08\x01 == DISABLED, etc.
+ Can also run a 'describe "<TABLENAME>"' at the shell prompt.
+ An example making table name 'user' ENABLED:
+ $ HBCK2 setTableState users ENABLED
+ Returns whatever the previous table state was.
+
+ scheduleRecovery <SERVERNAME>...
+ Schedule ServerCrashProcedure(SCP) for list of RegionServers. Format
+ server name as '<HOSTNAME>,<PORT>,<STARTCODE>' (See HBase UI/logs).
+ Example using RegionServer 'a.example.org,29100,1540348649479':
+ $ HBCK2 scheduleRecovery a.example.org,29100,1540348649479
+ Returns the pid(s) of the created ServerCrashProcedure(s) or -1 if
+ no procedure created (see master logs for why not).
+ Command only supported in hbase versions 2.0.3, 2.1.2, 2.2.0 (or newer).
+
+ unassigns <ENCODED_REGIONNAME>...
+ Options:
+ -o,--override override ownership by another procedure
+ A 'raw' unassign that can be used even during Master initialization
+ (if the -skip flag is specified). Skirts Coprocessors. Pass one or
+ more encoded region names. 1588230740 is the hard-coded name for the
+ hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an example
+ of what a userspace encoded region name looks like. For example:
+ $ HBCK2 unassign 1588230740 de00010733901a05f5a2a3a382e27dd4
+ Returns the pid(s) of the created UnassignProcedure(s) or -1 if none.
+
+ SEE ALSO, org.apache.hbase.hbck1.OfflineMetaRepair, the offline
+ hbase:meta tool. See the HBCK2 README for how to use.
```
## _HBCK2_ Overview
diff --git a/hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java b/hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
index 32d1e18..e4657e7 100644
--- a/hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
+++ b/hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
@@ -21,6 +21,7 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
+import java.util.ArrayList;
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;
@@ -33,10 +34,10 @@ import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ClusterConnection;
-import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Hbck;
import org.apache.hadoop.hbase.client.Put;
@@ -47,10 +48,7 @@ import org.apache.hadoop.hbase.client.TableState;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.master.RegionState;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.VersionInfo;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -70,13 +68,10 @@ import org.apache.hbase.thirdparty.org.apache.commons.cli.ParseException;
*/
// TODO:
// + On assign, can we look to see if existing assign and if so fail until cancelled?
-// + Add test of Master version to ensure it supports hbck functionality.
-// ++ Hard. Server doesn't volunteer its version. Need to read the status? HBASE-20225
// + Doc how we just take pointer to zk ensemble... If want to do more exotic config. on client,
// then add a hbase-site.xml onto CLASSPATH for this tool to pick up.
// + Add --version
-// + Add emitting what is supported against remote server?
-public class HBCK2 extends Configured implements Tool {
+public class HBCK2 extends Configured implements org.apache.hadoop.util.Tool {
private static final Logger LOG = LogManager.getLogger(HBCK2.class);
private static final int EXIT_SUCCESS = 0;
static final int EXIT_FAILURE = 1;
@@ -88,10 +83,11 @@ public class HBCK2 extends Configured implements Tool {
private static final String FILESYSTEM = "filesystem";
private static final String VERSION = "version";
private static final String SET_REGION_STATE = "setRegionState";
+ private static final String SCHEDULE_RECOVERIES = "scheduleRecoveries";
private Configuration conf;
- private static final String TWO_POINT_ONE = "2.1.0";
- private static final String MININUM_VERSION = "2.0.3";
+ static String [] MINIMUM_HBCK2_VERSION = {"2.0.3", "2.1.1", "2.2.0", "3.0.0"};
private boolean skipCheck = false;
+
/**
* Wait 1ms on lock by default.
*/
@@ -99,73 +95,69 @@ public class HBCK2 extends Configured implements Tool {
/**
* Check for HBCK support.
+ * Expects created connection.
+ * @param supportedVersions list of zero or more supported versions.
*/
- private void checkHBCKSupport(Connection connection) throws IOException {
- if(skipCheck){
- LOG.info("hbck support check skipped");
+ void checkHBCKSupport(ClusterConnection connection, String cmd, String ... supportedVersions)
+ throws IOException {
+ if (skipCheck) {
+ LOG.info("Skipped {} command version check; 'skip' set", cmd);
return;
}
try (Admin admin = connection.getAdmin()) {
- checkVersion(admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.HBASE_VERSION)).
- getHBaseVersion());
- }
- }
-
- static void checkVersion(final String versionStr) {
- if (VersionInfo.compareVersion(MININUM_VERSION, versionStr) > 0) {
- throw new UnsupportedOperationException("Requires " + MININUM_VERSION + " at least.");
- }
- // except 2.1.0 didn't ship with support
- if (VersionInfo.compareVersion(TWO_POINT_ONE, versionStr) == 0) {
- throw new UnsupportedOperationException(TWO_POINT_ONE + " has no support for hbck2");
+ String serverVersion = admin.
+ getClusterMetrics(EnumSet.of(ClusterMetrics.Option.HBASE_VERSION)).getHBaseVersion();
+ String [] thresholdVersions = supportedVersions == null?
+ MINIMUM_HBCK2_VERSION: supportedVersions;
+ boolean supported = Version.check(serverVersion, thresholdVersions);
+ if (!supported) {
+ throw new UnsupportedOperationException(cmd + " not supported on server version=" +
+ serverVersion + "; needs at least a server that matches or exceeds " +
+ Arrays.toString(thresholdVersions));
+ }
}
}
- TableState setTableState(TableName tableName, TableState.State state) throws IOException {
- try (ClusterConnection conn =
- (ClusterConnection) ConnectionFactory.createConnection(getConf())) {
- checkHBCKSupport(conn);
- try (Hbck hbck = conn.getHbck()) {
- return hbck.setTableStateInMeta(new TableState(tableName, state));
- }
- }
+ TableState setTableState(Hbck hbck, TableName tableName, TableState.State state)
+ throws IOException {
+ return hbck.setTableStateInMeta(new TableState(tableName, state));
}
- int setRegionState(String region, RegionState.State newState)
+ int setRegionState(ClusterConnection connection, String region,
+ RegionState.State newState)
throws IOException {
- if(newState==null){
+ if (newState == null) {
throw new IllegalArgumentException("State can't be null.");
}
- try(Connection connection = ConnectionFactory.createConnection(getConf())){
- RegionState.State currentState = null;
- Table table = connection.getTable(TableName.valueOf("hbase:meta"));
- RowFilter filter = new RowFilter(CompareOperator.EQUAL, new SubstringComparator(region));
- Scan scan = new Scan();
- scan.setFilter(filter);
- Result result = table.getScanner(scan).next();
- if(result!=null){
- byte[] currentStateValue = result.getValue(HConstants.CATALOG_FAMILY,
- HConstants.STATE_QUALIFIER);
- if(currentStateValue==null){
- System.out.println("WARN: Region state info on meta was NULL");
- }else {
- currentState = RegionState.State.valueOf(Bytes.toString(currentStateValue));
- }
- Put put = new Put(result.getRow());
- put.addColumn(HConstants.CATALOG_FAMILY, HConstants.STATE_QUALIFIER,
- Bytes.toBytes(newState.name()));
- table.put(put);
- System.out.println("Changed region " + region + " STATE from "
- + currentState + " to " + newState);
- return EXIT_SUCCESS;
+ RegionState.State currentState = null;
+ Table table = connection.getTable(TableName.valueOf("hbase:meta"));
+ RowFilter filter = new RowFilter(CompareOperator.EQUAL, new SubstringComparator(region));
+ Scan scan = new Scan();
+ scan.setFilter(filter);
+ Result result = table.getScanner(scan).next();
+ if (result != null) {
+ byte[] currentStateValue = result.getValue(HConstants.CATALOG_FAMILY,
+ HConstants.STATE_QUALIFIER);
+ if (currentStateValue == null) {
+ System.out.println("WARN: Region state info on meta was NULL");
} else {
- System.out.println("ERROR: Could not find region " + region + " in meta.");
+ currentState = RegionState.State.valueOf(
+ org.apache.hadoop.hbase.util.Bytes.toString(currentStateValue));
}
+ Put put = new Put(result.getRow());
+ put.addColumn(HConstants.CATALOG_FAMILY, HConstants.STATE_QUALIFIER,
+ org.apache.hadoop.hbase.util.Bytes.toBytes(newState.name()));
+ table.put(put);
+ System.out.println("Changed region " + region + " STATE from "
+ + currentState + " to " + newState);
+ return EXIT_SUCCESS;
+ } else {
+ System.out.println("ERROR: Could not find region " + region + " in meta.");
}
return EXIT_FAILURE;
}
- List<Long> assigns(String [] args) throws IOException {
+ List<Long> assigns(Hbck hbck, String [] args) throws IOException {
Options options = new Options();
Option override = Option.builder("o").longOpt("override").build();
options.addOption(override);
@@ -179,16 +171,10 @@ public class HBCK2 extends Configured implements Tool {
return null;
}
boolean overrideFlag = commandLine.hasOption(override.getOpt());
- try (ClusterConnection conn =
- (ClusterConnection) ConnectionFactory.createConnection(getConf())) {
- checkHBCKSupport(conn);
- try (Hbck hbck = conn.getHbck()) {
- return hbck.assigns(commandLine.getArgList(), overrideFlag);
- }
- }
+ return hbck.assigns(commandLine.getArgList(), overrideFlag);
}
- List<Long> unassigns(String [] args) throws IOException {
+ List<Long> unassigns(Hbck hbck, String [] args) throws IOException {
Options options = new Options();
Option override = Option.builder("o").longOpt("override").build();
options.addOption(override);
@@ -202,20 +188,13 @@ public class HBCK2 extends Configured implements Tool {
return null;
}
boolean overrideFlag = commandLine.hasOption(override.getOpt());
- try (ClusterConnection conn =
- (ClusterConnection) ConnectionFactory.createConnection(getConf())) {
- checkHBCKSupport(conn);
- try (Hbck hbck = conn.getHbck()) {
- return hbck.unassigns(commandLine.getArgList(), overrideFlag);
- }
- }
+ return hbck.unassigns(commandLine.getArgList(), overrideFlag);
}
/**
* @return List of results OR null if failed to run.
*/
- private List<Boolean> bypass(String[] args)
- throws IOException {
+ private List<Boolean> bypass(String[] args) throws IOException {
// Bypass has two options....
Options options = new Options();
// See usage for 'help' on these options.
@@ -246,14 +225,26 @@ public class HBCK2 extends Configured implements Tool {
boolean overrideFlag = commandLine.hasOption(override.getOpt());
boolean recursiveFlag = commandLine.hasOption(override.getOpt());
List<Long> pids = Arrays.stream(pidStrs).map(Long::valueOf).collect(Collectors.toList());
- try (ClusterConnection c = (ClusterConnection) ConnectionFactory.createConnection(getConf())) {
- checkHBCKSupport(c);
- try (Hbck hbck = c.getHbck()) {
- return hbck.bypassProcedure(pids, lockWait, overrideFlag, recursiveFlag);
- }
+ try (ClusterConnection connection = connect(); Hbck hbck = connection.getHbck()) {
+ checkHBCKSupport(connection, BYPASS);
+ return hbck.bypassProcedure(pids, lockWait, overrideFlag, recursiveFlag);
}
}
+ List<Long> scheduleRecoveries(Hbck hbck, String[] args) throws IOException {
+ List<HBaseProtos.ServerName> serverNames = new ArrayList<>();
+ for (String serverName: args) {
+ serverNames.add(parseServerName(serverName));
+ }
+ return hbck.scheduleServerCrashProcedure(serverNames);
+ }
+
+ private HBaseProtos.ServerName parseServerName(String serverName) {
+ ServerName sn = ServerName.parseServerName(serverName);
+ return HBaseProtos.ServerName.newBuilder().setHostName(sn.getHostname()).
+ setPort(sn.getPort()).setStartCode(sn.getStartcode()).build();
+ }
+
/**
* Read property from hbck2.properties file.
*/
@@ -269,17 +260,15 @@ public class HBCK2 extends Configured implements Tool {
// NOTE: List commands below alphabetically!
StringWriter sw = new StringWriter();
PrintWriter writer = new PrintWriter(sw);
- writer.println();
- writer.println("Commands:");
+ writer.println("Command:");
writer.println(" " + ASSIGNS + " [OPTIONS] <ENCODED_REGIONNAME>...");
writer.println(" Options:");
writer.println(" -o,--override override ownership by another procedure");
- writer.println(" A 'raw' assign that can be used even during Master initialization");
- writer.println(" (if the -skip flag is specified). Skirts Coprocessors. Pass one");
- writer.println(" or more encoded region names. 1588230740 is the hard-coded name");
- writer.println(" for the hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is");
- writer.println(" an example of what a user-space encoded region name looks like.");
- writer.println(" For example:");
+ writer.println(" A 'raw' assign that can be used even during Master initialization (if");
+ writer.println(" the -skip flag is specified). Skirts Coprocessors. Pass one or more");
+ writer.println(" encoded region names. 1588230740 is the hard-coded name for the");
+ writer.println(" hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an example of");
+ writer.println(" what a user-space encoded region name looks like. For example:");
writer.println(" $ HBCK2 assign 1588230740 de00010733901a05f5a2a3a382e27dd4");
writer.println(" Returns the pid(s) of the created AssignProcedure(s) or -1 if none.");
writer.println();
@@ -288,13 +277,13 @@ public class HBCK2 extends Configured implements Tool {
writer.println(" -o,--override override if procedure is running/stuck");
writer.println(" -r,--recursive bypass parent and its children. SLOW! EXPENSIVE!");
writer.println(" -w,--lockWait milliseconds to wait before giving up; default=1");
- writer.println(" Pass one (or more) procedure 'pid's to skip to procedure finish.");
- writer.println(" Parent of bypassed procedure will also be skipped to the finish.");
- writer.println(" Entities will be left in an inconsistent state and will require");
- writer.println(" manual fixup. May need Master restart to clear locks still held.");
- writer.println(" Bypass fails if procedure has children. Add 'recursive' if all");
- writer.println(" you have is a parent pid to finish parent and children. This");
- writer.println(" is SLOW, and dangerous so use selectively. Does not always work.");
+ writer.println(" Pass one (or more) procedure 'pid's to skip to procedure finish. Parent");
+ writer.println(" of bypassed procedure will also be skipped to the finish. Entities will");
+ writer.println(" be left in an inconsistent state and will require manual fixup. May");
+ writer.println(" need Master restart to clear locks still held. Bypass fails if");
+ writer.println(" procedure has children. Add 'recursive' if all you have is a parent pid");
+ writer.println(" to finish parent and children. This is SLOW, and dangerous so use");
+ writer.println(" selectively. Does not always work.");
writer.println();
// out.println(" -checkCorruptHFiles Check all Hfiles by opening them to make
// sure they are valid");
@@ -307,14 +296,15 @@ public class HBCK2 extends Configured implements Tool {
writer.println(" Options:");
writer.println(" -f, --fix sideline corrupt hfiles, bad links and references.");
writer.println(" Report corrupt hfiles and broken links. Pass '--fix' to sideline");
- writer.println(" corrupt files and links. Pass one or more tablenames to narrow");
- writer.println(" the checkup. Default checks all tables. Modified regions will");
- writer.println(" need to be reopened to pickup changes.");
+ writer.println(" corrupt files and links. Pass one or more tablenames to narrow the");
+ writer.println(" checkup. Default checks all tables. Modified regions will need to be");
+ writer.println(" reopened to pick-up changes.");
writer.println();
writer.println(" " + SET_REGION_STATE + " <ENCODED_REGIONNAME> <STATE>");
writer.println(" Possible region states:");
- writer.println(" " + Arrays.stream(RegionState.State.values()).map(Enum::toString).
- collect(Collectors.joining(", ")));
+ writer.println(" OFFLINE, OPENING, OPEN, CLOSING, CLOSED, SPLITTING, SPLIT,");
+ writer.println(" FAILED_OPEN, FAILED_CLOSE, MERGING, MERGED, SPLITTING_NEW,");
+ writer.println(" MERGING_NEW, ABNORMALLY_CLOSED");
writer.println(" WARNING: This is a very risky option intended for use as last resort.");
writer.println(" Example scenarios include unassigns/assigns that can't move forward");
writer.println(" because region is in an inconsistent state in 'hbase:meta'. For");
@@ -339,15 +329,23 @@ public class HBCK2 extends Configured implements Tool {
writer.println(" $ HBCK2 setTableState users ENABLED");
writer.println(" Returns whatever the previous table state was.");
writer.println();
+ writer.println(" " + SCHEDULE_RECOVERIES + " <SERVERNAME>...");
+ writer.println(" Schedule ServerCrashProcedure(SCP) for list of RegionServers. Format");
+ writer.println(" server name as '<HOSTNAME>,<PORT>,<STARTCODE>' (See HBase UI/logs).");
+ writer.println(" Example using RegionServer 'a.example.org,29100,1540348649479':");
+ writer.println(" $ HBCK2 scheduleRecoveries a.example.org,29100,1540348649479");
+ writer.println(" Returns the pid(s) of the created ServerCrashProcedure(s) or -1 if");
+ writer.println(" no procedure created (see master logs for why not).");
+ writer.println(" Command support added in hbase versions 2.0.3, 2.1.2, 2.2.0 or newer.");
+ writer.println();
writer.println(" " + UNASSIGNS + " <ENCODED_REGIONNAME>...");
writer.println(" Options:");
writer.println(" -o,--override override ownership by another procedure");
writer.println(" A 'raw' unassign that can be used even during Master initialization");
writer.println(" (if the -skip flag is specified). Skirts Coprocessors. Pass one or");
- writer.println(" more encoded region names. 1588230740 is the hard-coded name for");
- writer.println(" the hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an");
- writer.println(" example of what a userspace encoded region name looks like.");
- writer.println(" For example:");
+ writer.println(" more encoded region names. 1588230740 is the hard-coded name for the");
+ writer.println(" hbase:meta region and de00010733901a05f5a2a3a382e27dd4 is an example");
+ writer.println(" of what a userspace encoded region name looks like. For example:");
writer.println(" $ HBCK2 unassign 1588230740 de00010733901a05f5a2a3a382e27dd4");
writer.println(" Returns the pid(s) of the created UnassignProcedure(s) or -1 if none.");
writer.println();
@@ -367,7 +365,7 @@ public class HBCK2 extends Configured implements Tool {
}
HelpFormatter formatter = new HelpFormatter();
formatter.printHelp("HBCK2 [OPTIONS] COMMAND <ARGS>",
- "\nOptions:", options, getCommandUsage());
+ "Options:", options, getCommandUsage());
}
@Override
@@ -380,6 +378,9 @@ public class HBCK2 extends Configured implements Tool {
return this.conf;
}
+ /**
+ * Process command line general options.
+ */
@Override
public int run(String[] args) throws IOException {
// Configure Options. The below article was more helpful than the commons-cli doc:
@@ -390,18 +391,18 @@ public class HBCK2 extends Configured implements Tool {
Option debug = Option.builder("d").longOpt("debug").desc("run with debug output").build();
options.addOption(debug);
Option quorum = Option.builder("q").longOpt(HConstants.ZOOKEEPER_QUORUM).hasArg().
- desc("ensemble of target hbase").build();
+ desc("hbase ensemble").build();
options.addOption(quorum);
Option parent = Option.builder("z").longOpt(HConstants.ZOOKEEPER_ZNODE_PARENT).hasArg()
- .desc("parent znode of target hbase").build();
+ .desc("parent znode of hbase ensemble").build();
options.addOption(parent);
Option peerPort = Option.builder("p").longOpt(HConstants.ZOOKEEPER_CLIENT_PORT).hasArg()
- .desc("port of target hbase ensemble").type(Integer.class).build();
+ .desc("port of hbase ensemble").type(Integer.class).build();
options.addOption(peerPort);
Option version = Option.builder("v").longOpt(VERSION).desc("this hbck2 version").build();
options.addOption(version);
Option skip = Option.builder("s").longOpt("skip").
- desc("skip hbase version check/PleaseHoldException/Master initializing").build();
+ desc("skip hbase version check (PleaseHoldException)").build();
options.addOption(skip);
// Parse command-line.
@@ -413,7 +414,6 @@ public class HBCK2 extends Configured implements Tool {
usage(options, e.getMessage());
return EXIT_FAILURE;
}
-
// Process general options.
if (commandLine.hasOption(version.getOpt())) {
System.out.println(readHBCK2BuildProperties(VERSION));
@@ -437,7 +437,7 @@ public class HBCK2 extends Configured implements Tool {
getConf().setInt(HConstants.ZOOKEEPER_CLIENT_PORT, Integer.valueOf(optionValue));
} else {
usage(options,
- "Invalid client port. Please provide proper port for target hbase ensemble.");
+ "Invalid client port. Please provide proper port for target hbase ensemble.");
return EXIT_FAILURE;
}
}
@@ -451,21 +451,41 @@ public class HBCK2 extends Configured implements Tool {
return EXIT_FAILURE;
}
}
- if(commandLine.hasOption(skip.getOpt())){
+ if (commandLine.hasOption(skip.getOpt())) {
skipCheck = true;
}
+ return doCommandLine(commandLine, options);
+ }
- // Now process commands.
+ /**
+ * Create connection.
+ * Needs to be called before we go against remote server.
+ * Be sure to close when done.
+ */
+ ClusterConnection connect() throws IOException {
+ return (ClusterConnection)ConnectionFactory.createConnection(getConf());
+ }
+
+ /**
+ * Process parsed command-line. General options have already been processed by caller.
+ */
+ private int doCommandLine(CommandLine commandLine, Options options) throws IOException {
+ // Now process command.
String[] commands = commandLine.getArgs();
String command = commands[0];
switch (command) {
+ // Case handlers all have same format. Check first that the server supports
+ // the feature FIRST, then move to process the command.
case SET_TABLE_STATE:
if (commands.length < 3) {
usage(options, command + " takes tablename and state arguments: e.g. user ENABLED");
return EXIT_FAILURE;
}
- System.out.println(setTableState(TableName.valueOf(commands[1]),
- TableState.State.valueOf(commands[2])));
+ try (ClusterConnection connection = connect(); Hbck hbck = connection.getHbck()) {
+ checkHBCKSupport(connection, command);
+ System.out.println(setTableState(hbck, TableName.valueOf(commands[1]),
+ TableState.State.valueOf(commands[2])));
+ }
break;
case ASSIGNS:
@@ -473,7 +493,10 @@ public class HBCK2 extends Configured implements Tool {
usage(options, command + " takes one or more encoded region names");
return EXIT_FAILURE;
}
- System.out.println(assigns(purgeFirst(commands)));
+ try (ClusterConnection connection = connect(); Hbck hbck = connection.getHbck()) {
+ checkHBCKSupport(connection, command);
+ System.out.println(assigns(hbck, purgeFirst(commands)));
+ }
break;
case BYPASS:
@@ -481,6 +504,11 @@ public class HBCK2 extends Configured implements Tool {
usage(options, command + " takes one or more pids");
return EXIT_FAILURE;
}
+ // bypass does the connection setup and the checkHBCKSupport down
+ // inside in the bypass method delaying connection setup until last
+ // moment. It does this because it has another set of command options
+ // to process and wants to do that before setting up connection.
+ // This is why it is not like the other command processings.
List<Boolean> bs = bypass(purgeFirst(commands));
if (bs == null) {
// Something went wrong w/ the parse and command didn't run.
@@ -494,25 +522,46 @@ public class HBCK2 extends Configured implements Tool {
usage(options, command + " takes one or more encoded region names");
return EXIT_FAILURE;
}
- System.out.println(toString(unassigns(purgeFirst(commands))));
+ try (ClusterConnection connection = connect(); Hbck hbck = connection.getHbck()) {
+ checkHBCKSupport(connection, command);
+ System.out.println(toString(unassigns(hbck, purgeFirst(commands))));
+ }
break;
case SET_REGION_STATE:
- if(commands.length < 3){
+ if (commands.length < 3) {
usage(options, command + " takes region encoded name and state arguments: e.g. "
- + "35f30b0ce922c34bf5c284eff33ba8b3 CLOSING");
+ + "35f30b0ce922c34bf5c284eff33ba8b3 CLOSING");
return EXIT_FAILURE;
}
- return setRegionState(commands[1], RegionState.State.valueOf(commands[2]));
+ RegionState.State state = RegionState.State.valueOf(commands[2]);
+ try (ClusterConnection connection = connect()) {
+ checkHBCKSupport(connection, command);
+ return setRegionState(connection, commands[1], state);
+ }
case FILESYSTEM:
- try (FileSystemFsck fsfsck = new FileSystemFsck(getConf())) {
- if (fsfsck.fsck(options, purgeFirst(commands)) != 0) {
- return EXIT_FAILURE;
+ try (ClusterConnection connection = connect()) {
+ checkHBCKSupport(connection, command);
+ try (FileSystemFsck fsfsck = new FileSystemFsck(getConf())) {
+ if (fsfsck.fsck(options, purgeFirst(commands)) != 0) {
+ return EXIT_FAILURE;
+ }
}
}
break;
+ case SCHEDULE_RECOVERIES:
+ if (commands.length < 2) {
+ usage(options, command + " takes one or more serverNames");
+ return EXIT_FAILURE;
+ }
+ try (ClusterConnection connection = connect(); Hbck hbck = connection.getHbck()) {
+ checkHBCKSupport(connection, command, "2.0.3", "2.1.2", "2.2.0", "3.0.0");
+ System.out.println(toString(scheduleRecoveries(hbck, purgeFirst(commands))));
+ }
+ break;
+
default:
usage(options, "Unsupported command: " + command);
return EXIT_FAILURE;
@@ -544,7 +593,7 @@ public class HBCK2 extends Configured implements Tool {
public static void main(String [] args) throws Exception {
Configuration conf = HBaseConfiguration.create();
- int errCode = ToolRunner.run(new HBCK2(conf), args);
+ int errCode = org.apache.hadoop.util.ToolRunner.run(new HBCK2(conf), args);
if (errCode != 0) {
System.exit(errCode);
}
diff --git a/hbase-hbck2/src/main/java/org/apache/hbase/Version.java b/hbase-hbck2/src/main/java/org/apache/hbase/Version.java
new file mode 100644
index 0000000..284c61a
--- /dev/null
+++ b/hbase-hbck2/src/main/java/org/apache/hbase/Version.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+
+/**
+ * Check versions.
+ */
+public final class Version {
+ // Copied from hbase VersionInfo.
+ private static final int VERY_LARGE_NUMBER = 100000;
+ private static final int MAJOR = 0;
+ private static final int MINOR = 1;
+ private static final int PATCH = 2;
+
+ private Version() {}
+
+ /**
+ * @param thresholdVersions List of versions from oldest to newest.
+ * @return true if <code>version</code> is greater-than or equal to the
+ * matching threshold version. For example, if the passed threshold list is
+ * <code>{"2.0.2", "2.1.3", "2.2.1"}</code> and the version is 2.1.2, the
+ * result is false since 2.1.2 is less than the matching passed-in 2.1.3,
+ * but if the version is 2.1.5 then we return true.
+ */
+ static boolean check(final String version, String ... thresholdVersions) {
+ if (thresholdVersions == null) {
+ return true;
+ }
+ boolean supported = false;
+ // Components of the server version string.
+ String [] versionComponents = getVersionComponents(version);
+ boolean excessiveMajor = false;
+ boolean excessiveMinor = false;
+ for (String thresholdVersion: thresholdVersions) {
+ // Get components of current threshold version.
String[] thresholdVersionComponents = getVersionComponents(thresholdVersion);
+ int serverMajor = Integer.parseInt(versionComponents[MAJOR]);
+ int thresholdMajor = Integer.parseInt(thresholdVersionComponents[MAJOR]);
+ if (serverMajor > thresholdMajor) {
+ excessiveMajor = true;
+ continue;
+ }
+ excessiveMajor = false;
+ if (serverMajor < thresholdMajor) {
+ continue;
+ }
+ int serverMinor = Integer.parseInt(versionComponents[MINOR]);
+ int thresholdMinor = Integer.parseInt(thresholdVersionComponents[MINOR]);
+ if (serverMinor > thresholdMinor) {
+ excessiveMinor = true;
+ continue;
+ }
+ excessiveMinor = false;
+ if (serverMinor < thresholdMinor) {
+ continue;
+ }
+ if (Integer.parseInt(versionComponents[PATCH]) >=
+ Integer.parseInt(thresholdVersionComponents[PATCH])) {
+ supported = true;
+ }
+ break;
+ }
+ return supported || excessiveMajor || excessiveMinor;
+ }
+
+ /**
+ * Copied from hbase VersionInfo.
+ * Returns the version components as String objects
+ * Examples: "1.2.3" returns ["1", "2", "3"], "4.5.6-SNAPSHOT" returns ["4", "5", "6", "-1"]
+ * "4.5.6-beta" returns ["4", "5", "6", "-2"], "4.5.6-alpha" returns ["4", "5", "6", "-3"]
+ * "4.5.6-UNKNOW" returns ["4", "5", "6", "-4"]
+ * @return the components of the version string
+ */
+ private static String[] getVersionComponents(final String version) {
+ assert(version != null);
+ String[] strComps = version.split("[\\.-]");
+ assert(strComps.length > 0);
+
+ String[] comps = new String[strComps.length];
+ for (int i = 0; i < strComps.length; ++i) {
+ if (StringUtils.isNumeric(strComps[i])) {
+ comps[i] = strComps[i];
+ } else if (StringUtils.isEmpty(strComps[i])) {
+ comps[i] = String.valueOf(VERY_LARGE_NUMBER);
+ } else {
+ if("SNAPSHOT".equals(strComps[i])) {
+ comps[i] = "-1";
+ } else if("beta".equals(strComps[i])) {
+ comps[i] = "-2";
+ } else if("alpha".equals(strComps[i])) {
+ comps[i] = "-3";
+ } else {
+ comps[i] = "-4";
+ }
+ }
+ }
+ return comps;
+ }
+}
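For reference, the threshold semantics of the new Version.check above can be sketched standalone. The class and method below are ours, a simplified illustration only: parsing keeps just the numeric components, whereas the committed class also maps -SNAPSHOT/-beta/-alpha suffixes to negative sentinel values.

```java
import java.util.Arrays;

public class VersionCheckSketch {
    // A version passes if, against the threshold sharing its major.minor
    // line, its patch is >= that threshold's patch; versions whose major
    // or minor exceed every threshold line also pass.
    static boolean check(String version, String... thresholds) {
        int[] v = parse(version);
        boolean excessiveMajor = false;
        boolean excessiveMinor = false;
        for (String t : thresholds) {
            int[] tv = parse(t);
            if (v[0] > tv[0]) { excessiveMajor = true; continue; }
            excessiveMajor = false;
            if (v[0] < tv[0]) { continue; }
            if (v[1] > tv[1]) { excessiveMinor = true; continue; }
            excessiveMinor = false;
            if (v[1] < tv[1]) { continue; }
            // Same major.minor line as this threshold: compare patch.
            return v[2] >= tv[2];
        }
        return excessiveMajor || excessiveMinor;
    }

    // Keep numeric components only; drops suffixes such as "-patchedForHBCK2".
    static int[] parse(String s) {
        return Arrays.stream(s.split("[.-]"))
            .filter(c -> !c.isEmpty() && c.chars().allMatch(Character::isDigit))
            .mapToInt(Integer::parseInt)
            .toArray();
    }

    public static void main(String[] args) {
        String[] th = {"2.0.3", "2.1.2", "2.2.0", "3.0.0"};
        System.out.println(check("2.1.1", th));   // false: below 2.1.2
        System.out.println(check("2.1.5", th));   // true: at or above 2.1.2
        System.out.println(check("2.10.0", th));  // true: minor above all 2.x lines
    }
}
```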
diff --git a/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java b/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
index a98328d..1516503 100644
--- a/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
+++ b/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
@@ -1,4 +1,4 @@
-/**
+/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@@ -29,7 +29,9 @@ import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ClusterConnection;
import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Hbck;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
@@ -39,6 +41,7 @@ import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Threads;
import org.apache.logging.log4j.LogManager;
import org.junit.AfterClass;
+import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
@@ -53,6 +56,11 @@ public class TestHBCK2 {
private static final TableName REGION_STATES_TABLE_NAME = TableName.
valueOf(TestHBCK2.class.getSimpleName() + "-REGIONS_STATES");
+ /**
+ * A 'connected' hbck2 instance.
+ */
+ private HBCK2 hbck2;
+
@BeforeClass
public static void beforeClass() throws Exception {
TEST_UTIL.startMiniCluster(3);
@@ -64,39 +72,27 @@ public class TestHBCK2 {
TEST_UTIL.shutdownMiniCluster();
}
- @Test (expected = UnsupportedOperationException.class)
- public void testCheckVersion202() {
- HBCK2.checkVersion("2.0.2");
+ @Before
+ public void before() {
+ this.hbck2 = new HBCK2(TEST_UTIL.getConfiguration());
}
@Test (expected = UnsupportedOperationException.class)
- public void testCheckVersion210() {
- HBCK2.checkVersion("2.1.0");
- }
-
- @Test
- public void testCheckVersionSpecial210() {
- HBCK2.checkVersion("2.1.0-patchedForHBCK2");
- }
-
- @Test
- public void testCheckVersion203() {
- HBCK2.checkVersion("2.0.3");
- }
-
- @Test
- public void testCheckVersion211() {
- HBCK2.checkVersion("2.1.1");
+ public void testVersions() throws IOException {
+ try (ClusterConnection connection = this.hbck2.connect()) {
+ this.hbck2.checkHBCKSupport(connection, "test", "10.0.0");
+ }
}
@Test
public void testSetTableStateInMeta() throws IOException {
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
- TableState state = hbck.setTableState(TABLE_NAME, TableState.State.DISABLED);
- TestCase.assertTrue("Found=" + state.getState(), state.isEnabled());
- // Restore the state.
- state = hbck.setTableState(TABLE_NAME, state.getState());
- TestCase.assertTrue("Found=" + state.getState(), state.isDisabled());
+ try (ClusterConnection connection = this.hbck2.connect(); Hbck hbck = connection.getHbck()) {
+ TableState state = this.hbck2.setTableState(hbck, TABLE_NAME, TableState.State.DISABLED);
+ TestCase.assertTrue("Found=" + state.getState(), state.isEnabled());
+ // Restore the state.
+ state = this.hbck2.setTableState(hbck, TABLE_NAME, state.getState());
+ TestCase.assertTrue("Found=" + state.getState(), state.isDisabled());
+ }
}
@Test
@@ -108,31 +104,32 @@ public class TestHBCK2 {
getRegionStates().getRegionState(ri.getEncodedName());
LOG.info("RS: {}", rs.toString());
}
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
List<String> regionStrs =
- regions.stream().map(r -> r.getEncodedName()).collect(Collectors.toList());
+ regions.stream().map(RegionInfo::getEncodedName).collect(Collectors.toList());
String [] regionStrsArray = regionStrs.toArray(new String[] {});
- List<Long> pids = hbck.unassigns(regionStrsArray);
- waitOnPids(pids);
- for (RegionInfo ri: regions) {
- RegionState rs = TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
- getRegionStates().getRegionState(ri.getEncodedName());
- LOG.info("RS: {}", rs.toString());
- TestCase.assertTrue(rs.toString(), rs.isClosed());
- }
- pids = hbck.assigns(regionStrsArray);
- waitOnPids(pids);
- for (RegionInfo ri: regions) {
- RegionState rs = TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
- getRegionStates().getRegionState(ri.getEncodedName());
- LOG.info("RS: {}", rs.toString());
- TestCase.assertTrue(rs.toString(), rs.isOpened());
- }
- // What happens if crappy region list passed?
- pids = hbck.assigns(Arrays.stream(new String [] {"a", "some rubbish name"}).
- collect(Collectors.toList()).toArray(new String [] {}));
- for (long pid: pids) {
- assertEquals(org.apache.hadoop.hbase.procedure2.Procedure.NO_PROC_ID, pid);
+ try (ClusterConnection connection = this.hbck2.connect(); Hbck hbck = connection.getHbck()) {
+ List<Long> pids = this.hbck2.unassigns(hbck, regionStrsArray);
+ waitOnPids(pids);
+ for (RegionInfo ri : regions) {
+ RegionState rs = TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
+ getRegionStates().getRegionState(ri.getEncodedName());
+ LOG.info("RS: {}", rs.toString());
+ TestCase.assertTrue(rs.toString(), rs.isClosed());
+ }
+ pids = this.hbck2.assigns(hbck, regionStrsArray);
+ waitOnPids(pids);
+ for (RegionInfo ri : regions) {
+ RegionState rs = TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
+ getRegionStates().getRegionState(ri.getEncodedName());
+ LOG.info("RS: {}", rs.toString());
+ TestCase.assertTrue(rs.toString(), rs.isOpened());
+ }
+ // What happens if crappy region list passed?
+ pids = this.hbck2.assigns(hbck, Arrays.stream(new String[]{"a", "some rubbish name"}).
+ collect(Collectors.toList()).toArray(new String[]{}));
+ for (long pid : pids) {
+ assertEquals(org.apache.hadoop.hbase.procedure2.Procedure.NO_PROC_ID, pid);
+ }
}
}
}
@@ -144,9 +141,10 @@ public class TestHBCK2 {
List<RegionInfo> regions = admin.getRegions(REGION_STATES_TABLE_NAME);
RegionInfo info = regions.get(0);
assertEquals(RegionState.State.OPEN, getCurrentRegionState(info));
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
String region = info.getEncodedName();
- hbck.setRegionState(region, RegionState.State.CLOSING);
+ try (ClusterConnection connection = this.hbck2.connect()) {
+ this.hbck2.setRegionState(connection, region, RegionState.State.CLOSING);
+ }
assertEquals(RegionState.State.CLOSING, getCurrentRegionState(info));
} finally {
TEST_UTIL.deleteTable(REGION_STATES_TABLE_NAME);
@@ -155,9 +153,10 @@ public class TestHBCK2 {
@Test
public void testSetRegionStateInvalidRegion() throws IOException {
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
- assertEquals(HBCK2.EXIT_FAILURE, hbck.setRegionState("NO_REGION",
- RegionState.State.CLOSING));
+ try (ClusterConnection connection = this.hbck2.connect()) {
+ assertEquals(HBCK2.EXIT_FAILURE, this.hbck2.setRegionState(connection, "NO_REGION",
+ RegionState.State.CLOSING));
+ }
}
@Test (expected = IllegalArgumentException.class)
@@ -167,9 +166,10 @@ public class TestHBCK2 {
List<RegionInfo> regions = admin.getRegions(REGION_STATES_TABLE_NAME);
RegionInfo info = regions.get(0);
assertEquals(RegionState.State.OPEN, getCurrentRegionState(info));
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
String region = info.getEncodedName();
- hbck.setRegionState(region, null);
+ try (ClusterConnection connection = this.hbck2.connect()) {
+ this.hbck2.setRegionState(connection, region, null);
+ }
} finally {
TEST_UTIL.deleteTable(REGION_STATES_TABLE_NAME);
}
@@ -177,8 +177,9 @@ public class TestHBCK2 {
@Test (expected = IllegalArgumentException.class)
public void testSetRegionStateInvalidRegionAndInvalidState() throws IOException {
- HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
- hbck.setRegionState("NO_REGION", null);
+ try (ClusterConnection connection = this.hbck2.connect()) {
+ this.hbck2.setRegionState(connection, "NO_REGION", null);
+ }
}
private RegionState.State getCurrentRegionState(RegionInfo regionInfo) throws IOException {
diff --git a/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCKCommandLineParsing.java b/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCKCommandLineParsing.java
index 9f64cb9..ca489c6 100644
--- a/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCKCommandLineParsing.java
+++ b/hbase-hbck2/src/test/java/org/apache/hbase/TestHBCKCommandLineParsing.java
@@ -1,4 +1,4 @@
-/**
+/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@@ -26,9 +26,7 @@ import java.io.InputStream;
import java.io.PrintStream;
import java.util.Properties;
-import org.apache.commons.cli.ParseException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.logging.log4j.LogManager;
import org.junit.Test;
/**
@@ -36,12 +34,10 @@ import org.junit.Test;
* @see TestHBCK2 for cluster-tests.
*/
public class TestHBCKCommandLineParsing {
- private static final org.apache.logging.log4j.Logger LOG =
- LogManager.getLogger(TestHBCKCommandLineParsing.class);
private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
@Test
- public void testHelp() throws ParseException, IOException {
+ public void testHelp() throws IOException {
String output = retrieveOptionOutput("-h");
assertTrue(output, output.startsWith("usage: HBCK2"));
}
@@ -56,7 +52,7 @@ public class TestHBCKCommandLineParsing {
@Test (expected=IllegalArgumentException.class)
public void testSetRegionStateCommandInvalidState() throws IOException {
HBCK2 hbck = new HBCK2(TEST_UTIL.getConfiguration());
- // The 'x' below should cause the NumberFormatException. The Options should all be good.
+ // The 'x' below should cause the IllegalArgumentException. The Options should all be good.
hbck.run(new String[]{"setRegionState", "region_encoded", "INVALID_STATE"});
}
diff --git a/hbase-hbck2/src/test/java/org/apache/hbase/TestSchedulingRecoveries.java b/hbase-hbck2/src/test/java/org/apache/hbase/TestSchedulingRecoveries.java
new file mode 100644
index 0000000..08ff118
--- /dev/null
+++ b/hbase-hbck2/src/test/java/org/apache/hbase/TestSchedulingRecoveries.java
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.client.Hbck;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+
+
+public class TestSchedulingRecoveries {
+ private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+ private HBCK2 hbck2;
+
+ @BeforeClass
+ public static void beforeClass() throws Exception {
+ TEST_UTIL.startMiniCluster(2);
+ }
+
+ @AfterClass
+ public static void afterClass() throws Exception {
+ TEST_UTIL.shutdownMiniCluster();
+ }
+
+ @Before
+ public void before() {
+ this.hbck2 = new HBCK2(TEST_UTIL.getConfiguration());
+ }
+
+ @Test
+ public void testSchedulingSCPWithTwoGoodHosts() throws IOException {
+ String sn1 = TEST_UTIL.getHBaseCluster().getRegionServer(0).toString();
+ String sn2 = TEST_UTIL.getHBaseCluster().getRegionServer(1).toString();
+ try (ClusterConnection connection = this.hbck2.connect(); Hbck hbck = connection.getHbck()) {
+ List<Long> pids = this.hbck2.scheduleRecoveries(hbck, new String[]{sn1, sn2});
+ assertEquals(2, pids.size());
+ assertTrue(pids.get(0) > 0);
+ assertTrue(pids.get(1) > 0);
+ }
+ }
+
+ @Test
+ public void testSchedulingSCPWithBadHost() {
+ boolean thrown = false;
+ try {
+ try (ClusterConnection connection = this.hbck2.connect(); Hbck hbck = connection.getHbck()) {
+ this.hbck2.scheduleRecoveries(hbck, new String[]{"a.example.org,1,2"});
+ }
+ } catch (IOException ioe) {
+ thrown = true;
+ // Throws a weird exception complaining about FileNotFoundException down inside
+ // a RemoteWithExtras... wrapped in a ServiceException. Check for latter. This
+ // won't change.
+ assertTrue(ioe.getCause() instanceof ServiceException);
+ }
+ assertTrue(thrown);
+ }
+}
diff --git a/hbase-hbck2/src/test/java/org/apache/hbase/TestVersion.java b/hbase-hbck2/src/test/java/org/apache/hbase/TestVersion.java
new file mode 100644
index 0000000..ea55082
--- /dev/null
+++ b/hbase-hbck2/src/test/java/org/apache/hbase/TestVersion.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import org.junit.Test;
+
+public class TestVersion {
+ @Test
+ public void testThreshold() {
+ assertFalse(Version.check("2.0.2", "10.0.0"));
+ }
+
+ @Test
+ public void testCheckVersion202() {
+ assertFalse(Version.check("2.0.2", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testCheckVersion210() {
+ assertFalse(Version.check("2.1.0", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testCheckVersionSpecial210() {
+ assertFalse(Version.check("2.1.0-patchedForHBCK2", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testCheckVersion203() {
+ assertTrue(Version.check("2.0.3", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testCheckVersion211() {
+ assertTrue(Version.check("2.1.1", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testExcessiveMajor() {
+ assertTrue(Version.check("5.0.1", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testExcessiveMinor() {
+ assertTrue(Version.check("2.10.1", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testInferiorMinor() {
+ assertFalse(Version.check("1.0.0", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+
+ @Test
+ public void testABunch() {
+ assertTrue(Version.check("2.1.1-patchedForHBCK2", HBCK2.MINIMUM_HBCK2_VERSION));
+ assertTrue(Version.check("3.0.1", HBCK2.MINIMUM_HBCK2_VERSION));
+ assertTrue(Version.check("4.0.0", HBCK2.MINIMUM_HBCK2_VERSION));
+ }
+}
diff --git a/pom.xml b/pom.xml
index 698061f..82773d5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -123,7 +123,7 @@
<compileSource>1.8</compileSource>
<java.min.version>${compileSource}</java.min.version>
<maven.min.version>3.3.3</maven.min.version>
- <hbase.version>2.1.1</hbase.version>
+ <hbase.version>2.1.2</hbase.version>
<maven.compiler.version>3.6.1</maven.compiler.version>
<surefire.version>2.21.0</surefire.version>
<surefire.provider>surefire-junit47</surefire.provider>