virajjasani commented on a change in pull request #1901:
URL: https://github.com/apache/hbase/pull/1901#discussion_r442334986



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -435,6 +451,10 @@ public boolean unload() throws InterruptedException, ExecutionException, Timeout
           LOG.debug("List of region servers: {}", regionServers);
           return false;
         }
+        // Remove RS present not in the designated file
+        if (designatedFile != null) {
+          filterDesignatedServers(regionServers);

Review comment:
       We should use `stripExcludes()` for both purposes: the exclude file and the designated file.
   Add the filename as an argument to `stripExcludes()`.
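
    A rough sketch of how a single helper could serve both files, with the filename (and a flag
    for include vs. exclude semantics) passed in; the method name, flag, and signature here are
    an assumption for illustration, not the actual change in the PR:
    ```java
    private void filterRegionServers(List<ServerName> regionServers, String fileName,
        boolean keepOnlyListed) {
      if (fileName == null) {
        return;
      }
      // host:port entries, lower-cased, as produced by readServersFromFile() in this PR
      List<String> listed = readServersFromFile(fileName);
      if (listed.isEmpty()) {
        return;
      }
      regionServers.removeIf(server -> {
        boolean inFile = listed.contains(server.getHostname().toLowerCase() + ":" + server.getPort());
        // designated file: drop servers not listed; exclude file: drop servers that are listed
        return keepOnlyListed ? !inFile : inFile;
      });
    }
    ```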

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -205,6 +209,18 @@ public RegionMoverBuilder excludeFile(String excludefile) {
       return this;
     }
 
+    /**
+     * Set the designated file. Designated file contains hostnames where region moves. Designated file should
+     * have 'host:port' per line. Port is mandatory here as we can have many RS running on a single host.

Review comment:
       Can you please bring these comment lines within 100 chars? Somehow the build
   is not failing for lines longer than 100 chars.
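
    For reference, one possible re-wrap that keeps every Javadoc line within 100 characters
    (the exact line breaks in the final change may differ):
    ```java
    /**
     * Set the designated file. Designated file contains hostnames where region moves. Designated
     * file should have 'host:port' per line. Port is mandatory here as we can have many RS
     * running on a single host.
     */
    ```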

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -413,7 +429,7 @@ private void loadRegions(List<RegionInfo> regionsToMove)
    * Unload regions from given {@link #hostname} using ack/noAck mode and {@link #maxthreads}.In
    * noAck mode we do not make sure that region is successfully online on the target region
    * server,hence it is best effort.We do not unload regions to hostnames given in
-   * {@link #excludeFile}.
+   * {@link #excludeFile}. We only unload regions to hostnames given in {@link #designatedFile}.

Review comment:
       Please state the condition explicitly:
   `If designatedFile is present with some contents, we will unload regions to hostnames provided in {@link #designatedFile}`
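
    For example, the method Javadoc could end with something along these lines (wording per the
    suggestion above; the merged phrasing may differ):
    ```java
     * We do not unload regions to hostnames given in {@link #excludeFile}. If designatedFile is
     * present with some contents, we will only unload regions to hostnames provided in
     * {@link #designatedFile}.
    ```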

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -435,6 +451,10 @@ public boolean unload() throws InterruptedException, ExecutionException, Timeout
           LOG.debug("List of region servers: {}", regionServers);
           return false;
         }
+        // Remove RS present not in the designated file

Review comment:
       nit: `Remove RS not present in the designated file` 

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -653,21 +673,39 @@ private void deleteFile(String filename) {
   }
 
   /**
-   * @return List of servers from the exclude file in format 'hostname:port'.
+   * @param filename The file should have 'host:port' per line
+   * @return List of servers from the file in format 'hostname:port'.
    */
-  private List<String> readExcludes(String excludeFile) throws IOException {
-    List<String> excludeServers = new ArrayList<>();
-    if (excludeFile == null) {

Review comment:
       Please use inverted logic (improves readability):
   ```
   if (excludeFile != null){
   ...
   ...
   ...
   }
   return excludeServers;
   ```
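
    Filled in, the inverted structure would look roughly like this, reusing the body of
    `readServersFromFile()` from the diff below (exception handling kept as in the PR for now;
    see the separate comment on that below):
    ```java
    private List<String> readServersFromFile(String filename) {
      List<String> servers = new ArrayList<>();
      if (filename != null) {
        try {
          Files.readAllLines(Paths.get(filename)).stream().map(String::trim)
            .filter(((Predicate<String>) String::isEmpty).negate()).map(String::toLowerCase)
            .forEach(servers::add);
        } catch (IOException e) {
          LOG.warn("Exception while reading servers file, continuing anyways", e);
        }
      }
      return servers;
    }
    ```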

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -653,21 +673,39 @@ private void deleteFile(String filename) {
   }
 
   /**
-   * @return List of servers from the exclude file in format 'hostname:port'.
+   * @param filename The file should have 'host:port' per line
+   * @return List of servers from the file in format 'hostname:port'.
    */
-  private List<String> readExcludes(String excludeFile) throws IOException {
-    List<String> excludeServers = new ArrayList<>();
-    if (excludeFile == null) {
-      return excludeServers;
+  private List<String> readServersFromFile(String filename) {
+    List<String> servers = new ArrayList<>();
+    if (filename == null) {
+      return servers;
     } else {
       try {
-        Files.readAllLines(Paths.get(excludeFile)).stream().map(String::trim)
-            .filter(((Predicate<String>) String::isEmpty).negate()).map(String::toLowerCase)
-            .forEach(excludeServers::add);
+        Files.readAllLines(Paths.get(filename)).stream().map(String::trim)
+          .filter(((Predicate<String>) String::isEmpty).negate()).map(String::toLowerCase)
+          .forEach(servers::add);
       } catch (IOException e) {
         LOG.warn("Exception while reading excludes file, continuing anyways", e);

Review comment:
       We should not continue if the file read fails, right?
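
    If so, one option is to let the failure propagate instead of swallowing it, e.g. a sketch
    using `java.io.UncheckedIOException` (whether to rethrow unchecked or declare
    `throws IOException` is a design choice left to the PR):
    ```java
    } catch (IOException e) {
      LOG.error("Exception while reading servers file: {}", filename, e);
      // Fail fast: a misread exclude/designated file should abort the region move rather
      // than silently moving regions to unintended servers.
      throw new UncheckedIOException(e);
    }
    ```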

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java
##########
@@ -653,21 +673,39 @@ private void deleteFile(String filename) {
   }
 
   /**
-   * @return List of servers from the exclude file in format 'hostname:port'.
+   * @param filename The file should have 'host:port' per line
+   * @return List of servers from the file in format 'hostname:port'.
    */
-  private List<String> readExcludes(String excludeFile) throws IOException {
-    List<String> excludeServers = new ArrayList<>();
-    if (excludeFile == null) {
-      return excludeServers;
+  private List<String> readServersFromFile(String filename) {
+    List<String> servers = new ArrayList<>();
+    if (filename == null) {
+      return servers;
     } else {
       try {
-        Files.readAllLines(Paths.get(excludeFile)).stream().map(String::trim)
-            .filter(((Predicate<String>) String::isEmpty).negate()).map(String::toLowerCase)
-            .forEach(excludeServers::add);
+        Files.readAllLines(Paths.get(filename)).stream().map(String::trim)
+          .filter(((Predicate<String>) String::isEmpty).negate()).map(String::toLowerCase)
+          .forEach(servers::add);
       } catch (IOException e) {
         LOG.warn("Exception while reading excludes file, continuing anyways", e);
       }
-      return excludeServers;
+      return servers;
+    }
+  }
+
+  private void filterDesignatedServers(List<ServerName> onlineServers) {
+    List<String> designatedServers = readServersFromFile(designatedFile);
+    if (designatedServers == null || designatedServers.isEmpty()) {

Review comment:
       `designatedServers == null` is redundant: `readServersFromFile()` always returns a
   non-null (possibly empty) list.
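
    A sketch with the redundant null check dropped; the `removeIf` body is an assumption for
    illustration, since the diff above is truncated before the actual filtering:
    ```java
    private void filterDesignatedServers(List<ServerName> onlineServers) {
      List<String> designatedServers = readServersFromFile(designatedFile);
      if (designatedServers.isEmpty()) {
        // No designated file contents: leave the online server list untouched.
        return;
      }
      // Keep only region servers whose host:port appears in the designated file.
      onlineServers.removeIf(server ->
        !designatedServers.contains(server.getHostname().toLowerCase() + ":" + server.getPort()));
    }
    ```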

##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionMover.java
##########
@@ -180,6 +180,85 @@ public void testExclude() throws Exception {
     }
   }
 
+  @Test
+  public void testDesignatedFile() throws Exception{
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    File designatedFile = new File(TEST_UTIL.getDataTestDir().toUri().getPath(),
+      "designated_file");
+    HRegionServer designatedServer = cluster.getRegionServer(0);
+    try(FileWriter fos = new FileWriter(designatedFile)) {
+      String designatedHostname = designatedServer.getServerName().getHostname();
+      int designatedServerPort = designatedServer.getServerName().getPort();
+      String excludeServerName = designatedHostname + ":" + designatedServerPort;
+      fos.write(excludeServerName);
+    }
+    int regionsInDesignatedServer = designatedServer.getNumberOfOnlineRegions();
+    HRegionServer regionServer = cluster.getRegionServer(1);
+    String rsName = regionServer.getServerName().getHostname();
+    int port = regionServer.getServerName().getPort();
+    String rs = rsName + ":" + port;
+    int regionsInRegionServer = regionServer.getNumberOfOnlineRegions();
+    RegionMoverBuilder rmBuilder = new RegionMoverBuilder(rs, TEST_UTIL.getConfiguration())
+      .designatedFile(designatedFile.getCanonicalPath());
+    try (RegionMover rm = rmBuilder.build()) {
+      LOG.info("Unloading {} regions", rs);

Review comment:
       Use debug log for test cases :)
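
    e.g. (one-line sketch of the suggestion):
    ```java
    LOG.debug("Unloading {} regions", rs);
    ```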



