[jira] [Comment Edited] (HDFS-15346) RBF: DistCpFedBalance implementation

2020-06-13 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135024#comment-17135024
 ] 

Yiqun Lin edited comment on HDFS-15346 at 6/14/20, 5:05 AM:


[~LiJinglun], thanks for addressing remaining comments.

These two days I have been trying to improve the efficiency of the unit test; the current unit test is too slow.

I found another way so that we don't have to depend on the mini YARN cluster when running the tests. The job can be submitted and executed by LocalJobRunner when there is no mini YARN cluster environment, but we need to make an adjustment in how the job status is obtained from the job client.
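For reference, LocalJobRunner is used when the standard MapReduce framework setting is left at its default value, local (no YARN needed):

```xml
<!-- Standard Hadoop setting; "local" runs jobs in-process via LocalJobRunner. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
</property>
```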

I did some refactoring of the getCurrentJob method and applied it in DistCpProcedure.

Following is part of the necessary change:
{noformat}
  @VisibleForTesting
  private Job runningJob;
  static boolean ENABLED_FOR_TEST = false;
...
  private String submitDistCpJob(String srcParam, String dstParam,
      boolean useSnapshotDiff) throws IOException {
...
    try {
      LOG.info("Submit distcp job={}", job);
      runningJob = job; // <-- need to set this here
      return job.getJobID().toString();
    } catch (Exception e) {
      throw new IOException("Submit job failed.", e);
    }
  }

  private RunningJobStatus getCurrentJob() throws IOException {
    if (jobId != null) {
      if (ENABLED_FOR_TEST) {
        if (this.runningJob != null) {
          Job latestJob = null;
          try {
            latestJob = this.runningJob.getCluster()
                .getJob(JobID.forName(jobId));
          } catch (InterruptedException e) {
            throw new IOException(e);
          }
          return latestJob == null ? null
              : new RunningJobStatus(latestJob, null);
        }
      } else {
        RunningJob latestJob = client.getJob(JobID.forName(jobId));
        return latestJob == null ? null
            : new RunningJobStatus(null, latestJob);
      }
    }
    return null;
  }

  class RunningJobStatus {
    Job testJob;
    RunningJob job;

    public RunningJobStatus(Job testJob, RunningJob job) {
      this.testJob = testJob;
      this.job = job;
    }

    String getJobID() {
      return ENABLED_FOR_TEST ? testJob.getJobID().toString()
          : job.getID().toString();
    }

    boolean isComplete() throws IOException {
      return ENABLED_FOR_TEST ? testJob.isComplete() : job.isComplete();
    }

    boolean isSuccessful() throws IOException {
      return ENABLED_FOR_TEST ? testJob.isSuccessful() : job.isSuccessful();
    }

    String getFailureInfo() throws IOException {
      try {
        return ENABLED_FOR_TEST ? testJob.getStatus().getFailureInfo()
            : job.getFailureInfo();
      } catch (InterruptedException e) {
        throw new IOException(e);
      }
    }
  }
{noformat}
And the mini YARN cluster related code lines can all be removed (including the two pom dependencies mentioned above):
{code:java}
+mrCluster = new MiniMRYarnCluster(TestDistCpProcedure.class.getName(), 3);
+conf.set(MRJobConfig.MR_AM_STAGING_DIR, "/apps_staging_dir");
+mrCluster.init(conf);
+mrCluster.start();
+conf = mrCluster.getConfig();
{code}
We additionally need to set the test-enabled flag:
{code:java}
 public static void beforeClass() throws IOException {
DistCpProcedure.ENABLED_FOR_TEST = true;
...
}
{code}
After this improvement, the whole test runs much faster than before; in total it costs less than one minute.

In addition, we need to do a cleanup at the end of each test method, like:
{code:java}
fs.delete(new Path(testRoot), true);
{code}
or
{code:java}
// Sometimes we need to call finish() first, since some cases have a snapshot
// created that prevents the path from being deleted.
dcProcedure.finish();
fs.delete(new Path(testRoot), true);
{code}
I also caught some places that still need to be updated.
 # Can you update the following description in the router option? I updated this content as well but it seems this was not addressed in the latest patch.
{noformat}
It will disable read and write by cancelling all permissions of the source path. The default value is `false`.
{noformat}

 # The method cleanUpBeforeInitDistcp can be renamed to pathCheckBeforeInitDistcp, since we don't do any cleanup operation there now.


[jira] [Comment Edited] (HDFS-15346) RBF: DistCpFedBalance implementation

2020-06-09 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128305#comment-17128305
 ] 

Jinglun edited comment on HDFS-15346 at 6/9/20, 1:33 PM:
-

Hi [~linyiqun], thanks for your great comments and valuable suggestions! I'll need some time to address all of them, so let me respond to the question first.

 
{quote}Here we reset permission to 0, that means no any operation is allowed? 
Is this expected, why not is 400 (only allow read)? The comment said that 
'cancelling the x permission of the source path.' makes me confused.
{quote}
Yes, here we reset the permission to 0, so both read and write on the source path and all its sub-paths are denied. As far as I know, all read operations need to check their parents' execute permission, so setting it to 400 can't make the tree read-only: we still couldn't read its sub-paths. I think the only way to make it 'read-only' is to recursively reduce each directory's permission to 555. Reducing a permission means: if the original permission is 777, change it to 555; if the original permission is 700, make it 500. Saving all the directories' permissions is very expensive. A better way may be letting the NameNode support a 'readonly-directory' feature. I think we can first use the '0 permission' way to make sure the data is consistent, then start a sub-task to add the NameNode 'readonly-directory' support, and finally switch to it.
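As an illustration of the 'reduce permission' rule above (just a sketch, not code from the patch): reducing a permission to read-only is a bitwise AND with the 0555 mask.

```java
public class ReducePermission {
  // Reducing a permission means clearing all write bits, i.e. masking
  // with 0555 (r-xr-xr-x): 777 -> 555, 700 -> 500.
  static short reduce(short perm) {
    return (short) (perm & 0555);
  }

  public static void main(String[] args) {
    System.out.println(Integer.toOctalString(reduce((short) 0777))); // 555
    System.out.println(Integer.toOctalString(reduce((short) 0700))); // 500
  }
}
```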

 
{quote}One follow-up task I am thinking that we can have a separated config 
file something named fedbalance-default.xml for fedbalance tool, like 
ditcp-default.xml for distcp tool now. I don't prefer to add all tool config 
settings into hdfs-default.xml.
{quote}
Agree with you! Using a fedbalance-default.xml is much better.

 
{quote}The test need a little long time to execute the whole test.
{quote}
I'll try to figure it out, but it might be quite tricky as the unit tests use both MiniDFSCluster and MiniMRYarnCluster, and there are many rounds of distcp. Please tell me if you have any suggestions, thanks!



> RBF: DistCpFedBalance implementation
> 
>
> Key: HDFS-15346
> URL: https://issues.apache.org/jira/browse/HDFS-15346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15346.001.patch, HDFS-15346.002.patch, 
> HDFS-15346.003.patch, HDFS-15346.004.patch, HDFS-15346.005.patch, 
> HDFS-15346.006.patch, HDFS-15346.007.patch, HDFS-15346.008.patch
>
>
> Patch in HDFS-15294 is too big to review so we split it into 2 patches. This 
> is the second one. Detail can be found at HDFS-15294.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15346) RBF: DistCpFedBalance implementation

2020-06-06 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127489#comment-17127489
 ] 

Yiqun Lin edited comment on HDFS-15346 at 6/7/20, 4:35 AM:
---

Some more detailed review comments:

*HdfsConstants.java*
 Can we rename DOT_SNAPSHOT_SEPARATOR_DIR to the more readable name 
DOT_SNAPSHOT_DIR_SEPARATOR?

*DistCpFedBalance.java*
 # It would be good to print the fed context created from the input options, so that we will know the final options that were passed in.
{noformat}
+. // -->  print fed balancer context
+  // Construct the balance job.
+  BalanceJob.Builder builder = new 
BalanceJob.Builder<>();
+  DistCpProcedure dcp =
+  new DistCpProcedure(DISTCP_PROCEDURE, null, delayDuration, context);
+  builder.nextProcedure(dcp);
{noformat}

 # We can replace this System.out with the LOG instance:
{noformat}
+for (BalanceJob job : jobs) {
+  if (!job.isJobDone()) {
+unfinished++;
+  }
+  System.out.println(job);
+}
{noformat}

*DistCpProcedure.java*
 # The message in IOException(src + " doesn't exist.") is not correctly described; it should be 'src + " should be a directory."'
 # For each stage change, can we add an additional output log, like this:
{noformat}
+if (srcFs.exists(new Path(src, HdfsConstants.DOT_SNAPSHOT_DIR))) {
+  throw new IOException(src + " shouldn't enable snapshot.");
+}
+LOG.info("Stage updated from {} to {}.", stage.name(),
+    Stage.INIT_DISTCP.name());
+stage = Stage.INIT_DISTCP;
+  }
{noformat}

 # Here we reset permission to 0, that means no any operation is allowed? Is 
this expected, why not is 400 (only allow read)? The comment said that 
'cancelling the x permission of the source path.' makes me confused.
{noformat}
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
{noformat}

 # I prefer to throw an IOException rather than doing a delete operation in cleanUpBeforeInitDistcp. cleanUpBeforeInitDistcp is expected to be the final pre-check function before submitting the distcp job; let admin users check and do the delete operation manually themselves.
{noformat}
+  private void initialCheckBeforeInitDistcp() throws IOException {
+    if (dstFs.exists(dst)) {
+      throw new IOException(dst + " already exists.");
+    }
+    srcFs.allowSnapshot(src);
+    if (srcFs.exists(new Path(src,
+        HdfsConstants.DOT_SNAPSHOT_SEPARATOR_DIR + CURRENT_SNAPSHOT_NAME))) {
+      throw new IOException("Snapshot " + CURRENT_SNAPSHOT_NAME
+          + " already exists.");
+    }
+  }
{noformat}

*FedBalanceConfigs.java*
 Can we move all keys from BalanceProcedureConfigKeys to this class? We don't need two duplicated config classes. One follow-up task I am thinking of: we can have a separate config file named something like fedbalance-default.xml for the fedbalance tool, like distcp-default.xml for the distcp tool now. I don't prefer to add all tool config settings into hdfs-default.xml.

*FedBalanceContext.java*
 Override the toString method in FedBalanceContext to help us know the input options that are actually used.
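A minimal sketch of the kind of toString override meant here (the class and field names are hypothetical, not from the patch):

```java
public class FedContextSketch {
  // Hypothetical context holding the effective input options.
  private final String src;
  private final String dst;
  private final boolean useMountReadOnly;

  public FedContextSketch(String src, String dst, boolean useMountReadOnly) {
    this.src = src;
    this.dst = dst;
    this.useMountReadOnly = useMountReadOnly;
  }

  @Override
  public String toString() {
    // A log-friendly summary of the options actually in effect.
    return "FedContextSketch{src=" + src + ", dst=" + dst
        + ", useMountReadOnly=" + useMountReadOnly + "}";
  }
}
```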

*MountTableProcedure.java*
 The for loop can just break once we find the first source path that matches.
{noformat}
+for (MountTable result : results) {
+  if (mount.equals(result.getSourcePath())) {
+    existingEntry = result;
+    break;
+  }
+}
{noformat}
*TrashProcedure.java*
{noformat}
+  /**
+   * Delete source path to trash.
+   */
+  void moveToTrash() throws IOException {
+Path src = context.getSrc();
+if (srcFs.exists(src)) {
+  switch (context.getTrashOpt()) {
+  case TRASH:
+conf.setFloat(FS_TRASH_INTERVAL_KEY, 1);
+if (!Trash.moveToAppropriateTrash(srcFs, src, conf)) {
+  throw new IOException("Failed move " + src + " to trash.");
+}
+break;
+  case DELETE:
+if (!srcFs.delete(src, true)) {
+  throw new IOException("Failed delete " + src);
+}
+LOG.info("{} is deleted.", src);
+break;
+  default:
+break;
+  }
+}
+  }
{noformat}
For the above lines, two review comments:
 # Can we add a SKIP option check as well and throw an unexpected-option error?
{noformat}
case SKIP:
  break;
+default:
+  throw new IOException("Unexpected trash option=" + context.getTrashOpt());
+}
{noformat}

 # FS_TRASH_INTERVAL_KEY defined as 1 is too small; it means the trash will be deleted after 1 minute. Can you increase this to 60? Also, please add a necessary comment in the trash option description to explain the default trash behavior: when trash is disabled on the server side, the client-side value will be used.
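For reference, FS_TRASH_INTERVAL_KEY maps to the standard fs.trash.interval key (in minutes; 0 disables trash), so the suggested value would look like:

```xml
<property>
  <name>fs.trash.interval</name>
  <value>60</value>
  <!-- Minutes before trash checkpoints are deleted; 0 disables trash. -->
</property>
```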



[jira] [Comment Edited] (HDFS-15346) RBF: DistCpFedBalance implementation

2020-06-02 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123467#comment-17123467
 ] 

Yiqun Lin edited comment on HDFS-15346 at 6/2/20, 7:56 AM:
---

[~LiJinglun], can you fix the related failed unit tests and the generated checkstyle warnings?

The patch generated 19 new + 2 unchanged - 0 fixed = 21 total (was 2)

[https://builds.apache.org/job/PreCommit-HDFS-Build/29395/artifact/out/diff-checkstyle-root.txt]





[jira] [Comment Edited] (HDFS-15346) RBF: DistCpFedBalance implementation

2020-06-01 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120981#comment-17120981
 ] 

Yiqun Lin edited comment on HDFS-15346 at 6/1/20, 12:23 PM:


Hi [~LiJinglun], some initial review comments from me:

*DistCpFedBalance.java*
 # line 77 I suggest extracting 'submit' as a static variable in this class.
 # line 85 The same comment applies: extract it.
 # line 127 Can you complete the javadoc of this method?
 # line 132 Why is the default bandwidth only 1 for fedbalance? Won't that be too small?
 # line 137, 140, 150 We can use the method CommandLine#hasOption to extract the boolean-type input values.
 # line 178 Can you complete the javadoc of the constructor?
 # line 199, 206, 210, 215 Also suggest using static variables rather than hard-coded values in these places.
 # line 228 rClient is not closed after it's used.
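On the unclosed rClient point, try-with-resources is the usual fix. A generic sketch (the Client class here is a hypothetical stand-in for the real router client):

```java
public class CloseSketch {
  // Hypothetical stand-in; the real client would implement Closeable similarly.
  static class Client implements AutoCloseable {
    boolean closed = false;
    String query() { return "ok"; }
    @Override
    public void close() { closed = true; }
  }

  public static void main(String[] args) {
    Client client = new Client();
    try (Client c = client) { // closed automatically when the block exits
      System.out.println(c.query());
    }
    System.out.println(client.closed); // prints true
  }
}
```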

*DistCpProcedure.java*
 # line 191 We can use HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR_SEPARATOR to 
replace '/.snapshot/'
 # line 306 It would be better if we could add some necessary description of the steps of the diff distcp job submission.
 # line 374 Can we replace '.snapshot' with HdfsConstants.DOT_SNAPSHOT_DIR in 
all other places in this class?

*TestDistCpProcedure.java*
 Can you use HdfsConstants.DOT_SNAPSHOT_DIR to replace '.snapshot' in this class?

*TestTrashProcedure.java*
{quote}Path src = new Path(nnUri + "/"+getMethodName()+"-src");
 Path dst = new Path(nnUri + "/"+getMethodName()+"-dst");
{quote}
We don't need to use nnUri here because we have already got the FileSystem instance. If we don't want to specify a particular namespace, the URI prefix can be omitted and the default fs will be used.
 We can simplify it to:
{quote}Path src = new Path("/" + getMethodName() + "-src");
 Path dst = new Path("/" + getMethodName() + "-dst");
{quote}


