[jira] [Created] (HDFS-13874) Mass Replication Queue Initializer threads may be created

2018-08-28 Thread Feng Yuan (JIRA)
Feng Yuan created HDFS-13874:


 Summary: Mass Replication Queue Initializer threads may be created
 Key: HDFS-13874
 URL: https://issues.apache.org/jira/browse/HDFS-13874
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Feng Yuan


In the DN registration path, every time a new DN registers, a thread named 
"Replication Queue Initializer" can be kicked off.
See DatanodeManager#checkIfClusterIsNowMultiRack.
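For context, a simplified sketch of the path being described (not the actual 2.6.0 source; field and method names are abbreviated): a newly registered DN that turns a single-rack cluster into a multi-rack one can trigger a full mis-replication scan, which is the work that runs on the "Replication Queue Initializer" thread.

{code}
// Simplified sketch of the suspected path, not verbatim Hadoop source.
void checkIfClusterIsNowMultiRack(DatanodeDescriptor node) {
  // Only the first transition from single-rack to multi-rack should trigger this.
  if (!hasClusterEverBeenMultiRack && networktopology.getNumOfRacks() > 1) {
    hasClusterEverBeenMultiRack = true;
    // Re-checks all blocks for replication; this is what spawns the
    // "Replication Queue Initializer" thread.
    blockManager.processMisReplicatedBlocks();
  }
}
{code}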




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13874) Mass Replication Queue Initializer threads may be created

2018-08-28 Thread Feng Yuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan resolved HDFS-13874.
--
Resolution: Not A Problem

After rechecking the code, I found this was a mistake on my part. It is not a 
problem; closing.

> Mass Replication Queue Initializer threads may be created
> 
>
> Key: HDFS-13874
> URL: https://issues.apache.org/jira/browse/HDFS-13874
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Feng Yuan
>Priority: Major
>
> In the DN registration path, every time a new DN registers, a thread named 
> "Replication Queue Initializer" can be kicked off.
> See DatanodeManager#checkIfClusterIsNowMultiRack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-381) Fix TestKeys#testPutAndGetKeyWithDnRestart

2018-08-28 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-381:
--

 Summary: Fix TestKeys#testPutAndGetKeyWithDnRestart
 Key: HDDS-381
 URL: https://issues.apache.org/jira/browse/HDDS-381
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


TestKeys#testPutAndGetKeyWithDnRestart is failing with the following exception.

The problem is that the scmId is not getting set on the Ozone Datanode on datanode restart.

{code}
 got exception when processing ContainerCommandRequestProto {}: {}
java.lang.NullPointerException: scmId cannot be null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:107)
    at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:258)
    at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:181)
    at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
    at org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:56)
    at org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:50)
    at org.apache.ratis.shaded.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
    at org.apache.ratis.shaded.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:252)
    at org.apache.ratis.shaded.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:629)
    at org.apache.ratis.shaded.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    at org.apache.ratis.shaded.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{code}
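The failing precondition corresponds to a guard roughly like the following (a sketch of the call site implied by the trace, not the exact source); the fix direction is to make sure the restarted datanode re-learns the scmId before any container create is dispatched.

{code}
// Sketch of the guard that throws above; parameter names are illustrative.
public void create(VolumeSet volumeSet, VolumeChoosingPolicy policy, String scmId)
    throws StorageContainerException {
  // Fails with "scmId cannot be null" when the restarted datanode has not yet
  // been (re)initialized with the SCM's cluster id.
  Preconditions.checkNotNull(scmId, "scmId cannot be null");
  // ... container metadata and chunk directories are laid out under the scmId path
}
{code}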




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-382) Remove RatisTestHelper#RatisTestSuite constructor argument and fix checkstyle in ContainerTestHelper, GenericTestUtils

2018-08-28 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-382:


 Summary: Remove RatisTestHelper#RatisTestSuite constructor 
argument and fix checkstyle in ContainerTestHelper, GenericTestUtils
 Key: HDDS-382
 URL: https://issues.apache.org/jira/browse/HDDS-382
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


Follow-up Jira of HDDS-332.
This will fix:
 * The {{RatisTestHelper#RatisTestSuite}} constructor argument can be removed; it 
is not used at all.
 * Unused imports in ContainerTestHelper
 * Unused imports in GenericTestUtils
 * Unused imports in OzoneConfiguration
 * TestBuckets: L98, 99 can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13875) EOFException log spam in Datanode

2018-08-28 Thread Karthik Palanisamy (JIRA)
Karthik Palanisamy created HDFS-13875:
-

 Summary: EOFException log spam in Datanode
 Key: HDFS-13875
 URL: https://issues.apache.org/jira/browse/HDFS-13875
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Ambari checks datanode liveness by simply connecting to the data transfer port. 
This connection is closed after a successful TCP handshake, without any data 
transfer. As a result, the datanode encounters an EOFException when reading an 
encrypted message from the closed socket.

This issue was addressed in 
[HDFS-9572|https://issues.apache.org/jira/browse/HDFS-9572], but it is not handled 
for encrypted data transfer (SASL messages).
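A rough sketch of the kind of handling implied for the SASL path (class and method names here are assumptions, not the actual patch): treat an EOF during the encrypted handshake the same way HDFS-9572 treats it on the plain path, i.e. as an expected early close rather than an error.

{code}
// Hypothetical sketch; saslServer.receive(...) stands in for the DataXceiver's
// SASL negotiation step and is not an exact signature.
try {
  streams = saslServer.receive(peer, socketOut, socketIn, datanodeId);
} catch (EOFException e) {
  // Port-probe connections (e.g. Ambari liveness checks) close right after the
  // TCP handshake with no data transfer; log quietly instead of spamming the log.
  LOG.debug("Peer {} closed the connection during SASL negotiation", peer, e);
  return;
}
{code}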



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/

[Aug 27, 2018 6:55:46 AM] (yqlin) HDFS-13831. Make block increment deletion 
number configurable.
[Aug 27, 2018 9:41:08 AM] (elek) HDDS-334. Update GettingStarted page to 
mention details about Ozone
[Aug 27, 2018 12:59:32 PM] (nanda) HDDS-374. Support to configure container 
size in units lesser than GB.
[Aug 27, 2018 1:51:34 PM] (elek) HDDS-313. Add metrics to containerState 
Machine. Contributed by chencan.
[Aug 27, 2018 2:07:55 PM] (elek) HDDS-227. Use Grpc as the default transport 
protocol for Standalone
[Aug 27, 2018 2:53:06 PM] (haibochen) MAPREDUCE-6861. Add metrics tags for 
ShuffleClientMetrics. (Contributed
[Aug 27, 2018 3:19:38 PM] (xyao) HDDS-377. Make the ScmClient closable and stop 
the started threads.
[Aug 27, 2018 4:22:59 PM] (jzhuge) HADOOP-15633. fs.TrashPolicyDefault: Can't 
create trash directory.
[Aug 27, 2018 5:03:03 PM] (wwei) YARN-8719. Typo correction for yarn 
configuration in
[Aug 27, 2018 5:18:05 PM] (gifuma) HDFS-13849. Migrate logging to slf4j in 
hadoop-hdfs-httpfs,
[Aug 27, 2018 5:32:22 PM] (gifuma) YARN-8705. Refactor the UAM heartbeat thread 
in preparation for
[Aug 27, 2018 5:40:33 PM] (xyao) HDDS-375. ContainerReportHandler should not 
send replication events for
[Aug 27, 2018 6:34:33 PM] (billie) YARN-8675. Remove default hostname for 
docker containers when net=host.
[Aug 27, 2018 7:25:46 PM] (gifuma) HADOOP-15699. Fix some of 
testContainerManager failures in Windows.
[Aug 27, 2018 11:02:35 PM] (weichiu) HDFS-13838. 
WebHdfsFileSystem.getFileStatus() won't return correct




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 
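The two hadoop-yarn-submarine findings above map to standard remedies; the following is a minimal illustration (not the project's actual patch): an explicit-charset writer in try-with-resources addresses both the default-encoding and the unclosed-Writer warnings, and a StringBuilder addresses the concatenation-in-a-loop warning.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class FindbugsRemedies {
  // Instead of "new java.io.FileWriter(File)": explicit charset, and
  // try-with-resources closes the Writer even on checked exceptions.
  static void writeLaunchScript(File script, String contents) throws IOException {
    try (Writer w = new OutputStreamWriter(
        new FileOutputStream(script), StandardCharsets.UTF_8)) {
      w.write(contents);
    }
  }

  // Instead of concatenating strings with + inside a loop.
  static String componentArrayJson(String prefix, int count) {
    StringBuilder sb = new StringBuilder("[");
    for (int i = 0; i < count; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append('"').append(prefix).append(i).append('"');
    }
    return sb.append(']').toString();
  }
}
{code}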

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/882/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/had

[jira] [Created] (HDDS-383) Ozone Client should use closed container info to discard preallocated blocks from closed containers

2018-08-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-383:


 Summary: Ozone Client should use closed container info to discard 
preallocated blocks from closed containers
 Key: HDDS-383
 URL: https://issues.apache.org/jira/browse/HDDS-383
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


When a key write happens in the Ozone client, blocks are preallocated based on the 
initial size given. While the write is in progress, containers can get closed; if 
the remaining preallocated blocks belong to closed containers, they can be 
discarded right away instead of being written to and failing with an exception. 
This Jira aims to address this.
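A minimal sketch of the idea (the block type and container-id accessor are stand-ins, not the Ozone client API): before continuing the write, drop any remaining preallocated blocks whose container is already known to be closed.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.ToLongFunction;

// Hypothetical sketch; B stands in for the client's preallocated-block type.
final class PreallocatedBlockFilter {
  static <B> List<B> discardBlocksInClosedContainers(
      List<B> preallocated, Set<Long> closedContainerIds,
      ToLongFunction<B> containerIdOf) {
    List<B> usable = new ArrayList<>();
    for (B block : preallocated) {
      // Skip blocks whose container closed while the key write was in progress,
      // instead of writing to them and failing with an exception.
      if (!closedContainerIds.contains(containerIdOf.applyAsLong(block))) {
        usable.add(block);
      }
    }
    return usable;
  }
}
{code}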



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-13831) Make block increment deletion number configurable

2018-08-28 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-13831:


Sorry to reopen. There are minor code conflicts in branch-3.0. Will attach 
branch-3.0 patch for recommit check.

> Make block increment deletion number configurable
> -
>
> Key: HDFS-13831
> URL: https://issues.apache.org/jira/browse/HDFS-13831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Ryan Wu
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.2
>
> Attachments: HDFS-13831.001.patch, HDFS-13831.002.patch, 
> HDFS-13831.003.patch, HDFS-13831.004.patch
>
>
> When the NN deletes a large directory, it holds the write lock for a long time. To 
> improve this, we remove the blocks in batches so that other waiters have a chance 
> to acquire the lock. But right now, the batch size is a hard-coded value.
> {code}
>   static int BLOCK_DELETION_INCREMENT = 1000;
> {code}
> We can make this value configurable, so that we can control how often other 
> waiters get a chance to acquire the lock.
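The change amounts to reading that increment from configuration instead of the constant; a minimal sketch follows (the config key name and default shown here are illustrative, not necessarily those in the committed patch).

{code}
// Sketch: replace the hard-coded constant with a configurable value.
// Key name and default are illustrative.
public static final String DFS_NAMENODE_BLOCK_DELETION_INCREMENT_KEY =
    "dfs.namenode.block.deletion.increment";
public static final int DFS_NAMENODE_BLOCK_DELETION_INCREMENT_DEFAULT = 1000;

private int blockDeletionIncrement;

void initBlockDeletionIncrement(org.apache.hadoop.conf.Configuration conf) {
  blockDeletionIncrement = conf.getInt(
      DFS_NAMENODE_BLOCK_DELETION_INCREMENT_KEY,
      DFS_NAMENODE_BLOCK_DELETION_INCREMENT_DEFAULT);
  // Blocks are then removed blockDeletionIncrement at a time, releasing and
  // re-acquiring the write lock between batches so other waiters can make progress.
}
{code}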



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-08-28 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13876:
-

 Summary: HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
 Key: HDFS-13876
 URL: https://issues.apache.org/jira/browse/HDFS-13876
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Siyao Meng
Assignee: Siyao Meng


Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-08-28 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13877:
-

 Summary: HttpFS: Implement GETSNAPSHOTDIFF
 Key: HDFS-13877
 URL: https://issues.apache.org/jira/browse/HDFS-13877
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Siyao Meng
Assignee: Siyao Meng


Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-08-28 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13878:
-

 Summary: HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
 Key: HDFS-13878
 URL: https://issues.apache.org/jira/browse/HDFS-13878
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Siyao Meng


Implement GETSNAPSHOTTABLEDIRECTORYLIST (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13879) FileSystem: Should allowSnapshot() and disallowSnapshot() be part of it?

2018-08-28 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13879:
-

 Summary: FileSystem: Should allowSnapshot() and disallowSnapshot() 
be part of it?
 Key: HDFS-13879
 URL: https://issues.apache.org/jira/browse/HDFS-13879
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Siyao Meng


I wonder whether we should add allowSnapshot() and disallowSnapshot() to the 
FileSystem abstract class.
My rationale is that createSnapshot(), renameSnapshot() and deleteSnapshot() 
are already part of it.

Is there any reason why we wouldn't want to do this?
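If they were added, one plausible shape (a sketch only, mirroring how createSnapshot() is declared on FileSystem today, not a committed API) would be base implementations that throw UnsupportedOperationException, which file systems such as DistributedFileSystem would then override.

{code}
// Sketch only; declared on FileSystem, mirroring createSnapshot().
public void allowSnapshot(Path path) throws IOException {
  throw new UnsupportedOperationException(getClass().getSimpleName()
      + " doesn't support allowSnapshot");
}

public void disallowSnapshot(Path path) throws IOException {
  throw new UnsupportedOperationException(getClass().getSimpleName()
      + " doesn't support disallowSnapshot");
}
{code}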



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-384) Add api to remove handler in EventQueue

2018-08-28 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-384:
---

 Summary: Add api to remove handler in EventQueue
 Key: HDDS-384
 URL: https://issues.apache.org/jira/browse/HDDS-384
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Add an API to remove a registered handler from the EventQueue.
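One possible shape for the API (a hedged sketch; the generics, internal map, and executor handling in the real EventQueue differ): the inverse of handler registration, dropping a handler for a given event type.

{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch; EVENT/HANDLER type parameters and the internal map are
// illustrative, not the real EventQueue internals.
class EventQueueSketch<EVENT, HANDLER> {
  private final Map<EVENT, List<HANDLER>> handlers = new ConcurrentHashMap<>();

  void removeHandler(EVENT event, HANDLER handler) {
    handlers.computeIfPresent(event, (evt, list) -> {
      list.remove(handler);                 // unregister this handler for the event type
      return list.isEmpty() ? null : list;  // drop the mapping entirely when empty
    });
  }
}
{code}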



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-28 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13880:
-

 Summary: Add mechanism to allow certain RPC calls to bypass sync
 Key: HDFS-13880
 URL: https://issues.apache.org/jira/browse/HDFS-13880
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Chen Liang
Assignee: Chen Liang


Currently, every single call to the NameNode is synced, in the sense that the 
NameNode will not process it until the state id catches up. But in certain cases, 
we would like to bypass this check and allow the call to return immediately, 
even when the server state id is not up to date. One such case could be the 
to-be-added new API in HDFS-13749 that requests the current state id. Others may 
include calls that do not promise real-time responses, such as {{getContentSummary}}. 
This Jira is to add a mechanism that allows certain calls to bypass the sync.
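One way such a mechanism could look (a hedged sketch; the annotation name and server-side check are assumptions, not the design that was ultimately committed): mark the protocol methods that may skip the state-id wait, and consult that marker before blocking the call.

{code}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical sketch, not the committed design.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BypassStateIdSync {}

class SyncCheck {
  // The server only makes a call wait for the namespace state id to catch up
  // when its target method is not marked to bypass the sync.
  static boolean shouldWaitForStateId(Method rpcMethod) {
    return !rpcMethod.isAnnotationPresent(BypassStateIdSync.class);
  }
}
{code}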



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13881) Export or Import a dirImage

2018-08-28 Thread maobaolong (JIRA)
maobaolong created HDFS-13881:
-

 Summary: Export or Import a dirImage
 Key: HDFS-13881
 URL: https://issues.apache.org/jira/browse/HDFS-13881
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 3.1.1
Reporter: maobaolong
Assignee: maobaolong






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org