[jira] [Created] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-20 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13838:
-

 Summary: WebHdfsFileSystem.getFileStatus() won't return correct 
"snapshot enabled" status
 Key: HDFS-13838
 URL: https://issues.apache.org/jira/browse/HDFS-13838
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.3, 3.1.0
Reporter: Siyao Meng
Assignee: Siyao Meng


"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, [~jojochuang] found that WebHdfsFileSystem.getFileStatus() does not 
return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() does not check for and append the "snapshot 
enabled" flag to the resulting HdfsFileStatus object.

Proof: in TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
assertions:

```java
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
// check if snapshot status is enabled
assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
```

The first assertion passes as expected, while the second fails for the reason 
described above.
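A minimal sketch of the fix implied by the report: the JSON-to-status conversion must read the flag and carry it into the returned object. The class and method below are hypothetical, simplified stand-ins, not the actual Hadoop signatures.

```java
import java.util.Map;

// Hypothetical, simplified stand-ins for JsonUtilClient.toFileStatus() and
// HdfsFileStatus -- illustrative only, not the real Hadoop types.
public class SnapshotFlagSketch {
    static class FileStatus {
        private boolean snapshotEnabled;
        void setSnapshotEnabled(boolean b) { snapshotEnabled = b; }
        boolean isSnapshotEnabled() { return snapshotEnabled; }
    }

    // The fix in spirit: check the "snapshotEnabled" key in the parsed JSON
    // map and append it to the resulting status object instead of dropping it.
    static FileStatus toFileStatus(Map<String, ?> json) {
        FileStatus status = new FileStatus();
        Object flag = json.get("snapshotEnabled");
        if (flag instanceof Boolean && (Boolean) flag) {
            status.setSnapshotEnabled(true);
        }
        return status;
    }

    public static void main(String[] args) {
        // prints "true" once the flag is propagated
        System.out.println(toFileStatus(Map.of("snapshotEnabled", true)).isSnapshotEnabled());
    }
}
```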



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-366) Update functions impacted by SCM chill mode in StorageContainerLocationProtocol

2018-08-20 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-366:
---

 Summary: Update functions impacted by SCM chill mode in 
StorageContainerLocationProtocol
 Key: HDDS-366
 URL: https://issues.apache.org/jira/browse/HDDS-366
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol






[jira] [Created] (HDFS-13837) hdfs.TestDistributedFileSystem.testDFSClient: test is flaky

2018-08-20 Thread Shweta (JIRA)
Shweta created HDFS-13837:
-

 Summary: hdfs.TestDistributedFileSystem.testDFSClient: test is 
flaky
 Key: HDFS-13837
 URL: https://issues.apache.org/jira/browse/HDFS-13837
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Shweta
Assignee: Shweta
 Attachments: TestDistributedFileSystem.testDFSClient_Stderr_log

Stack Trace :
 java.lang.AssertionError
 at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:449)
  Stdout:

{noformat}
[truncated]kmanagement.BlockManager 
(BlockManager.java:processMisReplicatesAsync(3385)) - Number of blocks being 
written= 0
2018-07-31 21:42:46,675 [Reconstruction Queue Initializer] INFO  
hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3388)) - STATE* 
Replication Queue initialization scan for invalid, over- and under-replicated 
blocks completed in 5 msec
2018-07-31 21:42:46,676 [IPC Server Responder] INFO  ipc.Server 
(Server.java:run(1307)) - IPC Server Responder: starting
2018-07-31 21:42:46,676 [IPC Server listener on port1] INFO  ipc.Server 
(Server.java:run(1146)) - IPC Server listener on port1: starting
2018-07-31 21:42:46,678 [main] INFO  namenode.NameNode 
(NameNode.java:startCommonServices(831)) - NameNode RPC up at: 
localhost/x.x.x.x:port1
2018-07-31 21:42:46,678 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:startActiveServices(1230)) - Starting services required for 
active state
2018-07-31 21:42:46,678 [main] INFO  namenode.FSDirectory 
(FSDirectory.java:updateCountForQuota(758)) - Initializing quota with 4 
thread(s)
2018-07-31 21:42:46,679 [main] INFO  namenode.FSDirectory 
(FSDirectory.java:updateCountForQuota(767)) - Quota initialization completed in 
0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2018-07-31 21:42:46,682 [CacheReplicationMonitor(752355)] INFO  
blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) 
- Starting CacheReplicationMonitor with interval 3 milliseconds
2018-07-31 21:42:46,686 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:startDataNodes(1599)) - Starting DataNode 0 with 
dfs.datanode.data.dir: 
[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
2018-07-31 21:42:46,687 [main] INFO  checker.ThrottledAsyncChecker 
(ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
2018-07-31 21:42:46,687 [main] INFO  checker.ThrottledAsyncChecker 
(ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
2018-07-31 21:42:46,695 [main] INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2018-07-31 21:42:46,695 [main] INFO  common.Util 
(Util.java:isDiskStatsEnabled(395)) - 
dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
profiling
2018-07-31 21:42:46,695 [main] INFO  datanode.BlockScanner 
(BlockScanner.java:(184)) - Initialized block scanner with 
targetBytesPerSec 1048576
2018-07-31 21:42:46,696 [main] INFO  datanode.DataNode 
(DataNode.java:(496)) - Configured hostname is x.x.x.x
2018-07-31 21:42:46,696 [main] INFO  common.Util 
(Util.java:isDiskStatsEnabled(395)) - 
dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
profiling
2018-07-31 21:42:46,696 [main] INFO  datanode.DataNode 
(DataNode.java:startDataNode(1385)) - Starting DataNode with maxLockedMemory = 0
2018-07-31 21:42:46,697 [main] INFO  datanode.DataNode 
(DataNode.java:initDataXceiver(1142)) - Opened streaming server at 
/x.x.x.x:port2
2018-07-31 21:42:46,697 [main] INFO  datanode.DataNode 
(DataXceiverServer.java:(78)) - Balancing bandwidth is 10485760 bytes/s
2018-07-31 21:42:46,697 [main] INFO  datanode.DataNode 
(DataXceiverServer.java:(79)) - Number threads for balancing is 50
2018-07-31 21:42:46,699 [main] INFO  server.AuthenticationFilter 
(AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize 
FileSignerSecretProvider, falling back to use random secrets.
2018-07-31 21:42:46,699 [main] INFO  http.HttpRequestLog 
(HttpRequestLog.java:getRequestLog(81)) - Http request log for 
http.requests.datanode is not defined
2018-07-31 21:42:46,700 [main] INFO  http.HttpServer2 
(HttpServer2.java:addGlobalFilter(923)) - Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-07-31 21:42:46,701 [main] INFO  http.HttpServer2 
(HttpServer2.java:addFilter(896)) - Added filter static_user_filter 
{noformat}

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-20 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/564/

[Aug 20, 2018 4:37:51 AM] (aw) YETUS-657. volumes on non-existent files creates 
a directory
[Aug 20, 2018 6:50:29 AM] (brahma) HDFS-13790. RBF: Move ClientProtocol APIs to 
its own module. Contributed
[Aug 20, 2018 8:07:58 AM] (msingh) HDDS-353. Multiple delete Blocks tests are 
failing consistently.


ERROR: File 'out/email-report.txt' does not exist


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field: FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component): new java.io.FileWriter(File) At 
YarnServiceJobSubmitter.java:[line 192] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component) may fail to clean up java.io.Writer on checked exception; 
the obligation to clean up the resource created at 
YarnServiceJobSubmitter.java:[line 192] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/874/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   

[jira] [Created] (HDFS-13836) RBF: Handle the exception when the mount table znode has a null value.

2018-08-20 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13836:
--

 Summary: RBF: Handle the exception when the mount table znode 
has a null value.
 Key: HDFS-13836
 URL: https://issues.apache.org/jira/browse/HDFS-13836
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, hdfs
Affects Versions: 3.1.0, 3.0.0, 2.9.0
Reporter: yanghuafeng
Assignee: yanghuafeng


While a mount table entry was being added, the router server was terminated. 
The following error message shows in the log:

 2018-08-20 14:18:32,404 ERROR 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
 Cannot get data for 0SLASH0testzk: null. 

The reason is that the router server had created the znode but was terminated 
before setting its data. The method zkManager.getStringData(path, stat) in 
StateStoreZooKeeperImpl then throws an NPE when the znode at that path has 
null data, which causes both adding the same mount table entry again and 
deleting the existing znode to fail.
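A rough sketch of the guard the report suggests. The real zkManager/Curator signatures differ; this stand-in only illustrates the null check, since a znode created without data returns null rather than an empty byte array.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the null guard StateStoreZooKeeperImpl needs;
// not the actual Hadoop signature.
public class ZnodeDataSketch {
    // ZooKeeper returns null (not an empty array) for a znode that was
    // created but never had data set; dereferencing it is the NPE.
    static String getStringData(byte[] raw) {
        if (raw == null) {
            return null; // let the caller skip or repair the half-written record
        }
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(getStringData(null)); // prints "null" instead of throwing
        System.out.println(getStringData("entry".getBytes(StandardCharsets.UTF_8)));
    }
}
```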






[jira] [Created] (HDDS-365) Implement flushStateMachineData for containerStateMachine

2018-08-20 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-365:


 Summary: Implement flushStateMachineData for containerStateMachine
 Key: HDDS-365
 URL: https://issues.apache.org/jira/browse/HDDS-365
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


With RATIS-295, a new stateMachine API called flushStateMachineData has been 
introduced. This API needs to be implemented in ContainerStateMachine so that 
when Ratis flushes the actual log file, the corresponding stateMachineData is 
flushed as well.
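A toy model of the contract only, under the assumption that the RATIS-295 hook has roughly the shape CompletableFuture<Void> flushStateMachineData(long index); the actual Ratis interface and ContainerStateMachine internals may differ. The point it illustrates: buffered per-index chunk data must become durable up to the same index the log flush covers.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative sketch only -- names and signatures are assumptions,
// not the real ContainerStateMachine.
public class FlushSketch {
    // Pretend buffer of state-machine (chunk) data keyed by log index.
    private final ConcurrentSkipListMap<Long, byte[]> buffered = new ConcurrentSkipListMap<>();
    private volatile long flushedUpTo = -1;

    void writeStateMachineData(long index, byte[] data) {
        buffered.put(index, data);
    }

    // Called from the Ratis log-flush path; must not complete until the
    // state-machine data for all entries <= index is durable as well.
    CompletableFuture<Void> flushStateMachineData(long index) {
        buffered.headMap(index, true).clear(); // stand-in for fsync of chunk files
        flushedUpTo = Math.max(flushedUpTo, index);
        return CompletableFuture.completedFuture(null);
    }

    long flushedUpTo() { return flushedUpTo; }

    public static void main(String[] args) {
        FlushSketch sm = new FlushSketch();
        sm.writeStateMachineData(1, new byte[]{1});
        sm.writeStateMachineData(2, new byte[]{2});
        sm.flushStateMachineData(2).join();
        System.out.println(sm.flushedUpTo()); // prints 2
    }
}
```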






[jira] [Created] (HDFS-13835) Unable to Add files after changing the order in RBF

2018-08-20 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-13835:
---

 Summary: Unable to Add files after changing the order in RBF
 Key: HDFS-13835
 URL: https://issues.apache.org/jira/browse/HDFS-13835
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch
Assignee: venkata ram kumar ch


When a mount point points to multiple subclusters, the default order is 
HASH.

But after changing the order from HASH to RANDOM, I am unable to add files to 
that mount point.


