[jira] [Resolved] (HDFS-16258) HDFS-13671 breaks TestBlockManager in branch-3.2

2021-10-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16258.
--
Resolution: Cannot Reproduce

It passed in the latest qbt job. Closing.
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/15/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/

Please feel free to reopen this if the test fails in a specific environment.

> HDFS-13671 breaks TestBlockManager in branch-3.2
> 
>
> Key: HDFS-16258
> URL: https://issues.apache.org/jira/browse/HDFS-16258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> TestBlockManager in branch-3.2 has two failed tests: 
> * testDeleteCorruptReplicaWithStatleStorages
> * testBlockManagerMachinesArray
> Looks like it was broken by HDFS-13671. CC: [~brahmareddy]
> Branch-3.3 seems fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15018) DataNode doesn't shutdown although the number of failed disks reaches dfs.datanode.failed.volumes.tolerated

2021-10-06 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-15018.
-
Resolution: Duplicate

> DataNode doesn't shutdown although the number of failed disks reaches 
> dfs.datanode.failed.volumes.tolerated
> ---
>
> Key: HDFS-15018
> URL: https://issues.apache.org/jira/browse/HDFS-15018
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.3
> Environment: HDP-2.6.5
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: thread_dumps.txt
>
>
> In our case, we set dfs.datanode.failed.volumes.tolerated=0, but a DataNode 
> didn't shut down when a disk in the DataNode host failed for some reason.
> The following log messages in the DataNode log indicate that the DataNode 
> detected the disk failure, but the DataNode didn't shut down:
> {code}
> 2019-09-17T13:15:43.262-0400 WARN 
> org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskErrorAsync callback 
> got 1 failed volumes: [/data2/hdfs/current]
> 2019-09-17T13:15:43.262-0400 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockScanner: Removing scanner for 
> volume /data2/hdfs (StorageID DS-329dec9d-a476-4334-9570-651a7e4d1f44)
> 2019-09-17T13:15:43.263-0400 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: 
> VolumeScanner(/data2/hdfs, DS-329dec9d-a476-4334-9570-651a7e4d1f44) exiting.
> {code}
> Looking at the HDFS code, it looks like when the DataNode detects a disk 
> failure, the DataNode waits until the volume reference of the disk is released.
> https://github.com/hortonworks/hadoop/blob/HDP-2.6.5.0-292-tag/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java#L246
> I suspect that the volume reference is not released after the failure 
> detection, but I'm not sure of the reason.
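> As a self-contained sketch of the waiting pattern involved (simplified; the 
> real FsVolumeList logic differs in details), the shutdown path blocks until 
> every reference to the failed volume has been released:
> {code}
> import java.util.concurrent.atomic.AtomicInteger;
>
> public class VolumeRemovalWaiter {
>   private final AtomicInteger volumeRefCount = new AtomicInteger(1);
>   private final Object monitor = new Object();
>
>   /** Called by readers/writers when they close their volume reference. */
>   public void releaseReference() {
>     if (volumeRefCount.decrementAndGet() == 0) {
>       synchronized (monitor) { monitor.notifyAll(); }
>     }
>   }
>
>   /** Blocks, like pool-4-thread-1 in the dump below, until refs hit 0;
>    *  if some holder never releases its reference, this never returns and
>    *  the DataNode never shuts down. */
>   public void waitVolumeRemoved(long sleepMillis) throws InterruptedException {
>     synchronized (monitor) {
>       while (volumeRefCount.get() > 0) {
>         monitor.wait(sleepMillis);  // TIMED_WAITING (on object monitor)
>       }
>     }
>   }
> }
> {code}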
> We also took thread dumps while the issue was happening. It looks like the 
> following thread is waiting for the volume reference of the disk to be 
> released:
> {code}
> "pool-4-thread-1" #174 daemon prio=5 os_prio=0 tid=0x7f9e7c7bf800 
> nid=0x8325 in Object.wait() [0x7f9e629cb000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(FsVolumeList.java:262)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.handleVolumeFailures(FsVolumeList.java:246)
> - locked <0x000670559278> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.handleVolumeFailures(FsDatasetImpl.java:2178)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.handleVolumeFailures(DataNode.java:3410)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$100(DataNode.java:248)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$4.call(DataNode.java:2013)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.invokeCallback(DatasetVolumeChecker.java:394)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.cleanup(DatasetVolumeChecker.java:387)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onFailure(DatasetVolumeChecker.java:370)
> at com.google.common.util.concurrent.Futures$6.run(Futures.java:977)
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture.executeListener(AbstractFuture.java:991)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture.complete(AbstractFuture.java:885)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture.setException(AbstractFuture.java:739)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.TimeoutFuture$Fire.run(TimeoutFuture.java:137)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-10-06 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/

[Oct 5, 2021 4:16:42 PM] (noreply) HDFS-16250. Refactor AllowSnapshotMock using 
GMock (#3513)
[Oct 5, 2021 5:17:05 PM] (noreply) HADOOP-17947. Additional element types for 
VisibleForTesting (ADDENDUM) (#3521)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestViewDistributedFileSystemContract 
   hadoop.hdfs.TestSnapshotCommands 
   hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes 
   hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.web.TestWebHdfsFileSystemContract 
   hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure 
   hadoop.yarn.csi.client.TestCsiClient 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-compile-javac-root.txt
 [364K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/results-javadoc-javadoc-root.txt
 [408K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [584K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 [104K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [24K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/650/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer.txt
 [24K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-16262) Async refresh of cached locations in DFSInputStream

2021-10-06 Thread Bryan Beaudreault (Jira)
Bryan Beaudreault created HDFS-16262:


 Summary: Async refresh of cached locations in DFSInputStream
 Key: HDFS-16262
 URL: https://issues.apache.org/jira/browse/HDFS-16262
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bryan Beaudreault
Assignee: Bryan Beaudreault


HDFS-15119 added the ability to invalidate cached block locations in 
DFSInputStream. As written, the feature will affect all DFSInputStreams 
regardless of whether they need it or not. The invalidation also applies only 
on the next request, which must then pay the cost of calling openInfo before 
reading the data.

I'm working on a feature for HBase which enables efficient healing of locality 
through Balancer-style low level block moves (HBASE-26250). I'd like to utilize 
the idea started in HDFS-15119 in order to update DFSInputStreams after blocks 
have been moved to local hosts.

I was considering using the feature as is, but some of our clusters are quite 
large and I'm concerned about the impact on the namenode:
 * We have some clusters with over 350k StoreFiles, so that'd be 350k 
DFSInputStreams. With such a large number and very active usage, having the 
refresh be in-line makes it too hard to ensure we don't DDoS the NameNode.
 * Currently we need to pay the price of openInfo the next time a 
DFSInputStream is invoked. Moving that async would minimize the latency hit. 
Also, some StoreFiles might be far less frequently accessed, so they may live 
on for a long time before ever refreshing. We'd like to be able to know that 
all DFSInputStreams are refreshed by a given time.
 * We may have 350k files, but only a small percentage of them are ever 
non-local at a given time. Refreshing only if necessary will save a lot of work.

In order to make this as painless to end users as possible, I'd like to:
 * Update the implementation to utilize an async thread for managing refreshes. 
This will give more control over rate limiting across all DFSInputStreams in a 
DFSClient, and also ensure that all DFSInputStreams are refreshed.
 * Only refresh files which are lacking a local replica or have known deadNodes 
to be cleaned up (a rough sketch of this refresher follows below)
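
A rough, self-contained sketch of the proposed refresher (all names here, 
including LocatedBlocksRefresher and the Refreshable stand-in for 
DFSInputStream, are illustrative rather than actual HDFS APIs):

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class LocatedBlocksRefresher implements Runnable {

  /** Stand-in for DFSInputStream. */
  public interface Refreshable {
    boolean needsRefresh();        // lacks a local replica, or has deadNodes
    void refreshBlockLocations();  // openInfo-style re-fetch of locations
  }

  private final Set<Refreshable> streams = ConcurrentHashMap.newKeySet();
  private final long roundIntervalMs;    // bounds how stale any stream can get
  private final long perRefreshDelayMs;  // crude rate limit on NameNode calls

  public LocatedBlocksRefresher(long roundIntervalMs, long perRefreshDelayMs) {
    this.roundIntervalMs = roundIntervalMs;
    this.perRefreshDelayMs = perRefreshDelayMs;
  }

  public void register(Refreshable s)   { streams.add(s); }
  public void unregister(Refreshable s) { streams.remove(s); }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        for (Refreshable s : streams) {
          if (s.needsRefresh()) {  // skip the already-local majority
            s.refreshBlockLocations();
            TimeUnit.MILLISECONDS.sleep(perRefreshDelayMs);
          }
        }
        TimeUnit.MILLISECONDS.sleep(roundIntervalMs);
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();  // exit cleanly on client close
    }
  }
}
{code}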

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16261) Configurable grace period around deletion of invalidated blocks

2021-10-06 Thread Bryan Beaudreault (Jira)
Bryan Beaudreault created HDFS-16261:


 Summary: Configurable grace period around deletion of invalidated 
blocks
 Key: HDFS-16261
 URL: https://issues.apache.org/jira/browse/HDFS-16261
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Bryan Beaudreault
Assignee: Bryan Beaudreault


When a block is moved with REPLACE_BLOCK, the new location is recorded in the 
NameNode and the NameNode instructs the old host to invalidate the block 
using DNA_INVALIDATE. As it stands today, this invalidation is async but tends 
to happen relatively quickly.

I'm working on a feature for HBase which enables efficient healing of locality 
through Balancer-style low level block moves. One issue is that HBase tends to 
keep long-running DFSInputStreams open, and moving blocks out from under them 
causes lots of WARNs in the RegionServer and increases long-tail latencies due 
to the necessary retries in the DFSClient.

One way I'd like to fix this is to provide a configurable grace period on async 
invalidations. This would give the DFSClient enough time to refresh block 
locations before hitting any errors.
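
As a rough sketch of the mechanism I have in mind (the class, method, and 
property names below are illustrative assumptions, not an existing HDFS API):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class GracefulInvalidator {
  // Illustrative property name, not an actual HDFS configuration key.
  public static final String GRACE_PERIOD_KEY =
      "dfs.datanode.invalidate.grace.period.ms";

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final long gracePeriodMs;

  public GracefulInvalidator(long gracePeriodMs) {
    this.gracePeriodMs = gracePeriodMs;
  }

  /** Called when a DNA_INVALIDATE command names a block replica. */
  public void invalidateLater(Runnable deleteReplica) {
    if (gracePeriodMs <= 0) {
      deleteReplica.run();  // grace period disabled: today's behavior
    } else {
      // Defer the deletion so open DFSInputStreams have time to refresh
      // their cached block locations before the replica disappears.
      scheduler.schedule(deleteReplica, gracePeriodMs, TimeUnit.MILLISECONDS);
    }
  }
}
{code}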



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16260) Make hdfs_deleteSnapshot tool cross platform

2021-10-06 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16260:
-

 Summary: Make hdfs_deleteSnapshot tool cross platform
 Key: HDFS-16260
 URL: https://issues.apache.org/jira/browse/HDFS-16260
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for hdfs_deleteSnapshot use *getopt* for parsing the command 
line arguments. getopt is available only on Linux and thus isn't cross 
platform. We need to replace getopt with *boost::program_options* to make this 
tool cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16254) Cleanup protobuf on exit of hdfs_allowSnapshot

2021-10-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-16254.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Cleanup protobuf on exit of hdfs_allowSnapshot
> --
>
> Key: HDFS-16254
> URL: https://issues.apache.org/jira/browse/HDFS-16254
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We need to move the call to google::protobuf::ShutdownProtobufLibrary() to the 
> main method instead of 
> [AllowSnapshot::HandlePath|https://github.com/apache/hadoop/blob/35a8d48872a13438d4c4199b6ef5b902105e2eb2/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-allow-snapshot/hdfs-allow-snapshot.cc#L116-L117]
>  since we want the clean-up tasks to run only when the program exits.
> The current implementation doesn't cause any issues since 
> AllowSnapshot::HandlePath is called only once.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16239) XAttr#toString doesnt print the attribute value in readable format

2021-10-06 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C resolved HDFS-16239.
---
Resolution: Invalid

For printing, have we considered using the XAttrCodec APIs?

It's not necessary for XAttr#toString itself to print the value.
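
For example, a value can be rendered readable like this (a minimal sketch; the 
demo class is illustrative, but XAttrCodec and its TEXT/HEX/BASE64 encodings 
are part of org.apache.hadoop.fs):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.XAttrCodec;

public class XAttrPrintDemo {
  public static void main(String[] args) throws IOException {
    byte[] value = "REP[2]".getBytes(StandardCharsets.UTF_8);
    // TEXT renders printable values as a quoted string; HEX and BASE64
    // are available for binary values.
    System.out.println(XAttrCodec.encodeValue(value, XAttrCodec.TEXT));
  }
}
{code}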

> XAttr#toString doesnt print the attribute value in readable format
> --
>
> Key: HDFS-16239
> URL: https://issues.apache.org/jira/browse/HDFS-16239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> org.apache.hadoop.fs.XAttr#toString prints the value of the attribute in bytes:
> return "XAttr [ns=" + ns + ", name=" + name + ", value="
>  + Arrays.toString(value) + "]";
> XAttr [ns=SYSTEM, name=az.expression, value=[82, 69, 80, 91, 50, 93..]
> The value should be converted to a String rather than printed as an array of 
> bytes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-10-06 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/

[Oct 4, 2021 8:16:40 PM] (Eric Payne) YARN-8127. Resource leak when async 
scheduling is enabled. Contributed by Tao Yang.




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.net.TestClusterTopology 
   hadoop.fs.TestTrash 
   hadoop.fs.TestFileUtil 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.tools.TestDistCpSystem 
   hadoop.tools.util.TestProducerConsumer 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-mvnsite-root.txt
  [584K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-javadoc-root.txt
  [32K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [236K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [124K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/442/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   

[jira] [Created] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)

2021-10-06 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDFS-16259:


 Summary: Catch and re-throw sub-classes of AccessControlException 
thrown by any permission provider plugins (eg Ranger)
 Key: HDFS-16259
 URL: https://issues.apache.org/jira/browse/HDFS-16259
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


When a permission provider plugin is enabled (e.g. Ranger) there are some 
scenarios where it can throw a sub-class of AccessControlException (e.g. 
RangerAccessControlException). If this exception is allowed to propagate up the 
stack, it can cause problems in the HDFS client when it unwraps the remote 
exception containing the AccessControlException sub-class.

Ideally, we should make AccessControlException final so it cannot be 
sub-classed, but that would be a breaking change at this point. Therefore I 
believe the safest thing to do is to catch any AccessControlException that 
comes out of the permission enforcer plugin and re-throw a plain 
AccessControlException instead.
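
A minimal sketch of the proposed guard (AuthorizationPlugin is a hypothetical 
stand-in for the permission-provider interface, not an actual HDFS API):

{code}
import org.apache.hadoop.security.AccessControlException;

public class PermissionCheckGuard {

  /** Hypothetical stand-in for the permission-provider plugin interface. */
  public interface AuthorizationPlugin {
    void checkPermission(String path, String user) throws AccessControlException;
  }

  private final AuthorizationPlugin plugin;

  public PermissionCheckGuard(AuthorizationPlugin plugin) {
    this.plugin = plugin;
  }

  public void checkPermission(String path, String user)
      throws AccessControlException {
    try {
      plugin.checkPermission(path, user);
    } catch (AccessControlException ace) {
      if (ace.getClass() == AccessControlException.class) {
        throw ace;  // already the base class, pass it through
      }
      // Flatten plugin-specific sub-classes (e.g. RangerAccessControlException)
      // into the base class, preserving the message, so the DFSClient can
      // always unwrap the remote exception.
      throw new AccessControlException(ace.getMessage());
    }
  }
}
{code}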



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Hadoop-3.2.3 Release Update

2021-10-06 Thread Wei-Chiu Chuang
Hi, to raise awareness:
it looks like reverting the FoldedTreeSet (HDFS-13671) breaks TestBlockManager
in branch-3.2. Branch-3.3 is good.

tracking jira: HDFS-16258 

On Tue, Oct 5, 2021 at 8:45 PM Brahma Reddy Battula 
wrote:

> Hi Akira,
>
> Thanks for your email!!
>
> I am evaluating the CVEs which need to go into this release.
>
> Will update soon!!
>
>
> On Tue, 5 Oct 2021 at 1:46 PM, Akira Ajisaka  wrote:
>
> > Hi Brahma,
> >
> > How is the release process going? Is there any blocker for the RC?
> >
> > -Akira
> >
> > On Wed, Sep 22, 2021 at 7:37 PM Xiaoqiao He  wrote:
> >
> > > Hi Brahma,
> > >
> > > The feature 'BPServiceActor processes commands from NameNode
> > > asynchronously' is now ready for both branch-3.2 and branch-3.2.3. While
> > > cherry-picking there was only a minor conflict, so I checked it in
> > > directly. BTW, I ran some unit tests and built a pseudo cluster to
> > > verify; it seems to work fine.
> > > FYI.
> > >
> > > Regards,
> > > - He Xiaoqiao
> > >
> > > On Thu, Sep 16, 2021 at 10:52 PM Brahma Reddy Battula <
> bra...@apache.org
> > >
> > > wrote:
> > >
> > >> Please go ahead. Let me know any help required on review.
> > >>
> > >> On Tue, Sep 14, 2021 at 6:57 PM Xiaoqiao He 
> > wrote:
> > >>
> > >>> Hi Brahma,
> > >>>
> > >>> I plan to include HDFS-14997 and related JIRAs if possible. I have
> > >>> resolved the conflicts and verified them locally.
> > >>> It will include: HDFS-14997 HDFS-15075 HDFS-15651 HDFS-15113.
> > >>> I would like to hear more responses on whether we have enough time to
> > >>> wait for it to be ready.
> > >>> Thanks.
> > >>>
> > >>> Best Regards,
> > >>> - He Xiaoqiao
> > >>>
> > >>> On Tue, Sep 14, 2021 at 3:39 PM Xiaoqiao He 
> > wrote:
> > >>>
> >  Hi Brahma, HDFS-15160 has been checked in to branch-3.2 & branch-3.2.3. FYI.
> > 
> >  On Tue, Sep 14, 2021 at 3:52 AM Brahma Reddy Battula <
> > bra...@apache.org>
> >  wrote:
> > 
> > > Hi All,
> > >
> > > Waiting for the following jira to be committed to hadoop-3.2.3; mostly
> > > this can be done by this week, then I will try to create the RC next if
> > > there is no objection.
> > >
> > > https://issues.apache.org/jira/browse/HDFS-15160
> > >
> > >
> > >
> > > On Mon, Aug 16, 2021 at 2:22 PM Brahma Reddy Battula <
> > > bra...@apache.org>
> > > wrote:
> > >
> > > > @Akira Ajisaka   and @Masatake Iwasaki
> > > > 
> > > > Looks like these are all build-related issues when you try with
> > > > Bigtop. We can discuss and prioritize this. Will connect with you guys.
> > > >
> > > > On Mon, Aug 16, 2021 at 1:43 PM Masatake Iwasaki <
> > > > iwasak...@oss.nttdata.co.jp> wrote:
> > > >
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch2-exclude-spotbugs-annotations.diff
> > > >> >
> > > >> > This is for building hadoop-3.2.2 against zookeeper-3.4.14.
> > > >> > We do not usually see the issue since branch-3.2 uses
> > > >> > zookeeper-3.4.13, and it would be harmless to add the exclusion even
> > > >> > for zookeeper-3.4.13.
> > > >>
> > > >> I filed HADOOP-17849 for this.
> > > >>
> > > >> On 2021/08/16 12:02, Masatake Iwasaki wrote:
> > > >> > Thanks for bringing this up, Akira. Let me explain some
> > > background.
> > > >> >
> > > >> >
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch2-exclude-spotbugs-annotations.diff
> > > >> >
> > > >> > This is for building hadoop-3.2.2 against zookeeper-3.4.14.
> > > >> > We do not usually see the issue since branch-3.2 uses
> > > >> > zookeeper-3.4.13, and it would be harmless to add the exclusion even
> > > >> > for zookeeper-3.4.13.
> > > >> >
> > > >> >
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch3-fix-broken-dir-detection.diff
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch5-fix-kms-shellprofile.diff
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch6-fix-httpfs-sh.diff
> > > >> >
> > > >> > These are relevant to the directory structure used by the Bigtop
> > > >> > package. If the fix does not break the tarball dist,
> > > >> > it would be nice to have these in Hadoop too.
> > > >> >
> > > >> >
> > > >> >> -
> > > >>
> > >
> >
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch7-remove-phantomjs-in-yarn-ui.diff
> > > >> >
> > > >> > This is for 

[jira] [Created] (HDFS-16258) HDFS-13671 breaks TestBlockManager in branch-3.2

2021-10-06 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDFS-16258:
--

 Summary: HDFS-13671 breaks TestBlockManager in branch-3.2
 Key: HDFS-16258
 URL: https://issues.apache.org/jira/browse/HDFS-16258
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.2.3
Reporter: Wei-Chiu Chuang


TestBlockManager in branch-3.2 has two failed tests: 
* testDeleteCorruptReplicaWithStatleStorages
* testBlockManagerMachinesArray

Looks like it was broken by HDFS-13671. CC: [~brahmareddy]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver

2021-10-06 Thread Janus Chow (Jira)
Janus Chow created HDFS-16257:
-

 Summary: [HDFS] [RBF] Guava cache performance issue in Router 
MountTableResolver
 Key: HDFS-16257
 URL: https://issues.apache.org/jira/browse/HDFS-16257
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.10.1
Reporter: Janus Chow
Assignee: Janus Chow


Branch 2.10.1 uses Guava 11.0.2, which has a bug that affects cache 
performance, as mentioned in HDFS-13821.

Since upgrading the Guava version seems too disruptive, this ticket is to add a 
configuration setting when initializing the cache to work around the issue.
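
As a rough illustration of the kind of knob I mean (the property name and the 
choice of concurrencyLevel as the setting are assumptions for illustration, 
not the actual patch):

{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.conf.Configuration;

public class LocationCacheFactory {
  // Illustrative property name only.
  static final String CACHE_CONCURRENCY_KEY =
      "dfs.federation.router.mount-table.cache.concurrency";

  public static Cache<String, String> build(Configuration conf, long maxSize) {
    int concurrency = conf.getInt(CACHE_CONCURRENCY_KEY, 4);
    return CacheBuilder.newBuilder()
        .maximumSize(maxSize)
        // On Guava 11.0.2 eviction bookkeeping can contend on cache segments;
        // making the segment count configurable lets operators tune around it.
        .concurrencyLevel(concurrency)
        .build();
  }
}
{code}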



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org