[jira] [Created] (HDFS-15112) RBF: do not return FileNotFoundException when a subcluster is unavailable
Íñigo Goiri created HDFS-15112:
-----------------------------------

             Summary: RBF: do not return FileNotFoundException when a subcluster is unavailable
                 Key: HDFS-15112
                 URL: https://issues.apache.org/jira/browse/HDFS-15112
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Íñigo Goiri

If we have a mount point using HASH_ALL across two subclusters and one of them is down, we may return FileNotFoundException even though the file may simply reside in the unavailable subcluster. Instead of FileNotFoundException, we should return an error that makes clear the subcluster is unavailable.
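A minimal sketch of the proposed behavior (the {{Subcluster}} interface, {{isReachable()}} probe, and {{SubclusterUnavailableException}} below are hypothetical names for illustration, not the actual Router code): when a path resolves to nothing but at least one subcluster of the mount point was unreachable, surface the unavailability instead of claiming the file does not exist.

{code:java}
// Hypothetical sketch only; these types are illustrative, not the RBF API.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

interface FileStatusLike {}

interface Subcluster {
  boolean isReachable();                       // illustrative availability probe
  FileStatusLike getFileInfo(String path) throws IOException;
}

class SubclusterUnavailableException extends IOException {
  SubclusterUnavailableException(String msg) { super(msg); }
}

class MountPointResolver {
  /** Resolve a path across the subclusters of a HASH_ALL mount point. */
  FileStatusLike getFileInfo(String path, List<Subcluster> subclusters)
      throws IOException {
    boolean anyUnavailable = false;
    for (Subcluster sc : subclusters) {
      if (!sc.isReachable()) {
        anyUnavailable = true;                 // remember, but keep looking
        continue;
      }
      FileStatusLike status = sc.getFileInfo(path);
      if (status != null) {
        return status;                         // found in a live subcluster
      }
    }
    if (anyUnavailable) {
      // The file may live in the unreachable subcluster, so do not
      // claim it does not exist.
      throw new SubclusterUnavailableException(
          "Cannot resolve " + path + ": one or more subclusters are unavailable");
    }
    throw new FileNotFoundException(path);
  }
}
{code}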
[jira] [Created] (HDFS-15111) start / stopStandbyServices() should log which service it is transitioning to/from.
Konstantin Shvachko created HDFS-15111:
---------------------------------------

             Summary: start / stopStandbyServices() should log which service it is transitioning to/from.
                 Key: HDFS-15111
                 URL: https://issues.apache.org/jira/browse/HDFS-15111
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs, logging
    Affects Versions: 2.10.0
            Reporter: Konstantin Shvachko

When transitioning an Observer to Standby state, both {{stopStandbyServices()}} and {{startStandbyServices()}} only log that they are stopping/starting Standby services.
# {{startStandbyServices()}} should log which state it is transitioning TO.
# {{stopStandbyServices()}} should log which state it is transitioning FROM.
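A hedged sketch of the suggested log lines (the class, the state-tracking field, and the method parameters are hypothetical; the actual NameNode methods take no such argument):

{code:java}
// Illustrative sketch only; the real NameNode code differs.
// The idea: name the target/previous HA state in each message.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HaStateLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(HaStateLoggingSketch.class);

  private String currentState = "OBSERVER";    // hypothetical state tracking

  void startStandbyServices(String targetState) {
    // Proposed: say which state we are transitioning TO.
    LOG.info("Starting services required for {} state", targetState);
    currentState = targetState;
  }

  void stopStandbyServices(String targetState) {
    // Proposed: say which state we are transitioning FROM.
    LOG.info("Stopping services started for {} state, transitioning to {}",
        currentState, targetState);
  }
}
{code}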
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1377/

[Jan 9, 2020 2:56:51 AM] (ayushsaxena) HDFS-15096. RBF: GetServerDefaults Should be Cached At Router.
[Jan 9, 2020 3:32:38 AM] (ayushsaxena) HDFS-15094. RBF: Reuse ugi string in ConnectionPoolID. Contributed by
[Jan 9, 2020 6:34:05 AM] (surendralilhore) HDFS-14957. INodeReference Space Consumed was not same in QuotaUsage and
[Jan 9, 2020 7:24:58 AM] (tasanuma) HADOOP-15993. Upgrade Kafka to 2.4.0 in hadoop-kafka module. (#1796)
[Jan 9, 2020 5:18:44 PM] (ericp) YARN-9018. Add functionality to AuxiliaryLocalPathHandler to return all
[Jan 10, 2020 12:52:13 AM] (tasanuma) HDFS-15102. HttpFS: put requests are not supported for path "/".

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
      Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
      Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
      org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

   FindBugs : module:hadoop-cloud-storage-project/hadoop-cos
      Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
      org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
      Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
      Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
      org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged

   Failed junit tests :
      hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap
      hadoop.hdfs.server.namenode.TestRedudantBlocks
      hadoop.hdfs.TestDatanodeRegistration
      hadoop.hdfs.TestDeadNodeDetection
      hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
      hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
      hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
      hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
      hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageF
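Aside: the two WorkerId FindBugs items above describe a classic equals() contract bug, casting without a type check and without a null check. A minimal null-safe, type-checked version (illustrative field names, not the actual MaWo code) would look like:

    // Illustrative fix for the FindBugs pattern flagged above: equals()
    // must handle null and reject arguments of a different type before
    // casting. The hostname field is hypothetical.
    class WorkerIdSketch {
      private final String hostname;

      WorkerIdSketch(String hostname) { this.hostname = hostname; }

      @Override
      public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof WorkerIdSketch)) {  // also false for null
          return false;
        }
        WorkerIdSketch other = (WorkerIdSketch) obj;
        return hostname.equals(other.hostname);
      }

      @Override
      public int hashCode() {            // keep hashCode consistent with equals
        return hostname.hashCode();
      }
    }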
[jira] [Resolved] (HDFS-15101) Only few files[5] created with empty content out of 10000 files ingested to hdfs
[ https://issues.apache.org/jira/browse/HDFS-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HDFS-15101.
-----------------------------------
    Resolution: Cannot Reproduce

> Only few files[5] created with empty content out of 10000 files ingested to
> hdfs
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-15101
>                 URL: https://issues.apache.org/jira/browse/HDFS-15101
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>         Environment: Production
>            Reporter: Chandrashekar S
>            Priority: Major
>
> When we ingest files through Spark Streaming, we find that a few files are
> empty while 99.9% of the files are created properly with contents. In the
> YARN logs we found that an interruption was generated at each failure to
> write contents to a file.
>
> 20/01/08 16:43:16 INFO DFSClient: Exception in createBlockOutputStream
> java.io.InterruptedIOException: Interrupted while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.136.184.59:51154
> remote=/10.136.184.59:1019]. 75000 millis timeout left.
> at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2390)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1455)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1374)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)
> 20/01/08 16:43:16 INFO BlockManager: Removing RDD 6
> 20/01/08 16:43:16 INFO DFSClient: Abandoning
> BP-383742638-10.136.184.33-1429219667936:blk_2355453392_1283401139
> 20/01/08 16:43:16 WARN Client: interrupted waiting to send rpc request to
> server
> java.lang.InterruptedException
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1094)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1398)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at com.sun.proxy.$Proxy17.abandonBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:436)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
> at com.sun.proxy.$Proxy18.abandonBlock(Unknown Source)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1378)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)
> 20/01/08 16:43:16 WARN DFSClient: DataStreamer Exception
> java.io.IOException: java.lang.InterruptedException
> at org.apache.hadoop.ipc.Client.call(Client.java:1463)
> at org.apache.hadoop.ipc.Client.call(Client.java:1398)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at com.sun.proxy.$Proxy17.abandonBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:436)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
> at org.apac
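The traces above show the writer thread being interrupted mid-write (Spark cancels tasks by interrupting them), which makes the DFSClient abandon the block and can leave an empty file behind. A hedged client-side sketch of one defensive pattern, assuming a plain OutputStream (an illustration of the failure mode, not a fix inside HDFS, and the SafeWriter helper is hypothetical):

{code:java}
// Hypothetical defensive pattern for writers that may be interrupted
// (e.g. Spark task cancellation): clear the interrupt flag so the
// flush/close IO can finish, then restore it for the caller.
import java.io.IOException;
import java.io.OutputStream;

final class SafeWriter {
  /** Write, flush, and close; re-assert interruption only afterwards. */
  static void writeFully(OutputStream out, byte[] data) throws IOException {
    boolean wasInterrupted = Thread.interrupted(); // clears the flag
    try {
      out.write(data);
      out.flush();
      out.close();  // finalize the file before re-asserting interruption
    } finally {
      if (wasInterrupted || Thread.interrupted()) {
        Thread.currentThread().interrupt();        // restore for the caller
      }
    }
  }
}
{code}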
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/

[Jan 9, 2020 3:52:26 PM] (jeagles) MAPREDUCE-7252. Handling 0 progress in SimpleExponential task runtime
[Jan 9, 2020 5:52:26 PM] (ericp) YARN-9018. Add functionality to AuxiliaryLocalPathHandler to return all

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

   FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
      Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

   Failed junit tests :
      hadoop.util.TestReadWriteDiskValidator
      hadoop.net.TestClusterTopology
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.registry.secure.TestSecureLogins
      hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
      hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

   cc:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt [308K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/pathlen.txt [12K]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/whitespace-eol.txt [12M]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/xml.txt [12K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt [1.1M]

   unit:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [164K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-li
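Aside: the ColumnRWHelper FindBugs item above flags the "boxed value is unboxed and then immediately reboxed" pattern. An illustrative before/after with hypothetical types (not the actual timeline-service code):

    // Illustrative example of the pattern FindBugs flags above; the
    // map types and method names here are hypothetical.
    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    class ReboxSketch {
      static NavigableMap<Long, Object> copyFlagged(NavigableMap<Long, Object> src) {
        NavigableMap<Long, Object> dst = new TreeMap<>();
        for (Map.Entry<Long, Object> e : src.entrySet()) {
          // Flagged pattern: the key is already a Long; calling longValue()
          // unboxes it, and put() immediately reboxes it.
          long unboxed = e.getKey().longValue();
          dst.put(unboxed, e.getValue());     // rebox happens here
        }
        return dst;
      }

      static NavigableMap<Long, Object> copyFixed(NavigableMap<Long, Object> src) {
        NavigableMap<Long, Object> dst = new TreeMap<>();
        for (Map.Entry<Long, Object> e : src.entrySet()) {
          dst.put(e.getKey(), e.getValue());  // keep the boxed Long as-is
        }
        return dst;
      }
    }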