[jira] [Created] (HDFS-11874) [SPS]: Document the SPS feature
Uma Maheswara Rao G created HDFS-11874:
---------------------------------------

Summary: [SPS]: Document the SPS feature
Key: HDFS-11874
URL: https://issues.apache.org/jira/browse/HDFS-11874
Project: Hadoop HDFS
Issue Type: Sub-task
Components: documentation
Reporter: Uma Maheswara Rao G

This JIRA tracks the documentation for the SPS feature.
[jira] [Created] (HDFS-11873) Ozone: Object store handler cannot serve requests from same http client
Weiwei Yang created HDFS-11873:
-------------------------------

Summary: Ozone: Object store handler cannot serve requests from same http client
Key: HDFS-11873
URL: https://issues.apache.org/jira/browse/HDFS-11873
Project: Hadoop HDFS
Issue Type: Sub-task
Components: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Critical

This issue was found while working on HDFS-11846. Instead of creating a new http client instance per request, I tried to reuse the {{CloseableHttpClient}} in the {{OzoneClient}} class, backed by a {{PoolingHttpClientConnectionManager}}. However, every second request from that http client hangs and never gets dispatched to {{ObjectStoreJerseyContainer}}. Something seems to be wrong in the netty pipeline. This jira aims to 1) fix the problem on the server side and 2) pool the client-side http clients to reduce resource overhead.
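A minimal, self-contained sketch of the client-side pooling described above, using the Apache HttpClient 4.x API. The endpoint URL, pool sizes, and class name are placeholders for illustration, not the actual OzoneClient code:

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
    import org.apache.http.util.EntityUtils;

    public class PooledHttpClientSketch {
      public static void main(String[] args) throws Exception {
        // Share one connection manager and one client across all requests,
        // instead of building a new CloseableHttpClient per request.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(20);            // pool-wide connection cap (placeholder value)
        cm.setDefaultMaxPerRoute(10);  // per-route connection cap (placeholder value)

        CloseableHttpClient client = HttpClients.custom()
            .setConnectionManager(cm)
            .build();

        // Two consecutive requests from the same client; in the report,
        // the second one hangs before reaching ObjectStoreJerseyContainer.
        for (int i = 0; i < 2; i++) {
          try (CloseableHttpResponse response =
                   client.execute(new HttpGet("http://localhost:9864/"))) {
            // Consuming the entity releases the connection back to the pool.
            EntityUtils.consume(response.getEntity());
            System.out.println("Request " + i + " -> " + response.getStatusLine());
          }
        }
        client.close();
      }
    }

Reusing one pooled client like this is what keeps per-request resource overhead down, which is why the second half of the jira keeps the pool on the client side once the server-side pipeline issue is fixed.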
[jira] [Resolved] (HDFS-11871) balance include Parameter Usage Error
[ https://issues.apache.org/jira/browse/HDFS-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang resolved HDFS-11871.
--------------------------------
Resolution: Not A Problem

> balance include Parameter Usage Error
> -------------------------------------
>
> Key: HDFS-11871
> URL: https://issues.apache.org/jira/browse/HDFS-11871
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: balancer & mover
> Affects Versions: 2.7.3
> Reporter: kevy liu
> Assignee: Weiwei Yang
> Priority: Trivial
>
> [hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h
> Usage: hdfs balancer
>     [-policy <policy>]                   the balancing policy: datanode or blockpool
>     [-threshold <threshold>]             Percentage of disk capacity
>     [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
>                                          Excludes the specified datanodes.
>     [-include [-f <hosts-file> | <comma-separated list of hosts>]]
>                                          Includes only the specified datanodes.
>     [-idleiterations <idleiterations>]   Number of consecutive idle iterations (-1 for Infinite) before exit.
> Parameter Description:
>     -f <hosts-file> | <comma-separated list of hosts>
> The parse separator in the code is:
>     String[] nodes = line.split("[ \t\n\f\r]+");
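For context on the cited separator, a minimal standalone sketch of how a hosts file given via -f would be parsed with that split; the class name and file handling here are illustrative, not the actual Balancer code path:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class HostListParseSketch {
      // Mirrors the split quoted in the report: entries are separated by
      // spaces, tabs, newlines, form feeds, or carriage returns -- not commas.
      public static List<String> parse(String hostsFile) throws IOException {
        List<String> hosts = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(hostsFile))) {
          String line;
          while ((line = reader.readLine()) != null) {
            for (String node : line.split("[ \t\n\f\r]+")) {
              if (!node.isEmpty()) {
                hosts.add(node);
              }
            }
          }
        }
        return hosts;
      }

      public static void main(String[] args) throws IOException {
        // A line such as "dn1.example.com dn2.example.com" yields two entries,
        // while "dn1.example.com,dn2.example.com" stays a single token.
        System.out.println(parse(args[0]));
      }
    }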
[jira] [Created] (HDFS-11872) Ozone : implement StorageContainerManager#getStorageContainerLocations
Chen Liang created HDFS-11872:
------------------------------

Summary: Ozone : implement StorageContainerManager#getStorageContainerLocations
Key: HDFS-11872
URL: https://issues.apache.org/jira/browse/HDFS-11872
Project: Hadoop HDFS
Issue Type: Sub-task
Components: ozone
Reporter: Chen Liang
Assignee: Chen Liang

We should implement {{StorageContainerManager#getStorageContainerLocations}}. Although the comment says it will be moved to KSM, container lookup by name should actually be part of SCM functionality.
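For illustration only, a rough sketch of what a name-based container location lookup could look like; the ContainerLocation type, the in-memory map, and the method body are assumptions for this example, not the actual SCM implementation:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical value object: which datanodes currently hold a container.
    class ContainerLocation {
      final String containerName;
      final List<String> datanodeHosts;

      ContainerLocation(String containerName, List<String> datanodeHosts) {
        this.containerName = containerName;
        this.datanodeHosts = datanodeHosts;
      }
    }

    // Sketch of the lookup the jira asks for: resolve a set of container
    // names to their locations from state the SCM already tracks.
    class ContainerLocationLookup {
      private final Map<String, ContainerLocation> containers = new ConcurrentHashMap<>();

      void register(ContainerLocation location) {
        containers.put(location.containerName, location);
      }

      List<ContainerLocation> getStorageContainerLocations(Set<String> names) throws IOException {
        List<ContainerLocation> result = new ArrayList<>(names.size());
        for (String name : names) {
          ContainerLocation location = containers.get(name);
          if (location == null) {
            throw new IOException("Unknown container: " + name);
          }
          result.add(location);
        }
        return result;
      }
    }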
[jira] [Resolved] (HDFS-11599) distcp interrupt does not kill hadoop job
[ https://issues.apache.org/jira/browse/HDFS-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-11599.
-------------------------------------
Resolution: Not A Problem

Yes. After job launch, the program running on the command line is just a client. To kill the program actually doing the work, you'll need to use the yarn or mapred commands. Closing as "Not a problem".

> distcp interrupt does not kill hadoop job
> ------------------------------------------
>
> Key: HDFS-11599
> URL: https://issues.apache.org/jira/browse/HDFS-11599
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.3
> Reporter: David Fagnan
>
> A keyboard interrupt, for example, leaves the hadoop job & copy still running. Is this intended behavior?
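The CLI equivalents are "yarn application -kill <ApplicationId>" or "mapred job -kill <JobId>". As a sketch of doing the same thing programmatically through YarnClient (the application id below is a placeholder, and ApplicationId.fromString assumes a Hadoop release that provides it):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class KillRunningJobSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder id: the real value comes from the distcp/MapReduce
        // output or from "yarn application -list".
        ApplicationId appId = ApplicationId.fromString("application_1495000000000_0001");

        Configuration conf = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
          // Interrupting the distcp command only stops the local client;
          // this asks the ResourceManager to kill the job doing the work.
          yarnClient.killApplication(appId);
        } finally {
          yarnClient.stop();
        }
      }
    }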
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/

[May 22, 2017 6:16:25 PM] (brahma) HDFS-11863. Document missing metrics for blocks count in pending IBR.
[May 22, 2017 6:39:19 PM] (brahma) HDFS-11849. JournalNode startup failure exception should be logged in
[May 22, 2017 9:26:13 PM] (wangda) YARN-2113. Add cross-user preemption within CapacityScheduler's
[May 22, 2017 9:28:55 PM] (wangda) YARN-6493. Print requested node partition in assignContainer logs.
[May 23, 2017 12:53:47 AM] (arp) HDFS-11866. JournalNode Sync should be off by default in
[May 23, 2017 3:25:34 AM] (arp) HDFS-11419. Performance analysis of new DFSNetworkTopology#chooseRandom.
[May 23, 2017 11:33:28 AM] (rakeshr) HDFS-11794. Add ec sub command -listCodec to show currently supported ec

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.security.TestShellBasedUnixGroupsMapping
        hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
        hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
        hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.datanode.TestDataNodeUUID
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestAMRMClient
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests:
        org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
        org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
        org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
        org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
        org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
        org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels

    mvninstall:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-mvninstall-root.txt [496K]

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt [20K]

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt [20K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt [20K]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [144K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [740K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/a
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/412/

[May 22, 2017 8:40:06 AM] (sunilg) YARN-6584. Correct license headers in hadoop-common, hdfs, yarn and
[May 22, 2017 6:16:25 PM] (brahma) HDFS-11863. Document missing metrics for blocks count in pending IBR.
[May 22, 2017 6:39:19 PM] (brahma) HDFS-11849. JournalNode startup failure exception should be logged in
[May 22, 2017 9:26:13 PM] (wangda) YARN-2113. Add cross-user preemption within CapacityScheduler's
[May 22, 2017 9:28:55 PM] (wangda) YARN-6493. Print requested node partition in assignContainer logs.
[May 23, 2017 12:53:47 AM] (arp) HDFS-11866. JournalNode Sync should be off by default in
[May 23, 2017 3:25:34 AM] (arp) HDFS-11419. Performance analysis of new DFSNetworkTopology#chooseRandom.

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs: module:hadoop-common-project/hadoop-minikdc
        Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method; dereferenced at MiniKdc.java:[line 368]

    FindBugs: module:hadoop-common-project/hadoop-auth
        org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator at MultiSchemeAuthenticationHandler.java:[line 192]

    FindBugs: module:hadoop-common-project/hadoop-common
        org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue at CipherSuite.java:[line 44]
        org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue at CryptoProtocolVersion.java:[line 67]
        Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method; dereferenced at FileUtil.java:[line 118]
        Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method; dereferenced at RawLocalFileSystem.java:[line 387]
        Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect at FTPFileSystem.java:[line 421]
        Useless condition: lazyPersist == true at this point at CommandWithDestination.java:[line 502]
        org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value at DoubleWritable.java:[line 78]
        org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value at DoubleWritable.java:[line 97]
        org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value at FloatWritable.java:[line 71]
        org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value at FloatWritable.java:[line 89]
        Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method; dereferenced at IOUtils.java:[line 350]
        org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator at ECSchema.java:[line 193]
        Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() at Utils.java:[line 398]
        org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance
[jira] [Created] (HDFS-11871) Parameter Usage Error
kevy liu created HDFS-11871:
----------------------------

Summary: Parameter Usage Error
Key: HDFS-11871
URL: https://issues.apache.org/jira/browse/HDFS-11871
Project: Hadoop HDFS
Issue Type: Bug
Components: balancer & mover
Affects Versions: 2.7.3
Reporter: kevy liu
Priority: Trivial

[hadoop@bigdata-hdp-apache505 hadoop-2.7.2]$ bin/hdfs balancer -h
Usage: hdfs balancer
    [-policy <policy>]                   the balancing policy: datanode or blockpool
    [-threshold <threshold>]             Percentage of disk capacity
    [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
                                         Excludes the specified datanodes.
    [-include [-f <hosts-file> | <comma-separated list of hosts>]]
                                         Includes only the specified datanodes.
    [-idleiterations <idleiterations>]   Number of consecutive idle iterations (-1 for Infinite) before exit.

Parameter Description:
    -f <hosts-file> | <comma-separated list of hosts>

The parse separator in the code is:
    String[] nodes = line.split("[ \t\n\f\r]+");
[jira] [Created] (HDFS-11870) Add CLI cmd to enable/disable an erasure code policy
SammiChen created HDFS-11870:
-----------------------------

Summary: Add CLI cmd to enable/disable an erasure code policy
Key: HDFS-11870
URL: https://issues.apache.org/jira/browse/HDFS-11870
Project: Hadoop HDFS
Issue Type: Task
Reporter: SammiChen
Assignee: SammiChen

This task is to develop a CLI command to help users enable or disable an existing erasure coding policy.
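A rough sketch of what such a toggle could look like as a small client program; the -enablePolicy/-disablePolicy flag names and the DistributedFileSystem enableErasureCodingPolicy/disableErasureCodingPolicy calls are assumptions for illustration, since this jira is what proposes the command:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    // Illustrative only: enable or disable an existing erasure coding policy by name.
    public class ECPolicyToggleSketch {
      public static void main(String[] args) throws Exception {
        if (args.length != 2
            || !(args[0].equals("-enablePolicy") || args[0].equals("-disablePolicy"))) {
          System.err.println("Usage: ECPolicyToggleSketch (-enablePolicy | -disablePolicy) <policyName>");
          System.exit(1);
        }
        String policyName = args[1];

        // Assumes fs.defaultFS in the loaded configuration points at an HDFS namenode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        if (args[0].equals("-enablePolicy")) {
          dfs.enableErasureCodingPolicy(policyName);   // assumed API, see note above
          System.out.println("Enabled erasure coding policy " + policyName);
        } else {
          dfs.disableErasureCodingPolicy(policyName);  // assumed API, see note above
          System.out.println("Disabled erasure coding policy " + policyName);
        }
      }
    }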