wiki.apache.org is no longer editable
Hi folks,

https://wiki.apache.org/hadoop/ is no longer editable. If you want to edit a wiki page on wiki.apache.org, you need to migrate the page to https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Home. If you want to edit the wiki, please tell me your account for cwiki.apache.org.

There are some error messages that refer to the wiki page. We need to update those error messages once the page is migrated.

Regards,
Akira

- To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-11740) Ozone: Differentiate time interval for different DatanodeStateMachine state tasks
[ https://issues.apache.org/jira/browse/HDFS-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang resolved HDFS-11740.
    Resolution: Later

Will revisit this if necessary in future.

> Ozone: Differentiate time interval for different DatanodeStateMachine state tasks
>
>         Key: HDFS-11740
>         URL: https://issues.apache.org/jira/browse/HDFS-11740
>     Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>    Reporter: Weiwei Yang
>    Assignee: Weiwei Yang
> Attachments: HDFS-11740-HDFS-7240.001.patch, HDFS-11740-HDFS-7240.002.patch, HDFS-11740-HDFS-7240.003.patch, statemachine_1.png, statemachine_2.png
>
> Currently the datanode state machine transitions between tasks at a fixed time interval, defined by {{ScmConfigKeys#OZONE_SCM_HEARTBEAT_INTERVAL_SECONDS}}; the default value is 30s. Once a datanode is started, it needs 90s before transitioning to the {{Heartbeat}} state, and such a long lag is not necessary. I propose to improve the time-interval handling: it seems only the heartbeat task needs to be scheduled at the {{OZONE_SCM_HEARTBEAT_INTERVAL_SECONDS}} interval; the rest should be done without any lag.

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
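The idea proposed above — pace only the heartbeat task at the SCM heartbeat interval and run the other state transitions back to back — can be sketched roughly as follows. This is an illustrative sketch, not the actual DatanodeStateMachine code; the class and method names here are hypothetical:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: only heartbeats repeat on the fixed heartbeat
// interval; startup state tasks (init, register, ...) run immediately,
// one after another, with no 30s wait between them.
public class StateScheduler {
    // Stands in for the OZONE_SCM_HEARTBEAT_INTERVAL_SECONDS default of 30s.
    static final long HEARTBEAT_INTERVAL_SECONDS = 30;

    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable initTask, Runnable registerTask, Runnable heartbeatTask) {
        executor.execute(initTask);      // runs immediately
        executor.execute(registerTask);  // runs as soon as init completes
        // Only the heartbeat is scheduled at the fixed interval.
        executor.scheduleAtFixedRate(
            heartbeatTask, 0, HEARTBEAT_INTERVAL_SECONDS, TimeUnit.SECONDS);
    }

    public void stop() {
        executor.shutdownNow();
    }
}
```

With this shape, a freshly started datanode reaches its first heartbeat as soon as the preceding tasks finish, instead of waiting a full interval between each state.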
[jira] [Created] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby
James Clampffer created HDFS-11908:
--
    Summary: libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby
        Key: HDFS-11908
        URL: https://issues.apache.org/jira/browse/HDFS-11908
    Project: Hadoop HDFS
 Issue Type: Sub-task
   Reporter: James Clampffer
   Assignee: James Clampffer

The library won't properly authenticate to a kerberized HA cluster if the first namenode it tries to connect to is the standby; RpcConnection ends up attempting to use simple auth.

Control flow to connect to the NN for the first time:
# RpcConnection is constructed with a pointer to the RpcEngine as the only argument.
# RpcConnection::Connect(server endpoints, auth_info, callback) is called.
** auth_info contains the SASL mechanism to use, plus the delegation token if we already have one.

Control flow to connect to the NN after failover:
# RpcEngine::NewConnection is called; it allocates an RpcConnection exactly as step 1 above would.
# RpcEngine::InitializeConnection is called; it sets event hooks and a string for the cluster name.
# RpcConnection::PreEnqueueRequests is called to re-add the RPC messages that didn't make it on the last call due to a standby exception.
# RpcConnection::ConnectAndFlush is called to send the RPC packets. This only takes server endpoints; no auth info is passed.

To fix: RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ from the existing RpcEngine::auth_info_. Even better would be setting this in the constructor, so that if an RpcConnection exists it can be expected to be in a usable state. I'll get a diff up once I sort out CI build failures. We also really need CI test coverage for HA and kerberos, because this issue should not have been around for so long.

- To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/

[May 30, 2017 5:07:58 PM] (brahma) HADOOP-14456. Modifier 'static' is redundant for inner enums.
[May 30, 2017 6:10:12 PM] (lei) HDFS-11659. TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail
[May 30, 2017 11:58:15 PM] (haibochen) YARN-6477. Dispatcher no longer needs the raw types suppression. (Maya
[May 31, 2017 10:45:35 AM] (vvasudev) YARN-6366. Refactor the NodeManager DeletionService to support

-1 overall

The following subsystems voted -1: compile mvninstall unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc javac
The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit

Specific tests:

Failed junit tests:
   hadoop.fs.sftp.TestSFTPFileSystem
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks
   hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
   hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
   hadoop.hdfs.server.mover.TestStorageMover
   hadoop.hdfs.TestRollingUpgrade
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.fs.http.server.TestHttpFSServerNoXAttrs
   hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
   hadoop.yarn.server.timeline.TestRollingLevelDB
   hadoop.yarn.server.timeline.TestTimelineDataManager
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.client.api.impl.TestAMRMProxy
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.applications.distributedshell.TestDistributedShell
   hadoop.mapred.TestShuffleHandler
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
   hadoop.yarn.sls.appmaster.TestAMSimulator

Timed out junit tests:
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels

mvninstall:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-mvninstall-root.txt [492K]

compile:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt [20K]

cc:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt [20K]

javac:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt [20K]

unit:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [144K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [900K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
[jira] [Created] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently
Chen Liang created HDFS-11907:
-
    Summary: NameNodeResourceChecker should avoid calling df.getAvailable too frequently
        Key: HDFS-11907
        URL: https://issues.apache.org/jira/browse/HDFS-11907
    Project: Hadoop HDFS
 Issue Type: Improvement
   Reporter: Chen Liang
   Assignee: Chen Liang

Currently, {{HealthMonitor#doHealthChecks}} invokes {{NameNode#monitorHealth}}, which ends up invoking {{NameNodeResourceChecker#isResourceAvailable}}, at a frequency of once per second by default. And {{NameNodeResourceChecker#isResourceAvailable}} invokes {{df.getAvailable()}} every time it is called, which can be a very expensive operation. Since the available-space information should rarely change dramatically from one second to the next, a cached value should be sufficient: only fetch an updated value when the cached value is too old, otherwise simply return the cached value. This way {{df.getAvailable()}} gets invoked less often. Thanks [~arpitagarwal] for the offline discussion.
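The caching idea described above can be sketched minimally as follows. This is an illustrative sketch under the stated assumptions, not the actual NameNodeResourceChecker code; the class name and staleness window are hypothetical:

```java
import java.util.function.LongSupplier;

// Illustrative sketch: cache the result of an expensive availability
// check and only refresh it once the cached value is older than a
// configurable staleness window.
public class CachedAvailableSpace {
    private final LongSupplier df;     // stands in for df.getAvailable()
    private final long maxStalenessMs; // hypothetical refresh window
    private boolean hasValue = false;
    private long cachedValue;
    private long lastRefreshMs;

    public CachedAvailableSpace(LongSupplier df, long maxStalenessMs) {
        this.df = df;
        this.maxStalenessMs = maxStalenessMs;
    }

    public synchronized long getAvailable() {
        long now = System.currentTimeMillis();
        if (!hasValue || now - lastRefreshMs > maxStalenessMs) {
            cachedValue = df.getAsLong(); // expensive call, done rarely
            lastRefreshMs = now;
            hasValue = true;
        }
        return cachedValue;               // cheap cached read otherwise
    }
}
```

With a window of, say, a few seconds, the once-per-second health check would hit the cache most of the time while the underlying disk query runs only on refresh.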
[jira] [Created] (HDFS-11906) Add log for NameNode#monitorHealth
Chen Liang created HDFS-11906:
-
    Summary: Add log for NameNode#monitorHealth
        Key: HDFS-11906
        URL: https://issues.apache.org/jira/browse/HDFS-11906
    Project: Hadoop HDFS
 Issue Type: Improvement
   Reporter: Chen Liang
   Assignee: Chen Liang
   Priority: Minor

We've seen cases where the NN had long delays that we suspect were due to {{NameNode#monitorHealth}} spending too much time in {{getNamesystem().checkAvailableResources();}}. However, due to the lack of logging, this can be hard to verify. This JIRA adds some logging to this function that shows the actual time spent. Thanks [~arpitagarwal] for the offline discussion.
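Such instrumentation could look roughly like the following. This is a hedged sketch, not the actual patch; the helper class is hypothetical and System.err stands in for the NameNode's logger:

```java
// Illustrative sketch: time a health-check call and log how long it
// took, so that slow df/statfs calls become visible in the log.
public class TimedHealthCheck {
    public static long timeMillis(Runnable check) {
        long start = System.nanoTime();
        check.run(); // e.g. getNamesystem().checkAvailableResources()
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The real code would use the NameNode's LOG instead.
        System.err.println("checkAvailableResources took " + elapsedMs + " ms");
        return elapsedMs;
    }
}
```

Logging the elapsed time on every check (or only when it exceeds a threshold) would make it straightforward to confirm whether the resource check is the source of the observed delays.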
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/420/

[May 30, 2017 8:22:40 AM] (sunilg) YARN-6635. Refactor yarn-app pages in new YARN UI. Contributed by Akhil
[May 30, 2017 5:07:58 PM] (brahma) HADOOP-14456. Modifier 'static' is redundant for inner enums.
[May 30, 2017 6:10:12 PM] (lei) HDFS-11659. TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail
[May 30, 2017 11:58:15 PM] (haibochen) YARN-6477. Dispatcher no longer needs the raw types suppression. (Maya

-1 overall

The following subsystems voted -1: findbugs unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit

Specific tests:

FindBugs : module:hadoop-common-project/hadoop-minikdc
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368]

FindBugs : module:hadoop-common-project/hadoop-auth
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

FindBugs : module:hadoop-common-project/hadoop-common
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44]
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
   Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387]
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:[line 421]
   Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78]
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97]
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71]
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89]
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 350]
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:[line 193]
   Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49]
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At
[jira] [Created] (HDFS-11905) There still has a license header error in hdfs
Yeliang Cang created HDFS-11905:
---
    Summary: There still has a license header error in hdfs
        Key: HDFS-11905
        URL: https://issues.apache.org/jira/browse/HDFS-11905
    Project: Hadoop HDFS
 Issue Type: Bug
Affects Versions: 3.0.0-alpha3
   Reporter: Yeliang Cang
   Priority: Trivial
    Fix For: 3.0.0-alpha3

I have written a shell script to find license errors in hadoop, mapreduce, yarn and hdfs. An error still remains!