Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/

[Apr 22, 2018 3:07:19 PM] (arp) HDFS-13055. Aggregate usage statistics from datanodes. Contributed by

-1 overall

The following subsystems voted -1:
    asflicense unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   Failed junit tests :
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.server.namenode.TestDecommissioningStatus
       hadoop.yarn.api.resource.TestPlacementConstraintTransformations

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-compile-javac-root.txt [288K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/whitespace-eol.txt [9.4M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/whitespace-tabs.txt [1.1M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/xml.txt [4.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [308K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [40K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]

   asflicense:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.8.0-SNAPSHOT
http://yetus.apache.org
[jira] [Created] (HADOOP-15405) adl:// use Configuration.getPassword() to look up fs.adl.oauth2.refresh.url
Steve Loughran created HADOOP-15405:
---------------------------------------

             Summary: adl:// use Configuration.getPassword() to look up fs.adl.oauth2.refresh.url
                 Key: HADOOP-15405
                 URL: https://issues.apache.org/jira/browse/HADOOP-15405
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/adl
    Affects Versions: 3.1.0
            Reporter: Steve Loughran


The adl connector uses {{Configuration.getPassword()}} to look up the {{fs.adl.oauth2.refresh.url}} value, and on failure it reports the value as an unknown password. Since the refresh URL is not a secret, it should use {{getTrimmed()}} to retrieve a trimmed string instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
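A minimal sketch of the change the issue asks for; the wrapper class and method below are illustrative, and only the key name and the getTrimmed() call come from the issue itself:

    import org.apache.hadoop.conf.Configuration;

    public class AdlRefreshUrlLookup {
      static final String REFRESH_URL_KEY = "fs.adl.oauth2.refresh.url";

      // Before: conf.getPassword(REFRESH_URL_KEY) treats the URL as a secret,
      // so a missing value surfaces as a confusing "unknown password" error.
      // After: read it as a plain, trimmed configuration value.
      static String getRefreshUrl(Configuration conf) {
        return conf.getTrimmed(REFRESH_URL_KEY);
      }
    }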
[jira] [Created] (HADOOP-15406) hadoop-nfs dependencies for mockito and junit are not test scope
Jason Lowe created HADOOP-15406:
-----------------------------------

             Summary: hadoop-nfs dependencies for mockito and junit are not test scope
                 Key: HADOOP-15406
                 URL: https://issues.apache.org/jira/browse/HADOOP-15406
             Project: Hadoop Common
          Issue Type: Bug
          Components: nfs
            Reporter: Jason Lowe


hadoop-nfs depends on mockito-all and junit for its unit tests, but it does not mark those dependencies as required only for tests, so they default to compile scope and leak onto the classpath of downstream consumers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
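A sketch of the likely shape of the fix in hadoop-nfs/pom.xml, assuming the dependency versions are managed by the parent pom:

    <!-- Limit the test-only dependencies to test scope so they no longer
         leak onto consumers' compile classpaths. -->
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-all</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <scope>test</scope>
    </dependency>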
[jira] [Created] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop
Esfandiar Manii created HADOOP-15407:
----------------------------------------

             Summary: Support Windows Azure Storage - Blob file system in Hadoop
                 Key: HADOOP-15407
                 URL: https://issues.apache.org/jira/browse/HADOOP-15407
             Project: Hadoop Common
          Issue Type: New Feature
          Components: fs/azure
    Affects Versions: 3.2.0
            Reporter: Esfandiar Manii
            Assignee: Esfandiar Manii


Description

This JIRA adds a new file system implementation, ABFS, for running Big Data and Analytics workloads against Azure Storage. It is a complete rewrite of the previous WASB driver with a heavy focus on optimizing both performance and cost.

High level design

At a high level, the code here extends the FileSystem class to provide an implementation for accessing blobs in Azure Storage. The scheme abfs is used for accessing it over HTTP, and abfss for accessing it over HTTPS. The following URI scheme is used to address individual paths:

    abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>

ABFS is intended as a replacement for WASB. WASB is not deprecated but is in pure maintenance mode, and customers should upgrade to ABFS once it reaches General Availability later in CY18.

Benefits of ABFS include:

  * Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics workloads, by allowing higher limits on storage accounts
  * Removing any ramp-up time with Storage backend partitioning; blocks are now automatically sharded across partitions in the Storage backend. This avoids the need for temporary/intermediate files, which increase cost (and framework complexity around committing jobs/tasks)
  * Enabling much higher read and write throughput on single files (tens of Gbps by default)
  * Retaining all of the Azure Blob features customers are familiar with and expect, while gaining the benefits of future Blob features as well

ABFS incorporates Hadoop Filesystem metrics to monitor file system throughput and operations. Ambari metrics are not currently implemented for ABFS, but will be available soon.

Credits and history

Credit for this work goes to (hope I don't forget anyone): Shane Mainali, Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.

Test

ABFS has gone through many test procedures, including Hadoop file system contract tests, unit testing, functional testing, and manual testing. All the JUnit tests provided with the driver can run either sequentially or in parallel to reduce testing time.

Besides unit tests, we have used ABFS as the default file system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. (HDFS is also used, but not as the default file system.) Various customer and test workloads have been run against clusters with such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, Spark Streaming, Spark SQL, and others have been run for scenario, performance, and functional testing. Third parties and customers have also done various testing of ABFS.

The current version reflects the version of the code tested and used in our production environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
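For illustration, a minimal sketch of addressing an ABFS path through the standard Hadoop FileSystem API; the account name, container name, and credential property below are hypothetical placeholders, not taken from the JIRA:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbfsListingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical credential setting; real deployments should follow
        // the driver's documentation for configuring keys or OAuth.
        conf.set("fs.azure.account.key.myaccount.dfs.core.windows.net",
            "<storage-key>");

        // abfss:// selects HTTPS, abfs:// selects HTTP, per the scheme above.
        URI uri = URI.create("abfss://mycontainer@myaccount.dfs.core.windows.net/");
        try (FileSystem fs = FileSystem.get(uri, conf)) {
          for (FileStatus status : fs.listStatus(new Path("/data"))) {
            System.out.println(status.getPath() + " " + status.getLen());
          }
        }
      }
    }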
Re: Hadoop-trunk-Commit failing due to libprotoc version
This appears to be happening again, e.g.
https://builds.apache.org/job/Hadoop-trunk-Commit/14050/

On Mon, Mar 26, 2018 at 3:31 PM, Xiaoyu Yao wrote:

> Could this be caused by the docker base image changes? As shown in the
> error message, we are still expecting protobuf v2.5.0, but the one in the
> docker image has changed to libprotoc 2.6.1.
>
> In hadoop/dev-support/docker/Dockerfile, we have the following line, which
> installs libprotoc without specifying a version:
>
> RUN apt-get -q update && apt-get -q install -y libprotoc-dev
>
> I think this can be fixed by specifying the version, like:
>
> RUN apt-get -q update && apt-get -q install -y libprotoc-dev=2.5.0
>
> On 3/26/18, 2:37 PM, "Sean Mackrory" wrote:
>
>     Most of the commit jobs in the last few hours have failed. I would
>     suspect a change in the machines or images used to run the job. Who
>     has access to confirm such a change and perhaps correct it?
>
>     [ERROR] Failed to execute goal
>     org.apache.hadoop:hadoop-maven-plugins:3.2.0-SNAPSHOT:protoc
>     (compile-protoc) on project hadoop-common:
>     org.apache.maven.plugin.MojoExecutionException: protoc version is
>     'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1]
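One caveat with the apt pkg=version form: it only works while that exact version is still published in the distribution's repositories, which is unlikely for an old protobuf on a newer base image. A sketch of a more robust alternative, assuming the v2.5.0 source tarball is still available at its historical GitHub release URL, is to build protoc from source in the Dockerfile:

    RUN apt-get -q update && apt-get -q install -y build-essential curl \
        && curl -L -o /tmp/protobuf-2.5.0.tar.gz \
           https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz \
        && tar -C /tmp -xzf /tmp/protobuf-2.5.0.tar.gz \
        && cd /tmp/protobuf-2.5.0 \
        && ./configure --prefix=/usr \
        && make -j2 \
        && make install

    # Sanity check: should print 'libprotoc 2.5.0', matching what the
    # hadoop-maven-plugins protoc goal expects.
    RUN protoc --version

This pins the toolchain to the version the build expects regardless of what the base image's package repositories happen to ship.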
[jira] [Resolved] (HADOOP-10859) Native implementation of java Checksum interface
     [ https://issues.apache.org/jira/browse/HADOOP-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HADOOP-10859.
----------------------------------
    Resolution: Won't Fix

No plans to work on this.

> Native implementation of java Checksum interface
> ------------------------------------------------
>
>                 Key: HADOOP-10859
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10859
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Minor
>
> Some parts of our code such as IFileInputStream/IFileOutputStream use the
> java Checksum interface to calculate/verify checksums. Currently we don't
> have a native implementation of these. For CRC32C in particular, we can get
> a very big speedup with a native implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
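For context, a minimal sketch of code written against the java.util.zip.Checksum interface the issue refers to. CRC32 stands in for CRC32C here, since the JDK only gained a built-in CRC32C in Java 9; the class below is illustrative, not Hadoop code:

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    public class ChecksumSketch {
      public static void main(String[] args) {
        byte[] data = "example payload".getBytes(StandardCharsets.UTF_8);

        // Code written against the Checksum interface (as IFileInputStream/
        // IFileOutputStream are described to be) works unchanged with any
        // implementation behind it -- which is what would have let a native
        // CRC32C slot in for a speedup.
        Checksum sum = new CRC32();
        sum.update(data, 0, data.length);
        System.out.printf("crc32=%08x%n", sum.getValue());
      }
    }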
[jira] [Resolved] (HADOOP-9545) Improve logging in ActiveStandbyElector
     [ https://issues.apache.org/jira/browse/HADOOP-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HADOOP-9545.
---------------------------------
    Resolution: Won't Fix

> Improve logging in ActiveStandbyElector
> ---------------------------------------
>
>                 Key: HADOOP-9545
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9545
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: auto-failover, ha
>    Affects Versions: 2.1.0-beta
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Minor
>
> The ActiveStandbyElector currently logs a lot of stuff at DEBUG level which
> would be useful for troubleshooting. We've seen one instance in the wild of
> a ZKFC thinking it should be in standby state when in fact it won the
> election, but the logging is insufficient to understand why. I'd like to
> bump most of the existing DEBUG logs to INFO and add some additional logs
> as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
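A hypothetical illustration of the kind of change proposed: promoting a state-transition log from DEBUG to INFO so it appears in default logs. The logger, method, and message below are illustrative, not taken from an actual patch:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ElectorLoggingSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(ElectorLoggingSketch.class);

      void onElectionResult(boolean wonElection) {
        // Previously: LOG.debug("Election result: won={}", wonElection);
        // Promoted to INFO so a ZKFC that wins the election but ends up in
        // standby can be diagnosed without re-running at DEBUG level.
        LOG.info("Election result: won={}", wonElection);
      }
    }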
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/446/

[Apr 22, 2018 3:07:19 PM] (arp) HDFS-13055. Aggregate usage statistics from datanodes. Contributed by
[Apr 23, 2018 2:49:35 AM] (inigoiri) HDFS-13388. RequestHedgingProxyProvider calls multiple configured NNs
[Apr 23, 2018 8:02:09 AM] (sunilg) YARN-7956. [UI2] Avoid duplicating Components link under
[Apr 23, 2018 8:06:27 AM] (sunilg) YARN-8177. Documentation changes for auto creation of Leaf Queues with

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
    unit

Specific tests:

   Failed junit tests :
       hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
       hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
       hadoop.fs.TestFileUtil
       hadoop.fs.TestFsShellCopy
       hadoop.fs.TestFsShellList
       hadoop.fs.TestLocalFileSystem
       hadoop.fs.TestRawLocalFileSystemContract
       hadoop.fs.TestSymlinkLocalFSFileContext
       hadoop.fs.TestTrash
       hadoop.http.TestHttpServer
       hadoop.http.TestHttpServerLogs
       hadoop.io.nativeio.TestNativeIO
       hadoop.ipc.TestIPC
       hadoop.ipc.TestSocketFactory
       hadoop.metrics2.impl.TestStatsDMetrics
       hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal
       hadoop.security.TestSecurityUtil
       hadoop.security.TestShellBasedUnixGroupsMapping
       hadoop.security.token.TestDtUtilShell
       hadoop.util.TestNativeCodeLoader
       hadoop.util.TestNodeHealthScriptRunner
       hadoop.fs.TestResolveHdfsSymlink
       hadoop.hdfs.crypto.TestHdfsCryptoStreams
       hadoop.hdfs.qjournal.client.TestQuorumJournalManager
       hadoop.hdfs.qjournal.server.TestJournalNode
       hadoop.hdfs.qjournal.server.TestJournalNodeSync
       hadoop.hdfs.server.balancer.TestBalancerRPCDelay
       hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages
       hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
       hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
       hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl
       hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
       hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
       hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
       hadoop.hdfs.server.datanode.TestBlockRecovery
       hadoop.hdfs.server.datanode.TestBlockScanner
       hadoop.hdfs.server.datanode.TestDataNodeFaultInjector
       hadoop.hdfs.server.datanode.TestDataNodeMetrics
       hadoop.hdfs.server.datanode.TestDataNodeUUID
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.server.datanode.TestHSync
       hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
       hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
       hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC
       hadoop.hdfs.server.mover.TestMover
       hadoop.hdfs.server.mover.TestStorageMover
       hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
       hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
       hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
       hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
       hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
       hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
       hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff
       hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
       hadoop.hdfs.server.namenode.TestAddBlock
       hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands
       hadoop.hdfs.server.namenode.TestCheckpoint
       hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
       hadoop.hdfs.server.namenode.TestEditLogRace
       hadoop.hdfs.server.namenode.TestFileTruncate
       hadoop.hdfs.server.namenode.TestFsck
       hadoop.hdfs.server.namenode.TestFSImage
       hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
       hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
       hadoop.hdfs.server.namenode.TestNameNodeMXBean
       hadoop.hdfs.server.namenode.TestNestedEncryptionZones
       hadoop.hdfs.server.namenode.TestQuotaByStorageType
       hadoop.hdfs.