Thank you very much, Allen, for making the Windows build work again. We are going through the unit tests and fixing them for Windows (as you said, mostly paths). We have already gotten a couple of related patches in; this build will help us track progress.
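For anyone following along, a concrete illustration of the kind of path breakage we keep hitting (my own minimal example, not taken from any specific failing test): java.net.URI parses a bare Windows drive letter as a URI *scheme*, which is exactly the mismatch that trips code written with Unix paths in mind.

```java
import java.io.File;
import java.net.URI;

// Illustrative sketch only -- not Hadoop's actual Path code.
public class DriveLetterUri {
    public static void main(String[] args) {
        // A Windows-style path fed straight into the URI parser:
        // "C:" satisfies the URI scheme grammar, so it is treated
        // as a scheme, not as a drive letter.
        URI naive = URI.create("C:/tmp/data");
        System.out.println(naive.getScheme());   // prints "C"

        // File.toURI() produces a proper hierarchical file: URI instead.
        URI proper = new File("C:\\tmp\\data").toURI();
        System.out.println(proper.getScheme());  // prints "file"
    }
}
```

Any code that round-trips paths through URI.create (or string-splits on the first ':') silently misclassifies the drive letter, so tests comparing expected vs. actual paths fail on Windows even when the filesystem operation itself succeeded.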
On Thu, Mar 15, 2018 at 10:15 AM, Allen Wittenauer <a...@apache.org> wrote:

> For my part of the HDFS bug bash, I've gotten the ASF Windows build
> working again. Starting tomorrow, results will be sent to the *-dev
> lists.
>
> A few notes:
>
> * It only runs the unit tests. There's not much point in running the
> other Yetus plugins, since those are covered by the Linux build and this
> build is slow enough as it is.
>
> * There are two types of ASF build nodes: Windows Server 2012 and
> Windows Server 2016. This job can run on both and will use whichever one
> has a free slot.
>
> * It ALWAYS applies HADOOP-14667.05.patch prior to running. As a result,
> this is only set up for trunk, with no parameterization to run other
> branches.
>
> * The URI handling for file paths in hadoop-common and elsewhere is
> pretty broken on Windows, so many, many unit tests are failing, and I
> wouldn't be surprised if Windows Hadoop installs are horked as a result.
>
> * Runtime is about 12-13 hours, with many tests taking significantly
> longer than their UNIX counterparts. My guess is that this is caused by
> winutils. Changing from winutils to Java 7 API calls would bring this
> more in line and be a significant performance boost for Windows
> clients/servers as well.
>
> Have fun.
>
> =====
>
> For more details, see https://builds.apache.org/job/hadoop-trunk-win/406/
>
> [Mar 14, 2018 6:26:58 PM] (xyao) HDFS-13251. Avoid using hard coded
> datanode data dirs in unit tests.
> [Mar 14, 2018 8:05:24 PM] (jlowe) MAPREDUCE-7064. Flaky test
> [Mar 14, 2018 8:14:36 PM] (inigoiri) HDFS-13198. RBF:
> RouterHeartbeatService throws out CachedStateStore
> [Mar 14, 2018 8:36:53 PM] (wangda) Revert "HADOOP-13707. If kerberos is
> enabled while HTTP SPNEGO is not
> [Mar 14, 2018 10:47:56 PM] (fabbri) HADOOP-15278 log s3a at info.
> Contributed by Steve Loughran.
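On the winutils point: I don't have numbers yet, but for reference, this is roughly what the Java 7 replacement looks like. A sketch with illustrative names (this is not the actual RawLocalFileSystem code): attributes such as the file owner can be read in-process through java.nio.file instead of forking winutils.exe (or `ls`/`stat`) once per call, which is where I'd expect the per-test overhead to come from.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileOwnerAttributeView;

// Hypothetical sketch: query file ownership via the Java 7 NIO API
// rather than spawning an external process per lookup. Class and
// method names here are illustrative, not Hadoop's.
public class NioOwnerProbe {
    // Returns the owner name of a path using the platform-independent
    // FileOwnerAttributeView -- no child process involved.
    public static String ownerOf(Path p) throws IOException {
        FileOwnerAttributeView view =
            Files.getFileAttributeView(p, FileOwnerAttributeView.class);
        return view.getOwner().getName();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("probe", ".txt");
        try {
            System.out.println("owner: " + ownerOf(tmp));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

The same FileOwnerAttributeView (and its Acl/Dos siblings) works on both Windows and Unix, so a switchover would avoid the process-fork cost on every permission or ownership check rather than only on Windows.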
> -1 overall
>
> The following subsystems voted -1:
>    unit
>
> The following subsystems are considered long running:
> (runtime bigger than 1h 00m 00s)
>    unit
>
> Specific tests:
>
> Failed CTEST tests:
>
>    test_test_libhdfs_threaded_hdfs_static
>
> Failed junit tests:
>
>    hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
>    hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
>    hadoop.fs.TestFsShellCopy
>    hadoop.fs.TestFsShellList
>    hadoop.fs.TestLocalFileSystem
>    hadoop.http.TestHttpServer
>    hadoop.http.TestHttpServerLogs
>    hadoop.io.compress.TestCodec
>    hadoop.io.nativeio.TestNativeIO
>    hadoop.ipc.TestSocketFactory
>    hadoop.metrics2.impl.TestStatsDMetrics
>    hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal
>    hadoop.security.TestSecurityUtil
>    hadoop.security.TestShellBasedUnixGroupsMapping
>    hadoop.security.token.TestDtUtilShell
>    hadoop.util.TestNativeCodeLoader
>    hadoop.fs.TestWebHdfsFileContextMainOperations
>    hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy
>    hadoop.hdfs.crypto.TestHdfsCryptoStreams
>    hadoop.hdfs.qjournal.client.TestQuorumJournalManager
>    hadoop.hdfs.qjournal.server.TestJournalNode
>    hadoop.hdfs.qjournal.server.TestJournalNodeSync
>    hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
>    hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
>    hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages
>    hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
>    hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
>    hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
>    hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
>    hadoop.hdfs.server.datanode.TestBlockRecovery
>    hadoop.hdfs.server.datanode.TestBlockScanner
>    hadoop.hdfs.server.datanode.TestDataNodeFaultInjector
>    hadoop.hdfs.server.datanode.TestDataNodeMetrics
>    hadoop.hdfs.server.datanode.TestDataNodeUUID
>    hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
>    hadoop.hdfs.server.datanode.TestDirectoryScanner
>    hadoop.hdfs.server.datanode.TestHSync
>    hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
>    hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
>    hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC
>    hadoop.hdfs.server.federation.router.TestRouterAdminCLI
>    hadoop.hdfs.server.mover.TestStorageMover
>    hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
>    hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
>    hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
>    hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
>    hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
>    hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
>    hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff
>    hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
>    hadoop.hdfs.server.namenode.TestAddBlock
>    hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands
>    hadoop.hdfs.server.namenode.TestCheckpoint
>    hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
>    hadoop.hdfs.server.namenode.TestEditLogRace
>    hadoop.hdfs.server.namenode.TestFileTruncate
>    hadoop.hdfs.server.namenode.TestFsck
>    hadoop.hdfs.server.namenode.TestFSImage
>    hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
>    hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
>    hadoop.hdfs.server.namenode.TestNameNodeMXBean
>    hadoop.hdfs.server.namenode.TestNestedEncryptionZones
>    hadoop.hdfs.server.namenode.TestQuotaByStorageType
>    hadoop.hdfs.server.namenode.TestStartup
>    hadoop.hdfs.TestDatanodeRegistration
>    hadoop.hdfs.TestDecommission
>    hadoop.hdfs.TestDFSOutputStream
>    hadoop.hdfs.TestDFSShell
>    hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
>    hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
>    hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
>    hadoop.hdfs.TestFetchImage
>    hadoop.hdfs.TestFileAppend
>    hadoop.hdfs.TestFileConcurrentReader
>    hadoop.hdfs.TestFileCreation
>    hadoop.hdfs.TestFileLengthOnClusterRestart
>    hadoop.hdfs.TestHDFSFileSystemContract
>    hadoop.hdfs.TestHDFSServerPorts
>    hadoop.hdfs.TestHDFSTrash
>    hadoop.hdfs.TestHFlush
>    hadoop.hdfs.TestLeaseRecovery
>    hadoop.hdfs.TestLocalDFS
>    hadoop.hdfs.TestMaintenanceState
>    hadoop.hdfs.TestMiniDFSCluster
>    hadoop.hdfs.TestPread
>    hadoop.hdfs.TestQuota
>    hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
>    hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData
>    hadoop.hdfs.TestReadStripedFileWithMissingBlocks
>    hadoop.hdfs.TestReconstructStripedFile
>    hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
>    hadoop.hdfs.TestSecureEncryptionZoneWithKMS
>    hadoop.hdfs.TestTrashWithSecureEncryptionZones
>    hadoop.hdfs.tools.TestDebugAdmin
>    hadoop.hdfs.tools.TestDFSAdmin
>    hadoop.hdfs.tools.TestDFSAdminWithHA
>    hadoop.hdfs.tools.TestGetConf
>    hadoop.hdfs.web.TestWebHDFS
>    hadoop.hdfs.web.TestWebHdfsUrl
>    hadoop.fs.http.server.TestHttpFSServerWebServer
>    hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
>    hadoop.hdfs.nfs.nfs3.TestWrites
>    hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController
>    hadoop.yarn.logaggregation.TestAggregatedLogFormat
>    hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
>    hadoop.yarn.server.nodemanager.containermanager.linux.privileged.TestPrivilegedOperationExecutor
>    hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupsHandlerImpl
>    hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime
>    hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestJavaSandboxLinuxContainerRuntime
>    hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl
>    hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
>    hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
>    hadoop.yarn.server.nodemanager.TestContainerExecutor
>    hadoop.yarn.server.nodemanager.TestLocalDirsHandlerService
>    hadoop.yarn.server.nodemanager.TestNodeManagerResync
>    hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
>    hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
>    hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
>    hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
>    hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
>    hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore
>    hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
>    hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor
>    hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
>    hadoop.yarn.server.resourcemanager.TestResourceTrackerService
>    hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
>    hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
>    hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
>    hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
>    hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
>    hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
>    hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
>    hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
>    hadoop.yarn.applications.distributedshell.TestDistributedShell
>    hadoop.mapred.TestTaskProgressReporter
>    hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
>    hadoop.mapreduce.lib.output.TestFileOutputCommitter
>    hadoop.mapred.TestJavaSerialization
>    hadoop.mapred.TestLocalJobSubmission
>    hadoop.mapred.TestMiniMRChildTask
>    hadoop.mapred.TestMRTimelineEventHandling
>    hadoop.mapreduce.v2.TestMRJobs
>    hadoop.mapreduce.v2.TestUberAM
>    hadoop.yarn.service.client.TestBuildExternalComponents
>    hadoop.yarn.service.client.TestServiceCLI
>    hadoop.yarn.service.monitor.TestServiceMonitor
>    hadoop.yarn.service.providers.TestAbstractClientProvider
>    hadoop.yarn.service.TestServiceAM
>    hadoop.yarn.service.TestYarnNativeServices
>    hadoop.mapred.nativetask.handlers.TestNativeCollectorOnlyHandler
>    hadoop.mapred.uploader.TestFrameworkUploader
>    hadoop.tools.TestDistCpSystem
>    hadoop.tools.TestHadoopArchiveLogs
>    hadoop.mapred.gridmix.TestGridMixClasses
>    hadoop.fs.azure.TestClientThrottlingAnalyzer
>    hadoop.yarn.sls.TestReservationSystemInvariants
>
> CTEST:
>
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt [28K]
>
> unit:
>
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [276K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.5M]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt [68K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [20K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt [16K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [40K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [216K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [32K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [116K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [12K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [12K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [20K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [136K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [108K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [92K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [12K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt [20K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [16K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-tools_hadoop-archive-logs.txt [8.0K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-tools_hadoop-gridmix.txt [12K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [16K]
>    https://builds.apache.org/job/hadoop-trunk-win/406/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
>
> Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org