[ https://issues.apache.org/jira/browse/HDFS-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257528#comment-16257528 ]
Xiao Chen commented on HDFS-12837: ---------------------------------- {noformat} Error Message expected:<5> but was:<0> Stacktrace java.lang.AssertionError: expected:<5> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hdfs.server.namenode.TestReencryption.testReencryptionKMSDown(TestReencryption.java:1776) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Standard Output 2017-11-16 12:31:00,596 [Thread-164] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.kms is not defined 2017-11-16 12:31:00,597 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(922)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2017-11-16 12:31:00,598 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(895)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context 
kms 2017-11-16 12:31:00,598 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2017-11-16 12:31:00,598 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2017-11-16 12:31:00,599 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:bindListener(1139)) - Jetty bound to port 40412 2017-11-16 12:31:00,599 [Thread-164] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502 2017-11-16 12:31:00,600 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@3d2acaa3{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE} 2017-11-16 12:31:00,601 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@4a20fcf1{/static,jar:file:/home/jenkins/yetus-m2/hadoop-trunk-patch-0/org/apache/hadoop/hadoop-kms/3.1.0-SNAPSHOT/hadoop-kms-3.1.0-SNAPSHOT.jar!/webapps/static,AVAILABLE} 2017-11-16 12:31:00,629 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(123)) - ------------------------------------------------------------- 2017-11-16 12:31:00,629 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(124)) - Java runtime version : 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12 2017-11-16 12:31:00,629 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(126)) - User: jenkins 2017-11-16 12:31:00,630 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(127)) - KMS Hadoop Version: 3.1.0-SNAPSHOT 2017-11-16 12:31:00,630 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(128)) - ------------------------------------------------------------- 2017-11-16 12:31:00,630 [Thread-164] 
INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'CREATE' ACL '*' 2017-11-16 12:31:00,630 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'DELETE' ACL '*' 2017-11-16 12:31:00,631 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'ROLLOVER' ACL '*' 2017-11-16 12:31:00,631 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'GET' ACL '*' 2017-11-16 12:31:00,631 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'GET_KEYS' ACL '*' 2017-11-16 12:31:00,631 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'GET_METADATA' ACL '*' 2017-11-16 12:31:00,631 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'SET_KEY_MATERIAL' ACL '*' 2017-11-16 12:31:00,632 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'GENERATE_EEK' ACL '*' 2017-11-16 12:31:00,632 [Thread-164] INFO server.KMSACLs (KMSACLs.java:setKMSACLs(107)) - 'DECRYPT_EEK' ACL '*' 2017-11-16 12:31:00,637 [Thread-164] INFO server.KMSACLs (KMSACLs.java:parseAclsWithPrefix(185)) - default.key.acl. for KEY_OP 'READ' is set to '*' 2017-11-16 12:31:00,637 [Thread-164] INFO server.KMSACLs (KMSACLs.java:parseAclsWithPrefix(185)) - default.key.acl. for KEY_OP 'MANAGEMENT' is set to '*' 2017-11-16 12:31:00,638 [Thread-164] INFO server.KMSACLs (KMSACLs.java:parseAclsWithPrefix(185)) - default.key.acl. for KEY_OP 'GENERATE_EEK' is set to '*' 2017-11-16 12:31:00,638 [Thread-164] INFO server.KMSACLs (KMSACLs.java:parseAclsWithPrefix(185)) - default.key.acl. 
for KEY_OP 'DECRYPT_EEK' is set to '*' 2017-11-16 12:31:00,641 [Thread-164] INFO server.KMSAudit (KMSAudit.java:initializeAuditLoggers(157)) - Initializing audit logger class org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger 2017-11-16 12:31:00,643 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(173)) - Initialized KeyProvider CachingKeyProvider: jceks://file@/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/aca11bad-81b2-4be2-89b7-8aeb41637cca/kms.keystore 2017-11-16 12:31:00,644 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(187)) - Initialized KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: KeyProviderCryptoExtension: CachingKeyProvider: jceks://file@/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/aca11bad-81b2-4be2-89b7-8aeb41637cca/kms.keystore 2017-11-16 12:31:00,645 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(192)) - Default key bitlength is 128 2017-11-16 12:31:00,645 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextInitialized(193)) - KMS Started 2017-11-16 12:31:00,647 [Thread-164] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:updateCurrentKey(342)) - Updating the current master key for generating delegation tokens 2017-11-16 12:31:00,647 [Thread[Thread-173,5,main]] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:run(669)) - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) 2017-11-16 12:31:00,649 [Thread[Thread-173,5,main]] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:updateCurrentKey(342)) - Updating the current master key for generating delegation tokens 2017-11-16 12:31:00,906 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started 
o.e.j.w.WebAppContext@7af46a80{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/4/kms/webapp/,AVAILABLE}{/kms} 2017-11-16 12:31:00,910 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@583b8443{HTTP/1.1,[http/1.1]}{localhost:40412} 2017-11-16 12:31:00,910 [Thread-164] INFO server.Server (Server.java:doStart(414)) - Started @16246ms 2017-11-16 12:31:00,915 [Thread-164] INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(121)) - loaded properties from hadoop-metrics2.properties 2017-11-16 12:31:00,917 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 0 second(s). 2017-11-16 12:31:00,917 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - KMS metrics system started 2017-11-16 12:31:00,919 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@47e0ba74] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor 2017-11-16 12:31:00,931 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(469)) - starting cluster: numNameNodes=1, numDataNodes=1 Formatting using clusterid: testClusterID 2017-11-16 12:31:00,933 [Thread-164] INFO namenode.FSEditLog (FSEditLog.java:newInstance(229)) - Edit logging is async:true 2017-11-16 12:31:00,935 [Thread-164] INFO kms.KMSClientProvider (KMSClientProvider.java:<init>(403)) - KMSClientProvider for KMS url: http://localhost:40412/kms/v1/ delegation token service: 127.0.0.1:40412 created. 
2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(746)) - KeyProvider: KeyProviderCryptoExtension: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@6ab99280 2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(120)) - fsLock is fair: true 2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(136)) - Detailed lock hold time metrics enabled: false 2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(767)) - fsOwner = jenkins (auth:SIMPLE) 2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(768)) - supergroup = supergroup 2017-11-16 12:31:00,936 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(769)) - isPermissionEnabled = true 2017-11-16 12:31:00,937 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(780)) - HA Enabled: false 2017-11-16 12:31:00,937 [Thread-164] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profiling 2017-11-16 12:31:00,937 [Thread-164] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(304)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 2017-11-16 12:31:00,937 [Thread-164] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(312)) - dfs.namenode.datanode.registration.ip-hostname-check=true 2017-11-16 12:31:00,938 [Thread-164] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 2017-11-16 12:31:00,938 [Thread-164] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will start around 2017 Nov 16 12:31:00 2017-11-16 12:31:00,938 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap 2017-11-16 12:31:00,938 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:00,938 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB 2017-11-16 12:31:00,939 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304 entries 2017-11-16 12:31:00,942 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(585)) - dfs.block.access.token.enable = false 2017-11-16 12:31:00,942 [Thread-164] INFO Configuration.deprecation (Configuration.java:logDeprecation(1375)) - No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2017-11-16 12:31:00,942 [Thread-164] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0 2017-11-16 12:31:00,943 [Thread-164] INFO 
blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 0 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(571)) - defaultReplication = 1 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(572)) - maxReplication = 512 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(573)) - minReplication = 1 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(574)) - maxReplicationStreams = 2 2017-11-16 12:31:00,943 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(575)) - redundancyRecheckInterval = 3000ms 2017-11-16 12:31:00,944 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(576)) - encryptDataTransfer = false 2017-11-16 12:31:00,944 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(577)) - maxNumBlocksToLog = 1000 2017-11-16 12:31:00,944 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap 2017-11-16 12:31:00,944 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:00,945 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB 2017-11-16 12:31:00,945 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries 2017-11-16 12:31:00,946 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(284)) - ACLs enabled? false 2017-11-16 12:31:00,947 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(288)) - POSIX ACL inheritance enabled? true 2017-11-16 12:31:00,947 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(292)) - XAttrs enabled? 
true 2017-11-16 12:31:00,947 [Thread-164] INFO namenode.NameNode (FSDirectory.java:<init>(356)) - Caching file names occurring more than 10 times 2017-11-16 12:31:00,947 [Thread-164] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:<init>(214)) - Configured throttleLimitHandlerRatio=1.0 for re-encryption 2017-11-16 12:31:00,947 [Thread-164] INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(117)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true 2017-11-16 12:31:00,948 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks 2017-11-16 12:31:00,948 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:00,948 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB 2017-11-16 12:31:00,948 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288 entries 2017-11-16 12:31:00,949 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10 2017-11-16 12:31:00,949 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10 2017-11-16 12:31:00,949 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 2017-11-16 12:31:00,950 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(972)) - Retry cache on namenode is enabled 2017-11-16 12:31:00,950 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(980)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 2017-11-16 12:31:00,950 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache 2017-11-16 12:31:00,950 
[Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:00,950 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB 2017-11-16 12:31:00,951 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536 entries 2017-11-16 12:31:00,952 [Thread-164] INFO namenode.FSImage (FSImage.java:format(164)) - Allocated new BlockPoolId: BP-1262690748-172.17.0.2-1510835460952 2017-11-16 12:31:00,954 [Thread-164] INFO common.Storage (NNStorage.java:format(583)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 has been successfully formatted. 2017-11-16 12:31:00,956 [Thread-164] INFO common.Storage (NNStorage.java:format(583)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 has been successfully formatted. 2017-11-16 12:31:00,957 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(451)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression 2017-11-16 12:31:00,957 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(451)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression 2017-11-16 12:31:00,963 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(454)) - Image file 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 392 bytes saved in 0 seconds. 2017-11-16 12:31:00,964 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(454)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 392 bytes saved in 0 seconds. 2017-11-16 12:31:00,969 [Thread-164] INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1 images with txid >= 0 2017-11-16 12:31:00,971 [Thread-164] INFO namenode.NameNode (NameNode.java:createNameNode(1595)) - createNameNode [] 2017-11-16 12:31:00,971 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - NameNode metrics system started (again) 2017-11-16 12:31:00,972 [Thread-164] INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(467)) - fs.defaultFS is hdfs://127.0.0.1:0 2017-11-16 12:31:00,978 [Thread-164] INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1659)) - Starting Web-server for hdfs at: http://localhost:0 2017-11-16 12:31:00,979 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:getWebAppsPath(1004)) - Web server is in development mode. Resources will be read from the source tree. 2017-11-16 12:31:00,980 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@c3f5460] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor 2017-11-16 12:31:00,981 [Thread-164] INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 
2017-11-16 12:31:00,982 [Thread-164] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined 2017-11-16 12:31:00,982 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:getWebAppsPath(1004)) - Web server is in development mode. Resources will be read from the source tree. 2017-11-16 12:31:00,984 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(922)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2017-11-16 12:31:00,984 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(895)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs 2017-11-16 12:31:00,984 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2017-11-16 12:31:00,985 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2017-11-16 12:31:00,986 [Thread-164] INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(98)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter) 2017-11-16 12:31:00,986 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(780)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/* 2017-11-16 12:31:00,986 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:bindListener(1139)) - Jetty bound to port 41989 2017-11-16 12:31:00,987 [Thread-164] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502 2017-11-16 12:31:00,990 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started 
o.e.j.s.ServletContextHandler@79be4e12{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE} 2017-11-16 12:31:00,990 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@59f8025d{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,AVAILABLE} 2017-11-16 12:31:00,995 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@5adb0bc7{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/,AVAILABLE}{/hdfs} 2017-11-16 12:31:00,997 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@6879ed99{HTTP/1.1,[http/1.1]}{localhost:41989} 2017-11-16 12:31:00,997 [Thread-164] INFO server.Server (Server.java:doStart(414)) - Started @16333ms 2017-11-16 12:31:01,000 [Thread-164] INFO namenode.FSEditLog (FSEditLog.java:newInstance(229)) - Edit logging is async:true 2017-11-16 12:31:01,001 [Thread-164] INFO kms.KMSClientProvider (KMSClientProvider.java:<init>(403)) - KMSClientProvider for KMS url: http://localhost:40412/kms/v1/ delegation token service: 127.0.0.1:40412 created. 
2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(746)) - KeyProvider: KeyProviderCryptoExtension: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@4a40e2aa 2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(120)) - fsLock is fair: true 2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(136)) - Detailed lock hold time metrics enabled: false 2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(767)) - fsOwner = jenkins (auth:SIMPLE) 2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(768)) - supergroup = supergroup 2017-11-16 12:31:01,002 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(769)) - isPermissionEnabled = true 2017-11-16 12:31:01,003 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(780)) - HA Enabled: false 2017-11-16 12:31:01,003 [Thread-164] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profiling 2017-11-16 12:31:01,003 [Thread-164] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(304)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 2017-11-16 12:31:01,003 [Thread-164] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(312)) - dfs.namenode.datanode.registration.ip-hostname-check=true 2017-11-16 12:31:01,004 [Thread-164] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 2017-11-16 12:31:01,004 [Thread-164] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will start around 2017 Nov 16 12:31:01 2017-11-16 12:31:01,004 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap 2017-11-16 12:31:01,004 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:01,005 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB 2017-11-16 12:31:01,005 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304 entries 2017-11-16 12:31:01,008 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(585)) - dfs.block.access.token.enable = false 2017-11-16 12:31:01,009 [Thread-164] INFO Configuration.deprecation (Configuration.java:logDeprecation(1375)) - No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2017-11-16 12:31:01,009 [Thread-164] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033 2017-11-16 12:31:01,009 [Thread-164] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0 2017-11-16 12:31:01,009 [Thread-164] INFO 
blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 0 2017-11-16 12:31:01,009 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(571)) - defaultReplication = 1 2017-11-16 12:31:01,009 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(572)) - maxReplication = 512 2017-11-16 12:31:01,010 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(573)) - minReplication = 1 2017-11-16 12:31:01,010 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(574)) - maxReplicationStreams = 2 2017-11-16 12:31:01,010 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(575)) - redundancyRecheckInterval = 3000ms 2017-11-16 12:31:01,010 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(576)) - encryptDataTransfer = false 2017-11-16 12:31:01,010 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:<init>(577)) - maxNumBlocksToLog = 1000 2017-11-16 12:31:01,011 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap 2017-11-16 12:31:01,011 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:01,011 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB 2017-11-16 12:31:01,011 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries 2017-11-16 12:31:01,015 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(284)) - ACLs enabled? false 2017-11-16 12:31:01,015 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(288)) - POSIX ACL inheritance enabled? true 2017-11-16 12:31:01,015 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:<init>(292)) - XAttrs enabled? 
true 2017-11-16 12:31:01,016 [Thread-164] INFO namenode.NameNode (FSDirectory.java:<init>(356)) - Caching file names occurring more than 10 times 2017-11-16 12:31:01,016 [Thread-164] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:<init>(214)) - Configured throttleLimitHandlerRatio=1.0 for re-encryption 2017-11-16 12:31:01,016 [Thread-164] INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(117)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true 2017-11-16 12:31:01,016 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks 2017-11-16 12:31:01,016 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:01,017 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB 2017-11-16 12:31:01,017 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288 entries 2017-11-16 12:31:01,018 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10 2017-11-16 12:31:01,018 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10 2017-11-16 12:31:01,018 [Thread-164] INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 2017-11-16 12:31:01,018 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(972)) - Retry cache on namenode is enabled 2017-11-16 12:31:01,018 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(980)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 2017-11-16 12:31:01,019 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache 2017-11-16 12:31:01,019 
[Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit 2017-11-16 12:31:01,019 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB 2017-11-16 12:31:01,019 [Thread-164] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536 entries 2017-11-16 12:31:01,022 [Thread-164] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/in_use.lock acquired by nodename 6390@c6c891eb9e98 2017-11-16 12:31:01,023 [Thread-164] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/in_use.lock acquired by nodename 6390@c6c891eb9e98 2017-11-16 12:31:01,024 [Thread-164] INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(396)) - Recovering unfinalized segments in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current 2017-11-16 12:31:01,025 [Thread-164] INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(396)) - Recovering unfinalized segments in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current 2017-11-16 12:31:01,025 [Thread-164] INFO namenode.FSImage (FSImage.java:loadFSImage(712)) - No edit log streams selected. 
2017-11-16 12:31:01,025 [Thread-164] INFO namenode.FSImage (FSImage.java:loadFSImageFile(775)) - Planning to load image: FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2017-11-16 12:31:01,027 [Thread-164] INFO namenode.ErasureCodingPolicyManager (ErasureCodingPolicyManager.java:enablePolicy(390)) - Enable the erasure coding policy RS-6-3-1024k
2017-11-16 12:31:01,028 [Thread-164] INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(266)) - Loading 1 INodes.
2017-11-16 12:31:01,029 [Thread-164] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(190)) - Loaded FSImage in 0 seconds.
2017-11-16 12:31:01,029 [Thread-164] INFO namenode.FSImage (FSImage.java:loadFSImage(941)) - Loaded image for txid 0 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage_0000000000000000000
2017-11-16 12:31:01,029 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1084)) - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2017-11-16 12:31:01,030 [Thread-164] INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1361)) - Starting log segment at 1
2017-11-16 12:31:01,039 [Thread-164] INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2017-11-16 12:31:01,039 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(718)) - Finished loading FSImage in 18 msecs
2017-11-16 12:31:01,040 [Thread-164] INFO namenode.NameNode (NameNodeRpcServer.java:<init>(441)) - RPC server is binding to localhost:0
2017-11-16 12:31:01,040 [Thread-164] INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-11-16 12:31:01,041 [Socket Reader #1 for port 39526] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for port 39526
2017-11-16 12:31:01,045 [Thread-164] INFO namenode.NameNode (NameNode.java:initialize(740)) - Clients are to use localhost:39526 to access this namenode/service.
2017-11-16 12:31:01,045 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4937)) - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2017-11-16 12:31:01,064 [Thread-164] INFO namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(159)) - Number of blocks under construction: 0
2017-11-16 12:31:01,065 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:initializeReplQueues(4711)) - initializing replication queues
2017-11-16 12:31:01,065 [Thread-164] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(396)) - STATE* Leaving safe mode after 0 secs
2017-11-16 12:31:01,065 [Thread-164] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(402)) - STATE* Network topology has 0 racks and 0 datanodes
2017-11-16 12:31:01,065 [Thread-164] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(404)) - STATE* UnderReplicatedBlocks has 0 blocks
2017-11-16 12:31:01,072 [Thread-164] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:updateCurrentKey(342)) - Updating the current master key for generating delegation tokens
2017-11-16 12:31:01,075 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3397)) - Total number of blocks = 0
2017-11-16 12:31:01,075 [Thread[Thread-203,5,main]] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:run(669)) - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2017-11-16 12:31:01,075 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3398)) - Number of invalid blocks = 0
2017-11-16 12:31:01,076 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3399)) - Number of under-replicated blocks = 0
2017-11-16 12:31:01,076 [Thread[Thread-203,5,main]] INFO delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:updateCurrentKey(342)) - Updating the current master key for generating delegation tokens
2017-11-16 12:31:01,076 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3400)) - Number of over-replicated blocks = 0
2017-11-16 12:31:01,076 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3402)) - Number of blocks being written = 0
2017-11-16 12:31:01,076 [Reconstruction Queue Initializer] INFO hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3405)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 11 msec
2017-11-16 12:31:01,078 [IPC Server listener on 39526] INFO ipc.Server (Server.java:run(1146)) - IPC Server listener on 39526: starting
2017-11-16 12:31:01,082 [IPC Server Responder] INFO ipc.Server (Server.java:run(1307)) - IPC Server Responder: starting
2017-11-16 12:31:01,093 [Thread-164] INFO namenode.NameNode (NameNode.java:startCommonServices(833)) - NameNode RPC up at: localhost/127.0.0.1:39526
2017-11-16 12:31:01,094 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1196)) - Starting services required for active state
2017-11-16 12:31:01,094 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(761)) - Initializing quota with 4 thread(s)
2017-11-16 12:31:01,095 [Thread-164] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(770)) - Quota initialization completed in 0 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2017-11-16 12:31:01,102 [reencryptionHandlerThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:run(327)) - Starting up re-encrypt thread with interval=60000 millisecond.
2017-11-16 12:31:01,111 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:startSPS(4915)) - Failed to start StoragePolicySatisfier as dfs.storage.policy.enabled set to true and dfs.storage.policy.satisfier.enabled set to false.
2017-11-16 12:31:01,111 [Warm Up EDEK Cache Thread #0] INFO namenode.NameNode (FSDirEncryptionZoneOp.java:run(567)) - Warming up 0 EDEKs... (initialDelay=3000, retryInterval=1000)
2017-11-16 12:31:01,111 [CacheReplicationMonitor(420712187)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-11-16 12:31:01,120 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1599)) - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2
2017-11-16 12:31:01,121 [Thread-164] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1
2017-11-16 12:31:01,121 [Thread-164] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2
2017-11-16 12:31:01,142 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2017-11-16 12:31:01,143 [Thread-164] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-11-16 12:31:01,143 [Thread-164] INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2017-11-16 12:31:01,144 [Thread-164] INFO datanode.DataNode (DataNode.java:<init>(497)) - Configured hostname is 127.0.0.1
2017-11-16 12:31:01,145 [Thread-164] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-11-16 12:31:01,145 [Thread-164] INFO datanode.DataNode (DataNode.java:startDataNode(1386)) - Starting DataNode with maxLockedMemory = 0
2017-11-16 12:31:01,146 [Thread-164] INFO datanode.DataNode (DataNode.java:initDataXceiver(1143)) - Opened streaming server at /127.0.0.1:37608
2017-11-16 12:31:01,146 [Thread-164] INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
2017-11-16 12:31:01,146 [Thread-164] INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2017-11-16 12:31:01,147 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:getWebAppsPath(1004)) - Web server is in development mode. Resources will be read from the source tree.
2017-11-16 12:31:01,149 [Thread-164] INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-11-16 12:31:01,149 [Thread-164] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2017-11-16 12:31:01,149 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:getWebAppsPath(1004)) - Web server is in development mode. Resources will be read from the source tree.
2017-11-16 12:31:01,152 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(922)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-11-16 12:31:01,153 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(895)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-11-16 12:31:01,153 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-11-16 12:31:01,153 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:addFilter(905)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-11-16 12:31:01,155 [Thread-164] INFO http.HttpServer2 (HttpServer2.java:bindListener(1139)) - Jetty bound to port 38701
2017-11-16 12:31:01,155 [Thread-164] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2017-11-16 12:31:01,158 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@6dc86c2f{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2017-11-16 12:31:01,159 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2c631819{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,AVAILABLE}
2017-11-16 12:31:01,166 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@35c113a{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/,AVAILABLE}{/datanode}
2017-11-16 12:31:01,168 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@5fa0b45e{HTTP/1.1,[http/1.1]}{localhost:38701}
2017-11-16 12:31:01,168 [Thread-164] INFO server.Server (Server.java:doStart(414)) - Started @16504ms
2017-11-16 12:31:01,179 [Thread-164] INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(239)) - Listening HTTP traffic on /127.0.0.1:35870
2017-11-16 12:31:01,183 [Thread-164] INFO datanode.DataNode (DataNode.java:startDataNode(1413)) - dnUserName = jenkins
2017-11-16 12:31:01,183 [Thread-164] INFO datanode.DataNode (DataNode.java:startDataNode(1414)) - supergroup = supergroup
2017-11-16 12:31:01,184 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@29b937a6] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2017-11-16 12:31:01,184 [Thread-164] INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-11-16 12:31:01,185 [Socket Reader #1 for port 41645] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for port 41645
2017-11-16 12:31:01,191 [Thread-164] INFO datanode.DataNode (DataNode.java:initIpcServer(1030)) - Opened IPC server at /127.0.0.1:41645
2017-11-16 12:31:01,197 [Thread-164] INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2017-11-16 12:31:01,197 [Thread-164] INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices for nameservices: <default>
2017-11-16 12:31:01,198 [Thread-241] INFO datanode.DataNode (BPServiceActor.java:run(839)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:39526 starting to offer service
2017-11-16 12:31:01,211 [IPC Server Responder] INFO ipc.Server (Server.java:run(1307)) - IPC Server Responder: starting
2017-11-16 12:31:01,211 [IPC Server listener on 41645] INFO ipc.Server (Server.java:run(1146)) - IPC Server listener on 41645: starting
2017-11-16 12:31:01,233 [Thread-241] INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(364)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:39526
2017-11-16 12:31:01,233 [Thread-241] INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2017-11-16 12:31:01,235 [Thread-241] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/in_use.lock acquired by nodename 6390@c6c891eb9e98
2017-11-16 12:31:01,236 [Thread-241] INFO common.Storage (DataStorage.java:loadStorageDirectory(273)) - Storage directory with location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1 is not formatted for namespace 1877003900. Formatting...
2017-11-16 12:31:01,236 [Thread-241] INFO common.Storage (DataStorage.java:createStorageID(150)) - Generated new storageID DS-7c710733-1c9a-4bd6-9624-8545d63cf48b for directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1
2017-11-16 12:31:01,237 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
2017-11-16 12:31:01,238 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2653)) - dnInfo.length != numDataNodes
2017-11-16 12:31:01,239 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2574)) - Waiting for cluster to become active
2017-11-16 12:31:01,241 [Thread-241] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2/in_use.lock acquired by nodename 6390@c6c891eb9e98
2017-11-16 12:31:01,241 [Thread-241] INFO common.Storage (DataStorage.java:loadStorageDirectory(273)) - Storage directory with location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2 is not formatted for namespace 1877003900. Formatting...
2017-11-16 12:31:01,241 [Thread-241] INFO common.Storage (DataStorage.java:createStorageID(150)) - Generated new storageID DS-06da148c-d814-4f49-b09a-f728513f8c9c for directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2
2017-11-16 12:31:01,255 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage directories for bpid BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,256 [Thread-241] INFO common.Storage (Storage.java:lock(806)) - Locking is disabled for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/current/BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,256 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage directory for location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1 and block pool id BP-1262690748-172.17.0.2-1510835460952 is not formatted for BP-1262690748-172.17.0.2-1510835460952. Formatting ...
2017-11-16 12:31:01,256 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:format(280)) - Formatting block pool BP-1262690748-172.17.0.2-1510835460952 directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/current/BP-1262690748-172.17.0.2-1510835460952/current
2017-11-16 12:31:01,273 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage directories for bpid BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,273 [Thread-241] INFO common.Storage (Storage.java:lock(806)) - Locking is disabled for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2/current/BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,273 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage directory for location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2 and block pool id BP-1262690748-172.17.0.2-1510835460952 is not formatted for BP-1262690748-172.17.0.2-1510835460952. Formatting ...
2017-11-16 12:31:01,273 [Thread-241] INFO common.Storage (BlockPoolSliceStorage.java:format(280)) - Formatting block pool BP-1262690748-172.17.0.2-1510835460952 directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2/current/BP-1262690748-172.17.0.2-1510835460952/current
2017-11-16 12:31:01,275 [Thread-241] INFO datanode.DataNode (DataNode.java:initStorage(1697)) - Setting up storage: nsid=1877003900;bpid=BP-1262690748-172.17.0.2-1510835460952;lv=-57;nsInfo=lv=-64;cid=testClusterID;nsid=1877003900;c=1510835460952;bpid=BP-1262690748-172.17.0.2-1510835460952;dnuuid=null
2017-11-16 12:31:01,275 [Thread-241] INFO datanode.DataNode (DataNode.java:checkDatanodeUuid(1521)) - Generated and persisted new Datanode UUID 1fd36863-86c1-4652-a710-49bf92f7d2b8
2017-11-16 12:31:01,285 [Thread-241] INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(306)) - Added new volume: DS-7c710733-1c9a-4bd6-9624-8545d63cf48b
2017-11-16 12:31:01,285 [Thread-241] INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(431)) - Added volume - [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1, StorageType: DISK
2017 ...[truncated 5198 chars]... ch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2, DS-06da148c-d814-4f49-b09a-f728513f8c9c): finished scanning block pool BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,318 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2)] INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2, DS-06da148c-d814-4f49-b09a-f728513f8c9c): no suitable block pools found to scan. Waiting 1814399999 ms.
2017-11-16 12:31:01,318 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(386)) - Now scanning bpid BP-1262690748-172.17.0.2-1510835460952 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1
2017-11-16 12:31:01,319 [Thread-241] INFO datanode.DirectoryScanner (DirectoryScanner.java:start(281)) - Periodic Directory Tree Verification scan starting at 11/16/17 6:01 PM with interval of 21600000ms
2017-11-16 12:31:01,319 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1, DS-7c710733-1c9a-4bd6-9624-8545d63cf48b): finished scanning block pool BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,320 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BPServiceActor.java:register(793)) - Block pool BP-1262690748-172.17.0.2-1510835460952 (Datanode Uuid 1fd36863-86c1-4652-a710-49bf92f7d2b8) service to localhost/127.0.0.1:39526 beginning handshake with NN
2017-11-16 12:31:01,320 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1, DS-7c710733-1c9a-4bd6-9624-8545d63cf48b): no suitable block pools found to scan. Waiting 1814399998 ms.
2017-11-16 12:31:01,321 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(1047)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:37608, datanodeUuid=1fd36863-86c1-4652-a710-49bf92f7d2b8, infoPort=35870, infoSecurePort=0, ipcPort=41645, storageInfo=lv=-57;cid=testClusterID;nsid=1877003900;c=1510835460952) storage 1fd36863-86c1-4652-a710-49bf92f7d2b8
2017-11-16 12:31:01,322 [IPC Server handler 2 on 39526] INFO net.NetworkTopology (NetworkTopology.java:add(145)) - Adding a new node: /default-rack/127.0.0.1:37608
2017-11-16 12:31:01,322 [IPC Server handler 2 on 39526] INFO blockmanagement.BlockReportLeaseManager (BlockReportLeaseManager.java:registerNode(204)) - Registered DN 1fd36863-86c1-4652-a710-49bf92f7d2b8 (127.0.0.1:37608).
2017-11-16 12:31:01,324 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BPServiceActor.java:register(812)) - Block pool Block pool BP-1262690748-172.17.0.2-1510835460952 (Datanode Uuid 1fd36863-86c1-4652-a710-49bf92f7d2b8) service to localhost/127.0.0.1:39526 successfully registered with NN
2017-11-16 12:31:01,324 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BPServiceActor.java:offerService(642)) - For namenode localhost/127.0.0.1:39526 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2017-11-16 12:31:01,327 [IPC Server handler 3 on 39526] INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(946)) - Adding new storage ID DS-7c710733-1c9a-4bd6-9624-8545d63cf48b for DN 127.0.0.1:37608
2017-11-16 12:31:01,327 [IPC Server handler 3 on 39526] INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(946)) - Adding new storage ID DS-06da148c-d814-4f49-b09a-f728513f8c9c for DN 127.0.0.1:37608
2017-11-16 12:31:01,331 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2479)) - BLOCK* processReport 0x2612e9789e1bb44a: Processing first storage report for DS-06da148c-d814-4f49-b09a-f728513f8c9c from datanode 1fd36863-86c1-4652-a710-49bf92f7d2b8
2017-11-16 12:31:01,331 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2505)) - BLOCK* processReport 0x2612e9789e1bb44a: from storage DS-06da148c-d814-4f49-b09a-f728513f8c9c node DatanodeRegistration(127.0.0.1:37608, datanodeUuid=1fd36863-86c1-4652-a710-49bf92f7d2b8, infoPort=35870, infoSecurePort=0, ipcPort=41645, storageInfo=lv=-57;cid=testClusterID;nsid=1877003900;c=1510835460952), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2017-11-16 12:31:01,332 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2479)) - BLOCK* processReport 0x2612e9789e1bb44a: Processing first storage report for DS-7c710733-1c9a-4bd6-9624-8545d63cf48b from datanode 1fd36863-86c1-4652-a710-49bf92f7d2b8
2017-11-16 12:31:01,332 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2505)) - BLOCK* processReport 0x2612e9789e1bb44a: from storage DS-7c710733-1c9a-4bd6-9624-8545d63cf48b node DatanodeRegistration(127.0.0.1:37608, datanodeUuid=1fd36863-86c1-4652-a710-49bf92f7d2b8, infoPort=35870, infoSecurePort=0, ipcPort=41645, storageInfo=lv=-57;cid=testClusterID;nsid=1877003900;c=1510835460952), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2017-11-16 12:31:01,332 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BPServiceActor.java:blockReport(424)) - Successfully sent block report 0x2612e9789e1bb44a, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2017-11-16 12:31:01,333 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BPOfferService.java:processCommandFromActive(745)) - Got finalize command for block pool BP-1262690748-172.17.0.2-1510835460952
2017-11-16 12:31:01,341 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
2017-11-16 12:31:01,342 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
2017-11-16 12:31:01,346 [IPC Server handler 6 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
2017-11-16 12:31:01,347 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
2017-11-16 12:31:01,754 [qtp1923447936-292] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=CREATE_KEY, key=test_key, user=jenkins] UserProvidedMaterial:false Description:test_key
2017-11-16 12:31:01,760 [IPC Server handler 7 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:01,770 [qtp1923447936-298] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=GET_METADATA, key=test_key, user=jenkins]
2017-11-16 12:31:02,218 [qtp1923447936-297] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditEvent(59)) - OK[op=GENERATE_EEK, key=test_key, user=jenkins, accessCount=1, interval=0ms]
2017-11-16 12:31:02,227 [IPC Server handler 8 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=createEncryptionZone src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,229 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,232 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/0 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-11-16 12:31:02,236 [Thread-164] INFO kms.KMSClientProvider (KMSClientProvider.java:<init>(403)) - KMSClientProvider for KMS url: http://localhost:40412/kms/v1/ delegation token service: 127.0.0.1:40412 created.
2017-11-16 12:31:02,458 [qtp1923447936-293] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditEvent(59)) - OK[op=DECRYPT_EEK, key=test_key, user=jenkins, accessCount=1, interval=0ms]
2017-11-16 12:31:02,464 [IPC Server handler 7 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:37608 for /zones/zone/0
2017-11-16 12:31:02,468 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53148 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001 src: /127.0.0.1:53148 dest: /127.0.0.1:37608
2017-11-16 12:31:02,474 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53148, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001, duration(ns): 3412214
2017-11-16 12:31:02,474 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2017-11-16 12:31:02,477 [IPC Server handler 8 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/0 is closed by DFSClient_NONMAPREDUCE_-1336185029_289
2017-11-16 12:31:02,478 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,480 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/1 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-11-16 12:31:02,491 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:37608 for /zones/zone/1
2017-11-16 12:31:02,493 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53149 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002 src: /127.0.0.1:53149 dest: /127.0.0.1:37608
2017-11-16 12:31:02,499 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53149, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002, duration(ns): 2662295
2017-11-16 12:31:02,499 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2017-11-16 12:31:02,501 [IPC Server handler 4 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/1 is closed by DFSClient_NONMAPREDUCE_-1336185029_289
2017-11-16 12:31:02,502 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,506 [IPC Server handler 6 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/2 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-11-16 12:31:02,514 [IPC Server handler 7 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:37608 for /zones/zone/2
2017-11-16 12:31:02,516 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53150 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003 src: /127.0.0.1:53150 dest: /127.0.0.1:37608
2017-11-16 12:31:02,523 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53150, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003, duration(ns): 4766611
2017-11-16 12:31:02,523 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2017-11-16 12:31:02,525 [IPC Server handler 8 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/2 is closed by DFSClient_NONMAPREDUCE_-1336185029_289
2017-11-16 12:31:02,527 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,529 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/3 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-11-16 12:31:02,538 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741828_1004, replicas=127.0.0.1:37608 for /zones/zone/3
2017-11-16 12:31:02,540 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53151 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004 src: /127.0.0.1:53151 dest: /127.0.0.1:37608
2017-11-16 12:31:02,545 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53151, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004, duration(ns): 3129760
2017-11-16 12:31:02,545 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating
2017-11-16 12:31:02,546 [IPC Server handler 4 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/3 is closed by DFSClient_NONMAPREDUCE_-1336185029_289
2017-11-16 12:31:02,547 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc
2017-11-16 12:31:02,550 [IPC Server handler 6 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/4 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-11-16 12:31:02,560 [IPC Server handler 7 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741829_1005, replicas=127.0.0.1:37608 for /zones/zone/4
2017-11-16 12:31:02,563 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53152 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005 src: /127.0.0.1:53152 dest: /127.0.0.1:37608
2017-11-16 12:31:02,570 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53152, dest:
/127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005, duration(ns): 4722785 2017-11-16 12:31:02,570 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,572 [IPC Server handler 8 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/4 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:02,574 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc 2017-11-16 12:31:02,576 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/5 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc 2017-11-16 12:31:02,586 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741830_1006, replicas=127.0.0.1:37608 for /zones/zone/5 2017-11-16 12:31:02,588 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53153 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006 src: /127.0.0.1:53153 dest: /127.0.0.1:37608 2017-11-16 12:31:02,593 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace 
(BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53153, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006, duration(ns): 3080018 2017-11-16 12:31:02,593 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,595 [IPC Server handler 4 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/5 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:02,597 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc 2017-11-16 12:31:02,599 [IPC Server handler 6 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/6 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc 2017-11-16 12:31:02,608 [IPC Server handler 7 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741831_1007, replicas=127.0.0.1:37608 for /zones/zone/6 2017-11-16 12:31:02,611 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53154 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007 src: /127.0.0.1:53154 dest: /127.0.0.1:37608 2017-11-16 12:31:02,617 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007, 
type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53154, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007, duration(ns): 4270890 2017-11-16 12:31:02,617 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,618 [IPC Server handler 8 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/6 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:02,620 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc 2017-11-16 12:31:02,622 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/7 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc 2017-11-16 12:31:02,633 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741832_1008, replicas=127.0.0.1:37608 for /zones/zone/7 2017-11-16 12:31:02,635 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53155 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008 src: /127.0.0.1:53155 dest: /127.0.0.1:37608 2017-11-16 12:31:02,645 [PacketResponder: 
BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53155, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008, duration(ns): 7153531 2017-11-16 12:31:02,645 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741832_1008, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,647 [IPC Server handler 4 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/7 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:02,648 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc 2017-11-16 12:31:02,650 [IPC Server handler 6 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/8 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc 2017-11-16 12:31:02,661 [IPC Server handler 7 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741833_1009, replicas=127.0.0.1:37608 for /zones/zone/8 2017-11-16 12:31:02,663 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53156 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009 src: /127.0.0.1:53156 dest: /127.0.0.1:37608 
2017-11-16 12:31:02,670 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53156, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009, duration(ns): 4511720 2017-11-16 12:31:02,670 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741833_1009, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,671 [IPC Server handler 8 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/8 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:02,673 [IPC Server handler 0 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/zones/zone dst=null perm=jenkins:supergroup:rwxr-xr-x proto=rpc 2017-11-16 12:31:02,675 [IPC Server handler 1 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/zones/zone/9 dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc 2017-11-16 12:31:02,682 [IPC Server handler 2 on 39526] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741834_1010, replicas=127.0.0.1:37608 for /zones/zone/9 2017-11-16 12:31:02,684 [DataXceiver for client DFSClient_NONMAPREDUCE_-1336185029_289 at /127.0.0.1:53157 [Receiving block BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010]] INFO datanode.DataNode (DataXceiver.java:writeBlock(744)) - Receiving BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010 
src: /127.0.0.1:53157 dest: /127.0.0.1:37608 2017-11-16 12:31:02,687 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010, type=LAST_IN_PIPELINE] INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1528)) - src: /127.0.0.1:53157, dest: /127.0.0.1:37608, bytes: 8196, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1336185029_289, offset: 0, srvID: 1fd36863-86c1-4652-a710-49bf92f7d2b8, blockid: BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010, duration(ns): 1694254 2017-11-16 12:31:02,688 [PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1501)) - PacketResponder: BP-1262690748-172.17.0.2-1510835460952:blk_1073741834_1010, type=LAST_IN_PIPELINE terminating 2017-11-16 12:31:02,689 [IPC Server handler 4 on 39526] INFO hdfs.StateChange (FSNamesystem.java:completeFile(2823)) - DIR* completeFile: /zones/zone/9 is closed by DFSClient_NONMAPREDUCE_-1336185029_289 2017-11-16 12:31:03,094 [qtp1923447936-292] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=ROLL_NEW_VERSION, key=test_key, user=jenkins] UserProvidedMaterial:false NewVersion:test_key@1 2017-11-16 12:31:03,111 [qtp1923447936-298] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=INVALIDATE_CACHE, key=test_key, user=jenkins] 2017-11-16 12:31:03,475 [qtp1923447936-292] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=INVALIDATE_CACHE, key=test_key, user=jenkins] 2017-11-16 12:31:03,645 [IPC Server handler 5 on 39526] INFO namenode.FSNamesystem (FSNamesystem.java:reencryptEncryptionZoneInt(7246)) - Re-encryption using key version test_key@1 for zone /zones/zone 2017-11-16 12:31:03,646 [IPC Server handler 5 on 39526] INFO namenode.EncryptionZoneManager (EncryptionZoneManager.java:reencryptEncryptionZone(605)) - Zone /zones/zone(16387) is submitted for re-encryption. 
2017-11-16 12:31:03,646 [IPC Server handler 5 on 39526] DEBUG protocol.ReencryptionStatus (ReencryptionStatus.java:addZoneIfNecessary(129)) - Adding zone 16387 for re-encryption status 2017-11-16 12:31:03,646 [IPC Server handler 5 on 39526] DEBUG namenode.ReencryptionHandler (ReencryptionHandler.java:notifyNewSubmission(591)) - Notifying handler for new re-encryption command. 2017-11-16 12:31:03,646 [reencryptionHandlerThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:run(349)) - Executing re-encrypt commands on zone 16387. Current zones:[zone:16387 state:Submitted lastProcessed:null filesReencrypted:0 fileReencryptionFailures:0] 2017-11-16 12:31:03,646 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=STARTreencryption src=/zones/zone dst=null perm=null proto=rpc 2017-11-16 12:31:03,646 [reencryptionHandlerThread #0] INFO protocol.ReencryptionStatus (ReencryptionStatus.java:markZoneStarted(95)) - Zone 16387 starts re-encryption processing 2017-11-16 12:31:03,647 [reencryptionHandlerThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:reencryptEncryptionZone(410)) - Re-encrypting zone /zones/zone(id=16387) 2017-11-16 12:31:03,647 [Thread-164] INFO namenode.TestReencryption (TestReencryption.java:waitForReencryptedZones(1286)) - Waiting for re-encrypted zones to be 1 2017-11-16 12:31:03,648 [reencryptionHandlerThread #0] INFO namenode.FSTreeTraverser (ReencryptionHandler.java:submitCurrentBatch(720)) - Submitted batch (start:/zones/zone/0, size:5) of zone 16387 to re-encrypt. 2017-11-16 12:31:03,649 [reencryptionHandlerThread #0] INFO namenode.FSTreeTraverser (ReencryptionHandler.java:submitCurrentBatch(720)) - Submitted batch (start:/zones/zone/5, size:5) of zone 16387 to re-encrypt. 
2017-11-16 12:31:03,649 [reencryption edek Thread-1] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:call(540)) - Processing batched re-encryption for zone 16387, batch size 5, start:/zones/zone/5 2017-11-16 12:31:03,649 [reencryptionHandlerThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:reencryptEncryptionZone(423)) - Submission completed of zone 16387 for re-encryption. 2017-11-16 12:31:03,649 [reencryption edek Thread-0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:call(540)) - Processing batched re-encryption for zone 16387, batch size 5, start:/zones/zone/0 2017-11-16 12:31:03,839 [qtp1923447936-294] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=REENCRYPT_EEK_BATCH, key=test_key, user=jenkins] reencrypted 5 keys 2017-11-16 12:31:03,839 [qtp1923447936-297] INFO kms-audit (SimpleKMSAuditLogger.java:logAuditSimpleFormat(93)) - OK[op=REENCRYPT_EEK_BATCH, key=test_key, user=jenkins] reencrypted 5 keys 2017-11-16 12:31:03,839 [qtp1923447936-294] INFO server.KMS (KMS.java:reencryptEncryptedKeys(593)) - reencryptEncryptedKeys 5 keys for key test_key took 184.8 ms 2017-11-16 12:31:03,840 [qtp1923447936-297] INFO server.KMS (KMS.java:reencryptEncryptedKeys(593)) - reencryptEncryptedKeys 5 keys for key test_key took 185.2 ms 2017-11-16 12:31:03,842 [reencryption edek Thread-1] WARN namenode.ReencryptionHandler (ReencryptionHandler.java:reencryptEdeks(571)) - Failed to re-encrypt one batch of 5 edeks, start:/zones/zone/5 java.io.IOException: Injected KMS failure at org.apache.hadoop.hdfs.server.namenode.TestReencryption$2MyInjector.reencryptEncryptedKeys(TestReencryption.java:1741) at org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.reencryptEdeks(ReencryptionHandler.java:569) at org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call(ReencryptionHandler.java:549) at 
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call(ReencryptionHandler.java:525) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2017-11-16 12:31:03,842 [reencryption edek Thread-0] WARN namenode.ReencryptionHandler (ReencryptionHandler.java:reencryptEdeks(571)) - Failed to re-encrypt one batch of 5 edeks, start:/zones/zone/0 java.io.IOException: Injected KMS failure at org.apache.hadoop.hdfs.server.namenode.TestReencryption$2MyInjector.reencryptEncryptedKeys(TestReencryption.java:1741) at org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.reencryptEdeks(ReencryptionHandler.java:569) at org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call(ReencryptionHandler.java:549) at org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call(ReencryptionHandler.java:525) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2017-11-16 12:31:03,843 [reencryption edek Thread-1] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:call(553)) - Failed to re-encrypting one batch of 5 edeks from KMS, time consumed: 193.5 ms, start: /zones/zone/5. 
2017-11-16 12:31:03,843 [reencryption edek Thread-0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:call(553)) - Failed to re-encrypting one batch of 5 edeks from KMS, time consumed: 194.0 ms, start: /zones/zone/0. 2017-11-16 12:31:03,844 [reencryptionUpdaterThread #0] INFO namenode.ReencryptionUpdater (ReencryptionUpdater.java:processTask(474)) - Processing returned re-encryption task for zone /zones/zone(16387), batch size 5, start:/zones/zone/5 2017-11-16 12:31:03,845 [reencryptionUpdaterThread #0] INFO namenode.ReencryptionUpdater (ReencryptionUpdater.java:processTask(474)) - Processing returned re-encryption task for zone /zones/zone(16387), batch size 5, start:/zones/zone/0 2017-11-16 12:31:03,845 [reencryptionUpdaterThread #0] DEBUG namenode.ReencryptionUpdater (ReencryptionUpdater.java:processCheckpoints(396)) - Updating re-encryption checkpoint with completed task. last: null size:5. 2017-11-16 12:31:03,846 [reencryptionUpdaterThread #0] DEBUG namenode.ReencryptionUpdater (ReencryptionUpdater.java:processCheckpoints(396)) - Updating re-encryption checkpoint with completed task. last: null size:5. 2017-11-16 12:31:03,846 [reencryptionUpdaterThread #0] DEBUG namenode.ReencryptionUpdater (ReencryptionUpdater.java:processCheckpoints(414)) - Removed re-encryption tracker for zone /zones/zone because it completed with 2 tasks. 2017-11-16 12:31:03,846 [reencryptionUpdaterThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:completeReencryption(450)) - Re-encryption completed on zone /zones/zone. Re-encrypted 0 files, failures encountered: 10. 2017-11-16 12:31:03,846 [reencryptionUpdaterThread #0] INFO protocol.ReencryptionStatus (ReencryptionStatus.java:markZoneCompleted(102)) - Zone 16387 completed re-encryption. 
2017-11-16 12:31:03,848 [Thread-164] INFO namenode.TestReencryption (TestReencryption.java:waitForReencryptedZones(1295)) - Re-encrypted zones = 1 2017-11-16 12:31:03,849 [IPC Server handler 5 on 39526] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7804)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=listReencryptionStatus src=null dst=null perm=null proto=rpc 2017-11-16 12:31:03,850 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster 2017-11-16 12:31:03,850 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 0 2017-11-16 12:31:03,850 [Thread-164] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called 2017-11-16 12:31:03,851 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@596140b9] INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all peers. 2017-11-16 12:31:03,851 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2)] INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2, DS-06da148c-d814-4f49-b09a-f728513f8c9c) exiting. 2017-11-16 12:31:03,852 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1, DS-7c710733-1c9a-4bd6-9624-8545d63cf48b) exiting. 
2017-11-16 12:31:03,895 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@35c113a{/,null,UNAVAILABLE}{/datanode} 2017-11-16 12:31:03,897 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@5fa0b45e{HTTP/1.1,[http/1.1]}{localhost:0} 2017-11-16 12:31:03,897 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@2c631819{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,UNAVAILABLE} 2017-11-16 12:31:03,898 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@6dc86c2f{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE} 2017-11-16 12:31:03,899 [Thread-164] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 41645 2017-11-16 12:31:03,900 [IPC Server listener on 41645] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 41645 2017-11-16 12:31:03,900 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-11-16 12:31:03,901 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted 2017-11-16 12:31:03,901 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] WARN datanode.DataNode (BPServiceActor.java:run(882)) - Ending block pool service for: Block pool BP-1262690748-172.17.0.2-1510835460952 (Datanode Uuid 1fd36863-86c1-4652-a710-49bf92f7d2b8) service to localhost/127.0.0.1:39526 2017-11-16 12:31:03,901 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1262690748-172.17.0.2-1510835460952 
(Datanode Uuid 1fd36863-86c1-4652-a710-49bf92f7d2b8) 2017-11-16 12:31:03,901 [BP-1262690748-172.17.0.2-1510835460952 heartbeating to localhost/127.0.0.1:39526] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2663)) - Removing block pool BP-1262690748-172.17.0.2-1510835460952 2017-11-16 12:31:03,903 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/current/BP-1262690748-172.17.0.2-1510835460952] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-11-16 12:31:03,903 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2/current/BP-1262690748-172.17.0.2-1510835460952] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-11-16 12:31:03,905 [Thread-164] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads 2017-11-16 12:31:03,905 [Thread-164] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down 2017-11-16 12:31:03,905 [Thread-164] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads 2017-11-16 12:31:03,905 [Thread-164] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service threads have been shut down 2017-11-16 12:31:03,906 [Thread-164] INFO datanode.DataNode (DataNode.java:shutdown(2121)) - Shutdown complete. 
2017-11-16 12:31:03,906 [Thread-164] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(2046)) - Shutting down the namenode 2017-11-16 12:31:03,907 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1307)) - Stopping services started for active state 2017-11-16 12:31:03,907 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:stopSPS(4938)) - Storage policy satisfier is not enabled. 2017-11-16 12:31:03,907 [Thread[Thread-203,5,main]] ERROR delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:run(690)) - ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2017-11-16 12:31:03,907 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@26e88c03] INFO namenode.FSNamesystem (FSNamesystem.java:run(4077)) - LazyPersistFileScrubber was interrupted, exiting 2017-11-16 12:31:03,907 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@4e724db] INFO namenode.FSNamesystem (FSNamesystem.java:run(3986)) - NameNodeEditLogRoller was interrupted, exiting 2017-11-16 12:31:03,907 [Thread-164] INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1406)) - Ending log segment 1, 68 2017-11-16 12:31:03,907 [Warm Up EDEK Cache Thread #0] INFO namenode.NameNode (FSDirEncryptionZoneOp.java:run(572)) - EDEKCacheLoader interrupted before warming up. 
2017-11-16 12:31:03,908 [Thread-164] INFO namenode.FSEditLog (FSEditLog.java:printStatistics(775)) - Number of transactions: 69 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 20 Number of syncs: 50 SyncTimes(ms): 5 1 2017-11-16 12:31:03,908 [Thread-164] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/edits_inprogress_0000000000000000001 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000069 2017-11-16 12:31:03,909 [Thread-164] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/edits_inprogress_0000000000000000001 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000069 2017-11-16 12:31:03,909 [FSEditLogAsync] INFO namenode.FSEditLog (FSEditLogAsync.java:run(198)) - FSEditLogAsync was interrupted, exiting 2017-11-16 12:31:03,910 [reencryptionUpdaterThread #0] WARN namenode.ReencryptionUpdater (ReencryptionUpdater.java:run(266)) - Re-encryption updater thread interrupted. Exiting. 2017-11-16 12:31:03,910 [reencryptionHandlerThread #0] INFO namenode.ReencryptionHandler (ReencryptionHandler.java:run(336)) - Re-encrypt handler interrupted. 
Exiting 2017-11-16 12:31:03,910 [CacheReplicationMonitor(420712187)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(169)) - Shutting down CacheReplicationMonitor 2017-11-16 12:31:03,912 [Thread-164] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 39526 2017-11-16 12:31:03,915 [IPC Server listener on 39526] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 39526 2017-11-16 12:31:03,915 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-11-16 12:31:03,916 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:stopSPS(4938)) - Storage policy satisfier is not enabled. 2017-11-16 12:31:03,916 [RedundancyMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4476)) - Stopping RedundancyMonitor. 2017-11-16 12:31:03,917 [StorageInfoMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4511)) - Stopping thread. 2017-11-16 12:31:03,926 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1307)) - Stopping services started for active state 2017-11-16 12:31:03,926 [Thread-164] INFO blockmanagement.BlockManager (BlockManager.java:stopSPS(4938)) - Storage policy satisfier is not enabled. 
2017-11-16 12:31:03,927 [Thread-164] INFO namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1407)) - Stopping services started for standby state
2017-11-16 12:31:03,928 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@5adb0bc7{/,null,UNAVAILABLE}{/hdfs}
2017-11-16 12:31:03,930 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@6879ed99{HTTP/1.1,[http/1.1]}{localhost:0}
2017-11-16 12:31:03,930 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@59f8025d{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,UNAVAILABLE}
2017-11-16 12:31:03,930 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@79be4e12{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
2017-11-16 12:31:03,939 [Thread[Thread-173,5,main]] ERROR delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:run(690)) - ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2017-11-16 12:31:03,939 [Thread-164] INFO server.KMSWebApp (KMSWebApp.java:contextDestroyed(221)) - KMS Stopped
2017-11-16 12:31:03,940 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@7af46a80{/,null,UNAVAILABLE}{/kms}
2017-11-16 12:31:03,942 [Thread-164] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@583b8443{HTTP/1.1,[http/1.1]}{localhost:0}
2017-11-16 12:31:03,942 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@4a20fcf1{/static,jar:file:/home/jenkins/yetus-m2/hadoop-trunk-patch-0/org/apache/hadoop/hadoop-kms/3.1.0-SNAPSHOT/hadoop-kms-3.1.0-SNAPSHOT.jar!/webapps/static,UNAVAILABLE}
2017-11-16 12:31:03,943 [Thread-164] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@3d2acaa3{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
2017-11-16 12:31:04,185 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system...
2017-11-16 12:31:04,186 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped.
2017-11-16 12:31:04,187 [Thread-164] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - DataNode metrics system shutdown complete.
Standard Error
Nov 16, 2017 12:31:00 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages: org.apache.hadoop.crypto.key.kms.server
Nov 16, 2017 12:31:00 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found: class org.apache.hadoop.crypto.key.kms.server.KMS
Nov 16, 2017 12:31:00 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Provider classes found: class org.apache.hadoop.crypto.key.kms.server.KMSJSONReader class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
Nov 16, 2017 12:31:00 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator buildModelAndSchemas
SEVERE: Failed to generate the schema for the JAX-B elements
com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
	this problem is related to the following location:
		at java.util.Map
java.util.Map does not have a no-arg default constructor.
this problem is related to the following location: at java.util.Map at com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319) at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) at com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) at com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) at com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) at org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644) at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592) at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:142) at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751) at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:531) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:748) Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class javax.ws.rs.core.Response Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class javax.ws.rs.core.Response Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class javax.ws.rs.core.Response Nov 16, 2017 12:31:01 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator buildModelAndSchemas SEVERE: Failed to generate the schema for the 
JAX-B elements com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions java.util.Map is an interface, and JAXB can't handle interfaces. this problem is related to the following location: at java.util.Map java.util.Map does not have a no-arg default constructor. this problem is related to the following location: at java.util.Map at com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319) at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) at com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) at com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) at 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) at com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) at org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644) at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592) at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:142) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751) at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:531) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:748) Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class java.util.Map Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class javax.ws.rs.core.Response Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for class javax.ws.rs.core.Response Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes INFO: Couldn't find grammar element for 
class javax.ws.rs.core.Response
Nov 16, 2017 12:31:02 PM com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator attachTypes
INFO: Couldn't find grammar element for class java.util.Map
Exception in thread "org.apache.hadoop.crypto.key.kms.ValueQueue_thread" java.lang.RuntimeException: java.net.ConnectException: Connection refused (Connection refused)
	at org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:433)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:840)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:516)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:501)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.access$200(KMSClientProvider.java:96)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$EncryptedQueueRefiller.fillQueueForKey(KMSClientProvider.java:157)
	at org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:423)
	... 3 more
{noformat}
Thanks for reporting this, [~surendrasingh]. Pasting the output here in case the build logs get rolled. I have a rough idea of what's wrong; it should be test-only. Will work on this soon.

> Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown
> --------------------------------------------------------------------
>
>                 Key: HDFS-12837
>                 URL: https://issues.apache.org/jira/browse/HDFS-12837
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: encryption, test
>    Affects Versions: 3.0.0-beta1
>            Reporter: Surendra Singh Lilhore
>            Assignee: Xiao Chen
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/22112/testReport/org.apache.hadoop.hdfs.server.namenode/TestReencryptionWithKMS/testReencryptionKMSDown/