[ https://issues.apache.org/jira/browse/SENTRY-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974749#comment-15974749 ]

Na Li commented on SENTRY-1649:
-------------------------------

After moving the start of HMSFollower to the end of runServer(), the following test fails:

Regression

org.apache.sentry.tests.e2e.metastore.TestDbNotificationListenerSentryDeserializer.testAlterPartition

Failing for the past 1 build (since build #2535)
Took 0.61 sec.
Error Message

Database N_db17 already exists
Stacktrace

org.apache.hadoop.hive.metastore.api.AlreadyExistsException: Database N_db17 already exists
Standard Output

2017-04-19 04:54:17,617 (Thread-0) [INFO - 
org.apache.sentry.tests.e2e.hive.AbstractTestWithStaticConfiguration.setupTestStaticConfiguration(AbstractTestWithStaticConfiguration.java:284)]
 AbstractTestWithStaticConfiguration setupTestStaticConfiguration
2017-04-19 04:54:17,634 (Thread-0) [INFO - 
org.apache.sentry.tests.e2e.hive.AbstractTestWithStaticConfiguration.setupTestStaticConfiguration(AbstractTestWithStaticConfiguration.java:293)]
 BaseDir = /tmp/1492577657634-0
2017-04-19 04:54:18,329 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:446)] starting 
cluster: numNameNodes=1, numDataNodes=2
Formatting using clusterid: testClusterID
2017-04-19 04:54:18,948 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:716)]
 No KeyProvider found.
2017-04-19 04:54:18,948 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:726)]
 fsLock is fair:true
2017-04-19 04:54:18,999 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:239)]
 dfs.block.invalidate.limit=1000
2017-04-19 04:54:19,000 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:245)]
 dfs.namenode.datanode.registration.ip-hostname-check=true
2017-04-19 04:54:19,001 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.printBlockDeletionTime(InvalidateBlocks.java:71)]
 dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-04-19 04:54:19,003 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.printBlockDeletionTime(InvalidateBlocks.java:76)]
 The block deletion will start around 2017 Apr 19 04:54:19
2017-04-19 04:54:19,006 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map BlocksMap
2017-04-19 04:54:19,006 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:19,008 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 2.0% max memory 1.8 GB = 36.4 MB
2017-04-19 04:54:19,009 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^22 = 4194304 entries
2017-04-19 04:54:19,045 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createBlockTokenSecretManager(BlockManager.java:358)]
 dfs.block.access.token.enable=false
2017-04-19 04:54:19,046 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:344)]
 defaultReplication         = 2
2017-04-19 04:54:19,046 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:345)]
 maxReplication             = 512
2017-04-19 04:54:19,048 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:346)]
 minReplication             = 1
2017-04-19 04:54:19,049 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:347)]
 maxReplicationStreams      = 2
2017-04-19 04:54:19,049 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:348)]
 replicationRecheckInterval = 3000
2017-04-19 04:54:19,049 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:349)]
 encryptDataTransfer        = false
2017-04-19 04:54:19,049 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:350)]
 maxNumBlocksToLog          = 1000
2017-04-19 04:54:19,056 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:746)]
 fsOwner             = jenkins (auth:SIMPLE)
2017-04-19 04:54:19,057 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:747)]
 supergroup          = supergroup
2017-04-19 04:54:19,057 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:748)]
 isPermissionEnabled = true
2017-04-19 04:54:19,057 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:759)]
 HA Enabled: false
2017-04-19 04:54:19,060 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:796)]
 Append Enabled: true
2017-04-19 04:54:19,309 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map INodeMap
2017-04-19 04:54:19,310 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:19,310 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 1.0% max memory 1.8 GB = 18.2 MB
2017-04-19 04:54:19,310 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^21 = 2097152 entries
2017-04-19 04:54:19,312 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:235)]
 ACLs enabled? false
2017-04-19 04:54:19,312 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:239)]
 XAttrs enabled? true
2017-04-19 04:54:19,312 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:247)]
 Maximum size of an xattr: 16384
2017-04-19 04:54:19,313 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:298)]
 Caching file names occuring more than 10 times
2017-04-19 04:54:19,322 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map cachedBlocks
2017-04-19 04:54:19,322 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:19,323 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 0.25% max memory 1.8 GB = 4.6 MB
2017-04-19 04:54:19,323 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^19 = 524288 entries
2017-04-19 04:54:19,325 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5166)]
 dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-04-19 04:54:19,325 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5167)]
 dfs.namenode.safemode.min.datanodes = 0
2017-04-19 04:54:19,325 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5168)]
 dfs.namenode.safemode.extension     = 0
2017-04-19 04:54:19,329 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:65)]
 NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-04-19 04:54:19,329 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:67)]
 NNTop conf: dfs.namenode.top.num.users = 10
2017-04-19 04:54:19,330 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:69)]
 NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-04-19 04:54:19,331 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initRetryCache(FSNamesystem.java:905)]
 Retry cache on namenode is enabled
2017-04-19 04:54:19,332 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initRetryCache(FSNamesystem.java:913)]
 Retry cache will use 0.03 of total heap and retry cache entry expiry time is 
600000 millis
2017-04-19 04:54:19,335 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map NameNodeRetryCache
2017-04-19 04:54:19,335 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:19,335 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2017-04-19 04:54:19,335 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^16 = 65536 entries
2017-04-19 04:54:19,419 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:158)] 
Allocated new BlockPoolId: BP-1490004680-67.195.81.141-1492577659354
2017-04-19 04:54:19,443 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:552)] 
Storage directory /tmp/1492577657634-0/dfs/name1 has been successfully 
formatted.
2017-04-19 04:54:19,445 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:552)] 
Storage directory /tmp/1492577657634-0/dfs/name2 has been successfully 
formatted.
2017-04-19 04:54:19,590 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.getImageTxIdToRetain(NNStorageRetentionManager.java:203)]
 Going to retain 1 images with txid >= 0
2017-04-19 04:54:19,592 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1417)]
 createNameNode []
2017-04-19 04:54:19,636 (Thread-0) [WARN - 
org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:125)]
 Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2017-04-19 04:54:19,720 (Thread-0) [INFO - 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.startTimer(MetricsSystemImpl.java:377)]
 Scheduled snapshot period at 10 second(s).
2017-04-19 04:54:19,721 (Thread-0) [INFO - 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:192)]
 NameNode metrics system started
2017-04-19 04:54:19,725 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:397)]
 fs.defaultFS is hdfs://127.0.0.1:0
2017-04-19 04:54:19,768 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1703)] 
Starting Web-server for hdfs at: http://localhost:0
2017-04-19 04:54:19,845 (Thread-0) [INFO - 
org.mortbay.log.Slf4jLog.info(Slf4jLog.java:67)] Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-04-19 04:54:19,854 (Thread-0) [INFO - 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.constructSecretProvider(AuthenticationFilter.java:284)]
 Unable to initialize FileSignerSecretProvider, falling back to use random 
secrets.
2017-04-19 04:54:19,859 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpRequestLog.getRequestLog(HttpRequestLog.java:80)] 
Http request log for http.requests.namenode is not defined
2017-04-19 04:54:19,865 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpServer2.addGlobalFilter(HttpServer2.java:710)] Added 
global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-04-19 04:54:19,869 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpServer2.addFilter(HttpServer2.java:685)] Added 
filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context hdfs
2017-04-19 04:54:19,869 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpServer2.addFilter(HttpServer2.java:693)] Added 
filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2017-04-19 04:54:19,888 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.initWebHdfs(NameNodeHttpServer.java:86)]
 Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' 
(class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-04-19 04:54:19,890 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpServer2.addJerseyResourcePackage(HttpServer2.java:609)]
 addJerseyResourcePackage: 
packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
 pathSpec=/webhdfs/v1/*
2017-04-19 04:54:19,904 (Thread-0) [INFO - 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:915)] Jetty 
bound to port 40824
2017-04-19 04:54:19,904 (Thread-0) [INFO - 
org.mortbay.log.Slf4jLog.info(Slf4jLog.java:67)] jetty-6.1.26
2017-04-19 04:54:19,933 (Thread-0) [INFO - 
org.mortbay.log.Slf4jLog.info(Slf4jLog.java:67)] Extract 
jar:file:/home/jenkins/jenkins-slave/workspace/PreCommit-SENTRY-Build/maven-repo/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2-tests.jar!/webapps/hdfs
 to /tmp/Jetty_localhost_40824_hdfs____.z4wlaw/webapp
2017-04-19 04:54:20,110 (Thread-0) [INFO - 
org.mortbay.log.Slf4jLog.info(Slf4jLog.java:67)] Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40824
2017-04-19 04:54:20,119 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:716)]
 No KeyProvider found.
2017-04-19 04:54:20,119 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:726)]
 fsLock is fair:true
2017-04-19 04:54:20,121 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:239)]
 dfs.block.invalidate.limit=1000
2017-04-19 04:54:20,121 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:245)]
 dfs.namenode.datanode.registration.ip-hostname-check=true
2017-04-19 04:54:20,122 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.printBlockDeletionTime(InvalidateBlocks.java:71)]
 dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-04-19 04:54:20,122 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.printBlockDeletionTime(InvalidateBlocks.java:76)]
 The block deletion will start around 2017 Apr 19 04:54:20
2017-04-19 04:54:20,122 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map BlocksMap
2017-04-19 04:54:20,122 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:20,123 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 2.0% max memory 1.8 GB = 36.4 MB
2017-04-19 04:54:20,123 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^22 = 4194304 entries
2017-04-19 04:54:20,126 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createBlockTokenSecretManager(BlockManager.java:358)]
 dfs.block.access.token.enable=false
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:344)]
 defaultReplication         = 2
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:345)]
 maxReplication             = 512
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:346)]
 minReplication             = 1
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:347)]
 maxReplicationStreams      = 2
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:348)]
 replicationRecheckInterval = 3000
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:349)]
 encryptDataTransfer        = false
2017-04-19 04:54:20,127 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:350)]
 maxNumBlocksToLog          = 1000
2017-04-19 04:54:20,128 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:746)]
 fsOwner             = jenkins (auth:SIMPLE)
2017-04-19 04:54:20,128 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:747)]
 supergroup          = supergroup
2017-04-19 04:54:20,128 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:748)]
 isPermissionEnabled = true
2017-04-19 04:54:20,129 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:759)]
 HA Enabled: false
2017-04-19 04:54:20,129 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:796)]
 Append Enabled: true
2017-04-19 04:54:20,130 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map INodeMap
2017-04-19 04:54:20,130 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:20,130 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 1.0% max memory 1.8 GB = 18.2 MB
2017-04-19 04:54:20,130 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^21 = 2097152 entries
2017-04-19 04:54:20,132 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:235)]
 ACLs enabled? false
2017-04-19 04:54:20,132 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:239)]
 XAttrs enabled? true
2017-04-19 04:54:20,133 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:247)]
 Maximum size of an xattr: 16384
2017-04-19 04:54:20,133 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:298)]
 Caching file names occuring more than 10 times
2017-04-19 04:54:20,134 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map cachedBlocks
2017-04-19 04:54:20,134 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:20,135 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 0.25% max memory 1.8 GB = 4.6 MB
2017-04-19 04:54:20,135 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^19 = 524288 entries
2017-04-19 04:54:20,136 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5166)]
 dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-04-19 04:54:20,136 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5167)]
 dfs.namenode.safemode.min.datanodes = 0
2017-04-19 04:54:20,137 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.<init>(FSNamesystem.java:5168)]
 dfs.namenode.safemode.extension     = 0
2017-04-19 04:54:20,137 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:65)]
 NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-04-19 04:54:20,138 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:67)]
 NNTop conf: dfs.namenode.top.num.users = 10
2017-04-19 04:54:20,138 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.logConf(TopMetrics.java:69)]
 NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-04-19 04:54:20,138 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initRetryCache(FSNamesystem.java:905)]
 Retry cache on namenode is enabled
2017-04-19 04:54:20,139 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initRetryCache(FSNamesystem.java:913)]
 Retry cache will use 0.03 of total heap and retry cache entry expiry time is 
600000 millis
2017-04-19 04:54:20,139 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:354)]
 Computing capacity for map NameNodeRetryCache
2017-04-19 04:54:20,139 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:355)]
 VM type       = 64-bit
2017-04-19 04:54:20,140 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:356)]
 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2017-04-19 04:54:20,140 (Thread-0) [INFO - 
org.apache.hadoop.util.LightWeightGSet.computeCapacity(LightWeightGSet.java:361)]
 capacity      = 2^16 = 65536 entries
2017-04-19 04:54:20,148 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:715)]
 Lock on /tmp/1492577657634-0/dfs/name1/in_use.lock acquired by nodename 
20...@asf921.gq1.ygridcore.net
2017-04-19 04:54:20,151 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:715)]
 Lock on /tmp/1492577657634-0/dfs/name2/in_use.lock acquired by nodename 
20...@asf921.gq1.ygridcore.net
2017-04-19 04:54:20,157 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:362)]
 Recovering unfinalized segments in /tmp/1492577657634-0/dfs/name1/current
2017-04-19 04:54:20,158 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:362)]
 Recovering unfinalized segments in /tmp/1492577657634-0/dfs/name2/current
2017-04-19 04:54:20,159 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:669)] 
No edit log streams selected.
2017-04-19 04:54:20,185 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:255)]
 Loading 1 INodes.
2017-04-19 04:54:20,197 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:181)]
 Loaded FSImage in 0 seconds.
2017-04-19 04:54:20,198 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:970)] 
Loaded image for txid 0 from 
/tmp/1492577657634-0/dfs/name1/current/fsimage_0000000000000000000
2017-04-19 04:54:20,206 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:982)]
 Need to save fs image? false (staleImage=false, haEnabled=false, 
isRollingUpgrade=false)
2017-04-19 04:54:20,207 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1218)]
 Starting log segment at 1
2017-04-19 04:54:20,302 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameCache.initialized(NameCache.java:143)]
 initialized with 0 entries 0 lookups
2017-04-19 04:54:20,303 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:688)]
 Finished loading FSImage in 159 msecs
2017-04-19 04:54:20,487 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:342)]
 RPC server is binding to localhost:0
2017-04-19 04:54:20,495 (Thread-0) [INFO - 
org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:53)] Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2017-04-19 04:54:20,509 (Socket Reader #1 for port 33849) [INFO - 
org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:606)] Starting 
Socket Reader #1 for port 33849
2017-04-19 04:54:20,541 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:652)] 
Clients are to use localhost:33849 to access this namenode/service.
2017-04-19 04:54:20,544 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerMBean(FSNamesystem.java:6031)]
 Registered FSNamesystemState MBean
2017-04-19 04:54:20,579 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.getNumUnderConstructionBlocks(LeaseManager.java:136)]
 Number of blocks under construction: 0
2017-04-19 04:54:20,579 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.getNumUnderConstructionBlocks(LeaseManager.java:136)]
 Number of blocks under construction: 0
2017-04-19 04:54:20,579 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initializeReplQueues(FSNamesystem.java:1182)]
 initializing replication queues
2017-04-19 04:54:20,580 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.leave(FSNamesystem.java:5241)]
 STATE* Leaving safe mode after 0 secs
2017-04-19 04:54:20,580 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.leave(FSNamesystem.java:5253)]
 STATE* Network topology has 0 racks and 0 datanodes
2017-04-19 04:54:20,581 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$SafeModeInfo.leave(FSNamesystem.java:5256)]
 STATE* UnderReplicatedBlocks has 0 blocks
2017-04-19 04:54:20,592 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateHeartbeatState(DatanodeDescriptor.java:451)]
 Number of failed storage changes from 0 to 0
2017-04-19 04:54:20,593 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2763)]
 Total number of blocks            = 0
2017-04-19 04:54:20,594 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2764)]
 Number of invalid blocks          = 0
2017-04-19 04:54:20,594 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2765)]
 Number of under-replicated blocks = 0
2017-04-19 04:54:20,594 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2766)]
 Number of  over-replicated blocks = 0
2017-04-19 04:54:20,594 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2768)]
 Number of blocks being written    = 0
2017-04-19 04:54:20,594 (Replication Queue Initializer) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatesAsync(BlockManager.java:2770)]
 STATE* Replication Queue initialization scan for invalid, over- and 
under-replicated blocks completed in 15 msec
2017-04-19 04:54:20,641 (IPC Server Responder) [INFO - 
org.apache.hadoop.ipc.Server$Responder.run(Server.java:836)] IPC Server 
Responder: starting
2017-04-19 04:54:20,641 (IPC Server listener on 33849) [INFO - 
org.apache.hadoop.ipc.Server$Listener.run(Server.java:676)] IPC Server listener 
on 33849: starting
2017-04-19 04:54:20,646 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:695)]
 NameNode RPC up at: localhost/127.0.0.1:33849
2017-04-19 04:54:20,646 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1103)]
 Starting services required for active state
2017-04-19 04:54:20,654 (CacheReplicationMonitor(827881346)) [INFO - 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:160)]
 Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-04-19 04:54:20,672 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1413)] 
Starting DataNode 0 with dfs.datanode.data.dir: 
[DISK]file:/tmp/1492577657634-0/dfs/data/data1,[DISK]file:/tmp/1492577657634-0/dfs/data/data2
2017-04-19 04:54:20,759 (Thread-0) [WARN - 
org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:62)] 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2017-04-19 04:54:20,768 (Thread-0) [INFO - 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:159)]
 DataNode metrics system started (again)
2017-04-19 04:54:20,776 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.datanode.BlockScanner.<init>(BlockScanner.java:172)]
 Initialized block scanner with targetBytesPerSec 1048576
2017-04-19 04:54:20,777 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:427)] 
Configured hostname is 127.0.0.1
2017-04-19 04:54:20,785 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1100)]
 Starting DataNode with maxLockedMemory = 0
2017-04-19 04:54:20,798 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:898)]
 Opened streaming server at /127.0.0.1:35465
2017-04-19 04:54:20,802 (Thread-0) [INFO - 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer$BlockBalanceThrottler.<init>

> Initialize HMSFollower when sentry server actually starts
> ---------------------------------------------------------
>
>                 Key: SENTRY-1649
>                 URL: https://issues.apache.org/jira/browse/SENTRY-1649
>             Project: Sentry
>          Issue Type: Sub-task
>          Components: Hdfs Plugin
>    Affects Versions: sentry-ha-redesign
>            Reporter: Hao Hao
>            Assignee: Na Li
>            Priority: Critical
>             Fix For: sentry-ha-redesign
>
>         Attachments: lina_test.patch, 
> SENTRY-1649.001-sentry-ha-redesign.patch, 
> SENTRY-1649.002-sentry-ha-redesign.patch, 
> SENTRY-1649.003-sentry-ha-redesign.patch, 
> SENTRY-1649.004-sentry-ha-redesign.patch, 
> SENTRY-1649.005-sentry-ha-redesign.patch, 
> SENTRY-1649.006-sentry-ha-redesign.patch, 
> SENTRY-1649.007-sentry-ha-redesign.patch, 
> SENTRY-1649.008-sentry-ha-redesign.patch, 
> SENTRY-1649.009-sentry-ha-redesign.patch, 
> SENTRY-1649.010-sentry-ha-redesign.patch, 
> SENTRY-1649.011-sentry-ha-redesign.patch, 
> SENTRY-1649.012-sentry-ha-redesign.patch, 
> SENTRY-1649.013-sentry-ha-redesign.patch, 
> SENTRY-1649.014-sentry-ha-redesign.patch, 
> SENTRY-1649.015-sentry-ha-redesign.patch, 
> SENTRY-1649.016-sentry-ha-redesign.patch, 
> SENTRY-1649.017-sentry-ha-redesign.patch, 
> SENTRY-1649.018-sentry-ha-redesign.patch, 
> SENTRY-1649.019-sentry-ha-redesign.patch, 
> SENTRY-1649.020-sentry-ha-redesign.patch, 
> SENTRY-1649.021-sentry-ha-redesign.patch
>
>
> Currently, HMSFollower is initialized in the constructor of SentryService. It 
> would be better to initialize it when the service actually starts, e.g. in 
> runServer().
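
For reference, a minimal sketch of the refactor the description asks for: moving HMSFollower scheduling out of the SentryService constructor and into runServer(), so the follower only begins polling HMS notifications once the server is actually up. All class members, method names, and timing values below are illustrative assumptions, not the actual Sentry code.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative stand-in for the real HMSFollower, which polls the HMS
    // notification log and applies new events to Sentry's state.
    class HMSFollower implements Runnable {
        @Override
        public void run() {
            // fetch and process new notification events (elided)
        }
    }

    public class SentryService {
        private ScheduledExecutorService hmsFollowerExecutor;

        public SentryService() {
            // Before the change: HMSFollower was created and scheduled
            // here, i.e. before the service was serving requests.
        }

        void runServer() {
            // ... start Thrift/web servers and other services (elided) ...

            // After the change: start consuming HMS notifications only
            // once the server is up. The 500 ms period is an arbitrary
            // example value.
            hmsFollowerExecutor = Executors.newSingleThreadScheduledExecutor();
            hmsFollowerExecutor.scheduleAtFixedRate(
                new HMSFollower(), 0, 500, TimeUnit.MILLISECONDS);
        }
    }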


