Hafiz, thanks

If you don’t mind, can you also create a JIRA for this?

Ramesh, do you think this could have anything to do with the new shim code we 
added?

Thanks

Bosco


From:  Hafiz Mujadid <hafizmujadi...@gmail.com>
Reply-To:  <user@ranger.incubator.apache.org>
Date:  Sunday, November 29, 2015 at 8:46 PM
To:  <user@ranger.incubator.apache.org>
Subject:  Re: hdfs plugin enable issue

Bosco!


Here are some lines from the logs:


2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-11-29 14:49:37,076 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-11-29 14:49:37,082 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
2015-11-29 14:49:37,082 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-11-29 14:49:37,082 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-11-29 14:49:37,082 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-11-29 14:49:37,084 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-11-29 14:49:37,317 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-11-29 14:49:37,317 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-11-29 14:49:37,318 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-11-29 14:49:37,318 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-11-29 14:49:37,319 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-11-29 14:49:37,319 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-11-29 14:49:37,319 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-11-29 14:49:37,319 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-11-29 14:49:37,326 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-11-29 14:49:37,326 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-11-29 14:49:37,327 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-11-29 14:49:37,327 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-11-29 14:49:37,328 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-11-29 14:49:37,328 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-11-29 14:49:37,328 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-11-29 14:49:37,331 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-11-29 14:49:37,331 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-11-29 14:49:37,331 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-11-29 14:49:37,333 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-11-29 14:49:37,333 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-11-29 14:49:37,335 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-11-29 14:49:37,335 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-11-29 14:49:37,335 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-11-29 14:49:37,335 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-11-29 14:49:37,456 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:843)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:673)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:132)
        ... 8 more
Caused by: java.lang.StackOverflowError
        at java.lang.Exception.<init>(Exception.java:102)
        at java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:89)
        at java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:72)
        at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:383)



On Mon, Nov 30, 2015 at 9:36 AM, Don Bosco Durai <bo...@apache.org> wrote:
I think we need to look into this issue. Trunk should have worked with the 
latest Hadoop, unless something changed on the Hadoop side.

I don’t have bandwidth this week; can anyone investigate this?

Hafiz, if you still have the old logs, can you send some lines above the 
exception you pasted in the original email?
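For pulling that context back out of a large namenode log, a small grep wrapper like the sketch below can help. The log file name and path in the usage comment are assumptions, not taken from this thread; adjust them to your install.

```shell
# Print the lines leading up to the first ERROR in a log file.
log_context() {
    # $1 = log file, $2 = number of preceding lines to show (defaults to 40)
    grep -B "${2:-40}" -m 1 " ERROR " "$1"
}

# Example invocation (the path below is a placeholder):
# log_context /var/log/hadoop/hadoop-hduser-namenode.log 40
```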

Thanks

Bosco


From:  Hafiz Mujadid <hafizmujadi...@gmail.com>
Reply-To:  <user@ranger.incubator.apache.org>
Date:  Sunday, November 29, 2015 at 6:50 AM
To:  <user@ranger.incubator.apache.org>
Subject:  Re: hdfs plugin enable issue

Yes, the issue is resolved. Thanks, all! :)

One more thing I want to know: what is the meaning of the Delegate Admin option 
among the policy permission options?


thanks

On Sun, Nov 29, 2015 at 6:53 PM, Dilli Dorai <dilli.do...@gmail.com> wrote:
Hafiz,

Please see mail on user mailing list with subject line "Ranger 0.5 Source 
location".
The mail was from Selva.
If you check out the 0.5 branch and build it, you are not really getting the 
real 0.5 release.

You can get the source code for the 0.5 release from:
http://people.apache.org/~sneethir/ranger/ranger-0.5.0-rc3/ranger-0.5.0.tar.gz

Please try building from that source and confirm whether you still see the 
problem. I think you will not see the problem with that source.

Thanks
Dilli



On Sun, Nov 29, 2015 at 3:18 AM, Hafiz Mujadid <hafizmujadi...@gmail.com> wrote:
Hi,
I tried with Hadoop 2.7.0 but am facing the same exception. 
Can anybody tell me what the issue is?


On Sun, Nov 29, 2015 at 2:52 PM, Hafiz Mujadid <hafizmujadi...@gmail.com> wrote:
hadoop/lib contains the following jar files; I also placed these Ranger-related 
jars in the hadoop/share/hdfs/lib folder:

 ranger-hdfs-plugin-shim-0.5.0.jar
 ranger-plugin-classloader-0.5.0.jar

The native folder contains the following files:

libhadoop.a  libhadooppipes.a  libhadoop.so  libhadoop.so.1.0.0  
libhadooputils.a  libhdfs.a  libhdfs.so  libhdfs.so.0.0.0
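As a quick sanity check that both shim jars actually ended up somewhere the NameNode classpath can see them, a helper like this sketch can be used. The `HADOOP_HOME` default and the `share/hadoop/hdfs/lib` location in the usage comment are assumptions for a typical Apache Hadoop layout, not confirmed by this thread.

```shell
# Verify that the Ranger plugin shim jars are present in a lib directory.
check_ranger_jars() {
    local libdir="$1"
    local missing=0
    for jar in ranger-hdfs-plugin-shim ranger-plugin-classloader; do
        # Match any version suffix, e.g. ranger-hdfs-plugin-shim-0.5.0.jar
        if ls "$libdir/$jar"-*.jar >/dev/null 2>&1; then
            echo "found: $jar"
        else
            echo "MISSING: $jar"
            missing=1
        fi
    done
    return $missing
}

# Example (the HADOOP_HOME location is an assumption):
# check_ranger_jars "${HADOOP_HOME:-/usr/local/hadoop}/share/hadoop/hdfs/lib"
```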



Attached is the complete log file.

Thanks

On Sun, Nov 29, 2015 at 11:37 AM, Ramesh Mani <rm...@hortonworks.com> wrote:
Hafiz,

Please list the files in the hadoop/lib directory, and also provide the 
complete namenode log for me to review.

Thanks,
Ramesh

From: Hafiz Mujadid <hafizmujadi...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Date: Saturday, November 28, 2015 at 11:51 AM
To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject: hdfs plugin enable issue

Hi!

I have enabled the HDFS plugin, but after that the namenode is not starting. 
When I start the namenode, I get the following exception:


2015-11-29 00:44:20,534 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:843)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:673)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:132)
        ... 8 more
Caused by: java.lang.StackOverflowError
        at java.lang.Exception.<init>(Exception.java:102)
        at java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:89)
        at java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:72)
        at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:383)
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer.init(RangerHdfsAuthorizer.java:65)







-- 
Regards: HAFIZ MUJADID



