[ https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828048#comment-13828048 ]
Tsz Wo (Nicholas), SZE commented on HADOOP-9478:
------------------------------------------------

After this change, I somehow get "NoClassDefFoundError: org/apache/commons/collections/map/UnmodifiableMap" when I run any test under trunk/hadoop-hdfs-project/hadoop-hdfs. Running tests under the project root (i.e. trunk/) is fine. I wonder if it is a problem in my local environment. Do you get the same thing?

{noformat}
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 22, Failures: 0, Errors: 20, Skipped: 2, Time elapsed: 0.161 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileCreation
testServerDefaults(org.apache.hadoop.hdfs.TestFileCreation)  Time elapsed: 0.016 sec  <<< ERROR!
java.lang.NoClassDefFoundError: org/apache/commons/collections/map/UnmodifiableMap
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at org.apache.hadoop.conf.Configuration$DeprecationContext.<init>(Configuration.java:394)
	at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:432)
	at org.apache.hadoop.hdfs.TestFileCreation.testServerDefaults(TestFileCreation.java:149)
{noformat}


> Fix race conditions during the initialization of Configuration related to deprecatedKeyMap
> -------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.0.0-alpha
>         Environment: OS:
>            CentOS release 6.3 (Final)
>            JDK:
>            java version "1.6.0_27"
>            Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
>            Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
>            Hadoop:
>            hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
>            Security:
>            Kerberos
>            Reporter: Dongyong Wang
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.2.1
>
>         Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, HADOOP-9478.003.patch, HADOOP-9478.004.patch, HADOOP-9478.005.patch, hadoop-9478-1.patch, hadoop-9478-2.patch
>
>
> When we launch a client application that uses Kerberos security, the FileSystem cannot be created because of the exception 'java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil'.
> Checking the exception stack trace, it appears to be caused by unsynchronized get operations on the deprecatedKeyMap used by org.apache.hadoop.conf.Configuration.
> So I wrote a simple test case:
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
>
> public class HTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.addResource("core-site.xml");
>         conf.addResource("hdfs-site.xml");
>         FileSystem fileSystem = FileSystem.get(conf);
>         System.out.println(fileSystem);
>         System.exit(0);
>     }
> }
>
> Then I launched this test case many times, and the following exception was thrown:
>
> Exception in thread "TGT Renewer for XXX" java.lang.ExceptionInInitializerError
> 	at org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
> 	at org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
> 	at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
> 	at java.util.HashMap.getEntry(HashMap.java:345)
> 	at java.util.HashMap.containsKey(HashMap.java:335)
> 	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
> 	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
> 	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
> 	at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
> 	at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
> 	at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
> 	at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
> 	... 4 more
> Exception in thread "main" java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> 	at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
> 	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
> 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
> 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
> 	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
> 	at HTest.main(HTest.java:11)
> Caused by: java.lang.reflect.InvocationTargetException
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
> 	... 11 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil
> 	at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
> 	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
> 	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
> 	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
> 	at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
> 	at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
> 	at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
> 	... 16 more
>
> If a HashMap is used in a multi-threaded environment, it is not enough to synchronize only the put operations; the get operations (e.g. containsKey) must be synchronized too.
> A simple workaround is to trigger the initialization of SecurityUtil before creating the FileSystem, but I think the gets on deprecatedKeyMap should also be synchronized.
> Thanks.
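
The ArrayIndexOutOfBoundsException inside HashMap.getEntry above is a classic symptom of reading a plain HashMap while another thread mutates it: a resize can leave a reader indexing into a stale or half-rebuilt bucket array. The standalone sketch below (the class name RaceDemo and the loop counts are illustrative, not part of Hadoop) exercises the same unsynchronized put/containsKey pattern as deprecatedKeyMap; because the behavior is undefined, any given run may throw, loop forever, return wrong answers, or finish cleanly, depending on the JDK and timing.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: races a writer calling put() against a reader calling
// containsKey() on a shared, unsynchronized HashMap -- the same access
// pattern as Configuration's deprecatedKeyMap. Failure is timing-dependent.
public class RaceDemo {
    private static final Map<String, String> MAP = new HashMap<>();

    public static void main(String[] args) throws Exception {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                MAP.put("key-" + i, "value");   // triggers periodic resizes
            }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                MAP.containsKey("key-" + i);    // unsynchronized read
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        System.out.println("no visible failure on this run");
    }
}
{code}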
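The stack trace in the comment above (Configuration$DeprecationContext.<init> pulling in commons-collections' UnmodifiableMap) hints at the direction of the committed fix: rather than synchronizing every read of a shared mutable map, the deprecation data can be made immutable and republished atomically whenever it changes, so readers never observe a map mid-mutation. Below is a minimal sketch of that copy-on-write pattern using only the JDK; the names DeprecationContextSketch and addDeprecation and the single-map layout are simplifying assumptions, not the actual Configuration code.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Copy-on-write sketch: readers always see a fully built, immutable snapshot;
// writers build a fresh copy and publish it with a CAS, retrying on contention.
public class DeprecationContextSketch {

    // Immutable snapshot of the deprecation table.
    private static final class Context {
        final Map<String, String> deprecatedKeyMap;
        Context(Map<String, String> map) {
            this.deprecatedKeyMap = Collections.unmodifiableMap(map);
        }
    }

    private static final AtomicReference<Context> CURRENT =
            new AtomicReference<>(new Context(new HashMap<>()));

    // Readers: one volatile read, no locking, never a torn map.
    public static boolean isDeprecated(String key) {
        return CURRENT.get().deprecatedKeyMap.containsKey(key);
    }

    // Writers: copy, mutate the copy, swap atomically; retry if another
    // writer published first.
    public static void addDeprecation(String oldKey, String newKey) {
        while (true) {
            Context snapshot = CURRENT.get();
            Map<String, String> copy = new HashMap<>(snapshot.deprecatedKeyMap);
            copy.put(oldKey, newKey);
            if (CURRENT.compareAndSet(snapshot, new Context(copy))) {
                return;
            }
        }
    }
}
{code}

A ConcurrentHashMap would also fix the single-map case, but the snapshot approach has the extra property that several related maps (for example a key map and its reverse mapping) can be updated and published together, so readers always see them in a mutually consistent state.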