[ 
https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9478:
--------------------------------

    Attachment: hadoop-9478-2.patch

Good points. I was hoping to avoid slapping locks everywhere, since the 
deprecated list is append-only, but we do need multi-key and cross-map 
operations to be consistent.
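
For example, even if each map is individually thread-safe, a lookup that 
spans two maps is a check-then-act sequence that can interleave with a 
concurrent update. A minimal sketch of the hazard (field and method names 
here are illustrative, not the actual Configuration internals):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CrossMapLookup {
    // Each map is thread-safe on its own, but operations spanning both are not.
    private final Map<String, String> deprecatedKeyMap =
        new ConcurrentHashMap<String, String>();
    private final Map<String, String> reverseKeyMap =
        new ConcurrentHashMap<String, String>();

    // Unsafe: another thread can update either map between the two reads,
    // so this method can observe the pair in an inconsistent state.
    String resolveUnsafe(String key) {
        String newKey = deprecatedKeyMap.get(key);  // read map 1
        if (newKey != null) {
            return reverseKeyMap.containsKey(newKey) ? newKey : key;  // read map 2
        }
        return key;
    }
}
{code}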

Unfortunately, I don't think {{AtomicReference}} or {{ConcurrentHashMap}} 
saves us when we need to perform multiple operations atomically. I think the 
only *really* safe solution is taking class monitor locks everywhere. This is 
heavyweight, but applications shouldn't be accessing Configuration so often 
that it becomes a problem.
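
Concretely, the idea is to funnel every compound access through the same 
class monitor, so the maps can only ever be observed in a consistent state. 
A rough sketch of the shape (names are illustrative, not the patch itself):

{code:java}
import java.util.HashMap;
import java.util.Map;

class LockedDeprecations {
    private static final Map<String, String> deprecatedKeyMap =
        new HashMap<String, String>();
    private static final Map<String, String> reverseKeyMap =
        new HashMap<String, String>();

    // static synchronized locks on the class monitor, so every compound
    // operation below is mutually exclusive with the others.
    static synchronized void addDeprecation(String oldKey, String newKey) {
        deprecatedKeyMap.put(oldKey, newKey);
        reverseKeyMap.put(newKey, oldKey);
    }

    static synchronized String resolve(String key) {
        String newKey = deprecatedKeyMap.get(key);
        return newKey != null ? newKey : key;
    }
}
{code}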

I did leave a few one-off ops unsynchronized since they should be handled by 
the {{CHM}}.
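
For instance, a lone read like the following is a single atomic {{CHM}} 
operation and needs no extra monitor; it is only multi-step sequences that 
take the lock (again, a sketch with illustrative names):

{code:java}
import java.util.concurrent.ConcurrentHashMap;

class OneOffOps {
    private static final ConcurrentHashMap<String, String> deprecatedKeyMap =
        new ConcurrentHashMap<String, String>();

    // A single ConcurrentHashMap call is atomic by itself, so no external
    // synchronization is needed around this one-off read.
    static boolean isDeprecated(String key) {
        return deprecatedKeyMap.containsKey(key);
    }
}
{code}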

> The get operation of deprecatedKeyMap of org.apache.hadoop.conf.Configuration 
> should be synchronized. 
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.0.0-alpha
>         Environment: OS:
> CentOS release 6.3 (Final)
> JDK:
> java version "1.6.0_27"
> Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
> Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
> Hadoop:
> hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
> Security:
> Kerberos
>            Reporter: Dongyong Wang
>            Assignee: Andrew Wang
>         Attachments: hadoop-9478-1.patch, hadoop-9478-2.patch
>
>
> When we launch a client application that uses Kerberos security, the 
> FileSystem can't be created because of the exception 
> 'java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.security.SecurityUtil'.
> Checking the exception stack trace, it appears to be caused by an unsafe get 
> operation on the deprecatedKeyMap used by 
> org.apache.hadoop.conf.Configuration.
> So I wrote a simple test case:
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> public class HTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.addResource("core-site.xml");
>         conf.addResource("hdfs-site.xml");
>         FileSystem fileSystem = FileSystem.get(conf);
>         System.out.println(fileSystem);
>         System.exit(0);
>     }
> }
> Then I launched this test case many times, and the following exception was 
> thrown:
> Exception in thread "TGT Renewer for XXX" 
> java.lang.ExceptionInInitializerError
>      at 
> org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
>      at 
> org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
>      at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
>      at java.util.HashMap.getEntry(HashMap.java:345)
>      at java.util.HashMap.containsKey(HashMap.java:335)
>      at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
>      at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
>      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
>      at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
>      at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
>      at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
>      at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
>      ... 4 more
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
>      at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
>      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
>      at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
>      at HTest.main(HTest.java:11)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>      at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>      at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
>      ... 11 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.security.SecurityUtil
>      at 
> org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
>      at 
> org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
>      at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
>      at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
>      at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
>      ... 16 more
> If a HashMap is used in a multi-threaded environment, not only the put 
> operations but also the get operations (e.g. containsKey) must be 
> synchronized.
> A simple workaround is to trigger the initialization of SecurityUtil before 
> creating the FileSystem, but I think the get operations on deprecatedKeyMap 
> should be synchronized.
> Thanks. 
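
The failure mode described above is reproducible outside Hadoop: 
unsynchronized reads of a plain HashMap that another thread is resizing can 
throw, which is exactly the kind of ArrayIndexOutOfBoundsException in the 
stack trace. A standalone sketch of the race (illustrative code, not from 
Hadoop; the race is timing-dependent, so it may take several runs to trigger):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class HashMapRace {
    public static void main(String[] args) throws Exception {
        final Map<Integer, Integer> map = new HashMap<Integer, Integer>();

        // Writer: inserts enough entries to force repeated internal resizes.
        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000000; i++) {
                    map.put(i, i);
                }
            }
        });
        writer.start();

        // Reader: unsynchronized lookups racing against the resizes. On
        // JDK 6 this can throw ArrayIndexOutOfBoundsException from
        // HashMap.getEntry, matching the reported stack trace.
        while (writer.isAlive()) {
            map.containsKey(42);
        }
        writer.join();
    }
}
{code}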



--
This message was sent by Atlassian JIRA
(v6.1#6144)
