[ https://issues.apache.org/jira/browse/HBASE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866890#comment-13866890 ]

stack commented on HBASE-10304:
-------------------------------

bq. Isn't this a change that would benefit the entire protobuf community?

It is a change that goes against the pb lib philosophy of making a copy before 
going to work on data.  The pb team also says 'copy is cheap' and 'object 
creation is cheap' in java on the discussion lists (which is 'true', but no copy 
and no creation will always be better), so it might take a while and some work 
to get it contributed.
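For context on the philosophy difference, it is the classic copy-vs-alias trade-off; a minimal plain-Java sketch (illustrative only, not the actual pb or ZCLBS API):

```java
import java.util.Arrays;

public class ZeroCopyDemo {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3};

        // What pb's ByteString.copyFrom does conceptually: a defensive copy,
        // so a later mutation of the caller's array cannot leak in.
        byte[] copied = Arrays.copyOf(data, data.length);

        // What the ZCLBS trick does conceptually: alias the caller's array,
        // saving the copy and the allocation at the cost of safety.
        byte[] wrapped = data;

        data[0] = 9;
        System.out.println(copied[0]);  // still 1: the copy is insulated
        System.out.println(wrapped[0]); // 9: the zero-copy view sees the write
    }
}
```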

If we were to go this route, we'd want to push more than just this one ZCLBS 
change, or else go the route of this gentleman 
https://code.google.com/p/protobuf-gcless/ altogether.

Let me measure what we lose by reverting (option 1).  ZCLBS came in as part of 
the effort at getting us back to 0.94 numbers.

Interesting that folks here think 2. is viable; I'd think we'd just be pissing 
folks off... but it'd be easy to require.

Will get some more on 3. too. 
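On 3., shading would mean relocating the pb classes under an hbase-owned package at build time so the job-jar classloader can never collide with hadoop's copy. A rough sketch of what that could look like with the maven-shade-plugin (hypothetical relocation target, not a worked-out pom change):

```xml
<!-- Hypothetical sketch only: relocate pb under an hbase-owned package. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```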

Will be back.
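For anyone poking at the classloading piece described in the issue below: parent-first delegation is easy to see in isolation (minimal sketch; the real failure needs ZCLBS in a job jar with pb on the parent CLASSPATH):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        // A child loader with no URLs of its own, like the one RunJar
        // makes for a job jar (except that one has the jar's URLs).
        URLClassLoader child = new URLClassLoader(new URL[0],
                DelegationDemo.class.getClassLoader());

        // Parent-first delegation: the child asks its parent chain before
        // consulting its own URLs, so the parent's Class object wins.
        Class<?> c = child.loadClass("java.lang.String");
        System.out.println(c == String.class); // true: same Class object
    }
}
```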

> Running an hbase job jar: IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10304
>                 URL: https://issues.apache.org/jira/browse/HBASE-10304
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>    Affects Versions: 0.98.0, 0.96.1.1
>            Reporter: stack
>            Priority: Blocker
>             Fix For: 0.98.0
>
>
> (Jimmy has been working on this one internally.  I'm just the messenger 
> raising this critical issue upstream).
> So, if you make job jar and bundle up hbase inside in it because you want to 
> access hbase from your mapreduce task, the deploy of the job jar to the 
> cluster fails with:
> {code}
> 14/01/05 08:59:19 INFO Configuration.deprecation: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
> 14/01/05 08:59:19 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
> Exception in thread "main" java.lang.IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
> 	at java.lang.ClassLoader.defineClass1(Native Method)
> 	at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
> 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> 	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> 	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
> 	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
> 	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
> 	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
> 	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
> 	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
> 	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:124)
> 	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:64)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.main(HBaseMapReduceIndexerTool.java:51)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> {code}
> So, ZCLBS is a hack.  This class lives in the hbase-protocol module but is 
> "in" the com.google.protobuf package.  All is well and good usually.
> But when we make a job jar and bundle up hbase inside it, our 'trick' breaks. 
>  RunJar makes a new classloader to run the job jar.  This URLClassLoader 
> 'attaches' all the jars and classes that are in the job jar so they can be 
> found when it goes to do a lookup.  Only classloaders work by always 
> delegating to their parent first (unless you are a WAR file in a container, 
> where delegation is 'off' for the most part), and in this case the parent 
> classloader will have access to a pb jar since pb is on the hadoop CLASSPATH. 
>  So, the parent loads the pb classes.
> We then load ZCLBS, only this is done in the classloader made by RunJar. 
> ZCLBS thus has a different classloader from its superclass, and since 
> LiteralByteString is package-private and two classes only share a runtime 
> package when loaded by the same classloader, we get the above 
> IllegalAccessError.
> Now (Jimmy's work comes in here), this can't be fixed by reflection -- you 
> can't setAccessible on a 'Class' -- and though it probably could be fixed by 
> hacking RunJar so it was made configurable and we could put in place our own 
> ClassLoader that does something like containers do for WAR files (probably 
> not a bad idea), there would be some fierce hackery involved and besides, 
> such a fix won't show up in hadoop anytime soon, leaving hadoop 2.2ers out 
> in the cold.
> So, the alternatives are:
> 1. Undo the ZCLBS hack.  We'd lose a lot of nice perf improvement but I'd say 
> this is preferable to crazy CLASSPATH hacks.
> 2. Require folks put hbase-protocol -- that's all you'd need -- on the hadoop 
> CLASSPATH.  This is kinda crazy.
> 3. We could try shading the pb jar content or, probably better, just pull pb 
> into hbase altogether under a different package.  If it was in our code base, 
> we could do more ZCLBS-like speedups.
> I was going to experiment with #3 above unless anyone else has a better idea.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
