[
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120576#comment-15120576
]
Andrew Purtell edited comment on PHOENIX-2636 at 1/28/16 1:55 AM:
------------------------------------------------------------------
No, our internal version is 0.98.16 plus Enis's patch, so nothing is
preventing any upgrade. We should talk more in house. :-)
Edit: Also, you may or may not be aware that our internal version of Phoenix is
harmonized with the HBase version, and all components in the stack are
recompiled against each other for every release. We found the
ClassCastException in release validation, cherry-picked the fix from upstream,
and proceeded without further incident. Coprocessors should be thought of as
Linux kernel modules and managed the same way.
I would also suggest that, because of the decoupled nature of the whole Hadoop
ecosystem and the pace of change, binary convenience artifacts are not really
that convenient. HBase binary artifacts are of limited utility as shipped,
because the user is probably running a different version of Hadoop than the
default for the build. This is true to some extent up and down the stack. Some
projects resort to shims. I think the whole ecosystem would ultimately do users
a favor by switching to source-only releases. It won't happen, but it should.
Let Apache Bigtop handle the heavy lifting of producing sets of binary
artifacts known to integrate cleanly.
was (Author: apurtell):
No, our internal version is 0.98.16 plus Enis's patch, so there is no
prevention of any upgrading. We should talk more in house. :-)
> Figure out a work around for java.lang.NoSuchFieldError: in when compiling
> against HBase < 0.98.17
> --------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
> Issue Type: Bug
> Reporter: Samarth Jain
> Assignee: Samarth Jain
> Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version
> prior to 0.98.17 and running against 0.98.17, region assignment fails to
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
> at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
> at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
> at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
> at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
> at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
> at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
> at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
> at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
> at org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
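> A note on the mechanism, for context: a direct field access in Java bytecode is
> compiled into a getfield reference that names the field's declaring class and
> its exact type descriptor. If the field's declared type (or the field itself)
> differs in the version of the class present at runtime, JVM field resolution
> fails with java.lang.NoSuchFieldError, which is presumably what happens to the
> {{in}} field here across 0.98.x versions. The sketch below is a generic
> illustration of one way to defer that lookup to runtime via reflection; it is
> NOT the actual fix applied in Phoenix or HBase, and the class names
> (BaseDecoderSketch, CompatDemo) are hypothetical.
> {code}
// Hypothetical stand-in for a superclass whose protected field's declared
// type changed between library versions (assumption for illustration).
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.lang.reflect.Field;

class BaseDecoderSketch {
    protected InputStream in;  // direct access to this bakes its type into the caller's bytecode
    BaseDecoderSketch(InputStream is) { this.in = is; }
}

public class CompatDemo extends BaseDecoderSketch {
    CompatDemo(InputStream is) { super(is); }

    // Look the field up reflectively by name instead of emitting a direct
    // getfield reference, so a change in the field's declared type does not
    // surface as java.lang.NoSuchFieldError at link time.
    InputStream currentStream() throws Exception {
        Field f = BaseDecoderSketch.class.getDeclaredField("in");
        f.setAccessible(true);
        return (InputStream) f.get(this);
    }

    public static void main(String[] args) throws Exception {
        CompatDemo d = new CompatDemo(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println(d.currentStream().available());  // prints 3
    }
}
> {code}
> Reflection trades a small runtime cost for binary compatibility; the
> upstream fix referenced above (cherry-picked from HBase) is the proper
> resolution, since recompiling against the matching HBase version also
> resolves the reference cleanly.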
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)