The trunk version corresponds to HBase 3.0, which has many more changes
compared to HBase 2.
The trunk build wouldn't serve you if your goal is to run HBase on Hadoop
3.1 (see HBASE-20244).
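For what it's worth, the VerifyError quoted below ('ContentSummary' not
assignable to 'QuotaUsage') is a typical symptom of Hadoop 2 client classes
meeting a Hadoop 3 cluster. One quick sanity check is to scan the classpath
that `bin/hbase classpath` prints for hadoop-* jars of more than one major
version. A rough sketch of that check (the helper below is hypothetical, not
an HBase tool):

```python
import re

def hadoop_versions(classpath):
    """Return the set of Hadoop major versions found among hadoop-*.jar
    entries of a colon-separated Java classpath string.
    Hypothetical diagnostic helper, not part of HBase or Hadoop."""
    versions = set()
    for entry in classpath.split(":"):
        # Match jar names such as hadoop-common-2.7.4.jar or
        # hadoop-hdfs-client-3.1.0.jar and capture the major version digit.
        m = re.search(r"hadoop-[\w.-]*?-(\d+)\.(\d+)\.(\d+)[^/]*\.jar$", entry)
        if m:
            versions.add(m.group(1))
    return versions

# Example: a classpath mixing a Hadoop 2 jar shipped with HBase and a
# Hadoop 3 jar from the cluster -- exactly the mismatch to look for.
cp = ("/opt/hbase/lib/hadoop-common-2.7.4.jar:"
      "/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.0.jar")
if len(hadoop_versions(cp)) > 1:
    print("WARNING: mixed Hadoop major versions on the classpath")
```

If the stock binary does turn out to ship Hadoop 2 jars, rebuilding HBase 2
from source against Hadoop 3 (with HBASE-20244 applied) goes through the
Hadoop 3 build profile, something along the lines of
`mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.1.0`.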

FYI

On Sat, Jun 30, 2018 at 3:11 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Thanks Ted.
>
> I downloaded the latest HBase binary, which is 2.0.1 (2018/06/19).
>
> Is there any trunk version built for Hadoop 3.1 please, and if so where can
> I download it?
>
> Regards,
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sat, 30 Jun 2018 at 22:52, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Which Hadoop release was the 2.0.1 binary built against?
> >
> > In order to build hbase 2 against hadoop 3.0.1+ / 3.1.0+, you will need
> > HBASE-20244.
> >
> > FYI
> >
> > On Sat, Jun 30, 2018 at 2:34 PM, Mich Talebzadeh <
> > mich.talebza...@gmail.com>
> > wrote:
> >
> > > I am using the following hbase-site.xml
> > >
> > > <configuration>
> > >   <property>
> > >     <name>hbase.rootdir</name>
> > >     <value>hdfs://rhes75:9000/hbase</value>
> > >   </property>
> > >   <property>
> > >     <name>hbase.zookeeper.property.dataDir</name>
> > >     <value>/home/hduser/zookeeper-3.4.6</value>
> > >   </property>
> > > <property>
> > >     <name>hbase.master</name>
> > >     <value>localhost:60000</value>
> > > </property>
> > > <property>
> > >       <name>hbase.zookeeper.property.clientPort</name>
> > >       <value>2181</value>
> > >    </property>
> > >   <property>
> > >     <name>hbase.cluster.distributed</name>
> > >     <value>true</value>
> > >   </property>
> > > <property>
> > >      <name>hbase.defaults.for.version.skip</name>
> > >      <value>true</value>
> > > </property>
> > > <property>
> > >      <name>phoenix.query.dateFormatTimeZone</name>
> > >      <value>Europe/London</value>
> > > </property>
> > > <property>
> > >     <name>hbase.procedure.store.wal.use.hsync</name>
> > >     <value>false</value>
> > > </property>
> > > <property>
> > >   <name>hbase.unsafe.stream.capability.enforce</name>
> > >   <value>false</value>
> > > </property>
> > > </configuration>
> > >
> > > The master starts OK but the region server throws errors:
> > >
> > > 2018-06-30 22:23:56,607 INFO  [regionserver/rhes75:16020]
> > > wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
> > > prefix=rhes75%2C16020%2C1530393832024, suffix=,
> > > logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,1530393832024,
> > > archiveDir=hdfs://rhes75:9000/hbase/oldWALs
> > > 2018-06-30 22:23:56,629 ERROR [regionserver/rhes75:16020]
> > > regionserver.HRegionServer: Reason:
> > >     Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[1])
> > > is not assignable to 'org/apache/hadoop/fs/QuotaUsage'
> > >   Current Frame:
> > >     bci: @105
> > >     flags: { }
> > >     locals: { 'org/apache/hadoop/fs/ContentSummary',
> > > 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto$Builder' }
> > >     stack: { 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto$Builder',
> > > 'org/apache/hadoop/fs/ContentSummary' }
> > >   Bytecode:
> > >     0x0000000: 2ac7 0005 01b0 b805 984c 2b2a b605 99b6
> > >     0x0000010: 059a 2ab6 059b b605 9c2a b605 9db6 059e
> > >     0x0000020: 2ab6 059f b605 a02a b605 a1b6 05a2 2ab6
> > >     0x0000030: 05a3 b605 a42a b605 a5b6 05a6 2ab6 05a7
> > >     0x0000040: b605 a82a b605 a9b6 05aa 2ab6 05ab b605
> > >     0x0000050: ac2a b605 adb6 05ae 572a b605 af9a 000a
> > >     0x0000060: 2ab6 05b0 9900 0c2b 2ab8 0410 b605 b157
> > >     0x0000070: 2bb6 05b2 b0
> > >   Stackmap Table:
> > >     same_frame(@6)
> > >     append_frame(@103,Object[#2940])
> > >     same_frame(@112)
> > >  *****
> > > java.lang.VerifyError: Bad type on operand stack
> > > Exception Details:
> > >   Location:
> > >
> > > org/apache/hadoop/hdfs/protocolPB/PBHelperClient.convert(Lorg/apache/hadoop/fs/ContentSummary;)Lorg/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto;
> > > @105: invokestatic
> > >   Reason:
> > >     Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[1])
> > > is not assignable to 'org/apache/hadoop/fs/QuotaUsage'
> > >   Current Frame:
> > >     bci: @105
> > >     flags: { }
> > >     locals: { 'org/apache/hadoop/fs/ContentSummary',
> > > 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto$Builder' }
> > >     stack: { 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto$Builder',
> > > 'org/apache/hadoop/fs/ContentSummary' }
> > >   Bytecode:
> > >     0x0000000: 2ac7 0005 01b0 b805 984c 2b2a b605 99b6
> > >     0x0000010: 059a 2ab6 059b b605 9c2a b605 9db6 059e
> > >     0x0000020: 2ab6 059f b605 a02a b605 a1b6 05a2 2ab6
> > >     0x0000030: 05a3 b605 a42a b605 a5b6 05a6 2ab6 05a7
> > >     0x0000040: b605 a82a b605 a9b6 05aa 2ab6 05ab b605
> > >     0x0000050: ac2a b605 adb6 05ae 572a b605 af9a 000a
> > >     0x0000060: 2ab6 05b0 9900 0c2b 2ab8 0410 b605 b157
> > >     0x0000070: 2bb6 05b2 b0
> > >   Stackmap Table:
> > >     same_frame(@6)
> > >     append_frame(@103,Object[#2940])
> > >     same_frame(@112)
> > >
> > > any ideas?
> > >
> > > thanks
> > >
> > > Dr Mich Talebzadeh
> > >
> > >
> >
>
