Hi Mich,

I mostly agree with you, but I would comment on the part about using HBase
as a maintenance-free core product:
I would say that most medium-sized companies using Hadoop rely on
Hortonworks or Cloudera, both of which provide a pre-packaged HBase
installation. It would probably make sense for them to ship pre-installed
versions of Hive relying on HBase as the metastore.
And as Alan stated, it would also be a good way to improve the integration
between Hive and HBase.

I am not well placed to give an opinion on this, but I agree that
maintaining integration with both HBase and regular RDBMSs might be a real
pain.
I am also worried that if HBase does indeed give us the possibility of
having all nodes call the metastore, then any optimization making use of
this will only work on clusters with the Hive metastore on HBase.

Anyway, I am still looking forward to this, as despite working in a small
company, our metastore sometimes seems to be a bottleneck, especially when
running more than 20 concurrent queries on tables with 10,000 partitions...
But perhaps migrating it to a bigger host would be enough for us...
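
For what it's worth, the slowdown we see is on the partition-listing side.
Below is a minimal sketch of the kind of call that stalls for us (the
database and table names are made up; the client just picks up whatever
hive-site.xml is on the classpath):

    import java.util.List;
    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.Partition;

    public class PartitionFetchTimer {
      public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        long start = System.currentTimeMillis();
        // -1 asks for all partitions; on a table with 10,000 partitions
        // this single call is where the metastore struggles once
        // twenty-odd queries run it concurrently.
        List<Partition> parts =
            client.listPartitions("mydb", "mytable", (short) -1);
        System.out.println(parts.size() + " partitions in "
            + (System.currentTimeMillis() - start) + " ms");
        client.close();
      }
    }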



On Mon, Oct 24, 2016 at 10:21 PM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:

> Thanks Alan for detailed explanation.
>
> Please bear in mind that any tool that needs to work with some repository
> (Oracle TimesTen IMDB has its metastore on Oracle classic, SAP Replication
> Server has its repository RSSD on SAP ASE, and others) does the same first
> thing: it caches those repository tables and keeps them in the memory of
> the big brother database until shutdown. I reverse engineered and created
> the Hive data model from the physical schema (on Oracle). There are around
> 194 tables in total that can easily be cached.
>
> Small and medium enterprises (SMEs) don't really have much data, so
> anything will do, and they are the ones that use open source databases.
> Bigger companies already pay big bucks for Oracle and the like, and they
> are the ones that would not touch an open source database (not talking
> about big data), because in this new capital-sensitive, risk-averse world
> they do not want to expose themselves to unnecessary risk. So I am not
> sure whether they will take something like HBase as a core product, unless
> it is going to be maintenance free.
>
> Going back to your point
>
> ".. but you have to pay for an expensive commercial license to make the
> metadata really work well is a non-starter"
>
> They already do, and they pay more if they have to. We will stick with
> Hive metadata on Oracle with the schema on SSD.
>
> HTH
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 24 October 2016 at 20:14, Alan Gates <alanfga...@gmail.com> wrote:
>
>> Some thoughts on this:
>>
>> First, there’s no plan to remove the option to use an RDBMS such as
>> Oracle as your backend.  Hive’s RawStore interface is built such that
>> various implementations of the metadata storage can easily coexist.
>> Obviously different users will make different choices about what metadata
>> store makes sense for them.
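>>
>> For illustration, which backend gets used comes down to a single config
>> knob; a minimal sketch (the HBaseStore class name is from the HBase
>> metastore work and could still change):
>>
>>     import org.apache.hadoop.hive.conf.HiveConf;
>>
>>     public class RawStoreSwitch {
>>       public static void main(String[] args) {
>>         HiveConf conf = new HiveConf();
>>         // The default is org.apache.hadoop.hive.metastore.ObjectStore,
>>         // which goes through the DataNucleus ORM; any RawStore
>>         // implementation can be plugged in here instead.
>>         conf.setVar(HiveConf.ConfVars.METASTORE_RAW_STORE_IMPL,
>>             "org.apache.hadoop.hive.metastore.hbase.HBaseStore");
>>         System.out.println(
>>             conf.getVar(HiveConf.ConfVars.METASTORE_RAW_STORE_IMPL));
>>       }
>>     }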
>>
>> As to why HBase:
>> 1) We desperately need to get rid of the ORM layer.  It’s causing us
>> performance problems, as evidenced by things like it taking several minutes
>> to fetch all of the partition data for queries that span many partitions.
>> HBase is a way to achieve this, not the only way.  See in particular
>> Yahoo’s work on optimizing Oracle access:
>> https://issues.apache.org/jira/browse/HIVE-14870. The question around
>> this is whether we can optimize
>> for Oracle, MySQL, Postgres, and SQLServer without creating a maintenance
>> and testing nightmare for ourselves.  I’m skeptical, but others think it’s
>> possible.  See comments on that JIRA.
>>
>> 2) We’d like to scale to much larger sizes, both in terms of data and
>> access from nodes.  Not that we’re worried about the amount of metadata,
>> but we’d like to be able to cache more stats, file splits, etc.  And we’d
>> like to allow nodes in the cluster to contact the metastore, which we do
>> not today since many RDBMSs don’t handle a thousand plus simultaneous
>> connections well.  Obviously both data and connection scale can be met with
>> high end commercial stores.  But saying that we have this great open source
>> database but you have to pay for an expensive commercial license to make
>> the metadata really work well is a non-starter.
>>
>> 3) By using tools within the Hadoop ecosystem like HBase we are helping
>> to drive improvement in the system.
>>
>> To explain the HBase work a little more, it doesn’t use Phoenix, but
>> works directly against HBase, with the help of a transaction manager
>> (Omid).  In performance tests we’ve done so far it’s faster than Hive 1
>> with the ORM layer, but not yet to the 10x range that we’d like to see.  We
>> haven’t yet done the work to put in co-processors and such that we expect
>> would speed it up further.
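>>
>> To give a flavour of what “directly against HBase” means: metadata lives
>> in plain HBase tables and is fetched with ordinary client calls, roughly
>> like the sketch below (the table name and row key layout here are made up
>> for illustration, not the actual schema):
>>
>>     import org.apache.hadoop.conf.Configuration;
>>     import org.apache.hadoop.hbase.HBaseConfiguration;
>>     import org.apache.hadoop.hbase.TableName;
>>     import org.apache.hadoop.hbase.client.*;
>>
>>     public class MetastoreReadSketch {
>>       public static void main(String[] args) throws Exception {
>>         Configuration conf = HBaseConfiguration.create();
>>         try (Connection conn = ConnectionFactory.createConnection(conf);
>>              // hypothetical table holding serialized table objects
>>              Table tbls = conn.getTable(TableName.valueOf("HBMS_TBLS"))) {
>>           // one point get replaces an ORM round trip per object
>>           Result r = tbls.get(new Get("mydb.mytable".getBytes("UTF-8")));
>>           System.out.println("found: " + !r.isEmpty());
>>         }
>>       }
>>     }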
>>
>> Alan.
>>
>> > On Oct 23, 2016, at 15:46, Mich Talebzadeh <mich.talebza...@gmail.com>
>> wrote:
>> >
>> >
>> > A while back there were some notes on having the Hive metastore on
>> HBase as opposed to conventional RDBMSs.
>> >
>> > I am currently involved with some hefty work with HBase and Phoenix for
>> batch ingestion of trade data. As long as you define your HBase table
>> through Phoenix, with Phoenix secondary indexes on HBase, the speed is
>> impressive.
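>> >
>> > For instance, the kind of Phoenix DDL I mean, driven over the Phoenix
>> JDBC driver (connection string, table, and index names are illustrative
>> only):
>> >
>>     import java.sql.Connection;
>>     import java.sql.DriverManager;
>>     import java.sql.Statement;
>>
>>     public class PhoenixTradeTable {
>>       public static void main(String[] args) throws Exception {
>>         // jdbc:phoenix:<zookeeper quorum>
>>         try (Connection c =
>>                 DriverManager.getConnection("jdbc:phoenix:localhost");
>>              Statement s = c.createStatement()) {
>>           s.execute("CREATE TABLE IF NOT EXISTS trades ("
>>               + " trade_id BIGINT NOT NULL PRIMARY KEY,"
>>               + " symbol VARCHAR, ts TIMESTAMP, price DECIMAL)");
>>           // Phoenix secondary index: symbol lookups hit the index
>>           // table instead of scanning the whole trades table.
>>           s.execute("CREATE INDEX IF NOT EXISTS trades_symbol_idx"
>>               + " ON trades (symbol)");
>>         }
>>       }
>>     }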
>> >
>> > I am not sure how much having HBase as the Hive metastore is going to
>> add to Hive performance. We use Oracle 12c as the Hive metastore, and the
>> Hive database/schema is built on solid-state disks. We have never had any
>> issues with locking or concurrency.
>> >
>> > Therefore I am not sure what one is going to gain by having HBase as
>> the Hive metastore. I trust that we can still use our existing schemas on
>> Oracle.
>> >
>> > HTH
>> >
>> >
>> >
>> > Dr Mich Talebzadeh
>> >
>> > LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> >
>> > http://talebzadehmich.wordpress.com
>> >
>> > Disclaimer: Use it at your own risk. Any and all responsibility for any
>> loss, damage or destruction of data or any other property which may arise
>> from relying on this email's technical content is explicitly disclaimed.
>> The author will in no case be liable for any monetary damages arising from
>> such loss, damage or destruction.
>> >
>>
>>
>
