Hi Austin

You can import the existing tables into Hive using Sqoop. Hive is a 
wrapper over MapReduce that gives you the flexibility to create optimized 
MapReduce jobs using SQL-like syntax. There is no relational model 
maintained in Hive, so don't treat Hive as a typical database/data 
warehouse. In short, no referential integrity.
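
For example, a Sqoop import into Hive could look roughly like this (the 
JDBC URL, credentials and table name below are only placeholders for 
your own setup):

    sqoop import \
      --connect jdbc:mysql://dbhost/salesdw \
      --username dwuser -P \
      --table fact_sales \
      --hive-import \
      --hive-table fact_sales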


However, if you have all the data in Hive, almost all queries that work 
on MySQL would work in Hive as well. Some queries may not, but you'll 
still have workarounds.
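
For instance, a typical MySQL-style aggregation runs in Hive more or 
less unchanged (the table and column names here are only illustrative):

    SELECT region, SUM(amount) AS total_sales
    FROM fact_sales
    GROUP BY region;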

You can have whole data sets/tables in Hive itself. 
 
You don't need much denormalization, I guess. Joins work well in Hive; 
see the sketch below.
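
As a rough sketch, a star-schema query joining a fact table against its 
dimensions could look like this in Hive (fact_sales, dim_date, 
dim_product and their key columns are hypothetical names):

    SELECT d.year, p.category, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_date d ON (f.date_key = d.date_key)
    JOIN dim_product p ON (f.product_key = p.product_key)
    GROUP BY d.year, p.category;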

Regards
Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: Austin Chungath <austi...@gmail.com>
Date: Mon, 22 Oct 2012 16:47:04 
To: <user@hive.apache.org>
Reply-To: user@hive.apache.org
Subject: Implementing a star schema (facts & dimension model)

Hi,

I am new to data warehousing in Hadoop. This might be a trivial question
but I was unable to find any answers in the mailing list.
My questions are:
A person has an existing data warehouse that uses a star schema
(implemented in a MySQL database). How do I migrate it to Hadoop?
I can use Sqoop to copy my tables to Hive, that much I know.

But what happens to referential integrity, since there are no primary key /
foreign key concepts?
I have seen that I can use Hive & HBase together. Is there a method for
storing facts and dimension tables in Hadoop using Hive & HBase together?
Does putting dimensions in HBase & facts in Hive make any sense? Or should
it be the other way around?

Consider that de-normalization is not an option.
What is the best practice for porting an existing data warehouse to Hadoop,
with minimal changes to the database model?

Please let me know whatever views you have on this.

Thanks,
Austin
