Hi,

How are you going to sync your data following migration?

Spark SQL is a tool for querying data; it is not a database per se in the
way Hive (with its metastore and warehouse) is.

I am doing much the same myself at the moment, migrating Sybase IQ to Hive.

Sqoop can do the initial ELT (note ELT, not ETL). In other words, use Sqoop
to land the data as-is from Teradata into Hive tables, and then use Hive (or
Spark on those Hive tables) for further cleansing and transformation.
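
As a rough sketch of that second step (assuming Sqoop has already landed a
staging table in Hive; the database, table and column names such as
staging.sales_staging are just placeholders, not anything from your schema),
the cleansing can be driven from Spark through a HiveContext:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object CleanseStaging {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CleanseStaging"))
        val hiveContext = new HiveContext(sc)

        // Hypothetical staging table landed by Sqoop; adjust names to your schema.
        // Basic cleansing: drop rows with null keys and de-duplicate, then write
        // the result into a curated Hive table stored as Parquet.
        hiveContext.sql(
          """
            |CREATE TABLE IF NOT EXISTS curated.sales
            |STORED AS PARQUET
            |AS
            |SELECT DISTINCT *
            |FROM staging.sales_staging
            |WHERE sales_id IS NOT NULL
          """.stripMargin)

        sc.stop()
      }
    }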

It all depends on how you want to approach this, how many tables are
involved, and what your schema looks like. For example, are we talking about
FACT tables only? You can easily keep your DIMENSION tables in Teradata and
use Spark SQL to load data from both Teradata (over JDBC) and Hive, joining
the two; see the sketch below.
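
By way of illustration (a minimal sketch only, assuming Spark 1.6 with a
HiveContext, the Teradata JDBC driver on the classpath, and made-up
connection details and table/column names such as dim_customer and
fact_sales), joining a DIMENSION table read from Teradata with a FACT table
held in Hive might look like this:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.functions.sum
    import org.apache.spark.sql.hive.HiveContext

    object TeradataHiveJoin {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("TeradataHiveJoin"))
        val hiveContext = new HiveContext(sc)

        // DIMENSION table kept in Teradata, read over JDBC. URL, credentials
        // and table names are placeholders for your own environment.
        val dimCustomer = hiveContext.read.format("jdbc")
          .option("url", "jdbc:teradata://tdhost/DATABASE=dw")
          .option("driver", "com.teradata.jdbc.TeraDriver")
          .option("dbtable", "dw.dim_customer")
          .option("user", "dwuser")
          .option("password", "dwpassword")
          .load()

        // FACT table already in Hive (e.g. landed by Sqoop, stored as Parquet).
        val factSales = hiveContext.table("curated.fact_sales")

        // Join the two sources and run the ad hoc aggregation in Spark SQL.
        val result = factSales.join(dimCustomer, Seq("customer_id"))
          .groupBy("customer_segment")
          .agg(sum("sales_amount").as("total_sales"))

        result.show(20)
        sc.stop()
      }
    }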

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 4 May 2016 at 02:29, Tapan Upadhyay <tap...@gmail.com> wrote:

> Hi,
>
> We are planning to move our adhoc queries from teradata to spark. We have
> huge volume of queries during the day. What is best way to go about it -
>
> 1) Read data directly from teradata db using spark jdbc
>
> 2) Import data using sqoop by EOD jobs into hive tables stored as parquet
> and then run queries on hive tables using spark sql or spark hive context.
>
> Are there any other ways we can do this better or more efficiently?
>
> Please guide.
>
> Regards,
> Tapan
>
>
