Have a look at the Lambda architecture.

What is the motivation for your migration?
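For option 1 in the quoted mail, a minimal sketch of wiring up a Spark JDBC read against Teradata might look like the helper below. Everything here is illustrative: the helper name, host, database, and credentials are placeholders, and it assumes the standard Teradata JDBC driver jar is on Spark's classpath.

```python
def teradata_jdbc_options(host, database, table, user, password):
    """Build the option dict for spark.read.format("jdbc") against Teradata.

    Hypothetical helper for illustration; the driver class is the
    standard Teradata JDBC driver (com.teradata.jdbc.TeraDriver).
    """
    return {
        "url": f"jdbc:teradata://{host}/DATABASE={database}",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "com.teradata.jdbc.TeraDriver",
    }

# Usage from a Spark session (requires pyspark and the Teradata JDBC jar):
# opts = teradata_jdbc_options("td-host", "mydb", "mydb.sales", "user", "pw")
# df = spark.read.format("jdbc").options(**opts).load()
# df.createOrReplaceTempView("sales")
# spark.sql("SELECT COUNT(*) FROM sales").show()
```

Note that this pulls data through Teradata on every query, so it does not offload load from the warehouse; option 2 (batch import into Parquet) avoids that at the cost of data freshness.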

> On 04 May 2016, at 03:29, Tapan Upadhyay <tap...@gmail.com> wrote:
> 
> Hi,
> 
> We are planning to move our ad hoc queries from Teradata to Spark. We have a 
> huge volume of queries during the day. What is the best way to go about it - 
> 
> 1) Read data directly from the Teradata DB using Spark JDBC
> 
> 2) Import data using Sqoop via EOD jobs into Hive tables stored as Parquet, and 
> then run queries on the Hive tables using Spark SQL or the Spark Hive context.
> 
> Are there any other ways we could do this better/more efficiently?
> 
> Please guide.
> 
> Regards,
> Tapan
> 
