Hi Chalcy,

I am using the GROUP_CONCAT function in my query; it builds all the grouped values 
in memory on the MySQL server, and I am afraid Hive does not have this function.
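
The query uses GROUP_CONCAT roughly like this (table and column names are changed 
here, just to show the shape of it):

    SELECT o.order_id,
           GROUP_CONCAT(i.item_name SEPARATOR ',') AS items
    FROM orders o
    JOIN order_items i ON i.order_id = o.order_id
    GROUP BY o.order_id;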


Regards,
Sambit.

From: Chalcy [mailto:[email protected]]
Sent: Thursday, January 16, 2014 7:19 PM
To: [email protected]
Subject: Re: Joins in Sqoop

Hi Sambit,

If you have enough space in the Hadoop cluster, I would import all the relevant 
tables into Hive and then do the join there.
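
Something along these lines, with placeholder connection details and table names 
(one import per table, then the join in Hive):

    sqoop import --connect jdbc:mysql://dbhost/salesdb \
      --username myuser -P \
      --table orders --hive-import --hive-table orders

    sqoop import --connect jdbc:mysql://dbhost/salesdb \
      --username myuser -P \
      --table order_items --hive-import --hive-table order_items

    hive -e "SELECT o.order_id, i.item_name
             FROM orders o
             JOIN order_items i ON i.order_id = o.order_id;"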

Hope this helps,
Chalcy

On Thu, Jan 16, 2014 at 8:20 AM, Sambit Tripathy (RBEI/PJ-NBS) 
<[email protected]> wrote:
Hi,

I have written a query with 5 JOIN clauses, and I am passing this query to 
Sqoop import.
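
The command is roughly like this (connection string, tables and columns are 
simplified here; the real query has 5 joins):

    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username sambit -P \
      --query "SELECT o.order_id, c.name, i.item_name
               FROM orders o
               JOIN customers c ON c.customer_id = o.customer_id
               JOIN order_items i ON i.order_id = o.order_id
               WHERE \$CONDITIONS" \
      --split-by o.order_id \
      --target-dir /user/sambit/orders_joined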

Problem: This produces a large temp file in the MySQL server's temp directory, and 
the import fails with a "No space left on device" error. Yes, this can be fixed by 
increasing the size of the temp directory on the MySQL server, but what if you 
actually don't have any space left on the MySQL server? Are there any workarounds 
for this? I mean something like a batch import which does not create a big temp 
file on the server.
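
(For reference, by increasing the temp directory I mean pointing the tmpdir setting 
in my.cnf at a volume with more free space and restarting MySQL, e.g.:

    [mysqld]
    tmpdir = /mnt/bigdisk/mysql-tmp

but that only moves the problem if no disk has enough space.)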


Regards,
Sambit.

