I see an exception like:

Moving data to: hdfs://../hive/atangri_test_1
FAILED: Error in metadata: org.apache.thrift.transport.TTransportException:
java.net.SocketException: Connection timed out
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
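The TTransportException above is a client-to-metastore socket timeout. One way to raise the timeout mentioned later in this thread is via hive-site.xml on the client machine. A minimal sketch (the property name is real; whether a larger value helps here is an assumption, since the job reportedly still fails after 8-9 hours with 86400):

```xml
<!-- hive-site.xml on the Hive client machine.
     hive.metastore.client.socket.timeout is in seconds in this era of Hive;
     86400 = 24 hours. Raising it only masks a hang, it does not fix one. -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>86400</value>
</property>
```

The same value can also be passed at launch with `hive --hiveconf hive.metastore.client.socket.timeout=86400`, which avoids editing the config file.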



There is enough space in /user and /tmp.

Thanks,
Anurag Tangri



On Sat, Aug 11, 2012 at 12:49 AM, Jagat Singh <jagatsi...@gmail.com> wrote:

> Hi Anurag,
>
> How much space is there for the /user and /tmp directories on the client?
>
> Did you check that part? Is there anything that might stop the move task
> from finishing?
>
> -----------
> Sent from Mobile, short and crisp.
> On 11-Aug-2012 1:37 PM, "Anurag Tangri" <tangri.anu...@gmail.com> wrote:
>
>> Hi,
>> We are facing an issue where we run a Hive job over huge data, about
>> ~6 TB of input.
>>
>> We run this from the Hive client, and the Hive metastore server is on
>> another machine.
>>
>> If the input is smaller, this job succeeds, but for the above input size
>> it fails with this error:
>>
>> 2012-08-11 01:34:01,722 Stage-1 map = 100%,  reduce = 100%
>> 2012-08-11 01:35:02,195 Stage-1 map = 100%,  reduce = 100%
>> 2012-08-11 01:36:02,682 Stage-1 map = 100%,  reduce = 100%
>> 2012-08-11 01:37:03,215 Stage-1 map = 100%,  reduce = 100%
>> 2012-08-11 01:38:03,719 Stage-1 map = 100%,  reduce = 100%
>> 2012-08-11 01:39:04,311 Stage-1 map = 100%,  reduce = 100%
>>
>> Ended Job = job_201207072204_34432
>> Loading data to table default.atangri_test_1
>> Failed with exception Unable to fetch table atangri_test_1
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.MoveTask
>>
>>
>> With a smaller input (~2 TB) this job succeeds, but at the above input
>> size it fails. We have set hive.metastore.client.socket.timeout to a big
>> value like 86400, but it still fails after about 8-9 hours.
>>
>> Has anyone faced the same issue, or does anyone have any pointers?
>>
>> The job succeeds if it is run directly on the Hive server.
>>
>> Thanks,
>> Anurag Tangri
>>
>
