What JDBC driver are you using? Also compiled from trunk? I ask because I
remember a JIRA a while back where the JDBC driver didn't let the server know
the connection should be closed ().
If that's the case, updating the JDBC driver could work, though that might be
a bit of a long shot.
From: G
Hi Venkatesh,
I checked and found that /tmp was running low on space.
I moved my DB to another location with more space and it is working fine now.
Thanks,
Chunky.
On Thu, Feb 7, 2013 at 12:41 AM, Venkatesh Kavuluri
wrote:
> Looks like it's memory/ disk space issue with your database server used to
Hi all,
I'm occasionally getting the following error, usually after running an
expensive Hive query (creating 20 or so MR jobs):
***
Error during job, obtaining debugging information...
Examining task ID: task_201301291405_1640_r_01 (and more) from job
job_201301291405_1640
Exception in threa
Are you seeing this after a few of the jobs have finished, or on the first
stage itself? Also, is this error on all boxes or just a few? You can
check the MR logs to see which box or boxes are the culprits and debug from
there.
Viral
--
From: Krishna Rao
Sent: 2/7/2013 2:4
Hi All,
Is there any way to close a long-running Hive query through hive-jdbc?
Whenever Hive hangs, my application also hangs, so I want to close the Hive
connection forcefully after a certain time.
--
Thanks,
With Regards,
Abimaran
Hive waits till the Hadoop job is completed, so unless you kill the job or the
JDBC connection is dropped, I don't see any other way to reduce the load on
the application.
When you think it has run long enough, you will need to find a way to kill the
Hadoop job (e.g. hadoop job -kill <job_id>); that will automatically release
the resources.
Hi,
We solved this problem in the following way (this is really not a simple
solution):
- start the Hive query in a different thread
- we generated a unique id for the query and used the SET key=value;
command (before the long query) to attach this unique id to the MR jobs
related to the query (see the sketch below)
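A minimal sketch of the tagging step in HiveQL (the thread does not say which
property Gabor used; mapred.job.name is just one commonly used carrier for
such a tag, and the id and query below are made up for illustration):

-- run on the same session/connection, right before the long query;
-- the id then shows up in the JobTracker's job list, so the matching
-- Hadoop job can be located and killed from outside if needed
SET mapred.job.name=myapp_20130207_q001;

-- stand-in for the long-running query
INSERT OVERWRITE TABLE big_report
SELECT country, COUNT(*) FROM events GROUP BY country;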
One reason I know for this error is not setting up HADOOP_HOME. It is right
not to set this variable, since it was deprecated and replaced with
HADOOP_PREFIX and HADOOP_MAPRED_HOME. However, it seems like Hive still has
some lingering references to HADOOP_HOME causing this error, especially after
the
That is a good way to do it. We do it with a comment sometimes:
select /* myid bla bla */ x, y, z
Edward
On Thu, Feb 7, 2013 at 11:12 AM, Gabor Makrai wrote:
> Hi,
>
> We solved this problem in the following way (this is really not a simple
> solution):
> - start hive query in a different thread
> -
Hello all,
I am trying to join two tables, the smaller being of size 4GB. When I set the
hive.mapjoin.smalltable.filesize parameter above 500MB, Hive tries to
perform a local task to read the smaller file. This of course fails since
the file size is greater, and the backup common join is then run. Wha
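For reference, the two settings involved look roughly like this (values
illustrative; the default threshold is far smaller than the 4GB table in
question):

SET hive.auto.convert.join=true;                 -- let Hive try a map-side join
SET hive.mapjoin.smalltable.filesize=500000000;  -- ~500MB cutoff for the local task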
Hi All,
I have created a partitioned HIVE external table as follows
create external table test_part (key int, val int) partitioned by (part int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' STORED
AS TEXTFILE LOCATION '/test/';
I have the following folders and files in
Suresh,
Take a look at this:
https://issues.apache.org/jira/browse/HIVE-3231
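For anyone hitting the same issue, the command under discussion and a manual
fallback look like this (partition value and path are illustrative, based on
the DDL above):

MSCK REPAIR TABLE test_part;

-- manual alternative if MSCK does not pick a directory up:
ALTER TABLE test_part ADD PARTITION (part=1) LOCATION '/test/part=1';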
On Thu, Feb 7, 2013 at 11:46 AM, Krishnappa, Suresh <
suresh.krishna...@rsa.com> wrote:
> Hi All,
>
> I have created a partitioned HIVE external table as follows
>
> create external table test_part (key
Thanks Mark
From: Mark Grover [mailto:grover.markgro...@gmail.com]
Sent: Thursday, February 07, 2013 2:54 PM
To: user@hive.apache.org
Subject: Re: msck repair table not adding partitions which contains data.
Suresh,
Take a look at this:
https://issues.apache.org/jira/browse/HIVE-3231
On Thu, Feb
Hi,
I am having issues executing the following multi-insert query:
FROM
${tmp_users_table} u
JOIN
${user_evens_table} ue
ON (
u.id = ue.user
)
INSERT OVERWRITE TABLE ${dau_table} PARTITION (dt='${date}')
SELECT
u.country,
u.platform,
u.gender,
COUNT(DIST
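For reference, the general shape of a Hive multi-insert, with illustrative
table and column names rather than the poster's actual schema:

FROM users u
JOIN user_events ue ON (u.id = ue.user)
INSERT OVERWRITE TABLE dau PARTITION (dt='2013-02-07')
  SELECT u.country, u.platform, u.gender, COUNT(DISTINCT u.id)
  GROUP BY u.country, u.platform, u.gender
INSERT OVERWRITE TABLE dau_by_country PARTITION (dt='2013-02-07')
  SELECT u.country, COUNT(DISTINCT u.id)
  GROUP BY u.country;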