I don't think your solution works, as after more than 4 minutes I could
still see logs of my job showing that it was running.
Do you have a way to check that, even if the job was running, it was not
being killed by Hive? Or another solution?
Thanks for your help,
Loïc
Loïc CHANEL
Engineering
I am using HDInsight, and can issue table commands to my MongoDB system, but
it's not acting as I think it should. I realize that when I create an external
table in Hive and give it MongoDB properties, it is a shell of a table within
Hive, and that the real table resides in MongoDB. If I drop
this works for me:
In hive-site.xml:
1. hive.server2.session.check.interval=3000;
2. hive.server2.idle.operation.timeout=-3;
restart HiveServer2.
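In property form, the two settings above would look roughly like this in hive-site.xml (values are the ones from this message; the comment on the timeout semantics reflects the hive-default description and should be verified against your Hive version):

```xml
<!-- Sketch of the two settings above in hive-site.xml -->
<property>
  <name>hive.server2.session.check.interval</name>
  <value>3000</value> <!-- ms between session/operation checks -->
</property>
<property>
  <name>hive.server2.idle.operation.timeout</name>
  <value>-3</value> <!-- negative value: timeout applies to operations in any state, not just terminal ones -->
</property>
```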
In beeline, I run analyze table X compute statistics for columns, which
takes longer than 30s. It was aborted by HS2 because of the above settings. I
You should set hive.server2.enable.doAs to 'true' in hive-site.xml and
change hadoop.proxyuser.hive.hosts in core-site.xml.
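For reference, the proxyuser entries in core-site.xml typically look like the sketch below. The `*` values are placeholders, not from this thread; in a secured cluster you would restrict them to specific hosts and groups:

```xml
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value> <!-- hosts from which the hive user may impersonate others -->
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value> <!-- groups whose members the hive user may impersonate -->
</property>
```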
On Tue, Jul 28, 2015 at 1:51 AM, Han-Cheol Cho hancheol@nhn-playart.com
wrote:
Hi, all
Even with user impersonation, running create database test01 in beeline
Hi,
I am using the OrcFile.createWriter() function to get an ORC writer to
write ORC files. However, upon inspecting the ORC files in HDFS, I notice
that they have a block size of 256MB. Now, I would like to set it to 64MB
instead.
I dug around the net and tried a few options:
- I set the
Hi
The OrcFile.createWriter() method accepts a WriterOptions argument, where you can
specify the block size using the blockSize() method.
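A minimal sketch of what that looks like, assuming the Hive ORC API (org.apache.hadoop.hive.ql.io.orc) and an ObjectInspector you already have; not a complete program:

```java
// Sketch: create an ORC writer with a 64 MB HDFS block size.
// Assumes Hadoop/Hive ORC jars on the classpath and an existing ObjectInspector.
Configuration conf = new Configuration();
OrcFile.WriterOptions opts = OrcFile.writerOptions(conf)
    .blockSize(64L * 1024 * 1024)   // block size in bytes
    .inspector(objectInspector);    // your row ObjectInspector
Writer writer = OrcFile.createWriter(new Path("/tmp/out.orc"), opts);
```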
Thanks
Prasanth
On Jul 29, 2015, at 5:22 PM, Ashish Shenoy
ashe...@instartlogic.com wrote:
Hi,
I am using the OrcFile.createWriter()
Hi folks,
I need to set up a connection between perl and hive using
Thrift. Can anyone suggest the steps involved in making this happen?
Thanks and regards,
siva.
I am using Hive 1.0 and Tez 0.7. Whenever I perform an insert query, it
returns the following error:
execution error, return code 1 from
org.apache.hadoop.hive.ql.exec.tez.TezTask
No, because I thought the idea of an infinite operation was not very
compatible with the word "idle" (as the operation will not stop running),
but I'll try :-)
Thanks for the idea,
Loïc
Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne
2015-07-29 15:27 GMT+02:00
Hi all,
As I'm trying to build a secured and multi-tenant Hadoop cluster with Hive,
I am desperately trying to set a timeout to Hive requests.
My idea is that some users can make mistakes such as a join with wrong
keys, and therefore start an infinite loop believing that they are just
launching a
Have you tried hive.server2.idle.operation.timeout?
--Xuefu
On Wed, Jul 29, 2015 at 5:52 AM, Loïc Chanel loic.cha...@telecomnancy.net
wrote:
Hi all,
As I'm trying to build a secured and multi-tenant Hadoop cluster with
Hive, I am desperately trying to set a timeout to Hive requests.
My
Hi,
I am using Hive 1.2, and I am trying to run some queries based on TPCH
schema. My original query is:
SELECT N_NAME, AVERAGE(C_ACCTBAL)
FROM customer JOIN nation
on C_NATIONKEY=N_NATIONKEY
GROUP BY N_NAME;
for which I get:
FAILED: SemanticException [Error 10025]: Line 1:15 Expression not in
Just a follow-up on the issue.
It was really happening because of using AVERAGE() instead of AVG().
Sorry, but the error was misleading (it did not tell me that the function
name is invalid).
I had borrowed the query from a benchmark spec, and they had used AVERAGE
in their SQL statements, and I
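For the record, the corrected query simply swaps AVERAGE for Hive's built-in AVG:

```sql
SELECT N_NAME, AVG(C_ACCTBAL)
FROM customer JOIN nation
ON C_NATIONKEY = N_NATIONKEY
GROUP BY N_NAME;
```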
Yes, I set it to negative 60.
It's not a problem if the session is killed. That's actually what I am trying
to do, because I can't allow a user to keep an infinite request running.
Therefore I'll try your solution :)
Thanks,
Loïc
Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at
I confirm: I just tried hive.server2.idle.operation.timeout, setting it to
-60 (seconds), but my very slow job has not been killed. The issue
here is: what if another user comes and tries to submit a MapReduce job but
the cluster is stuck in an infinite loop?
Do you or anyone else have
Please check the Hive log for more useful info; it is located in
/tmp/${user}/hive.log by default.
Best Regards,
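A quick way to look at that default location (a sketch; the directory is configurable via the Hive log4j properties, so adjust the path if your deployment overrides it):

```shell
# Tail the default Hive client log for the current user.
LOG="/tmp/$(whoami)/hive.log"
echo "looking for $LOG"
if [ -f "$LOG" ]; then
  tail -n 200 "$LOG"
else
  echo "no hive.log at $LOG"
fi
```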
Jeff Zhang
From: Sateesh Karuturi sateesh.karutu...@gmail.com
Reply-To: u...@tez.apache.org