Fantastic.
Thanks!
2013/1/30 Qiang Wang
> Every Hive query has a history file, and you can get this information from
> the Hive history file.
>
> The following Java code can serve as an example:
>
> https://github.com/anjuke/hwi/blob/master/src/main/java/org/apache/hadoop/hive/hwi/util/QueryUtil.java
>
> Regards,
>
>
For all the queries you run as user1, Hive stores the CLI history in the
.hive_history file (please check the limits on how many queries it stores).
For all the jobs the Hive CLI runs, it keeps the details in /tmp/<user.name>/.
All these values are configurable in hive-site.xml.
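For reference, the property that controls the /tmp/<user.name>/ location is hive.querylog.location; a minimal hive-site.xml fragment might look like the following (the value shown is the usual default, but check your distribution):

```xml
<!-- hive-site.xml: where the Hive CLI writes per-session query history/logs.
     /tmp/${user.name} is the common default; shown here only as an illustration. -->
<property>
  <name>hive.querylog.location</name>
  <value>/tmp/${user.name}</value>
</property>
```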
Every Hive query has a history file, and you can get this information from
the Hive history file.
The following Java code can serve as an example:
https://github.com/anjuke/hwi/blob/master/src/main/java/org/apache/hadoop/hive/hwi/util/QueryUtil.java
Regards,
Qiang
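The core idea in the linked QueryUtil.java is scanning the history file for entries that carry a Hadoop job id. A minimal Python sketch of the same idea (assumptions: the classic job_<timestamp>_<sequence> id format, and TASK_HADOOP_ID fields in the sample input; adjust for your Hive version):

```python
import re

# Assumption: MR job ids use the classic job_<cluster-timestamp>_<sequence>
# form, e.g. job_201301301234_0001.
JOB_ID_RE = re.compile(r"job_\d+_\d+")

def extract_job_ids(history_text):
    """Return the distinct MR job ids found in Hive history text, in order."""
    seen = []
    for job_id in JOB_ID_RE.findall(history_text):
        if job_id not in seen:
            seen.append(job_id)
    return seen
```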
2013/1/30 Mathieu Despriee
Hi folks,
I would like to run a list of generated Hive queries. For each one, I would
like to retrieve the MR job_id (or ids, in the case of multiple stages), and
then, with this job_id, collect statistics from the JobTracker (cumulative
CPU, bytes read...).
How can I send Hive queries from a bash or python script?
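One possible approach (a sketch, not the only way): drive the CLI with `hive -e` from Python and scrape the job ids from stderr, assuming the CLI prints "Starting Job = job_..." lines per MapReduce stage, as the MR-based Hive versions do:

```python
import re
import subprocess

# Assumption: the MR-era Hive CLI prints a line like
#   Starting Job = job_201301301234_0001, Tracking URL = http://...
# on stderr for each MapReduce stage of the query.
STARTING_JOB_RE = re.compile(r"Starting Job = (job_\d+_\d+)")

def run_hive_query(query):
    """Run one query through the Hive CLI; return (stdout, list of MR job ids)."""
    proc = subprocess.Popen(
        ["hive", "-e", query],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate()
    job_ids = STARTING_JOB_RE.findall(err.decode("utf-8", "replace"))
    return out.decode("utf-8", "replace"), job_ids
```

With the job ids in hand, per-job counters (cumulative CPU, bytes read, etc.) can then be pulled from the JobTracker, for example via `hadoop job -status <job_id>` or its web UI.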