ViSolve Hadoop Support Team
ViSolve Inc. | San Jose, California
Website: www.visolve.com
email: servi...@visolve.com | Phone: 408-850-2243
*From:* Krishna Rao [mailto:krishnanj...@gmail.com]
*Sent:* Thursday, February 26, 2015 9:48 PM
*To:* user@hive.apache.org; u...@hadoop.apache.org
Hi,
we occasionally run into a BindException that causes long-running jobs to
fail.
The stacktrace is below.
Any ideas what this could be caused by?
Cheers,
Krishna
Stacktrace:
379969 [Thread-980] ERROR org.apache.hadoop.hive.ql.exec.Task - Job
Submission failed with exception
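One possible cause, an assumption since the stack trace above is truncated,
is a port conflict when the submitting client binds a local port. On YARN
clusters the MapReduce AM client port range can be pinned to a window known
to be free; a minimal sketch (the property exists in Hadoop 2.x, but whether
it applies to this particular BindException is an assumption):

-- Sketch: restrict the MR AM client to a known-free port window.
SET yarn.app.mapreduce.am.job.client.port-range=50000-50100;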
Last time I looked there wasn't much information available on how to reduce
the size of the logs written here (the only suggestion being to delete them
after a day).
Is there anything I can do now to reduce what's logged there in the first
place?
Cheers,
Krishna
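If the volume comes from task-attempt logs (an assumption, since the log
location above is truncated), one partial mitigation is to lower the task
log levels per session. A minimal sketch using the task log-level
properties that also come up later in these threads (WARN is an
illustrative choice):

-- Log less from each task attempt (the default INFO is verbose).
SET mapreduce.map.log.level=WARN;
SET mapreduce.reduce.log.level=WARN;
-- On MR1 clusters, retention can also be capped cluster-side via
-- mapred.userlog.retain.hours in mapred-site.xml.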
Hi all,
we've run into a bug that seems to be caused by a query constraint
involving partitioned columns. The following query fails with "FAILED:
NullPointerException null" almost instantly:
EXPLAIN SELECT
col1
FROM
tbl1
WHERE
(part_col1 = 2014 AND part_col2 = 2)
OR
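A workaround that has helped with similar partition-pruner NPEs is to split
the OR into branches that each constrain the partition columns directly. A
minimal sketch; the second branch is hypothetical, since the original query
is truncated at the OR:

EXPLAIN
SELECT col1 FROM (
  SELECT col1 FROM tbl1 WHERE part_col1 = 2014 AND part_col2 = 2
  UNION ALL
  -- hypothetical second branch standing in for the truncated predicate
  SELECT col1 FROM tbl1 WHERE part_col1 = 2015 AND part_col2 = 1
) u;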
Hi all,
We've been running into this problem a lot recently on a particular reduce
task. I'm aware that I can work around it by upping
mapred.task.timeout.
However, I would like to know what the underlying problem is. How can I
find this out?
Alternatively, can I force a generated hive task
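For reference, mapred.task.timeout is the number of milliseconds a task may
go without reporting progress before it is killed, so raising it hides the
symptom (typically a reducer stuck in a long computation that never reports
progress) rather than fixing it. A minimal sketch of the workaround with an
illustrative value:

-- Allow 20 minutes of silence before the task is killed (value in ms).
SET mapred.task.timeout=1200000;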
Hi all,
I'm using the Hive JSON SerDe and need to run ADD JAR
/usr/lib/hive/lib/hive-json-serde-0.2.jar; before I can use tables that
require it.
Is it possible to have this jar available automatically?
I could do it by adding the statement to a .hiverc file, but I was
wondering if there is
${HIVE_HOME}/auxlib dir. There is always the
HIVE_AUX_JARS_PATH environment variable (but this introduces a dependency
on the environment).
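For completeness, a sketch of the two file-based options mentioned above
(paths illustrative, matching the jar named earlier in the thread):

-- Option 1: put the statement in a site-wide .hiverc so every session runs it:
ADD JAR /usr/lib/hive/lib/hive-json-serde-0.2.jar;

-- Option 2: set hive.aux.jars.path in hive-site.xml instead:
--   <property>
--     <name>hive.aux.jars.path</name>
--     <value>file:///usr/lib/hive/lib/hive-json-serde-0.2.jar</value>
--   </property>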
On Wed, Mar 13, 2013 at 10:26 AM, Krishna Rao krishnanj...@gmail.com wrote:
Hi all,
I'm using the Hive JSON SerDe and need to run: ADD JAR
/usr/lib/hive/lib/hive
Hi all,
I'm occasionally getting the following error, usually after running an
expensive Hive query (creating 20 or so MR jobs):
***
Error during job, obtaining debugging information...
Examining task ID: task_201301291405_1640_r_01 (and more) from job
job_201301291405_1640
Exception in
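One way to dig further, a sketch since the exception above is truncated:
pull the per-task failure details from the MR1 job history instead of the
truncated client output (the output directory is illustrative):

# Show details, including failed/killed attempts, for the job named above.
hadoop job -history all /user/hive/warehouse/job_output_dir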
Hi all,
On running a statement of the form INSERT INTO TABLE tbl1 PARTITION(p1)
SELECT x1 FROM tbl2, I get the following error:
Failed with exception java.lang.ClassCastException:
org.apache.hadoop.hive.metastore.api.InvalidOperationException cannot be
cast to java.lang.RuntimeException
How can
wrote:
Can you give the table definitions of both tables?
Are the columns of the same type?
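To check that directly, a minimal sketch (table names from the original
message; the cast is an assumption, useful only if the types do differ):

-- Compare column and partition types of the two tables.
DESCRIBE tbl1;
DESCRIBE tbl2;
-- If the types differ, an explicit cast in the SELECT may help, e.g.:
INSERT INTO TABLE tbl1 PARTITION(p1) SELECT CAST(x1 AS STRING) FROM tbl2;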
On Wed, Jan 9, 2013 at 5:15 AM, Krishna Rao krishnanj...@gmail.com wrote:
Hi all,
On running a statement of the form INSERT INTO TABLE tbl1 PARTITION(p1)
SELECT x1 FROM tbl2, I get the following
).
cheers,
Alex
On Jan 2, 2013, at 10:35 AM, Krishna Rao krishnanj...@gmail.com wrote:
A particular query that I run fails with the following error:
***
Job 18: Map: 2 Reduce: 1 Cumulative CPU: 3.67 sec HDFS Read: 0 HDFS
Write: 0 SUCCESS
Exception in thread main
On 18 December 2012 02:05, Mark Grover grover.markgro...@gmail.com wrote:
I usually put it in my home directory and that works. Did you try that?
I need it to work for all users. So the cleanest non-duplicating solution
seems to be in the hive bin directory (and then conf dir, when I upgrade
and set the parameters you want; these will be included in each session
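A sketch of the .hiverc approach described above; the file runs at the
start of every CLI session, and WARN is an illustrative level:

-- contents of .hiverc (in the Hive bin/conf dir, or $HOME for one user):
SET mapreduce.map.log.level=WARN;
SET mapreduce.reduce.log.level=WARN;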
On Fri, Dec 14, 2012 at 4:05 PM, Krishna Rao krishnanj...@gmail.com wrote:
Hi all,
is it possible to set mapreduce.map.log.level and
mapreduce.reduce.log.level within some config file?
At the moment I have to remember
Hi all,
is it possible to set mapreduce.map.log.level and
mapreduce.reduce.log.level within some config file?
At the moment I have to remember to set these at the start of a Hive
session or script.
Cheers,
Krishna
Hi all,
I'm having trouble transferring NULLs in a VARCHAR column from a table in
PostgreSQL into Hive. A NULL value ends up as an empty string in Hive,
rather than NULL.
I'm running the following command:
sqoop import --username username -P --hive-import --hive-overwrite
--null-string='\\N'
The suggested workaround is to use a JDBC-based import by dropping the
--direct argument.
Links:
1: https://issues.apache.org/jira/browse/SQOOP-654
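Putting the workaround together, a sketch of the import without --direct;
the connect string and table name are placeholders:

# JDBC-based import (no --direct); map NULLs in both string and
# non-string columns to Hive's \N encoding.
sqoop import \
  --connect jdbc:postgresql://dbhost/dbname \
  --username username -P \
  --table some_table \
  --hive-import --hive-overwrite \
  --null-string '\\N' \
  --null-non-string '\\N'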
On Tue, Dec 04, 2012 at 05:04:56PM +, Krishna Rao wrote:
Hi all,
I'm having trouble transferring NULLs in a VARCHAR column from a table in
PostgreSQL
that are in block
format are always splittable regardless of which compression is chosen for
the block. The Programming Hive book has an entire section
dedicated to the permutations of compression options.
Edward
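For example, a sketch of storing a table as a block-compressed
SequenceFile, which stays splittable; the codec and table names are
illustrative:

-- Write block-compressed SequenceFiles; block-compressed output remains
-- splittable regardless of the codec chosen.
SET hive.exec.compress.output=true;
SET mapred.output.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
CREATE TABLE tbl_seq STORED AS SEQUENCEFILE
AS SELECT * FROM tbl_text;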
On Mon, Nov 5, 2012 at 10:57 AM, Krishna Rao krishnanj...@gmail.com
wrote:
Hi all
Hi all,
I'm looking for a suitable format to store data in HDFS, so that it's
available for processing by Hive. Ideally I would like to satisfy the
following:
1. store the data in a format that is readable by multiple Hadoop projects
(e.g. Pig, Mahout, etc.), not just Hive
2. work with a