I have implemented a simple UDF for testing.
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class test extends UDF {
    private Text t;

    public Text evaluate() {
        if (t == null) {
            // first call on this UDF instance
            t = new Text("initialization");
        } else {
            // later calls on the same instance
            t = new Text("OK");
        }
        return t;
    }
}
And the test que
How is that related?
My question is about being able to track mapred jobs even after an HS2
restart.
On Thu, Mar 13, 2014 at 9:28 PM, Navis류승우 wrote:
> User provided classes (by adding jars) should be unloaded when the
> session is closed. https://issues.apache.org/jira/browse/HIVE-3969 is
> abo
It looks like his job failed with an OutOfMemoryError in the mapper tasks:
Job failed as tasks failed. failedMaps:1 failedReduces:0
So what he needs to do is increase the mapper heap size request.
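For the mapper side specifically, the settings would look roughly like the following (a sketch only; these are the classic pre-YARN property names plus the YARN container request, and the exact names and values depend on the Hadoop version and cluster configuration):

```
-- increase the mapper task JVM heap (old-style MR1 property name)
set mapred.map.child.java.opts=-Xmx4096m;
-- on YARN clusters the container request must be at least as large as the heap
set mapreduce.map.memory.mb=5120;
```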
Yong
Date: Mon, 24 Mar 2014 16:16:50 -0400
Subject: Re: Joins Failing
From: divakarredd...@gmail.com
To: user@hive.apache.org
C
Your UDF object will only be initialized once per mapper or reducer.
When you say your UDF object is being initialized for each row, why do you
think so? Do you have logs that make you think that way?
If so, please provide more information so we can help you, such as your
example code, logs, etc.
Yong
Date
I hope this property will fix your issue.
set mapred.reduce.child.java.opts=-Xmx4096m;
On Mon, Mar 24, 2014 at 3:59 PM, Clay McDonald <
stuart.mcdon...@bateswhite.com> wrote:
> I believe I found my issue.
>
> 2014-03-24 15:49:38,775 FATAL [main] org.apache.hadoop.mapred.YarnChild:
> Error r
I believe I found my issue.
2014-03-24 15:49:38,775 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error
running child : java.lang.OutOfMemoryError: Java heap space
Clay
From: Clay McDonald [mailto:stuart.mcdon...@bateswhite.com]
Sent: Monday, March 24, 2014 2:07 PM
To: 'user@hive.apache.
My join query is failing. Any suggestions on how I should troubleshoot this?
92651139753_0036_1_conf.xml to
hdfs://dc-bigdata5.bateswhite.com:8020/mr-history/tmp/root/job_1392651139753_0036_conf.xml_tmp
2014-03-24 13:48:58,244 INFO [Thread-65]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryE
Hi Rahman,
On 24 March 2014 16:45, Abdelrahman Shettia wrote:
> Hi Oliver,
>
> Can you perform a simple test of hadoop fs -cat
> hdfs:///logs/2014/03-24/actual_log_file_name.seq by the same user? Also
> what are the configurations setting for the following?
>
Yes, I can access that file with the
Hi all,
I'm trying to implement a UDF which makes use of some data structures,
like a binary tree. However, it seems that Hive instantiates a new UDF
object for each row in the table. Then the data structures would also be
initialized again and again, once for each row.
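For what it's worth, Hive constructs one UDF instance per map or reduce task, not per row, so state initialized lazily in evaluate() is reused across all rows that task processes. A plain-Java sketch of that pattern (hypothetical class and field names, no Hive dependencies, just to illustrate the lifecycle):

```java
import java.util.TreeMap;

public class TreeHolder {
    // Stand-in for the UDF's expensive data structure: built lazily on the
    // first evaluate() call, then reused for every later call on this instance.
    private TreeMap<String, Integer> tree;
    private int initCount = 0; // counts how many times the structure is built

    public int evaluate(String key) {
        if (tree == null) {
            // one-time initialization, analogous to building the binary tree
            tree = new TreeMap<>();
            tree.put("a", 1);
            tree.put("b", 2);
            initCount++;
        }
        return tree.getOrDefault(key, -1); // -1 when the key is absent
    }

    public int getInitCount() {
        return initCount;
    }
}
```

If the structure really is rebuilt per row, the logs (or a counter like the one above) would show the initialization happening more than once per task.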
Hi Rishabh,
Not sure if this is entirely right. I did a quick test, and I can create an
external table without write permission.
hive> CREATE EXTERNAL TABLE mytest (name string) location '/test/mytest';
OK
Time taken: 1.675 seconds
hive> dfs -ls /test ;
Found 1 items
drwxr-xr-x - hive hdfs
Hi all,
I was trying out some Hive optimization features and encountered a
problem: I cannot use bucket map join in Hive 0.12. After all the settings I
tried below, only one hashtable file is generated and the join turns out to
be just a plain map join.
I have two tables both in rcfil
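For reference, the settings usually involved in getting a bucket map join in Hive 0.12 are roughly these (a sketch only; whether the optimization actually kicks in also depends on both tables being bucketed on the join key with compatible bucket counts):

```
set hive.optimize.bucketmapjoin=true;              -- enable bucket map join
set hive.enforce.bucketing=true;                   -- honor CLUSTERED BY when loading
set hive.optimize.bucketmapjoin.sortedmerge=true;  -- only for the sort-merge variant
```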
Hi Oliver,
In order to create external tables, you must have write access to the
folder.
Also, to create an external table, just give the location of the folder in
which your file is located.
For example if your file "actual_log_file_name.seq" is stored in "03-24" folder
then try one a
Hi Oliver,
Can you perform a simple test of hadoop fs -cat
hdfs:///logs/2014/03-24/actual_log_file_name.seq by the same user? Also what
are the configurations setting for the following?
hive.metastore.execute.setugi
hive.metastore.warehouse.dir
hive.metastore.uris
Thanks,
Rahman
On Mar 24
Hi,
I have a bunch of data already in place in a directory on HDFS containing
many different logs of different types, so I'm attempting to load these
externally like so:
CREATE EXTERNAL TABLE mylogs (line STRING) STORED AS SEQUENCEFILE LOCATION
'hdfs:///logs/2014/03-24/actual_log_file_name.seq';