Hi,
I am trying to kick off a mapreduce job via WebHCat. The following is the
hadoop jar command.
hadoop jar
/home/hadoop/camus-non-avro-consumer-1.0-SNAPSHOT-jar-with-dependencies.jar
com.linkedin.camus.etl.kafka.CamusJob -P
/home/hadoop/camus_non_avro.properties
As you can see there is an
:
/templeton-hadoop/jobs/job_201312212124_0161/callback
Any ideas?
On Mon, Dec 30, 2013 at 12:13 PM, Jonathan Hodges hodg...@gmail.com wrote:
Hi,
I am trying to kick off a mapreduce job via WebHCat. The following is the
hadoop jar command.
hadoop jar
/home/hadoop/camus-non-avro-consumer-1.0
you tried adding
-d arg=-P
before
-d arg=/tmp/properites
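In case it helps, a full submission along those lines might look like the sketch below. The host, port, and HDFS paths are placeholders, not taken from your setup; the key point is that `-P` and the properties file path go in two separate `arg` parameters.

```shell
# Hedged sketch of a WebHCat (Templeton) mapreduce/jar submission.
# Each program argument gets its own 'arg' parameter, so the '-P'
# flag and the properties path are passed separately.
# Host/port and file paths are illustrative only.
curl -s -d user.name=hadoop \
     -d jar=/user/hadoop/camus-non-avro-consumer-1.0-SNAPSHOT-jar-with-dependencies.jar \
     -d class=com.linkedin.camus.etl.kafka.CamusJob \
     -d arg=-P \
     -d arg=/user/hadoop/camus_non_avro.properties \
     'http://localhost:50111/templeton/v1/mapreduce/jar'
```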
On Mon, Dec 30, 2013 at 11:14 AM, Jonathan Hodges hodg...@gmail.com wrote:
Sorry, I accidentally hit send before adding the lines from webhcat.log
DEBUG | 30 Dec 2013 19:08:01,042 | org.apache.hcatalog.templeton.Server |
queued
DEBUG level log4j output in hive 0.12).
It should print the command that TempletonControllerJob's launcher task
(LaunchMapper) is trying to launch
On Mon, Dec 30, 2013 at 12:55 PM, Jonathan Hodges hodg...@gmail.com wrote:
I didn't try that before, but I just did.
curl -s -d user.name=hadoop
:
It looks like in 0.11 it writes to stderr (limited logging anyway).
Perhaps you can try adding the 'statusdir' param to your REST call and see
if anything useful is written to that directory.
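For what it's worth, a hedged sketch of the same call with 'statusdir' added (host and paths are placeholders) — WebHCat should then write the launcher's stdout/stderr/exit code under that directory:

```shell
# Hedged sketch: add 'statusdir' so WebHCat writes the launcher task's
# stdout/stderr/exitcode to an HDFS directory you can inspect afterwards.
# Host, port, and paths are illustrative only.
curl -s -d user.name=hadoop \
     -d jar=/user/hadoop/camus-non-avro-consumer-1.0-SNAPSHOT-jar-with-dependencies.jar \
     -d class=com.linkedin.camus.etl.kafka.CamusJob \
     -d arg=-P -d arg=/user/hadoop/camus_non_avro.properties \
     -d statusdir=/tmp/webhcat-status \
     'http://localhost:50111/templeton/v1/mapreduce/jar'

# Then look at what the launcher wrote:
hadoop fs -cat /tmp/webhcat-status/stderr
```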
On Mon, Dec 30, 2013 at 2:22 PM, Jonathan Hodges hodg...@gmail.com wrote:
I don't see
Would it be advisable to try 0.12? Maybe this issue is resolved there.
On Wed, Dec 4, 2013 at 6:17 PM, Jonathan Hodges hodg...@gmail.com wrote:
Hi Thejas,
Thanks for your reply. The 'templeton.storage.root' property is set to
the default value, '/templeton-hadoop'. Sorry I wasn't clear above
/templeton-hadoop/jobs/job_201311281741_0020/user
Any other ideas? Could using S3 instead of HDFS for the Pig and Hive
archives be a problem? Based on the logs it seems to find the archives
just fine and fails somewhere in the Hive execution.
-Jonathan
On Tue, Dec 3, 2013 at 6:23 PM, Thejas
,
Jonathan
}} or {1:{baz:42}} and I don't
know how to destructure that any further. LATERAL VIEW doesn't seem to
work for this and my Google-fu is otherwise failing me.
Thanks,
--Jonathan Bryant
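If it's any use: explode() does accept a map (yielding key and value columns), and you can chain two LATERAL VIEWs to peel a map-of-maps apart one level at a time. A hedged sketch against a made-up table — the table and column names below are placeholders, not from your schema:

```shell
# Hedged sketch: chaining LATERAL VIEW explode() to destructure a
# map<int, map<string, int>> column. Table and column names are made up.
hive -e "
SELECT outer_key, inner_key, inner_val
FROM some_table
LATERAL VIEW explode(nested_col) o AS outer_key, inner_map
LATERAL VIEW explode(inner_map) i AS inner_key, inner_val;
"
```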
this to hadoop-common?
Thanks in advance,
Matt
On Wed, May 9, 2012 at 7:11 PM, Jonathan Seidman
jonathan.seid...@gmail.com wrote:
Varun – So yes, Hive stores the full URI to the NameNode in the metadata
for every table and partition. From my experience you're best off modifying
the metadata to point to the new NN, as opposed to trying to manipulate
DNS. Fortunately, this is fairly straightforward since there's mainly one
you want to make sure that none of the Hadoop processes are getting started.
Jonathan
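For the record, two common ways to do that metadata update, as a hedged sketch — the hostnames, ports, and metastore database name below are placeholders, and you should stop Hive services and back up the metastore DB before either:

```shell
# Hedged sketch: repointing metastore locations at a new NameNode.

# 1) Hive's metatool (available in newer Hive releases):
hive --service metatool -updateLocation \
    hdfs://new-nn:8020 hdfs://old-nn:8020

# 2) Directly against a MySQL-backed metastore (column names per the
#    standard metastore schema; test on a copy first):
mysql metastore -e "
UPDATE DBS SET DB_LOCATION_URI =
  REPLACE(DB_LOCATION_URI, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');
UPDATE SDS SET LOCATION =
  REPLACE(LOCATION, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');
"
```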
On Thu, Mar 1, 2012 at 10:20 AM, Omer, Farah fo...@microstrategy.com wrote:
Hello,
Could anybody tell me how I can load data into a Hive table when the flat
file exists on another server and not locally?
hand if I just do a select on the map data type I get the
values from the map.
SELECT ipaddress, user_agent, querystring['cid'], querystring['pid'],
querystring['PlacementId'], returncode, size FROM beacon_processed limit
200;
Is this a bug, or am I doing something wrong?
Jonathan Meed
University
Hi,
I am trying to parse an apache2 log using
the 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'. I am able to load
the tables using the script below, but it's showing each of the 3 columns as
null for every entry.
CREATE TABLE apachelog4 (
ip STRING,
time STRING,
beacon STRING)
ROW
So the regex has to match every piece of the line completely. I wrote the
regex so that it just takes a few helpful things out of the log line.
Thanks for your help
Jonathan
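Since the whole line has to be consumed, a hedged sketch of what a working version might look like — the regex below is illustrative only (it assumes a line shaped like "IP [time] rest", not your actual log format), so adjust the groups to your lines:

```shell
# Hedged sketch: a RegexSerDe table whose pattern matches the WHOLE line,
# with one capture group per column. A pattern that doesn't consume the
# full line makes every column come back NULL.
# The regex is illustrative, not tuned to the poster's log format.
hive -e '
CREATE TABLE apachelog4 (
  ip     STRING,
  time   STRING,
  beacon STRING)
ROW FORMAT SERDE "org.apache.hadoop.hive.contrib.serde2.RegexSerDe"
WITH SERDEPROPERTIES (
  "input.regex" = "(\\S+) \\[([^\\]]+)\\] (.*)",
  "output.format.string" = "%1$s %2$s %3$s"
);'
```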
On Sat, Oct 1, 2011 at 12:14 AM, Vijay tec...@gmail.com wrote:
The log lines are in some kind of JSON format though
this works with AWS/EMR, but that's the first thing I'd check.
Jonathan
On Mon, Sep 26, 2011 at 5:16 PM, Bradford Stephens
bradfordsteph...@gmail.com wrote:
Hey amigos,
I'm doing an EMR load from HDFS to S3. My example looks correct,
but I'm getting an odd error. Since all the EMR data
Hi,
I'm trying to do what I think should be a simple task, but I'm running
into some issues with carrying through column names. All I want to do
is essentially copy an existing table but change the serialization
format (if you're curious, this is to help integrate with some existing
map
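One hedged way to do that kind of copy — CTAS carries the source column names over automatically, so nothing has to be re-declared; the table names and target format below are placeholders:

```shell
# Hedged sketch: copy a table while changing the storage format.
# CREATE TABLE ... AS SELECT preserves the source column names.
# Table names and the STORED AS format are placeholders.
hive -e '
CREATE TABLE copy_with_new_format
STORED AS SEQUENCEFILE
AS SELECT * FROM existing_table;'
```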
Hey all,
I have a quick question about using a select statement as an input to a user
defined function.
I have a table TABLE, with columns (segment_ID STRING, user_IDs
map<STRING,STRING>)
I have a UDF myfunction(map<STRING,STRING> A).
If I tried to do this statement: myfunction(SELECT
Hey all,
Just wondering if there is native support for input arguments on Hive
scripts.
e.g. $ bin/hive -f script.q arg1 arg2
Any documentation I could reference to look into this further?
Cheers,
Jon
Thanks everybody.
More reader friendly version of that SVN doc:
http://archive.cloudera.com/cdh/3/hive/language_manual/var_substitution.html
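A hedged sketch of that variable-substitution mechanism in practice — values passed with -hiveconf are referenced in the script as ${hiveconf:name}; the script contents and names below are illustrative:

```shell
# Hedged sketch: passing arguments to a Hive script via -hiveconf.
# Table name and row count are placeholders.
cat > script.q <<'EOF'
SELECT * FROM ${hiveconf:tbl} LIMIT ${hiveconf:n};
EOF

hive -hiveconf tbl=beacon_processed -hiveconf n=10 -f script.q
```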
On Wed, May 4, 2011 at 3:15 PM, Time Less timelessn...@gmail.com wrote:
Just wondering if there is native support for input arguments on Hive
scripts.
I apologize in advance if this is a basic question... I haven't found a
straight answer to the question, though, and am new to Hive so forgive the
ignorance.
I've done some searching around, and it looks like HUE may be one solution,
but pending looking into that, I was wondering if anyone has