hey Lac,
it's showing that you don't have the DBS table in your metastore (Derby or MySQL);
you may have to install Hive again, or build Hive again through Ant.
Check your metastore to see whether the DBS table exists.
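One quick way to check is a small JDBC probe (a sketch only; the URL, user, and
password are placeholders for whatever javax.jdo.option.ConnectionURL points at
in your hive-site.xml, with the MySQL connector jar on the classpath; a
Derby-backed metastore would use a Derby URL instead):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class CheckMetastore {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings -- match javax.jdo.option.* in hive-site.xml
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/metastore", "hiveuser", "hivepass");
        // Ask the driver whether a table named DBS exists in this database
        ResultSet rs = conn.getMetaData().getTables(null, null, "DBS", null);
        System.out.println(rs.next() ? "DBS table exists" : "DBS table is missing");
        rs.close();
        conn.close();
    }
}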
Thanks & regards
Vikas Srivastava
On Fri, Feb 10, 2012 at 8:33 AM, Lac Trung wrote:
> Tha
I'm trying to run the Terasort benchmarks and I'm getting the
following exception:
java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUt
Thanks for your reply!
I think I installed Hadoop correctly, because when I run the wordcount example I get
the correct output. But I didn't know how to install Hive, so I installed Hive
via https://cwiki.apache.org/confluence/display/Hive/GettingStarted, which includes
installing Hadoop 20.0 (maybe not install on si
I am having some trouble understanding how the whole thing works.
Compiling with ant works fine, and I am able to build a jar which is
afterwards deployed to the cluster. On the cluster I've set the
HADOOP_CLASSPATH variable to point just to the jar files in the lib folder
($HD_HOME/lib/*.jar), wh
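(A sketch of the hadoop-env.sh line in question, using the same $HD_HOME
variable; note the JVM's classpath wildcard is a bare *, which already expands
to every jar in the directory, so *.jar is not needed:)

export HADOOP_CLASSPATH="$HD_HOME/lib/*"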
Hi Lac,
Could you provide some more information on what kind of errors you are
encountering?
Did the Hadoop setup install correctly? If so, how did you try to install Hive?
Once you have a single node setup of Hadoop running, you can install Hive by
building from source:
https://cwiki.apache
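A minimal sketch of the usual checkout-and-build steps (standard Ant targets;
adjust to the branch matching your Hadoop version):

svn co http://svn.apache.org/repos/asf/hive/trunk hive
cd hive
ant package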
Hi everyone!
I'm trying to set up Hive on a single node, to configure and use it before I set it
up on multiple nodes.
First, I installed Hadoop on a single node (
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
Then, I set up Hive (https://cwiki.apache.org/confluence/display/
On one machine, where a pseudo-distributed cluster works properly, the Machine
column for the task tracker lists "/default-rack/[ip.address]" with actual
numbers. The links under the Task Logs column contain URLs by ip address (and
port 50060) as well. Everything works great.
On another machi
So... is waiting until 1.1.0 for splittable bzip2 support my best
choice?
-Leo
On Wed, Feb 8, 2012 at 1:57 PM, Leonardo Urbina wrote:
> Hi Bejoy,
>
> Thanks for your response. I know how to index LZO files; however, I am
> curious whether I can still use my custom InputFormats to proces
This post might be helpful for you:
https://groups.google.com/a/cloudera.org/group/cdh-user/browse_thread/thread/4165f39d8b0bbc56
On Thu, Feb 9, 2012 at 11:42 AM, Anil Gupta wrote:
> Hi,
> I have dealt with this kind of problem earlier.
> Check the logs of datanode as well as namenode.
>
> In
Hi,
I have dealt with this kind of problem earlier.
Check the logs of datanode as well as namenode.
In order to test the connectivity:
ssh into slave from master and ssh into master from the same slave. Leave the
ssh session open for as long as you can.
In my case when I did the above exper
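Concretely, something like this from each side (hostnames are placeholders for
your actual master and slave):

master$ ssh slave
slave$ ssh master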
Mark,
Did any other client delete the file while the write was in progress?
Could you please grep the namenode log for this file:
/user/mark/output33/_temporary/_attempt_201202090811_0005_m_000247_0/part-00247
to see if there were any delete requests?
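For example (the log location is an assumption; adjust to your installation's
$HADOOP_LOG_DIR):

grep part-00247 $HADOOP_LOG_DIR/hadoop-*-namenode-*.log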
From: Mark question [markq2...@gmail
Hi guys,
Even though there is enough space on HDFS as shown by -report ... I get the
following 2 errors, the first shown in the log of a datanode and the second in
the namenode log:
1) 2012-02-09 10:18:37,519 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.addToInvalidates: blk_844811798682217395
All,
This is a follow-up to my last post.
Turns out there was yet another hadoop cluster available running Ubuntu
10.10 on identical hardware to what I was using in my last post. The
difference is between the two versions of Ubuntu: 10.10 vs 11.10. All else
is equal.
Things work on 10.10. Things
Maybe check your iptables first. For Hadoop on multiple machines, do shut
down iptables, as it can block the connections between the nodes.
# /etc/init.d/iptables stop
2012/2/9 alo alt
> Please use the latest JDK 6.
>
> best,
> Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On
Hi Vikas,
On 02/09/2012 11:53 AM, hadoop hive wrote:
hey luca,
you can use
conf.set("*mapred.textoutputformat.separator*", " ");
hope it works fine
regards
Vikas Srivastava
I'm using pipes for this task. I can't specify the configuration
property via Java code. In addition, I don't wan
hey luca,
you can use
conf.set("*mapred.textoutputformat.separator*", " ");
hope it works fine
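In a full driver that would look roughly like this (a sketch with the old
mapred API; the class name and paths are made up, and the identity map/reduce
defaults are kept):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SeparatorDemo {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SeparatorDemo.class);
        conf.setJobName("separator-demo");
        // Key/value separator used by TextOutputFormat (the default is a tab)
        conf.set("mapred.textoutputformat.separator", " ");
        // The identity mapper/reducer defaults pass TextInputFormat's
        // (LongWritable offset, Text line) records through unchanged
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path("/user/demo/in"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/demo/out"));
        JobClient.runJob(conf);
    }
}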
regards
Vikas Srivastava
On Thu, Feb 9, 2012 at 3:57 PM, Luca Pireddu wrote:
> Hello list,
>
> I'm trying to specify from the command line an empty string as the
> key-value separator for TextOutpu
Please use the latest JDK 6.
best,
Alex
--
Alexander Lorenz
http://mapredit.blogspot.com
On Feb 9, 2012, at 11:11 AM, hadoop hive wrote:
> Did you check the ssh to localhost? It should be passwordless ssh
> to localhost.
>
> Your public key goes into authorized_keys.
>
> On Thu, Feb 9, 2
Hello list,
I'm trying to specify from the command line an empty string as the
key-value separator for TextOutputFormat.
I'm specifying a blank as the value for the
mapred.textoutputformat.separator property:
-D mapred.textoutputformat.separator=""
And, when I look at the job's configura
Did you check the ssh to localhost? It should be passwordless ssh
to localhost.
Your public key goes into authorized_keys.
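The standard single-node setup looks like this (a sketch; it assumes an RSA key
and an existing ~/.ssh directory):

ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should now log in without asking for a password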
On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady <
robin.mueller-b...@oracle.com> wrote:
> Dear Guruprasad,
>
> it would be very helpful to provide details from your