Hi All,
I am running Hadoop 0.19.1 on HP-UX and have now run into a problem. The JobTracker
always says:
Tip is null
Serious problem. While updating status, cannot find taskid
Below is jobtrack log:
2012-02-24 19:20:41,894 INFO org.apache.hadoop.mapred.TaskInProgress: oldState
is RUNNING,newState is RUN
Thanks to everyone for their help on this.
We are currently using Pig, but I don't think this is something we are currently
using; I will pass this recommendation on!
Thanks again, Dan.
-Original Message-
From: Srinivas Surasani [mailto:hivehadooplearn...@gmail.com]
Sent: 2
Hi Yonggang,
Unfortunately you're using a very old version, so it's hard to tell
what went wrong with it.
Could you please try upgrading to the most recent stable release
(1.0.x)? We've not seen this issue come up in the last couple of
years, so it may have been a bug that was fixed quite some time ago.
O
Can someone please suggest whether parameters like dfs.block.size and
mapred.tasktracker.map.tasks.maximum are cluster-wide settings only, or can
these be set per client job configuration?
On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia wrote:
> If I want to change the block size then can I use Configurati
dfs.block.size can be set per job.
mapred.tasktracker.map.tasks.maximum is per tasktracker.
-Joey
On Mon, Feb 27, 2012 at 10:19 AM, Mohit Anchlia wrote:
> Can someone please suggest if parameters like dfs.block.size,
> mapred.tasktracker.map.tasks.maximum are only cluster wide settings or can
>
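To illustrate Joey's point, here is a minimal sketch of setting dfs.block.size for a single job with the old mapred JobConf API used elsewhere in this digest (the class name is hypothetical):

import org.apache.hadoop.mapred.JobConf;

public class BlockSizeExample {
  public static JobConf configure() {
    JobConf conf = new JobConf(BlockSizeExample.class);
    // Files written by this job get 128 MB blocks; the cluster-wide
    // default in hdfs-site.xml is left untouched.
    conf.setLong("dfs.block.size", 128L * 1024 * 1024);
    // mapred.tasktracker.map.tasks.maximum, by contrast, is read by each
    // TaskTracker at startup, so setting it here would have no effect.
    return conf;
  }
}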
Hello,
I am running into the following problem building hadoop-1.0.1:
-
[exec] make[1]: Entering directory
`/home/kumar/hadoop-1.0.1/src/contrib/fuse-dfs'
[exec] make[1]: Nothing to be done for `all-am'.
[exec] make[1]: Leaving directory
`/home/kumar/hadoop-
Hello All,
I am a beginning hadoop user. I am trying to install hadoop as part of a
single-node setup. I read in the documentation that the supported platforms
are GNU/Linux and Win32. I have a Mac running OS X and wish to run the single-node
setup. I am guessing I need to use some virtualization solution
Hi
I have detailed instructions online here:
http://hadoopway.blogspot.com/
It works on Mac and all the software is open source.
Serge
On 2/26/12 8:28 PM, "Sriram Ganesan" wrote:
>Hello All,
>
>I am a beginning hadoop user. I am trying to install hadoop as part of a
>single-node setup. I read
You don't need any virtualization. Mac OS X is Unix-based and runs Hadoop as is.
Good to know about the VirtualBox instructions.
Here are a couple of other links that might help on single node:
Single Node Setup
http://hadoop.apache.org/common/docs/stable/single_node_setup.html
Running_Hadoop_On_OS_X_10.5_64-bit_(Single-Node_Cluster)
http://wiki.apache.org/hadoop/Running_Had
You could also use VMware Fusion on a Mac. I do this when I'm creating a
distributed Hadoop cluster with a few data nodes, but for just a single
node you can install Hadoop on Mac OS X directly, no need for virtualization.
Peter J
On 2/26/12 8:28 PM, "Sriram Ganesan" wrote:
>Hello All,
>
>I am a beginn
Hello,
I found a work around for this problem
-- The libhdfs files were elsewhere in the build, in
$HADOOP_HOME/build/c++/Linux-amd64-64/lib/, and not in the
$HADOOP_HOME/build/libhdfs directory that the Makefile in fuse-dfs was pointing to.
Regards,
Kumar
Kumar Ravi
Seconded, I've set up and run Hadoop CDH3 on a recent 10.7(.2) Mac. Works like a
charm.
Sent from my phone, please excuse my brevity.
Keith Wiley, kwi...@keithwiley.com, http://keithwiley.com
Serge Blazhievsky wrote:
Hi
I have detailed instructions onli
I submitted a MapReduce job that had 9 tasks killed out of 139, but I
don't see any errors on the admin page. The entire job, however, has
SUCCEEDED. How can I track down the reason?
Also, how do I determine if this is something to worry about?
You probably have speculative execution enabled. It's normal for the JobTracker
to launch multiple attempts of a task and take the result of the one that
completes first.
Regards,
Serge
On 2/27/12 11:55 AM, "Mohit Anchlia" wrote:
>I submitted a map reduce job that had 9 tasks killed out of 139. But I
>don't s
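If those killed attempts are indeed speculative duplicates, they are harmless. As a minimal sketch, speculative execution can be turned off for one job with the 0.20/1.0 mapred API (the class name is hypothetical):

import org.apache.hadoop.mapred.JobConf;

public class NoSpeculation {
  public static void disable(JobConf conf) {
    // Without speculative execution the framework launches only one
    // attempt per task, so no duplicate attempts show up as "killed".
    conf.setMapSpeculativeExecution(false);
    conf.setReduceSpeculativeExecution(false);
  }
}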
On 2/27/2012 1:55 PM, Mohit Anchlia wrote:
I submitted a map reduce job that had 9 tasks killed out of 139. But I
don't see any errors in the admin page. The entire job however has
SUCCEEDED. How can I track down the reason?
Also, how do I determine if this is something to worry about?
Hi,
You
How do I verify the block size of a given file? Is there a command?
On Mon, Feb 27, 2012 at 7:59 AM, Joey Echeverria wrote:
> dfs.block.size can be set per job.
>
> mapred.tasktracker.map.tasks.maximum is per tasktracker.
>
> -Joey
>
> On Mon, Feb 27, 2012 at 10:19 AM, Mohit Anchlia
> wrote:
>
"hadoop fsck -blocks" is something that I think of quickly.
http://hadoop.apache.org/common/docs/current/commands_manual.html#fsck has more
details
Kai
Am 28.02.2012 um 02:30 schrieb Mohit Anchlia:
> How do I verify the block size of a given file? Is there a command?
>
> On Mon, Feb 27, 2012
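Besides fsck, here is a minimal sketch of reading a file's recorded block size through the FileSystem API (assuming the 1.0-era API; the path argument is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockSize {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Block size stored for this particular file, which can differ from
    // the cluster-wide dfs.block.size default.
    FileStatus status = fs.getFileStatus(new Path(args[0]));
    System.out.println(status.getPath() + ": " + status.getBlockSize() + " bytes per block");
  }
}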
What's the best way to write records to a different file? I am doing XML
processing, and during processing I might come across invalid XML.
Currently I have it in a try/catch block and write to log4j, but I think
it would be better to just write it to an output file that just contains
error
Try setting the number of reducers to 0.
On 2/27/12 2:34 PM, "Mohit Anchlia" wrote:
>Is there a way to completely bypass the reduce step? Pig is able to do it but
>it doesn't work for me in my MapReduce program even though I've commented
>setReducerClass
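A minimal sketch of the map-only setup being suggested, assuming the old mapred API (the class name is hypothetical):

import org.apache.hadoop.mapred.JobConf;

public class MapOnlyJob {
  public static void configure(JobConf conf) {
    // With zero reducers the shuffle/sort phase is skipped entirely and
    // map output goes straight to the job's output directory.
    conf.setNumReduceTasks(0);
    // Any setReducerClass(...) call is simply ignored in this case.
  }
}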
Hi Harsh,
I have tried to install Hadoop 1.0 on HP-UX but failed to run it, because the
shell syntax of HP-UX differs slightly from that of Linux.
Best Regards
Yonggang Li
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Monday, February 27, 2012 8:01 PM
To: common-user@h
On Tue, Feb 28, 2012 at 4:30 AM, Mohit Anchlia wrote:
> For some reason I am getting invocation exception and I don't see any more
> details other than this exception:
>
> My job is configured as:
>
>
> JobConf conf = new JobConf(FormMLProcessor.class);
>
> conf.addResource("hdfs-site.xml");
>
Does it matter if a reducer class is set even when the number of reducers is 0?
Is there a way to get a clearer reason?
On Mon, Feb 27, 2012 at 8:23 PM, Subir S wrote:
> On Tue, Feb 28, 2012 at 4:30 AM, Mohit Anchlia >wrote:
>
> > For some reason I am getting invocation exception and I don't see any
> more
Tom White's Definitive Guide is a great reference. Answers to
most of your questions can be found there.
Sent from my iPhone
On Feb 27, 2012, at 8:54 PM, Mohit Anchlia wrote:
> Does it matter if reducer is set even if the no of reducers is 0? Is there
> a way to get more clear reason?
>
On Mon, Feb 27, 2012 at 8:58 PM, Prashant Kommireddi wrote:
> Tom White's Definitive Guide book is a great reference. Answers to
> most of your questions could be found there.
>
> I've been through that book but haven't come across how to debug this
exception. Can you point me to the topic in tha
Mohit,
Use the MultipleOutputs API:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputs.html
to have a named output of bad records. There is an example of use
detailed on the link.
On Tue, Feb 28, 2012 at 3:48 AM, Mohit Anchlia wrote:
> What's the best w
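A minimal sketch along the lines of the javadoc example Harsh points to, assuming the old mapred API; the named output "bad" and the mapper class are hypothetical:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class XmlRecordMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, LongWritable, Text> {

  private MultipleOutputs mos;

  // Call this from the job driver before submitting the job.
  public static void declareBadOutput(JobConf conf) {
    MultipleOutputs.addNamedOutput(conf, "bad", TextOutputFormat.class,
        LongWritable.class, Text.class);
  }

  public void configure(JobConf conf) {
    mos = new MultipleOutputs(conf);
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<LongWritable, Text> output, Reporter reporter)
      throws IOException {
    try {
      // ... parse the XML record here ...
      output.collect(key, value);                         // normal output
    } catch (Exception badXml) {
      // Unparsable records go to the separate "bad" named output.
      mos.getCollector("bad", reporter).collect(key, value);
    }
  }

  public void close() throws IOException {
    mos.close();   // flush the extra output files
  }
}

(In the javadoc's multi-named variant, the "A" and "B" arguments are sub-names for an output declared with addMultiNamedOutput; as far as I recall they become part of the output file names rather than being complete file names themselves.)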
Thanks, that's helpful. In that example, what are "A" and "B" referring to? Are
those the output file names?
mos.getCollector("seq", "A", reporter).collect(key, new Text("Bye"));
mos.getCollector("seq", "B", reporter).collect(key, new Text("Chau"));
On Mon, Feb 27, 2012 at 9:53 PM, Harsh J wrote:
>
Hi all,
I am trying to use the Hadoop Eclipse plugin on my Windows machine to connect
to my remote Hadoop cluster. I am currently using PuTTY to log in to the
cluster, so SSH is enabled and my Windows machine is able to reach my
Hadoop cluster.
I am using hadoop 0.20.205, hadoop-eclipse plugin
When I use more than one reducer in Hadoop Streaming with my custom
separator rather than the default tab, it looks like the Hadoop shuffling
process is not happening as it should.
This is the reducer output when I am using '\t' to separate the key/value
pairs output from the mapper