2008-09-25 17:12:18,250 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200809180916_0027_r_07_2: Got 2 new map-outputs & number of known map outputs is 21
2008-09-25 17:12:18,251 WARN org.apache.hadoop.mapred.ReduceTask: attempt_200809180916_0027_r_07_2 Merge of the inmemory files
Hi,
I'm writing an application which computes using the entire data from a file. For that purpose I don't want to split my file; the entire file should go to a single map task.
I've been able to override isSplitable() to do this, and the file is not getting split now.
Then I had to store the input values to an
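For reference, a minimal sketch of the isSplitable() override being described, against the 0.18-era org.apache.hadoop.mapred API (the class name is hypothetical):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

public class NonSplittableTextInputFormat extends TextInputFormat {
  // Returning false hands each input file whole to a single map task.
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;
  }
}

The job would then register it via conf.setInputFormat(NonSplittableTextInputFormat.class).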
Edward,
Can you describe Hama in more detail, with respect to Hadoop?
I've read through the Incubator proposal and your blog -- it's a great approach.
Are there any benchmarks available? E.g., size of data sets used,
kinds of operations performed, etc.
Will this project be able to make use of
Could you please attach your configuration files and logs?
On Fri, Sep 26, 2008 at 6:12 AM, Ski Gh3 [EMAIL PROTECTED] wrote:
Hi all,
I'm trying to set up a small cluster with 3 machines. I'd like to have one machine serve as the namenode and the jobtracker, while all 3 serve as the
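A hedged sketch of how that layout is usually expressed, with hypothetical hostnames: point both daemons at the master in conf/hadoop-site.xml, and list every machine in conf/slaves.

<!-- conf/hadoop-site.xml, same on every machine; "master1" is a hypothetical hostname -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master1:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>master1:9001</value>
</property>

conf/slaves on the master would then list all three hostnames, one per line, so each machine runs a datanode and a tasktracker.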
maybe you can use
bin/hadoop jar -libjars ${your-depends-jars} your.mapred.jar args
see details:
http://hadoop.apache.org/core/docs/r0.18.1/api/org/apache/hadoop/mapred/JobShell.html
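As a concrete (entirely hypothetical) invocation, the dependent jars go in a comma-separated list before the job jar:

bin/hadoop jar -libjars dep-one.jar,dep-two.jar my-job.jar com.example.MyJob input output

where all of the jar and class names above are placeholders.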
On Thu, Sep 25, 2008 at 12:26 PM, David Hall [EMAIL PROTECTED] wrote:
On Sun, Sep 21, 2008 at 9:41 PM, David
Hi,
On Fri, Sep 26, 2008 at 10:50 AM, Samuel Guo [EMAIL PROTECTED] wrote:
maybe you can use
bin/hadoop jar -libjars ${your-depends-jars} your.mapred.jar args
see details:
http://hadoop.apache.org/core/docs/r0.18.1/api/org/apache/hadoop/mapred/JobShell.html
Indeed, I was having the same
What does jstack show for this?
Probably better suited for jira discussion.
Raghu.
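For anyone following along, a thread dump can be grabbed with the stock JDK tool (the pid here is hypothetical):

jstack 12345 > stack-dump.txt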
Goel, Ankur wrote:
Hi Folks,
We have developed a simple log writer in Java that is plugged into Apache's custom log and writes log entries directly to our Hadoop cluster
(50 machines, quad-core, each with 16
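A minimal sketch of what such a writer might look like (class and path names hypothetical), assuming it holds one open HDFS stream per output file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsLogWriter {
  private final FSDataOutputStream out;

  public HdfsLogWriter(String pathOnDfs) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    out = fs.create(new Path(pathOnDfs)); // one open stream per log file
  }

  // Apache hands over one formatted log line at a time.
  public synchronized void write(String logEntry) throws Exception {
    out.write((logEntry + "\n").getBytes("UTF-8"));
  }

  public synchronized void close() throws Exception {
    out.close();
  }
}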
Hi,
I'm trying to figure out which log files are used by the job
tracker's web interface to display the following information:
Job Name: my job
Job File: hdfs://localhost:9000/tmp/hadoop-scohen/mapred/system/job_200809260816_0001/job.xml
Status: Succeeded
Started at: Fri Sep 26 08:18:04
Shirley Cohen wrote:
Hi,
I'm trying to figure out which log files are used by the job tracker's
web interface to display the following information:
Job Name: my job
Job File:
hdfs://localhost:9000/tmp/hadoop-scohen/mapred/system/job_200809260816_0001/job.xml
Status: Succeeded
Started at:
I would imagine something like:
FSDataInputStream inFileStream = dfsFileSystem.open(dfsFilePath);
Don't forget to close it afterwards.
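Fleshed out into a self-contained, hedged example (the file path is hypothetical):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsReadExample {
  public static void main(String[] args) throws Exception {
    FileSystem dfsFileSystem = FileSystem.get(new Configuration());
    Path dfsFilePath = new Path("/tmp/example.txt"); // hypothetical file
    FSDataInputStream inFileStream = dfsFileSystem.open(dfsFilePath);
    try {
      BufferedReader reader =
          new BufferedReader(new InputStreamReader(inFileStream));
      System.out.println(reader.readLine()); // read the first line
    } finally {
      inFileStream.close(); // the close the reply warns about
    }
  }
}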
Thanks,
Htin
-----Original Message-----
From: Amit_Gupta [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 26, 2008 5:47 AM
To: core-user@hadoop.apache.org
Subject:
On Fri, Sep 26, 2008 at 7:50 AM, Samuel Guo [EMAIL PROTECTED] wrote:
maybe you can use
bin/hadoop jar -libjars ${your-depends-jars} your.mapred.jar args
see details:
http://hadoop.apache.org/core/docs/r0.18.1/api/org/apache/hadoop/mapred/JobShell.html
Most of our classes are not packaged in jars. I
I've created a JIRA issue describing my problems running under IsolationRunner:
https://issues.apache.org/jira/browse/HADOOP-4041
If anyone is using IsolationRunner successfully to re-run failed tasks in a single JVM,
can you please, pretty please, describe how you do that?
Thank you,
-Yuri
On Friday 08
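For context, the recipe the map/reduce tutorial of this era gives (placeholders in angle brackets) is roughly: set keep.failed.task.files=true in the job, then on the node where the task failed, cd into that task's working directory under the tasktracker's local dir and run

bin/hadoop org.apache.hadoop.mapred.IsolationRunner ../job.xml

which re-runs just that task in a single JVM where a debugger can be attached.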
Hi,
I encountered the following FileNotFoundException, resulting from a "too many
open files" error, when I tried to run a job. The job had been run
several times before without problem. I am confused by the exception,
because my code closes all the files, and even if it didn't, the job
would only have
Hey all.
We've been running into a very annoying problem pretty frequently
lately. We'll be running some job, for instance a distcp, and it'll
be moving along quite nicely until, all of a sudden, it sort of
freezes up. It takes a while, and then we'll get an error like this one:
Does your failed map task open a lot of files for writing? Could you please check
the log of the datanode running on the machine where the map tasks failed? Do
you see any error message containing "exceeds the limit of concurrent xcievers"?
Hairong
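The cap Hairong mentions is a per-datanode setting; a hedged sketch (the value is illustrative, and note the property really is spelled "xcievers"):

<!-- conf/hadoop-site.xml on each datanode -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2047</value>
</property>

Raising the OS file-descriptor limit for the user running the datanode (e.g. ulimit -n 16384) is the usual companion fix for "too many open files".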
From: Bryan
On 26-Sep-08, at 3:09 PM, Eric Zhang wrote:
Hi,
I encountered the following FileNotFoundException, resulting from a
"too many open files" error, when I tried to run a job. The job had
been run several times before without problem. I am confused by the
exception, because my code closes all the
Did you configure the hostnames correctly on all nodes?
2008/9/26 Jeremy Chow [EMAIL PROTECTED]
Hi list,
I've created my Hadoop cluster following the tutorial on
Hey,
I've fixed it. :) The server had a firewall turned on.
Regards,
Jeremy
Well, I did find some more errors in the datanode log. Here's a sampling:
2008-09-26 10:43:57,287 ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(10.100.11.115:50010, storageID=DS-1784982905-10.100.11.115-50010-1221785192226, infoPort=50075, ipcPort=50020):DataXceiver: