After how much time does the JobTracker reschedule tasks on another node when a TaskTracker is lost, and which parameter controls this interval?
Thanks & Regards,
Mohmmadanis Moulavi
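On the parameter: in Hadoop 0.20/1.x the lost-TaskTracker timeout is typically governed by `mapred.tasktracker.expiry.interval` (milliseconds; the default is 600000, i.e. 10 minutes) — if a TaskTracker sends no heartbeat for that long, the JobTracker declares it lost and reschedules its tasks elsewhere. A hedged mapred-site.xml sketch (verify the property name and default against your version):

```
<!-- mapred-site.xml: time in ms without a heartbeat before the
     JobTracker marks a TaskTracker as lost (default 600000 = 10 min) -->
<property>
  <name>mapred.tasktracker.expiry.interval</name>
  <value>600000</value>
</property>
```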
Ramon,
If you want to submit with an attached priority, use the APIs [0] and [1] to
set the appropriate level before you submit from your JobClient/Job instances.
[0 - New API] -
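For reference (the links above are elided in this archive): in 0.20/1.x the same priority can usually be expressed as the job configuration property `mapred.job.priority` — the key that `JobConf.setJobPriority` writes — with values VERY_HIGH, HIGH, NORMAL, LOW, or VERY_LOW. A hedged configuration sketch:

```
<!-- job configuration: scheduler priority for this job;
     must be set before submission -->
<property>
  <name>mapred.job.priority</name>
  <value>HIGH</value>
</property>
```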
Hi,
I have a Hadoop app in which each mapper generates a floating-point result,
which I output along with a single LongWritable key set to zero.
My reducer receives the set of results, from which I need to select the
minimum. I output the minimum as the result, again with a zero-valued key
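The reduce step described above boils down to taking a minimum over the mappers' outputs for the single key. A minimal plain-Java sketch of that logic (Hadoop's Reducer and Writable types omitted; the class and method names here are illustrative, not from the original post):

```java
import java.util.Arrays;
import java.util.List;

public class MinReduceSketch {
    // Mirrors the reducer body: iterate the values that arrive for the
    // single zero-valued key and keep the smallest one.
    static double reduceMin(List<Double> values) {
        double min = Double.POSITIVE_INFINITY;
        for (double v : values) {
            if (v < min) {
                min = v;
            }
        }
        return min;
    }

    public static void main(String[] args) {
        List<Double> mapperOutputs = Arrays.asList(3.5, -1.25, 0.75);
        System.out.println(reduceMin(mapperOutputs)); // prints -1.25
    }
}
```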
Hi,
I have a big file consisting of XML data. The XML is not represented as a
single line in the file. If we stream this file to a Hadoop directory using
the ./hadoop dfs -put command, how does the distribution happen?
Basically, in my MapReduce program I am expecting a complete XML document as my
input. I have a
hello,
Please help me on this.
Hey, that book doesn't include material about Hadoop MR2. It would be worth
looking into some of Arun C. Murthy's presentations.
On Tue, Nov 22, 2011 at 6:53 AM, hari708 hari...@gmail.com wrote:
hello,
Please help me on this.
Hi,
I have a big file consisting of XML data. The XML is not
I have a scenario where I am receiving large files from a messaging
adapter (think reliable SFTP),
and I have to store them in HDFS.
What file system interface would you recommend? How is FUSE
for this task?
Thank you,
Edmon
Patai,
Did you take a look at Ambari?
http://incubator.apache.org/projects/ambari.html
Might want to get on their dev mailing list to find out more and see
if you want to join hands.
thanks
mahadev
On Mon, Nov 21, 2011 at 5:56 PM, Patai Sangbutsarakum
silvianhad...@gmail.com wrote:
Besides
Also, I am surprised at how you are writing a MapReduce application here. Map and
reduce work with key-value pairs.
From: Uma Maheswara Rao G
Sent: Tuesday, November 22, 2011 8:33 AM
To: common-user@hadoop.apache.org; core-u...@hadoop.apache.org
Subject:
I think the developer of OceanSync is looking to go open source with it under
GPL after he finishes the first build.
- Original Message -
From: Patai Sangbutsarakum silvianhad...@gmail.com
To: common-user@hadoop.apache.org
Cc:
Sent: Monday, November 21, 2011 8:56 PM
Subject: Hadoop
Hi Harsh, it seems the Hadoop I'm using doesn't have the API you talked about;
the version I'm using is hadoop-branch-0.20-append. Thanks, Ramon
From: ha...@cloudera.com
Subject: Re: How to manage hadoop job submit?
Date: Mon, 21 Nov 2011 13:39:06 +0530
To: common-user@hadoop.apache.org
Just wanted to address this:
Basically in my MapReduce program I am expecting a complete XML document as my
input. I have a CustomReader (for XML) in my MapReduce job configuration. My
main confusion is that if the NameNode distributes data to DataNodes, there is a
chance that a part of the XML can go to one data node
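On the spanning concern: HDFS splits files at fixed byte offsets with no knowledge of record (or XML element) boundaries, so a record can indeed straddle two blocks. Input formats handle this the way Hadoop's LineRecordReader does: each split's reader skips a leading partial record and reads past the split's end to finish its last record, so every record is processed exactly once. A plain-Java sketch of that convention on newline-delimited records (names are illustrative; a real XML reader would sync on tags rather than newlines):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitReaderSketch {
    // Return the records a reader for the split [start, end) would emit:
    // skip the partial record at the front (unless start == 0), then read
    // whole records until the next record would start at or past `end`.
    static List<String> readSplit(String data, int start, int end) {
        List<String> records = new ArrayList<>();
        int pos = start;
        if (start > 0) {
            int nl = data.indexOf('\n', start); // skip the partial first record
            if (nl < 0) return records;
            pos = nl + 1;
        }
        while (pos < end && pos < data.length()) {
            int nl = data.indexOf('\n', pos);
            if (nl < 0) nl = data.length();
            records.add(data.substring(pos, nl)); // may read past `end`
            pos = nl + 1;
        }
        return records;
    }

    public static void main(String[] args) {
        String data = "rec1\nrec2\nrec3\n";
        // A "block boundary" at byte 7 falls inside rec2, yet the two
        // readers together still emit each record exactly once.
        System.out.println(readSplit(data, 0, 7));             // [rec1, rec2]
        System.out.println(readSplit(data, 7, data.length())); // [rec3]
    }
}
```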
Thanks for all the input.
On Mon, Nov 21, 2011 at 7:19 PM, Richard Dixon rich.dixon2...@yahoo.com wrote:
I think the developer of OceanSync is looking to go open source with it under
GPL after he finishes the first build.
I have the same problem. My Hadoop version is CDH3U2. I don't know whether there
has been a change in the data files or not. I just need to set the permissions
again correctly, but I don't know how they should be set.
Regards,
Afshin
Hi,
I'm planning to use Flume in order to stream data from a local client
machine into HDFS running in a cloud environment.
Is there a way to start a mapper on a still-incomplete file? As far as I
know, a file in HDFS has to be closed before a mapper can start on it.
Is this true?
Any possible
Hi All
I'm sharing my understanding here. Please correct me if I'm
wrong (Uma and Michael).
The explanation by Michael describes the common working of MapReduce
programs, I believe. Just take the case of a plain text file of size
96 MB: if my HDFS block size is 64 MB, then this
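The 96 MB / 64 MB arithmetic in that example can be sketched as follows (a minimal illustration, not Hadoop code; the name `blockCount` is mine):

```java
public class BlockCountSketch {
    // Number of HDFS blocks a file occupies: ceil(fileSize / blockSize).
    static long blockCount(long fileSizeBytes, long blockSizeBytes) {
        return (fileSizeBytes + blockSizeBytes - 1) / blockSizeBytes;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // A 96 MB file with a 64 MB block size: one full 64 MB block
        // plus one 32 MB block.
        System.out.println(blockCount(96 * mb, 64 * mb)); // prints 2
    }
}
```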