Hi,
Hadoop On Demand is no longer supported with recent releases of Hadoop.
There is no separate user list for HOD-related questions.
Which version of Hadoop are you using right now?
Thanks
hemanth
On Wed, Feb 6, 2013 at 8:59 PM, Mehmet Belgin
mehmet.bel...@oit.gatech.edu wrote:
Hello
You could, but this is generally discouraged. Pig does something like this by
taking the object, serializing it out into a byte array, and then using base64
encoding to turn it into a string that is put in the config. The problem with
this is that the config can grow very large. In the 1.0 line
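If it helps, here is a minimal sketch of that approach (my own illustration, not Pig's actual code; the config key name and helper class are made up, and it uses java.util.Base64 from JDK 8+, older setups would typically use commons-codec instead):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

import org.apache.hadoop.conf.Configuration;

public class ConfigObjectStash {

    // Serialize the object, base64-encode the bytes, and put the string in the config.
    public static void store(Configuration conf, String key, Serializable obj)
            throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(obj);
        }
        conf.set(key, Base64.getEncoder().encodeToString(buffer.toByteArray()));
    }

    // Read the string back in a task, decode it, and deserialize the object.
    public static Object load(Configuration conf, String key)
            throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(conf.get(key));
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }
}

Keep in mind that every task gets a copy of the config, so a large serialized object gets shipped everywhere and can blow up the job.xml size, which is exactly the problem mentioned above.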
Hi,
I'm wondering what's the best way to install FUSE with Hadoop 1.0.3?
I'm trying to follow all the steps described here:
http://wiki.apache.org/hadoop/MountableHDFS, but each step fails, and it takes
hours to fix one issue and move on to the next.
So I think I'm following the wrong path. There
Hi!
I've followed the Hadoop cluster tutorial on the Hadoop site (Hadoop 1.1.1 on
64-bit machines with OpenJDK 1.6). I've set up 1 namenode, 1 jobtracker,
and 3 slaves acting as datanode and tasktracker.
I have a problem setting up HDFS on the cluster: the dfs daemons start fine on
the namenode and
Tony, I think the first step would be to verify whether the S3 filesystem
implementation's rename works as expected.
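For a quick check, something along these lines might do (just a sketch; the bucket URI and paths are placeholders, and it assumes the S3 connector and credentials are already configured):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3RenameCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder bucket; credentials come from core-site.xml or the URI.
        FileSystem fs = FileSystem.get(URI.create("s3n://my-bucket/"), conf);

        Path src = new Path("s3n://my-bucket/tmp/rename-src");
        Path dst = new Path("s3n://my-bucket/tmp/rename-dst");

        fs.create(src).close();                 // create an empty test file
        boolean renamed = fs.rename(src, dst);  // does the connector really move it?

        System.out.println("rename returned " + renamed
                + ", dst exists = " + fs.exists(dst)
                + ", src exists = " + fs.exists(src));
        fs.delete(dst, false);                  // clean up
    }
}

If rename returns false or the destination never shows up, that would point at the filesystem implementation rather than your job.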
Thx
On Fri, Feb 1, 2013 at 7:12 AM, Tony Burton tbur...@sportingindex.com wrote:
Thanks for the reply Alejandro. Using a temp output directory was my first
guess as well.
Hi,
I am thinking of writing a mapper to convert mainframe files to ASCII format
and contribute it back.
Before I do anything, I wanted to confirm the following with you:
- Do we already have a MapReduce library doing the same work?
- Is there anything in Hadoop
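For what it's worth, here is a rough sketch of the kind of mapper I have in mind (purely illustrative, not an existing library; it assumes fixed-length EBCDIC records arriving as BytesWritable and decodes them with the JDK's Cp1047 charset, which may not cover every mainframe encoding):

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Decodes each EBCDIC record to a US-ASCII Text value.
// Assumes an InputFormat that hands records to the mapper as BytesWritable.
public class EbcdicToAsciiMapper
        extends Mapper<LongWritable, BytesWritable, NullWritable, Text> {

    private static final Charset EBCDIC = Charset.forName("Cp1047"); // IBM EBCDIC

    @Override
    protected void map(LongWritable key, BytesWritable value, Context context)
            throws IOException, InterruptedException {
        String decoded = new String(value.getBytes(), 0, value.getLength(), EBCDIC);
        // Re-encode as ASCII; unmappable characters become the charset's
        // replacement byte ('?').
        String ascii = new String(decoded.getBytes(StandardCharsets.US_ASCII),
                                  StandardCharsets.US_ASCII);
        context.write(NullWritable.get(), new Text(ascii));
    }
}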
I have a cluster of boxes with 3 reducers per node. I want to limit a
particular job to only run 1 reducer per node.
This job is network IO bound, gathering images from a set of webservers.
My job has certain parameters set to meet web politeness standards (e.g.
limit connects and
I think setting mapred.tasktracker.reduce.tasks.maximum to 1 may meet your requirement.
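In other words, something like this in mapred-site.xml on each slave (just a sketch; it needs a tasktracker restart, and it caps reduce slots for every job on that node, not only yours):

<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>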
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, 8 February, 2013 at 10:54 PM, David Parks wrote:
I have a cluster of boxes with 3 reducers per node. I want to limit a
particular
I haven't used AWS MR before... If your instances are configured with 3 reducer
slots, it means that 3 reducers can run at the same time on that node.
What do you mean by "this property is already set to 1 on my cluster"?
Actually, this value can be node-specific, if the AWS MR instance allows you to
Looking at the job file for my job, I see that this property is set to 1,
but I still have 3 reducers per node (I'm not clear what configuration is causing
this behavior).
My problem is that, on a 15-node cluster, I set 15 reduce tasks on my job, in
hopes that each would be assigned to a
Were those nodes with 2 reducers running those two reducers at the same time? If yes,
I think you can change mapred-site.xml as I suggested.
If no, i.e. your goal is to make all nodes take the same number of tasks over the
life cycle of the job... I don't know if there is any provided property that can do this...
Hey David,
There's no readily available way to do this today (you may be
interested in MAPREDUCE-199, though), but if your job scheduler isn't
doing multiple assignments of reduce tasks, then only one is assigned
per TT heartbeat, which gives you almost what you're looking for: 1
reduce task per