Please refer to https://issues.apache.org/jira/browse/MAPREDUCE-334
You can patch it yourself, or I can give you code snippets you can
use without recompiling Hadoop.
2010/3/12 Chris Bates :
Hi,
I am trying to upgrade my scripts to the new MapReduce API in
org.apache.hadoop.mapreduce. I had a join operation that relied on the
MultipleInputs class in the mapred package, but I see it is not in the
mapreduce package, despite this message:
http://mail-archives.apache.org/mod_mbox/hadoop-co
Do the nodes in your cluster share the same NFS mount?
On Fri, Mar 12, 2010 at 12:28 AM, Lu welman wrote:
On 3/11/10 11:05 AM, "Gregory Lawrence" wrote:
Hi,
Is there a way to set the output group for a MapReduce job (or an HDFS fs
operation)? For example, -Ddfs.umaskmode=027 successfully sets the permissions.
I would have thought -Dgroup.name=GROUP would do something similar for the
file's group. However, this does not appear to be the case. Any help would be
appreciated.
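As a side note on what -Ddfs.umaskmode=027 actually does: the umask clears
group write and all "other" bits from the base creation mode, so files come out
as 640 and directories as 750. (The group itself is normally inherited from the
parent directory in HDFS, not set by a flag.) A plain-JDK sketch of the
arithmetic, with illustrative base modes and no Hadoop dependency:

```java
public class UmaskDemo {
    public static void main(String[] args) {
        int umask = 0027;     // dfs.umaskmode=027
        int fileBase = 0666;  // typical base mode for a new file
        int dirBase = 0777;   // typical base mode for a new directory
        // Each bit set in the umask is removed from the base mode.
        int filePerm = fileBase & ~umask;
        int dirPerm = dirBase & ~umask;
        System.out.printf("file: %o%n", filePerm); // 640 -> rw-r-----
        System.out.printf("dir:  %o%n", dirPerm);  // 750 -> rwxr-x---
    }
}
```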
Moving to mapreduce-user@, bcc: common-user
Have you tried bumping up the heap for the map task?
Since you are setting io.sort.mb to 256M, please set the heap size to at
least 512M, if not more:
mapred.child.java.opts -> -Xmx512m or -Xmx1024m
Arun
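If it helps, the two settings usually go together in mapred-site.xml (or are
passed per-job with -D); the sort buffer is allocated inside the child JVM's
heap, which is why the heap has to be comfortably larger than io.sort.mb.
A sketch, with illustrative values:

```xml
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```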
On Mar 11, 2010, at 8:24 AM, Boyu Zhang wrote:
Dear Jeff,
First, thank you very much for your selfless help!
For these three computers, it seems they share the same disk area. I don't
know whether they keep redundant copies of the data or work some other way.
Anyway, let me use an example to explain.
When I create any file, e.g., foo, in one o
Sorry, but I still don't quite understand why you said "When the HDFS
starts, then only one datanode can lock the directory, and the other two are
fail." As I understand it, the three computers are independent; the failure of
two datanodes should not have anything to do with locking. You need to
look
Hi Jeff,
From my viewpoint, I can't see the individual disks of these three computers.
All I can see is a single $HOME directory.
Whichever computer I log into, I see the same contents inside this $HOME
directory.
I borrowed these three computers from a big cluster, and I only use ssh to
remote control
Hi Lu,
All the variables are in System.getProperties().
And what do you mean by "all three computers will set their data directory into
a same one"? The 3 computers are independent, so why would they share the same
directory?
On Thu, Mar 11, 2010 at 7:41 AM, Lu welman wrote:
Hi, Jeff,
I think I misunderstood what you said.
What you mean is that I can set ${hostname} in *-site.xml, and then use the
code you mentioned to get the hostname in my own program, right?
Sorry that I didn't make my question clear.
My problem now is that I am deplo
Hi Lu,
I assume you are implementing the Tool interface to run your MapReduce job.
Then put the code in the run method (InetAddress is java.net.InetAddress):
@Override
public int run(String[] args) throws Exception {
    JobConf conf = new JobConf();
    conf.set("hostname", InetAddress.getLocalHost().getHostName());
    // ... configure input/output and submit the job ...
    return 0;
}
Hi, Jeff.
Thank you very much for the reply.
Unfortunately, I don't know where I should put the code you mentioned.
Can you tell me more about that?
Thanks!
Regards
welman Lu
There's no such environment variable internally, but there's a workaround:
get the host name using the Java API and put the value into the configuration,
like this:
Configuration conf = new Configuration();
conf.set("hostname", InetAddress.getLocalHost().getHostName());
Then you can use the ${hostname} variable in other configuration properties.
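One caveat: the ${hostname} reference only expands for properties read through
a Configuration object after the conf.set("hostname", ...) call has run. For
example (the property name here is made up for illustration):

```xml
<property>
  <name>my.app.scratch.dir</name>
  <value>/tmp/${hostname}/scratch</value>
</property>
```

Alternatively, passing -Dhostname=$(hostname) on the command line should work
too, since variable expansion also consults Java system properties.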
Hi, all
I saw that in *-site.xml we can use ${user.name} to get the username of the
current user.
If I want to get the environment variable $HOSTNAME, what should I do?
I tried ${HOSTNAME} and ${env.hostname}, but neither works:
each just returns the literal string "${HOSTNAME}" or "${env.hostname}".
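For background on why ${user.name} works but ${HOSTNAME} does not: as far as I
can tell, Hadoop's Configuration expands ${...} from its own properties and
from Java system properties (System.getProperties()), not from environment
variables. A plain-JDK sketch of the distinction:

```java
public class PropsVsEnv {
    public static void main(String[] args) {
        // Java system property: always set by the JVM; this is what ${user.name} reads.
        System.out.println("user.name = " + System.getProperty("user.name"));
        // Environment variable: a separate namespace, not consulted by ${...} expansion.
        System.out.println("HOSTNAME  = " + System.getenv("HOSTNAME")); // may be null
    }
}
```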