It seems the parameter mapreduce.map.memory.mb is read from the client side.
2015-06-07 15:05 GMT+08:00 J. Rottinghuis jrottingh...@gmail.com:
On each node you can configure how much memory is available for containers
to run.
On the other hand, for each application you can configure how large
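A minimal sketch of the two levels in code, assuming the standard property
names (the per-node limit, yarn.nodemanager.resource.memory-mb, lives in
yarn-site.xml on each NodeManager and cannot be set from the client; the
per-task container sizes can):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MemorySettingsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Per-application container sizes, set on the client and shipped with the job.
    conf.set("mapreduce.map.memory.mb", "2048");    // memory for each map container
    conf.set("mapreduce.reduce.memory.mb", "4096"); // memory for each reduce container
    Job job = Job.getInstance(conf, "memory-demo");
    // yarn.nodemanager.resource.memory-mb is read from yarn-site.xml on each
    // node and caps the total memory handed out to containers there.
  }
}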
config.addResource(new Path("/usr/local/hadoop-2.6.0/etc/hadoop/core-site.xml"));
The path is an org.apache.hadoop.fs.Path, so the resource should be in HDFS;
do you have the resource in HDFS?
Can you try the API config.addResource(InputStream in)?
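For reference, a small sketch showing both overloads; the file path is the
one from this thread, reused for illustration:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class AddResourceSketch {
  public static void main(String[] args) throws Exception {
    Configuration config = new Configuration();
    // Overload 1: addResource(Path) - note the path must be a quoted string.
    config.addResource(new Path("/usr/local/hadoop-2.6.0/etc/hadoop/core-site.xml"));
    // Overload 2: addResource(InputStream) - reads the XML from any stream,
    // so the file only needs to be readable by the client process.
    try (InputStream in = new FileInputStream(
        "/usr/local/hadoop-2.6.0/etc/hadoop/core-site.xml")) {
      config.addResource(in);
      // Force the lazily loaded resources to be parsed while the stream is open.
      System.out.println(config.get("fs.defaultFS"));
    }
  }
}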
2015-05-25 18:36 GMT+08:00 Carmen Manzulli
If the cluster has enough resources, then more than one job will run at the
same time.
2015-04-18 2:27 GMT+08:00 xeonmailinglist-gmail xeonmailingl...@gmail.com:
Hi,
I have a MapReduce runtime where I run several jobs concurrently. How do I
manage the job scheduler so that it won't hurt the
performance of the cluster? It works well.
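If the goal is to keep concurrent jobs from starving each other, one common
approach is to submit them to separate scheduler queues and let the scheduler
(Capacity or Fair) divide the cluster. A minimal sketch; the queue name
"batch" is hypothetical and would have to exist in your scheduler
configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class QueueSubmitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Route this job to a specific queue; how much capacity each queue gets
    // is governed by the scheduler's own configuration, not by the job.
    conf.set("mapreduce.job.queuename", "batch"); // hypothetical queue name
    Job job = Job.getInstance(conf, "concurrent-job");
    // ... set mapper/reducer/paths, then job.waitForCompletion(true);
  }
}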
The issue was fixed; it was with the NameNode server's network port.
Regards,
Sandeep.v
On Thu, Apr 9, 2015 at 12:30 PM, 杨浩 yangha...@gmail.com wrote:
Root cause: a network-related issue?
Can you tell us in more detail? Thank you.
2015-04-09 13:51 GMT+08:00 sandeep vura
I think you can give http://hdt.incubator.apache.org/ a try.
2015-04-12 2:04 GMT+08:00 Answer Agrawal yrsna.tse...@gmail.com:
Thanks Jonathan
I have installed and configured my own Hadoop cluster with one master node
and 7 slave nodes. Now I just want to make sure that jobs running through
You can mail to user-unsubscr...@hadoop.apache.org
2015-04-10 1:16 GMT+08:00 Liaw, Huat (MTO) huat.l...@ontario.ca:
What do you want to unsubscribe?
*From:* Rajeev Yadav [mailto:rajeya...@gmail.com]
*Sent:* April 9, 2015 1:02 PM
*To:* user@hadoop.apache.org
*Subject:* Unsubscribe
Unsubscribe
You can mail to user-unsubscr...@hadoop.apache.org
2015-04-09 23:35 GMT+08:00 Ram pramesh...@gmail.com:
Root cause: a network-related issue?
Can you tell us in more detail? Thank you.
2015-04-09 13:51 GMT+08:00 sandeep vura sandeepv...@gmail.com:
Our issue has been resolved.
Root cause: Network related issue.
Thanks to each and every one of you who spent some time and replied to my questions.
Regards,
I think the log information has been lost.
Hadoop is not designed to recover logs when these files are deleted incorrectly.
2015-04-02 11:45 GMT+08:00 煜 韦 yu20...@hotmail.com:
Hi there,
If log files are deleted without restarting the service, it seems that the
logs will be lost for later operations. For
Hi Ted
I have read the feature, and it says, "The patch appears to be a
documentation patch that doesn't require tests."
Can you tell me which patches should add unit tests, and which would not?
2015-03-29 9:44 GMT+08:00 Ted Yu yuzhih...@gmail.com:
Himawan:
You don't need to recompile the code.
Please
I don't think it is necessary to run the command with a daemon on that
client, and hdfs is not a Hadoop daemon.
2015-03-03 20:57 GMT+08:00 Somnath Pandeya somnath_pand...@infosys.com:
Is your HDFS daemon running on the cluster?
*From:* Vikas Parashar [mailto:para.vi...@gmail.com]
*Sent:*
I think a benchmark will help, since it can find out the execution speed of
I/O-bound jobs and CPU-bound jobs.
2015-03-02 19:01 GMT+08:00 Adrien Mogenet adrien.moge...@contentsquare.com:
This is nonsense; you have to tell us under which conditions you want
to find a bottleneck.
Yes, you can do this in Java if these conditions are satisfied:
1. your client is on the same network as the Hadoop cluster
2. you add the Hadoop configuration to your Java classpath, so the JVM
will load the Hadoop configuration
But the suggested way is:
hadoop jar
2015-02-20
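For completeness, a minimal sketch of the Java route under condition 2 above
(config on the classpath); the class name and argument handling are
illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitFromJavaSketch {
  public static void main(String[] args) throws Exception {
    // Because the *-site.xml files are on the classpath, new Configuration()
    // already points at the remote cluster.
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "submitted-from-java");
    job.setJarByClass(SubmitFromJavaSketch.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}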
Why not try
-D files=/home/MapReduce/testFile.json
2015-02-20 5:03 GMT+08:00 Haoming Zhang haoming.zh...@outlook.com:
Hi,
As you know, Hadoop supports the Generic Options
(http://hadoop.apache.org/docs/r1.2.1/commands_manual.html#Generic+Options).
For example, you can use -files to
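The generic options are parsed when the program is launched through
ToolRunner (or GenericOptionsParser directly). A hedged sketch of the usual
pattern; the class name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class GenericOptionsSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = getConf(); // already contains any -D overrides
    // A file passed with -files is symlinked into each task's working
    // directory, so a mapper could open "testFile.json" by its bare name.
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new GenericOptionsSketch(), args));
  }
}

Invoked, for example, as:
hadoop jar app.jar GenericOptionsSketch -files /home/MapReduce/testFile.json input output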
I think so
2015-02-15 18:11 GMT+08:00 bit1...@163.com bit1...@163.com:
Hi, Hadoopers,
I am pretty new to Hadoop, and I have a question: when a job runs, will
each mapper or reducer task take up a JVM process or only a thread?
I hear that the answer is a process. That is, say, one job
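For context, a hedged sketch of the MRv1-era knob that controlled this; on
YARN (MRv2) each task attempt normally gets its own container JVM and this
property no longer applies:

import org.apache.hadoop.conf.Configuration;

public class JvmReuseSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // MRv1 setting: how many tasks of a job one JVM may run in sequence.
    // 1 = a fresh JVM per task (the default); -1 = unlimited reuse.
    conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);
  }
}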
Hi Ulul,
Thank you for the explanation. I have googled the feature, and Hortonworks
said:
"This feature is a technical preview and considered under development. Do
not use this feature in your production systems."
Can we use it in a production environment?
2015-02-15 20:15 GMT+08:00 Ulul had...@ulul.org:
Do you mean you want to execute a job on a remote cluster which doesn't
contain your node?
If you copy the ResourceManager's configuration to your own computer, this
computer will be taken as the Hadoop client. Then you can execute the job
through 'hadoop jar', and it will be executed on the remote cluster.
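A minimal sketch of the same idea done from code instead of copied XML
files; all host names and ports below are placeholders for your cluster's
actual addresses:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point this client at the remote cluster (placeholder addresses).
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.address", "rm.example.com:8032");
    Job job = Job.getInstance(conf, "remote-client-job");
    // ... configure the job and submit; it runs on the remote cluster.
  }
}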