This does not save to the XML file. I think this just keeps the
variable in memory.
On 19 January 2013 18:48, Arun C Murthy a...@hortonworks.com wrote:
jobConf.set(String, String)?
--
Best regards,
The MR framework saves it into the job.xml before it sends it for execution.
If you're asking about a way to save the config object into the XML file,
use
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#writeXml(java.io.Writer)
or similar APIs.
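For illustration, a key set with jobConf.set(String, String) ends up in the generated job.xml as a standard Hadoop <property> element; a sketch with a hypothetical key name (not from this thread):

```xml
<!-- Sketch: how a value set via jobConf.set("my.custom.key", "42")
     appears in the job.xml the framework writes out.
     The key name and value are illustrative. -->
<property>
  <name>my.custom.key</name>
  <value>42</value>
</property>
```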
On Sun, Jan 20, 2013 at 7:50 AM, Mohammad Tariq donta...@gmail.com wrote:
Hey Jean,
Feels good to hear that ;) I don't have to feel
like a solitary yonker anymore.
Since I am working on a single node, the problem
becomes more severe. I don't have any other node
where MR files
Check the integrity of the file system, and check the replication factor in
case the default has been left at 3 or so by mistake. If you have HBase
configured, run hbck to check that everything is fine with the cluster.
∞
Shashwat Shriparv
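The checks above can be run from the command line; a sketch, assuming the hadoop and hbase CLIs are on the PATH (these commands need a running cluster):

```shell
# Check file system integrity; -files -blocks also reports
# the replication factor of each file.
hadoop fsck / -files -blocks

# If HBase is configured, check cluster consistency.
hbase hbck
```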
On Sun, Jan 20, 2013 at 3:09 PM, xin jiang jiangxin1...@gmail.com wrote:
If your DN is starting too slow, then you should investigate why.
In any case, Apache Bigtop's (http://bigtop.apache.org) pseudo-distributed
configs provide good values for 1-node setups. In your case, you seem to be
missing dfs.safemode.min.datanodes set to 1, and dfs.safemode.extension set
to
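For a 1-node setup those two properties go into hdfs-site.xml; a sketch (the extension value of 0 is a common single-node choice and an assumption here, since the value is truncated in the thread):

```xml
<!-- hdfs-site.xml fragment for a pseudo-distributed (1-node) setup. -->
<property>
  <name>dfs.safemode.min.datanodes</name>
  <value>1</value>
</property>
<property>
  <name>dfs.safemode.extension</name>
  <!-- 0 means leave safemode as soon as the block threshold is met;
       a common single-node choice (assumption). -->
  <value>0</value>
</property>
```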
Hi Tariq,
When you start your namenode, is it able to come out of safemode
automatically?
If not, then there are under-replicated or corrupted blocks that the
namenode is trying to fetch.
Try to remove corrupted blocks.
Regards,
Varun Kumar.P
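A sketch of how to find and remove corrupted blocks as suggested above (the -delete step is destructive, it removes the affected files; commands need a running cluster):

```shell
# List files with missing or corrupt blocks.
hadoop fsck / | grep -i corrupt

# Remove the corrupted files (destructive).
hadoop fsck / -delete

# If the namenode is still stuck in safemode, it can be left manually.
hadoop dfsadmin -safemode leave
```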
On Sun, Jan 20, 2013 at 4:05 AM, Mohammad
Hello Varun,
Thank you so much for your reply. In most of the
cases, it is not. But apart from that everything seems
to be fine. I am not getting any notification about
under replicated blocks or corrupted blocks. I will do
a recheck though.
Thank you.
Warm Regards,
Tariq
Oh yeah Alex. Thank God that we have a German
expert as well ;)
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Sun, Jan 20, 2013 at 1:28 PM, Alexander Alten-Lorenz wget.n...@gmail.com
wrote:
Actually Der Untergang ;)
Alexander Alten-Lorenz
I am not aware of a direct regression in DN startup slowdown or block
report slowdown; it's hard to tell what exactly the regression is without
more notes or logs on behavior.
On Sun, Jan 20, 2013 at 5:43 PM, Mohammad Tariq donta...@gmail.com wrote:
Thank you so much for the valuable reply
Hi guys,
I have a quick question regarding the fair scheduler of Hadoop. I am reading
this article:
http://blog.cloudera.com/blog/2008/11/job-scheduling-in-hadoop/, and my
question is about the following statement: There is currently no support
for preemption of long tasks, but this is being added in
Hi all,
I was wondering if anyone here tried using the GPU of a Hadoop Node to
enhance MapReduce processing ?
I read about it but it always comes down to heavy computations such as
Matrix multiplications and Monte Carlo algorithms.
Did anyone try it with MapReduce jobs that analyze logs or any
-- Forwarded message --
From: Vikas Jadhav vikascjadha...@gmail.com
Date: Sat, Jan 19, 2013 at 10:58 PM
Subject: new join algorithm using mapreduce
To: user@hadoop.apache.org
I am writing a new join algorithm using Hadoop
and want to do a multi-way join in a single MapReduce job
map
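One common way to do a multi-way join in a single job is a reduce-side join: every mapper emits the join key and tags its value with a source-relation id, so the reducer can group rows from all inputs sharing the same key. A minimal sketch of the tagging scheme in plain Java (names are illustrative, not the actual algorithm from this thread):

```java
// Reduce-side multi-way join sketch: tag each record with its source
// relation so the reducer can tell which input a value came from.
public class MultiWayJoinSketch {

    // Tag a record with its source relation id,
    // e.g. tag(2, "alice,42") -> "R2|alice,42".
    public static String tag(int sourceId, String record) {
        return "R" + sourceId + "|" + record;
    }

    // Recover the source relation id from a tagged record.
    public static int sourceOf(String tagged) {
        return Integer.parseInt(tagged.substring(1, tagged.indexOf('|')));
    }

    // Recover the original record payload.
    public static String payloadOf(String tagged) {
        return tagged.substring(tagged.indexOf('|') + 1);
    }

    public static void main(String[] args) {
        String t = tag(2, "alice,42");
        System.out.println(t);            // prints R2|alice,42
        System.out.println(sourceOf(t));  // prints 2
    }
}
```

In a real job the tagged strings would be the mapper output values and the reducer would bucket values by source id before producing the joined rows.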
Lin,
The article you are reading is old.
Fair scheduler does have preemption.
Tasks get killed and rerun later, potentially on a different node.
You can set a minimum / guaranteed capacity. The sum of those across pools
would typically equal the total capacity of your cluster or less.
Then you
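The minimum / guaranteed capacity mentioned above is configured in the fair scheduler's allocation file (fair-scheduler.xml in MR1); a sketch with illustrative pool names and numbers:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml sketch; pool names and values are illustrative. -->
<allocations>
  <pool name="production">
    <minMaps>20</minMaps>
    <minReduces>10</minReduces>
    <!-- Seconds to wait before preempting tasks from other pools
         to reach this pool's minimum share. -->
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
  </pool>
  <pool name="research">
    <minMaps>5</minMaps>
    <minReduces>2</minReduces>
  </pool>
  <!-- Seconds to wait before preempting to reach the fair share. -->
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>
```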
Hi Mirko,
Thanks for your reply. It works for me as well.
Now I was able to mount the folder on the master node and
configured Flume so that it can either poll for logs in real time or
retrieve them periodically.
Thanks,
Mahesh Balija.
Calsof Labs.
On Thu, Jan 17, 2013
Check your node manager logs to understand the bottleneck first. When we
hit a similar issue on a recent version of Hadoop (which includes the fix
for MAPREDUCE-4068), we rearranged our job jar file to reduce the time the
node manager(s) spent on 'expanding' it.
-Rahul
On Sun, Jan 20, 2013
Hi,
I have installed a cluster with hadoop2.0.0-alpha, 4 PCs in total: 1
Namenode, 3 Datanodes.
I opened the http://master:50070/dfshealth.jsp page in Chrome from a remote
pc, and it's all right.
However, when I clicked 'Browse the filesystem', Chrome redirected to