Hi All,
In Hadoop 2.2.0, how do I increase the datanode expiry time before the namenode
lists it as a dead node?
I want to shut down a datanode for 30 minutes, and I don't want the namenode
to list it as a dead node, to prevent the re-replication process.
Best regards,
Henry
minutes.
So if you want to keep it longer, you can configure
dfs.namenode.heartbeat.recheck-interval based on your requirement.
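For reference, the namenode declares a datanode dead after 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval; with the defaults of 5 minutes and 3 seconds, that is about 10.5 minutes. A sketch of an hdfs-site.xml entry that would push the timeout past 30 minutes (900000 ms is just an example value, adjust it to your own requirement):

```xml
<!-- hdfs-site.xml sketch: with the default 3 s heartbeat, the timeout -->
<!-- becomes 2 * 900000 ms + 10 * 3 s = 30.5 minutes                   -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <!-- 15 minutes, in milliseconds -->
  <value>900000</value>
</property>
```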
Thanks & Regards
Brahma Reddy Battula
From: Henry Hung [ythu...@winbond.com]
Sent: Tuesday, November 25, 2014 8:05 AM
To: user
Try adding “/” at the end of hadoop fs -ls
so it will become
hadoop fs -ls /
From: David Novogrodsky [mailto:david.novogrod...@gmail.com]
Sent: Thursday, October 30, 2014 7:00 AM
To: user@hadoop.apache.org
Subject: Fwd: problems with Hadoop instalation
All,
I am new to Hadoop so any help would be appreciated.
Dear All,
Recently I observed that whenever a Hadoop archive job is running, all HBase inserts
(puts) slow down, from an average of 0.001 to 0.023, that is 23 times slower.
I wonder if there is a way to slow down the archive process, for example by
restricting its read/write buffer?
One of the restrictions already
to your long-running job, as long as no other applications are running
in the other queues.
Thanks
Yehia
On 12 August 2014 20:30, Henry Hung
ythu...@winbond.com wrote:
Hi Everyone,
I’m using Hadoop-2.2.0 with the fair scheduler in my YARN cluster, but something is
wrong: a queue used 66%, exceeding its fair
share of 30%.
Can anyone tell me why the application can get more than it deserved?
Is the problem with my configuration? Or there is a bug?
Best regards,
Henry Hung
The privileged confidential information contained in this email
Hi All,
I'm using QJM with 2 namenodes. On the active namenode, the main page's loading-edits
panel only shows 10 records, but on the standby namenode the loading-edits
panel shows a lot more records; I never counted them, but I think it has around 100 records.
Is this a problem?
Here I provide some of the
On Wed, Jun 11, 2014 at 7:49 PM, Henry Hung
ythu...@winbond.com wrote:
Hi All,
I’m using QJM with 2 namenodes, in the active namenode, the main page’s loading
edits panel only show 10 records, but in standby namenode, the loading edits
panel show a lot more records
Hi All,
A strange thing happens since I started to use the Fair Scheduler: when executing a
large MR job (around 660 maps and 1 reduce), some of the map tasks fail
with this error:
2014-06-05 10:13:47,379 ERROR
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
on a daily basis.
2. Is there a way to port the fix into Hadoop 2.2.0? Could you give me
some direction as to which Java files need to be looked at?
I already tried to compare the 2.2.0 and 2.4.0 sources, but a lot has changed and I am
kind of spinning in place right now.
Best regards,
Henry
From: Henry Hung [mailto:ythu...@winbond.com]
Sent: 30 May
Hi All,
I have an application that consumes all of the nodemanager capacity (30 Maps and 1
Reducer) and needs 4 hours to finish.
Let's say I need to run another application that finishes more quickly (30
minutes) and only needs 1 Map and 1 Reducer.
If I just execute the new application, it
Hi Hadoop Users,
In Hadoop 2.2.0, if I want YARN applications to be able to include external
jars, I need to specify yarn.application.classpath in yarn-site.xml,
including the default value, so it will be like this:
<property>
<description>CLASSPATH for YARN applications. A comma-separated
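For reference, a sketch of the complete entry, combining the 2.2.0 default classpath values with one extra directory for external jars (/opt/external-jars/* is a made-up path; verify the exact defaults against yarn-default.xml in your distribution):

```xml
<property>
  <description>CLASSPATH for YARN applications. A comma-separated
  list of CLASSPATH entries.</description>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,
    /opt/external-jars/*
  </value>
</property>
```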
Hi Hadoop Users,
I'm using hadoop-2.2.0 with YARN.
Today I stumbled upon a problem with the YARN management UI: when I look into
cluster/apps, there is one app running but it is not showing in the entries.
I made sure there is an application running in the cluster with the command yarn
application -list, could
Hi All,
I don't know why the journal node log has this weird NoClassDefFoundError:
org/apache/hadoop/hdfs/server/namenode/FSImage exception.
The error occurs each time I switch my namenode from standby to active
2014-02-13 10:34:47,873 INFO
Dear All,
Sorry, I found the root cause of this problem: it appears that I overwrote the
hadoop-hdfs-2.2.0.jar with my own custom jar but forgot to restart the journal
node process,
so the process could not find the FSImage class, even though it is actually there inside my
custom jar.
Note to myself:
Hi all,
I just want to confirm whether my understanding of the Hadoop FileSystem object is
correct or not.
From the source code of org.apache.hadoop.fs.FileSystem (either from version
1.0.4 or 2.2.0), the method
public static FileSystem get(URI uri, Configuration conf) throws IOException
is using
-default.xml intentionally).
On Wed, Dec 4, 2013 at 12:56 PM, Henry Hung ythu...@winbond.com wrote:
Hello,
I have a question.
Is it correct to say that in hadoop-2.2.0, the mapred-site.xml node
mapred.child.java.opts is replaced by two new nodes
mapreduce.map.java.opts
Hello,
I have a question.
Is it correct to say that in hadoop-2.2.0, the mapred-site.xml node
mapred.child.java.opts is replaced by two new nodes mapreduce.map.java.opts
and mapreduce.reduce.java.opts?
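If that is indeed the replacement, I would expect the old single setting to split into two entries in mapred-site.xml, something like this sketch (the heap sizes are made-up examples):

```xml
<!-- old: mapred.child.java.opts applied to both maps and reduces -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```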
Best regards,
Henry
Hi All,
Could someone explain to me what the use of the YARN proxyserver is?
I ask this because apparently MapReduce jobs can execute and complete
without starting the proxyserver.
Best regards,
Henry Hung
Hi All,
I already upgraded Hadoop 1.0.4 to Hadoop 2.2.0, but when I want to check the
upgrade progress by executing bin/hdfs dfsadmin -upgradeProgress, the command
is unknown in Hadoop 2.2.0.
Could someone tell me how to get the upgradeProgress status in the new Hadoop
2.2.0 stable?
Best
Hi Everyone,
I'm using Hadoop 1.0.4 and only define 1 location for name node files, like
this:
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/hadoop-data/namenode</value>
</property>
Now I want to protect my name node files by changing the configuration to:
property
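Presumably the intended change is a comma-separated dfs.name.dir list; in Hadoop 1.x the namenode writes its metadata to every directory listed, so a sketch might look like this (the second path is a made-up example, ideally on a separate disk or NFS mount):

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- metadata is written redundantly to both directories -->
  <value>/home/hadoop/hadoop-data/namenode,/mnt/backup/namenode</value>
</property>
```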
Sent: Wednesday, April 17, 2013 2:18 PM
To: user
Cc: MA11 YTHung1
Subject: Re: Adding new name node location
Hi Henry,
As per your mail, point number 1 is correct.
After making these changes, the metadata will be written to the new partition.
Regards,
Varun Kumar.P
On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung
Hi All,
What is the property name in Hadoop 1.0.4 to change the secondary namenode location?
Currently the default on my machine is /tmp/hadoop-hadoop/dfs/namesecondary;
I would like to change it to /data/namesecondary.
Best regards,
Henry
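If I recall correctly, the secondary namenode checkpoint location in Hadoop 1.x is controlled by fs.checkpoint.dir (in core-site.xml), so a sketch of the change might be:

```xml
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/namesecondary</value>
</property>
```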
on
the file, at the exact minutes xx:03, xx:19, xx:34, xx:49.
Do you know what causes this kind of behavior?
PS:
I already tried to delete another file and it worked normally; the other file did
not reappear after deletion.
Best regards,
Henry Hung
Sorry, please ignore my question. It appears that the problem comes from the program
that uploads files into Hadoop.
I was too quick to assume that the problem lies inside Hadoop.
Sorry for being a noob at Hadoop.
Best regards,
Henry Hung.
From: MA11 YTHung1
Sent: Friday, March 29, 2013 11:21 AM