RE: how to increase data node expire time

2014-11-24 Thread Henry Hung
minutes. So if you want to keep it longer, you can configure dfs.namenode.heartbeat.recheck-interval based on your requirement. Thanks & Regards Brahma Reddy Battula From: Henry Hung [ythu...@winbond.com] Sent: Tuesday, November 25, 2014 8:05 AM To:

how to increase data node expire time

2014-11-24 Thread Henry Hung
Hi All, In Hadoop 2.2.0, how do I increase the data node expire time before the name node lists it as a dead node? I want to shut down a data node for 30 minutes, and I don't want the name node to list it as a dead node, to prevent the re-replication process. Best regards, Henry
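The dead-node timeout is not a single setting: if I recall the 2.x behavior correctly, the NameNode declares a DataNode dead after roughly 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval (defaults of 5 minutes and 3 seconds give about 10.5 minutes). A minimal sketch for hdfs-site.xml, assuming a 30-minute maintenance window; the value chosen here is illustrative:

```xml
<!-- hdfs-site.xml on the NameNode; value in milliseconds (illustrative) -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <!-- 900000 ms = 15 min; timeout ~ 2 * 15 min + 10 * 3 s = ~30.5 min -->
  <value>900000</value>
</property>
```

The NameNode reads this at startup, so a restart of the NameNode is needed for the change to take effect.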

RE: problems with Hadoop installation

2014-10-29 Thread Henry Hung
Try to add “/” at the end of hadoop fs -ls, so it becomes hadoop fs -ls / From: David Novogrodsky [mailto:david.novogrod...@gmail.com] Sent: Thursday, October 30, 2014 7:00 AM To: user@hadoop.apache.org Subject: Fwd: problems with Hadoop installation All, I am new to Hadoop so any help would

hadoop archive impact on hbase performance

2014-09-17 Thread Henry Hung
Dear All, Recently I observed that whenever Hadoop archive is running, all HBase inserts (puts) slow down, from an average of 0.001 to 0.023, that is 23x slower. I wonder if there is a way to slow down the archive process? Like restricting the read / write buffer? One of the restrictions already implem

RE: fair scheduler not working as intended

2014-08-12 Thread Henry Hung
your "longrun" as long as no other applications are running in the other queues. Thanks Yehia On 12 August 2014 20:30, Henry Hung mailto:ythu...@winbond.com>> wrote: Hi Everyone, I’m using Hadoop-2.2.0 with fair scheduler in my YARN cluster, but something is wrong with th

fair scheduler not working as intended

2014-08-12 Thread Henry Hung
running for some time, the application runs 10 maps in parallel. The scheduler page shows that the "longrun" queue used 66%, exceeding its fair share of 30%. Can anyone tell me why the application can get more than its share? Is the problem with my configuration? Or is there a bug?
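This behavior is expected: in the fair scheduler, fair share is a guaranteed minimum, not a cap, so a queue may take idle capacity beyond its share when other queues are empty. To enforce a hard limit, the allocation file supports maxResources. A sketch of a fair-scheduler.xml allocation file, where the queue name matches the thread and the limits are illustrative:

```xml
<?xml version="1.0"?>
<!-- fair scheduler allocation file; limits below are illustrative -->
<allocations>
  <queue name="longrun">
    <weight>1.0</weight>
    <!-- hard cap: the queue cannot grab idle capacity beyond this -->
    <maxResources>30000 mb, 30 vcores</maxResources>
  </queue>
</allocations>
```

With only weights (no maxResources), exceeding the fair share on an otherwise idle cluster is by design, and preemption (if enabled) is what pulls the queue back down when other queues become busy.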

RE: hadoop 2.2.0 HA: standby namenode generate a long list of loading edits

2014-06-11 Thread Henry Hung
On Wed, Jun 11, 2014 at 7:49 PM, Henry Hung mailto:ythu...@winbond.com>> wrote: Hi All, I’m using QJM with 2 namenodes; on the active namenode, the main page’s loading edits panel only shows 10 records, but on the standby namenode, the loading edits panel shows a lot more records, never count i

hadoop 2.2.0 HA: standby namenode generate a long list of loading edits

2014-06-11 Thread Henry Hung
Hi All, I'm using QJM with 2 namenodes. On the active namenode, the main page's loading edits panel only shows 10 records, but on the standby namenode, the loading edits panel shows a lot more records; I never counted them, but I think there are > 100 records. Is this a problem? Here I provide some of the dat

RE: [hadoop 2.2.0] map tasks failed with error "This token is expired"

2014-06-04 Thread Henry Hung
job that needs to be executed on a daily basis. 2. Is there a way to port the fix into Hadoop 2.2.0? Could you give me some direction as to which java files need to be looked at? I already tried to compare the 2.2.0 src and 2.4.0 src, but a lot has changed and I am kind of spinning in place right now.

[hadoop 2.2.0] map tasks failed with error "This token is expired"

2014-06-04 Thread Henry Hung
Hi All, A strange thing happens after I started to use the Fair Scheduler: when executing a large MR job (around 660 maps and 1 reduce), some of the map tasks fail with this error: 2014-06-05 10:13:47,379 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Un

RE: change yarn application priority

2014-06-03 Thread Henry Hung
? ) I would suggest that you look at the fair scheduler, but also consider multiple queues under the capacity scheduler. You may have admin jobs you want to run in the background while other tasks are running. On Jun 2, 2014, at 8:10 PM, Henry Hung mailto:ythu...@winbond.com>> wrote: @Rohith

RE: change yarn application priority

2014-06-02 Thread Henry Hung
not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it! From: Henry Hung [mailto:ythu...@winbond.com] Sent: 30

change yarn application priority

2014-05-29 Thread Henry Hung
Hi All, I have an application that consumes all of the nodemanager capacity (30 Maps and 1 Reducer) and needs 4 hours to finish. Let's say I need to run another application that is quicker to finish (30 minutes) and only needs 1 Map and 1 Reducer. If I just execute the new application, it wil
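YARN in 2.2.0 has no per-application priority knob; the usual workaround, as the replies above suggest, is separate queues so the quick job is not starved by the long one. A sketch for capacity-scheduler.xml, where the queue names and percentages are illustrative:

```xml
<!-- capacity-scheduler.xml; queue names and capacities are illustrative -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>batch,quick</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.batch.capacity</name>
  <value>80</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.quick.capacity</name>
  <value>20</value>
</property>
```

The short job is then submitted to the reserved queue by setting mapreduce.job.queuename=quick, so it gets containers even while the 4-hour job occupies the rest of the cluster.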

hadoop 2.2.0 nodemanager graceful stop

2014-05-16 Thread Henry Hung
Is there a way to gracefully stop a nodemanager? I want to make some yarn-site.xml and mapred-site.xml configuration changes, and need to restart the nodemanager. Best regards, Henry

about "yarn.application.classpath"

2014-03-30 Thread Henry Hung
Hi Hadoop Users, In hadoop 2.2.0, if I want yarn applications to be able to include external jars, I need to specify "yarn.application.classpath" in yarn-site.xml, including the default value, so it will be like this: CLASSPATH for YARN applications. A comma-separated list of CLASSPAT
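The snippet above cuts off before the actual property, so here is a sketch of what such a yarn-site.xml entry looks like: the standard entries follow the 2.2.0 yarn-default.xml defaults (to the best of my recollection), and the trailing /opt/external/* directory is an illustrative addition for the external jars:

```xml
<!-- yarn-site.xml; default entries per 2.2.0, /opt/external/* is illustrative -->
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,
    /opt/external/*</value>
</property>
```

Overriding this property replaces the default entirely, which is why the default entries must be repeated alongside the extra path.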

yarn application still running but disappeared from UI

2014-03-26 Thread Henry Hung
Hi Hadoop Users, I'm using hadoop-2.2.0 with YARN. Today I stumbled upon a problem with the YARN management UI: when I look into cluster/apps, there is one app running but it is not showing in the entries. I made sure there is an application running in the cluster with the command "yarn application -list", could

(Solved) hadoop 2.2.0 QJM exception : NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage

2014-02-12 Thread Henry Hung
Dear All, Sorry, I found the root cause of this problem: I had overwritten hadoop-hdfs-2.2.0.jar with my own custom jar but forgot to restart the journal node process, so the process could not find the FSImage class, even though it is actually there inside my custom jar. Note to myself: make

hadoop 2.2.0 QJM exception : NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage

2014-02-12 Thread Henry Hung
Hi All, I don't know why the journal node log has this weird "NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage" exception. This error occurs each time I switch my namenode from standby to active: 2014-02-13 10:34:47,873 INFO org.apache.hadoop.hdfs.server.namenode.FileJournal

accessing hadoop filesystem from Tomcat

2014-01-06 Thread Henry Hung
Hi all, I just want to confirm whether my understanding of the Hadoop FileSystem object is correct. From the source code of org.apache.hadoop.fs.FileSystem (either from version 1.0.4 or 2.2.0), the method public static FileSystem get(URI uri, Configuration conf) throws IOException is using

RE: about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-04 Thread Henry Hung
ger in the mapred-default.xml intentionally). On Wed, Dec 4, 2013 at 12:56 PM, Henry Hung wrote: > Hello, I have a question. Is it correct to say that in hadoop-2.2.0, the mapred-site.xml node "mapre

about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-03 Thread Henry Hung
Hello, I have a question. Is it correct to say that in hadoop-2.2.0, the mapred-site.xml property "mapred.child.java.opts" is replaced by the two new properties "mapreduce.map.java.opts" and "mapreduce.reduce.java.opts"? Best regards, Henry
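Yes: in 2.x the single child-JVM setting is split into per-map and per-reduce properties (the old key is, as far as I know, still honored as a fallback when the new ones are absent). A sketch for mapred-site.xml, with illustrative heap sizes:

```xml
<!-- mapred-site.xml; heap sizes below are illustrative -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```

Splitting the setting lets reducers, which often aggregate much more data per task, get a larger heap than mappers.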

RE: hdfs getconf -excludeFile or -includeFile always failed

2013-11-28 Thread Henry Hung
@ Vinayakumar B Thanks, I already made a post HDFS-5582 Best regards, Henry Hung From: Vinayakumar B [mailto:vinayakuma...@huawei.com] Sent: Friday, November 29, 2013 10:30 AM To: user@hadoop.apache.org Subject: RE: hdfs getconf -excludeFile or -includeFile always failed Hi Henry, Good

hdfs getconf -excludeFile or -includeFile always failed

2013-11-28 Thread Henry Hung
HI All, In hadoop-2.2.0, if you execute getconf for exclude and include file, it will return this error message: [hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -excludeFile Configuration DFSConfigKeys.DFS_HOSTS_EXCLUDE is missing. [hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -includeFile Config
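Part of the confusion is that the error prints the internal field name instead of the configuration key. If the keys are simply undefined, setting them in hdfs-site.xml should let getconf return the paths; a sketch, assuming illustrative file locations:

```xml
<!-- hdfs-site.xml; file paths below are illustrative -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

These are the same include/exclude files the NameNode consults when commissioning or decommissioning datanodes.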

what is the use of YARN proxyserver?

2013-11-27 Thread Henry Hung
Hi All, Could someone explain to me what the YARN proxyserver is used for? I ask this because apparently map reduce jobs can execute and complete without starting the proxyserver. Best regards, Henry Hung

missing command dfsadmin -upgradeProgress in hadoop 2.2.0 stable

2013-11-19 Thread Henry Hung
Hi All, I already upgraded hadoop 1.0.4 to hadoop 2.2.0, but when I want to check the upgrade progress by executing bin/hdfs dfsadmin -upgradeProgress, the command is unknown to hadoop 2.2.0. Could someone tell me how to get the upgrade progress status in the new hadoop 2.2.0 stable? Best regards

How to change secondary namenode location in Hadoop 1.0.4?

2013-04-17 Thread Henry Hung
Hi All, What is the property name in Hadoop 1.0.4 to change the secondary namenode location? Currently the default on my machine is "/tmp/hadoop-hadoop/dfs/namesecondary"; I would like to change it to "/data/namesecondary". Best regards, Henry
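In Hadoop 1.x the secondary namenode checkpoint location is controlled by fs.checkpoint.dir, whose default is under hadoop.tmp.dir (hence the /tmp path above). A sketch using the target directory from the question:

```xml
<!-- core-site.xml (Hadoop 1.0.4) on the secondary namenode host -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/namesecondary</value>
</property>
```

Anything left under hadoop.tmp.dir is at risk of being wiped on reboot, which is exactly why moving the checkpoint directory off /tmp is worthwhile.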

RE: Adding new name node location

2013-04-16 Thread Henry Hung
: Wednesday, April 17, 2013 2:18 PM To: user Cc: MA11 YTHung1 Subject: Re: Adding new name node location Hi Henry, As per your mail Point number 1 is correct. After doing these changes metadata will be written in the new partition. Regards, Varun Kumar.P On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung

Adding new name node location

2013-04-16 Thread Henry Hung
Hi Everyone, I'm using Hadoop 1.0.4 and only defined one location for the name node files, like this: dfs.name.dir /home/hadoop/hadoop-data/namenode Now I want to protect my name node files by changing the configuration to: dfs.name.dir /home/hadoop/hadoop-data/namenode,/backu
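The flattened configuration above can be sketched as an hdfs-site.xml entry; the second path is illustrative, since the original message is truncated. With a comma-separated list, the NameNode writes its metadata to every listed directory:

```xml
<!-- hdfs-site.xml (Hadoop 1.0.4); the second directory is illustrative -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/hadoop-data/namenode,/backup/namenode</value>
</property>
```

Putting the second directory on a different disk (or an NFS mount) is the usual practice, so a single disk failure does not lose the namespace image.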

RE: file reappear even after deleted in hadoop 1.0.4

2013-03-28 Thread Henry Hung
Sorry, please ignore my question. It appears that the problem is in the program that uploads files into hadoop. I was too quick to assume that the problem lies inside hadoop. Sorry for being a noob at hadoop. Best regards, Henry Hung. From: MA11 YTHung1 Sent: Friday, March 29, 2013 11:21

file reappear even after deleted in hadoop 1.0.4

2013-03-28 Thread Henry Hung
lease on the file, with the exact minutes xx:03, xx:19, xx:34, xx:49. Do you know what causes this kind of behavior? PS: I already tried to delete another file and it worked normally; the other file did not reappear after deletion. Best regards, Henry Hung