By default, a DataNode is marked dead after about 10.5 minutes
(2 x dfs.namenode.heartbeat.recheck-interval + 10 x dfs.heartbeat.interval).
So if you want to keep it alive longer, you can configure
dfs.namenode.heartbeat.recheck-interval based on your requirement.
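A minimal hdfs-site.xml sketch for the NameNode (assuming the default dfs.heartbeat.interval of 3 seconds), which would push the expiry past the 30 minutes asked about below:

  <property>
    <name>dfs.namenode.heartbeat.recheck-interval</name>
    <!-- 15 minutes in ms; expiry = 2 * 15 min + 10 * 3 s = 30.5 minutes -->
    <value>900000</value>
  </property>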
Thanks & Regards
Brahma Reddy Battula
From: Henry Hung [ythu...@winbond.com]
Sent: Tuesday, November 25, 2014 8:05 AM
To:
Hi All,
In Hadoop 2.2.0, how do I increase the DataNode expire time before the NameNode
lists it as a dead node?
I want to shut down a DataNode for 30 minutes, and I don't want the NameNode
to list it as a dead node, to prevent the re-replication process.
Best regards,
Henry
___
Try adding "/" at the end of hadoop fs -ls,
so it becomes:
hadoop fs -ls /
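The reason, as far as I know: without a path argument, ls lists the current user's home directory (/user/<username> in HDFS), which fails with "No such file or directory" if that directory was never created. For example:

  hadoop fs -ls /                    # lists the HDFS root, which always exists
  hadoop fs -ls                      # lists /user/<current-user>; fails if missing
  hadoop fs -mkdir -p /user/david    # hypothetical user; creates the home directory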
From: David Novogrodsky [mailto:david.novogrod...@gmail.com]
Sent: Thursday, October 30, 2014 7:00 AM
To: user@hadoop.apache.org
Subject: Fwd: problems with Hadoop instalation
All,
I am new to Hadoop so any help would be appreciated.
Dear All,
Recently I observed that whenever a Hadoop archive job is running, all HBase
inserts (puts) slow down, from an average of 0.001 to 0.023, i.e. about 23x slower.
I wonder if there is a way to slow down the archive process, for example by
restricting its read/write buffer?
One of the restrictions already implem
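A hedged workaround, not from the original thread: since hadoop archive runs as an ordinary MapReduce job, it can be submitted to a dedicated low-priority scheduler queue so it competes less with HBase for I/O. Assuming the archive tool accepts generic -D options and a queue named "archive" exists:

  hadoop archive -Dmapreduce.job.queuename=archive \
      -archiveName data.har -p /input /archives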
your "longrun" queue can exceed its fair share as long as no other applications
are running in the other queues.
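To sketch Yehia's point: fair share is a guarantee, not a cap, so an app in "longrun" keeps receiving idle capacity until other queues demand it. If a hard cap is wanted, the fair scheduler allocation file supports maxResources (all values here hypothetical):

  <?xml version="1.0"?>
  <allocations>
    <queue name="longrun">
      <weight>0.3</weight>
      <!-- hard cap: the queue cannot exceed this even on an idle cluster -->
      <maxResources>30720 mb, 30 vcores</maxResources>
    </queue>
  </allocations>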
Thanks
Yehia
On 12 August 2014 20:30, Henry Hung <ythu...@winbond.com> wrote:
Hi Everyone,
I'm using Hadoop-2.2.0 with the fair scheduler in my YARN cluster, but something is
wrong with th
unning for some time, the application runs 10 maps in parallel.
The scheduler page shows that the "longrun" queue used 66%, exceeding the fair
share of 30%.
Can anyone tell me why the application can get more than it deserves?
Is the problem with my configuration, or is there a bug?
On Wed, Jun 11, 2014 at 7:49 PM, Henry Hung <ythu...@winbond.com> wrote:
Hi All,
I'm using QJM with 2 namenodes. On the active namenode, the main page's loading-edits
panel only shows 10 records, but on the standby namenode the loading-edits panel
shows a lot more; I never counted, but I think it has > 100 records.
Is this a problem?
Here I provide some of the dat
job that needs
to be executed on a daily basis.
2. Is there a way to port the fix into Hadoop 2.2.0? Could you give me
some direction as to which Java files need to be looked at?
I already tried comparing the 2.2.0 and 2.4.0 sources, but a lot has changed and
I am kind of spinning around in place right now.
Hi All,
A strange thing happens now that I have started to use the Fair Scheduler: when
executing a large MR job (around 660 maps and 1 reduce), some of the map tasks fail
with this error:
2014-06-05 10:13:47,379 ERROR
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Un
I would suggest that you look at the fair scheduler, but also consider multiple
queues under the capacity scheduler. You may have admin jobs you want to run in
the background while other tasks are running.
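A hedged capacity-scheduler.xml sketch of that layout (queue names and percentages hypothetical):

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,admin</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>80</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.admin.capacity</name>
    <value>20</value>
  </property>

Jobs then pick a queue at submit time with -Dmapreduce.job.queuename=admin.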
On Jun 2, 2014, at 8:10 PM, Henry Hung <ythu...@winbond.com> wrote:
@Rohith
From: Henry Hung [mailto:ythu...@winbond.com]
Sent: 30
Hi All,
I have an application that consumes all of the NodeManager capacity (30 maps and 1
reducer) and will need 4 hours to finish.
Let's say I need to run another application that is quicker to finish (30
minutes) and needs only 1 map and 1 reducer.
If I just execute the new application, it will
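A hedged suggestion along the lines discussed later in the thread: submit the quick job to its own scheduler queue so it receives containers as the long job's tasks finish (queue name hypothetical, and the job must use ToolRunner for the -D option to apply):

  hadoop jar quick-job.jar QuickJob -Dmapreduce.job.queuename=fast <args>

Note that without preemption the quick job still waits for the big job's running tasks to release containers.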
Is there a way to gracefully stop a nodemanager?
I want to make some yarn-site.xml and mapred-site.xml configuration changes,
and need to restart the nodemanager.
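For what it's worth, I don't believe Hadoop 2.2 offers a graceful NodeManager decommission; the usual approach is a plain daemon restart, accepting that containers running on that node may fail and be rescheduled:

  sbin/yarn-daemon.sh stop nodemanager
  # edit yarn-site.xml / mapred-site.xml
  sbin/yarn-daemon.sh start nodemanager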
Best regards,
Henry
Hi Hadoop Users,
In hadoop 2.2.0, if I want YARN applications to be able to include external
jars, I need to specify "yarn.application.classpath" in yarn-site.xml,
including the default value, so it will look like this:
CLASSPATH for YARN applications. A comma-separated list
of CLASSPAT
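A hedged yarn-site.xml sketch; the default entries are abbreviated from what I recall of yarn-default.xml in 2.2, and /opt/external/jars is a hypothetical path:

  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/share/hadoop/common/*,
      $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,
      /opt/external/jars/*</value>
  </property>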
Hi Hadoop Users,
I'm using hadoop-2.2.0 with YARN.
Today I stumbled upon a problem with the YARN management UI: when I look into
cluster/apps, there is one app running but it is not showing in the entries.
I made sure there is an application running in the cluster with the command "yarn
application -list", could
Dear All,
Sorry, I found the root cause of this problem: it appears that I overwrote the
hadoop-hdfs-2.2.0.jar with my own custom jar but forgot to restart the journal
node process,
so the process cannot find the FSImage class, even though it actually is there
inside my custom jar.
Note to myself: make
Hi All,
I don't know why the journal node log has this weird "NoClassDefFoundError:
org/apache/hadoop/hdfs/server/namenode/FSImage" exception.
This error occurs each time I switch my namenode from standby to active:
2014-02-13 10:34:47,873 INFO
org.apache.hadoop.hdfs.server.namenode.FileJournal
Hi all,
I just want to confirm whether my understanding of the Hadoop FileSystem object is
correct or not.
From the source code of org.apache.hadoop.fs.FileSystem (either from version
1.0.4 or 2.2.0), the method
public static FileSystem get(URI uri, Configuration conf) throws IOException
is using
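If the question is about the internal cache: as far as I can tell from the source, get(uri, conf) consults a static cache keyed by scheme, authority, and the calling UGI, unless fs.<scheme>.impl.disable.cache is set to true. A minimal Java sketch (NameNode address hypothetical):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class FsCacheCheck {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      URI uri = URI.create("hdfs://namenode:8020/");  // hypothetical address
      FileSystem a = FileSystem.get(uri, conf);
      FileSystem b = FileSystem.get(uri, conf);
      // With the cache enabled (the default), both calls return the same
      // instance, so closing one also closes the other.
      System.out.println(a == b);  // expected: true
    }
  }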
ger in the mapred-default.xml intentionally).
> On Wed, Dec 4, 2013 at 12:56 PM, Henry Hung wrote:
Hello,
I have a question.
Is it correct to say that in hadoop-2.2.0, the mapred-site.xml property
"mapred.child.java.opts" is replaced by two new properties, "mapreduce.map.java.opts"
and "mapreduce.reduce.java.opts"?
Best regards,
Henry
@Vinayakumar B
Thanks, I already filed HDFS-5582.
Best regards,
Henry Hung
From: Vinayakumar B [mailto:vinayakuma...@huawei.com]
Sent: Friday, November 29, 2013 10:30 AM
To: user@hadoop.apache.org
Subject: RE: hdfs getconf -excludeFile or -includeFile always failed
Hi Henry,
Good
Hi All,
In hadoop-2.2.0, if you execute getconf for the exclude or include file, it
returns this error message:
[hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -excludeFile
Configuration DFSConfigKeys.DFS_HOSTS_EXCLUDE is missing.
[hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -includeFile
Config
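A hedged workaround until a fix lands: getconf reads these values from dfs.hosts and dfs.hosts.exclude, so defining them in hdfs-site.xml should avoid the error (file paths hypothetical):

  <property>
    <name>dfs.hosts</name>
    <value>/home/hadoop/hadoop-2.2.0/etc/hadoop/dfs.include</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/home/hadoop/hadoop-2.2.0/etc/hadoop/dfs.exclude</value>
  </property>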
Hi All,
Could someone explain to me what the use of the YARN proxyserver is?
I ask this because apparently MapReduce jobs can execute and complete
without starting the proxyserver.
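For context, my understanding from the YARN docs: the web app proxy stands between users and the ApplicationMaster web UI (which runs as untrusted user code) to reduce the attack surface; when yarn.web-proxy.address is not set, the proxy runs embedded in the ResourceManager, which is why jobs complete fine without a standalone proxyserver. To run it standalone (host/port hypothetical):

  <property>
    <name>yarn.web-proxy.address</name>
    <value>proxyhost:8089</value>
  </property>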
Best regards,
Henry Hung
Hi All,
I already upgraded hadoop 1.0.4 to hadoop 2.2.0, but when I want to check the
upgrade progress by executing bin/hdfs dfsadmin -upgradeProgress, the command
is unknown in hadoop 2.2.0.
Could someone tell me how to get the upgrade progress status in the new hadoop
2.2.0 stable?
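As far as I know, -upgradeProgress was removed in 2.x; the NameNode web UI indicates whether an upgrade is still pending, and once you are satisfied the cluster is healthy the upgrade only needs to be finalized:

  bin/hdfs dfsadmin -finalizeUpgrade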
Best regards
Hi All,
What is the property name in Hadoop 1.0.4 to change the secondary namenode location?
Currently the default on my machine is "/tmp/hadoop-hadoop/dfs/namesecondary";
I would like to change it to "/data/namesecondary".
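If I remember the 1.x defaults correctly, the property is fs.checkpoint.dir (default ${hadoop.tmp.dir}/dfs/namesecondary, which explains the /tmp location), set in core-site.xml:

  <property>
    <name>fs.checkpoint.dir</name>
    <value>/data/namesecondary</value>
  </property>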
Best regards,
Henry
Sent: Wednesday, April 17, 2013 2:18 PM
To: user
Cc: MA11 YTHung1
Subject: Re: Adding new name node location
Hi Henry,
As per your mail, point number 1 is correct.
After making these changes, the metadata will be written to the new partition.
Regards,
Varun Kumar.P
On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung
Hi Everyone,
I'm using Hadoop 1.0.4 and define only one location for the name node files, like
this:
dfs.name.dir
/home/hadoop/hadoop-data/namenode
Now I want to protect my name node files by changing the configuration to:
dfs.name.dir
/home/hadoop/hadoop-data/namenode,/backu
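For reference, a hedged sketch of the full hdfs-site.xml property with two directories; the second path is hypothetical, since the original mail is cut off here:

  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoop-data/namenode,/backup/namenode</value>
  </property>

The NameNode then writes its image and edit log to both directories, so either copy can be used for recovery; the usual procedure is to stop the NameNode, copy the existing directory contents to the new location, and start it again.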
Sorry, please ignore my question. It appears that the problem comes from the
program that uploads files into Hadoop.
I was too quick to assume that the problem lies inside Hadoop.
Sorry for being a noob at Hadoop.
Best regards,
Henry Hung.
From: MA11 YTHung1
Sent: Friday, March 29, 2013 11:21
lease on
the file, at the exact minutes xx:03, xx:19, xx:34, xx:49.
Do you know what causes this kind of behavior?
PS:
I already tried deleting another file and it works normally; the other file does
not reappear after deletion.
Best regards,
Henry Hung