Hi.
> Again, experimentation is needed. All we can say is that the data structures
> get smaller, but not as small as on 32-bit systems, as every instance is
> aligned to 8-byte/64-bit boundaries.
>
Ok, thanks - will check and see if/how it reduces the used memory.
Regards.
Stas Oskin wrote:
Hi.
It would be nice if Java 6 had a way of switching compressed pointers on by
default - the way JRockit 64-bit did. Right now you have to edit the startup
shell script of every program, Hadoop included. Maybe when JDK 7 ships
it will do this by default.
Hi.
>
> It would be nice if Java 6 had a way of switching compressed pointers on by
> default - the way JRockit 64-bit did. Right now you have to edit the startup
> shell script of every program, Hadoop included. Maybe when JDK 7 ships
> it will do this by default.
>
Does it give any memory benefits?
Allen Wittenauer wrote:
On 9/2/09 3:49 AM, "Stas Oskin" wrote:
It's a Sun JVM setting, not something Hadoop will control. You'd have to
turn it on in hadoop-env.sh.
The question is whether Hadoop will include this as standard, if it indeed has
such benefits.
We can't do this as then if you try a
On 9/2/09 3:49 AM, "Stas Oskin" wrote:
>> It's a Sun JVM setting, not something Hadoop will control. You'd have to
>> turn it on in hadoop-env.sh.
>>
>>
> The question is whether Hadoop will include this as standard, if it indeed has
> such benefits.
Hadoop doesn't have a -standard- here, it has a -default-.
Hi.
>>
> It's a Sun JVM setting, not something Hadoop will control. You'd have to
> turn it on in hadoop-env.sh.
>
>
The question is whether Hadoop will include this as standard, if it indeed has
such benefits.
Regards.
Hi.
>>
> Resident, shared, or virtual? Unix memory management is not
> straightforward; the worst thing you can do is look at the virtual memory
> size of the java process and assume that's how much RAM it is using.
>
>
I'm using a tool called ps_mem.py to measure total memory taken. It usually
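(For anyone following along: that distinction is the whole game when reading the
numbers. A minimal sketch of checking resident vs. virtual size of the DataNode
JVM - finding the pid via jps is an assumption about how your daemons run:

    DN_PID=$(jps | awk '/DataNode/ {print $1}')
    # rss = resident set size (KB), vsz = virtual size (KB)
    ps -o pid,rss,vsz,cmd -p "$DN_PID"

rss is the column to compare against a figure like 700 MB; vsz is usually far
larger and, as noted above, says little about actual RAM use.)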
For info on newer JDK support for compressed oops, see http://java.sun.com/javase/6/webnotes/6u14.html
and http://wikis.sun.com/display/HotSpotInternals/CompressedOops
-Bryan
On Sep 1, 2009, at 12:21 PM, Brian Bockelman wrote:
On Sep 1, 2009, at 1:58 PM, Stas Oskin wrote:
Hi.
With regards to memory, have you tried the compressed pointers JDK option
(we saw great benefits on the NN)? Java is incredibly hard to get a
straight answer from with regards to memory. You need to perform a GC
first manually - the actual usage is the amount it reports used post-GC.
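(In case it saves someone a search: compressed pointers means the
-XX:+UseCompressedOops flag, available in the Sun JDK from 6u14 on (see the
links Bryan posted above). A minimal hadoop-env.sh sketch, assuming the stock
0.20-style variable names; adjust to whichever daemons you care about:

    # hadoop-env.sh - enable compressed 64-bit object pointers per daemon
    export HADOOP_NAMENODE_OPTS="-XX:+UseCompressedOops $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-XX:+UseCompressedOops $HADOOP_DATANODE_OPTS"

The feature was new in 6u14, so treat this as something to test rather than a
guaranteed win.)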
On Sep 1, 2009, at 2:02 PM, Stas Oskin wrote:
Hi.
What does 'up to 700MB' mean? Is it the JVM's virtual memory? Resident
memory? Or the Java heap in use?
700 MB is what is taken by the overall Java process.
Resident, shared, or virtual? Unix memory management is not
straightforward; the worst thing you can do is look at the virtual memory
size of the Java process and assume that's how much RAM it is using.
Hi.
> What does 'up to 700MB' mean? Is it the JVM's virtual memory? Resident memory?
> Or the Java heap in use?
>
700 MB is what is taken by the overall Java process.
>
> How many blocks do you have? For an idle DN, most of the memory is taken by
> block info structures. It does not really optimize for it.. May
Hi.
> [https://issues.apache.org/jira/browse/HADOOP-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]
>
>
Does it have any effect on the issue I have?
It seems from the description that the issues are related to various node
tasks, and not to one in particular.
Regards.
Hi.
> With regards to memory, have you tried the compressed pointers JDK option
> (we saw great benefits on the NN)? Java is incredibly hard to get a
> straight answer from with regards to memory. You need to perform a GC first
> manually - the actual usage is the amount it reports used post-GC.
Hi.
> The datanode would be using the major part of its memory to do the following:
> a. Continuously (at a regular interval) send heartbeat messages to the
> namenode to say 'I am live and awake'.
> b. In case any data/file is added to DFS, or MapReduce jobs are running, the
> datanode would again be talking to the namenode
I think this thread is moving in all possible directions... without
many details on the original problem.
There is no need to speculate on where the memory goes: you can run 'jmap
-histo:live' and 'jmap -heap' to get a much better idea.
What does 'up to 700MB' mean? Is it the JVM's virtual memory?
> take up to 700 MB of RAM.
>
> As their main job is to store files to disk, any idea why they take so much
> RAM?
>
> Thanks for any information.
>
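(To make the jmap suggestion above concrete - a minimal sketch, assuming the
DataNode pid is found via jps and that you run it as the same user as the daemon:

    DN_PID=$(jps | awk '/DataNode/ {print $1}')
    jmap -heap "$DN_PID"          # heap configuration and current usage
    jmap -histo:live "$DN_PID"    # per-class instance counts; forces a full GC first

The post-GC numbers from -histo:live are the ones worth comparing against that
700 MB figure.)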
Hey Mafish,
If you are getting 1-2m blocks on a single datanode, you'll have many
other problems - especially with regards to periodic block reports.
With regards to memory, have you tried the compressed pointers JDK
option (we saw great benefits on the NN)? Java is incredibly hard to
get a straight answer from with regards to memory.
2009/9/1 Mafish Liu :
> Both NameNode and DataNode will be affected greatly by the number of files.
> In my test, almost 60% of memory is used in the datanodes while storing 1m
> files, and the value reaches 80% with 2m files.
> My test bed is 5 nodes, 1 namenode and 4 datanodes. All nodes
>
Both NameNode and DataNode will be affected greatly by the number of files.
In my test, almost 60% of memory is used in the datanodes while storing 1m
files, and the value reaches 80% with 2m files.
My test bed is 5 nodes, 1 namenode and 4 datanodes. All nodes
have 2GB memory and replication is 3.
2009/
Hi.
2009/9/1 Amogh Vasekar
> This won't change the daemon configs.
> Hadoop by default allocates 1000MB of memory for each of its daemons, which
> can be controlled by HADOOP_HEAPSIZE, HADOOP_NAMENODE_OPTS,
> HADOOP_TASKTRACKER_OPTS in the hadoop script.
> However, there was a discussion on this
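(For reference, the knobs Amogh mentions live in conf/hadoop-env.sh. A minimal
sketch, assuming the stock 0.20-style script - HADOOP_HEAPSIZE is in MB and sets
the -Xmx for every daemon started by bin/hadoop:

    # hadoop-env.sh
    export HADOOP_HEAPSIZE=512
    # per-daemon extras also exist, e.g.:
    # export HADOOP_DATANODE_OPTS="-verbose:gc $HADOOP_DATANODE_OPTS"

Whether an -Xmx placed in the per-daemon *_OPTS wins over HADOOP_HEAPSIZE depends
on the argument order in your version's bin/hadoop script - which is the
"overridden by default 1000MB" issue Amogh alludes to below.)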
Hi.
2009/9/1 Mafish Liu
> Did you have many small files in your system?
>
>
Yes, quite plenty.
But this should influence the Namenode, and not the Datanode, correct?
Regards.
would be overridden by default 1000MB, not sure if the patch is available.
Cheers!
Amogh
Hi
Did you have many small files in your system?
2009/9/1 Stas Oskin :
> Hi.
>
>
>>
>> mapred.child.java.opts
>>
>> -Xmx512M
>>
>>
>>
>>
> Does this have any effect even if I'm not using any reduce tasks?
>
> Regards.
>
--
maf...@gmail.com
Hi.
>
> mapred.child.java.opts
>
> -Xmx512M
>
>
>
>
Does this have any effect even if I'm not using any reduce tasks?
Regards.
The maximum and minimum amount of memory to be used by the task
trackers can be specified inside the configuration files under conf.
For instance, in order to allocate a maximum of 512 MB, you need to
set:
mapred.child.java.opts
-Xmx512M
Hope that helps.
Jim
On Mon, Aug 31, 2009
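(A side note on Jim's suggestion: mapred.child.java.opts only limits the heap of
the map/reduce task JVMs, not the DataNode daemon itself. If you don't want to
touch the config files, the same limit can usually be passed per job on the
command line, assuming the job's main class goes through ToolRunner /
GenericOptionsParser; the jar, class, and path names here are made up:

    hadoop jar my-job.jar com.example.MyJob \
        -Dmapred.child.java.opts=-Xmx512M \
        /input/path /output/path

Either way, it will not change what the Datanode process itself consumes.)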
Hi.
> I think what you see is a reduce task, because in a reduce task, you have three
> steps:
> copy, sort, and reduce. The copy and sort steps may cost a lot of
> memory.
>
>
>
Nope, I'm just running the Datanode and copying files to HDFS - no reduce
tasks are running.
How typically large is a stan
I think what you see is a reduce task, because in a reduce task, you have three
steps:
copy, sort, and reduce. The copy and sort steps may cost a lot of memory.
2009/8/31 Stas Oskin
> Hi.
>
>
> > What does 700MB represent? The total memory usage of the OS, or only the task
> > process?
> >
> >
> Th
Hi.
> What does 700MB represent? The total memory usage of the OS, or only the task
> process?
>
>
The Datanode task process - I'm running just it, to find out how much RAM it
actually takes.
Regards.
your
configuration)
Hi.
I measured the Datanodes' memory usage, and noticed they take up to 700 MB of
RAM.
As their main job is to store files to disk, any idea why they take so much
RAM?
Thanks for any information.