On Thu, Jan 13, 2011 at 12:25 AM, Friso van Vollenhoven <
fvanvollenho...@xebia.com> wrote:
> Hey Todd,
>
> I saw the patch. On what JVM (versions) have you tested this?
>
I tested on Sun JVM 1.6u22, but the undocumented calls I used have
definitely been around for a long time, so it ought to work.
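For anyone wondering what those undocumented calls look like, here is a minimal sketch, assuming the sun.misc.Cleaner approach available on Sun JDKs of that era. This is an illustration, not the code from the patch, and the class name DirectBufferFree is mine:

    import java.nio.ByteBuffer;

    // Sketch only: frees a direct buffer's native memory eagerly on a Sun
    // JDK 6 JVM instead of waiting for finalization. sun.nio.ch.DirectBuffer
    // and sun.misc.Cleaner are unsupported internals, which is why the JVM
    // version matters.
    public final class DirectBufferFree {
        static void free(ByteBuffer buf) {
            if (buf == null || !buf.isDirect()) {
                return;
            }
            sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buf).cleaner();
            if (cleaner != null) {
                cleaner.clean(); // releases the native allocation immediately
            }
        }
    }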
Hey Todd,
I saw the patch. On what JVM (versions) have you tested this?
(Probably the wrong list for this, but: is there an officially supported JVM
version for CDH3?)
Thanks,
Friso
On 13 jan 2011, at 07:42, Todd Lipcon wrote:
> On Wed, Jan 12, 2011 at 5:01 PM, Tatsuya Kawano wrote:
>
Inline...
> Hi Friso and everyone,
>
> OK. We don't have to spend time to juggle hadoop-core jars anymore since Todd
> is working hard on enhancing hadoop-lzo behavior.
>
> I think your assumption is correct, but what I was trying to say was HBase
> doesn't change the way to use Hadoop compressors since HBase 0.20 r…
Hey Todd,
Hopefully I can get to this somewhere next week. We have had our NN corrupted,
so we are rebuilding the prod cluster, which means dev is backing our apps for
now, so I have no environment to give it a go. Stay tuned...
>> Yea, you're definitely on the right track. Have you considered systems
>> programming, Friso? :)
On Wed, Jan 12, 2011 at 5:01 PM, Tatsuya Kawano wrote:
> > And
> > in some circumstances (like all the rigged tests I've attempted to do)
> these
> > get cleaned up nicely by the JVM. It seems only in pretty large heaps in
> > real workloads does the leak actually end up running away.
>
This issue…
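As an aside, the small-heap/large-heap dynamic Todd describes above can be seen with a toy program. A minimal sketch (mine, not from the thread; DirectBufferPileUp is a made-up name):

    import java.nio.ByteBuffer;

    // A direct buffer's native memory is reclaimed only when the GC collects
    // the owning ByteBuffer object. In a small-heap test, frequent collections
    // hide the leak; on a large, mostly idle heap they do not.
    public class DirectBufferPileUp {
        public static void main(String[] args) {
            for (int i = 0; i < 100000; i++) {
                // Each iteration strands one 64KB native allocation until a GC runs.
                ByteBuffer.allocateDirect(64 * 1024);
                // Uncomment to mimic the "rigged test" case, where GC activity
                // cleans the buffers up before they accumulate:
                // if (i % 1000 == 0) System.gc();
            }
        }
    }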
Hi Todd,
> Yep - but that jar isn't wire-compatible with a CDH3b3 cluster. So if you
> have a CDH3b3 cluster for one of the other features included, you need to
> use a 3b3 client jar as well, …
Yeah, I saw the number "+737" after the version number. Thanks for clarifying
it. (and sorry for the …
On Wed, Jan 12, 2011 at 3:25 PM, Tatsuya Kawano wrote:
> Hi Friso and everyone,
>
> OK. We don't have to spend time to juggle hadoop-core jars anymore since
> Todd is working hard on enhancing hadoop-lzo behavior.
>
> I think your assumption is correct, but what I was trying to say was HBase
> doesn't change the way to use Hadoop compressors since HBase 0.20 r…
Hi Friso and everyone,
OK. We don't have to spend time to juggle hadoop-core jars anymore since Todd
is working hard on enhancing hadoop-lzo behavior.
I think your assumption is correct, but what I was trying to say was HBase
doesn't change the way to use Hadoop compressors since HBase 0.20 r…
Can someone who is having this issue try checking out the following git
branch and rebuilding LZO?
https://github.com/toddlipcon/hadoop-lzo/tree/realloc
This definitely stems one leak: a 64KB direct buffer on every reinit.
-Todd
On Wed, Jan 12, 2011 at 2:12 PM, Todd Lipcon wrote:
> Yea, you're definitely on the right track. Have you considered systems
> programming, Friso? :)
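For context, the shape of the fix is roughly the following. This is a hedged sketch of the reuse-instead-of-reallocate idea with names of my own invention, not the actual hadoop-lzo diff; see Todd's branch above for the real change:

    import java.nio.ByteBuffer;

    // Reuse the existing direct buffer across reinit() calls, and free the
    // old one explicitly when a reallocation is genuinely needed.
    final class ReusableDirectBuffer {
        private ByteBuffer directBuf;

        ByteBuffer realloc(int capacity) {
            if (directBuf != null && directBuf.capacity() >= capacity) {
                directBuf.clear(); // reuse: no new native allocation
                return directBuf;
            }
            free(directBuf); // eagerly release the old native block
            directBuf = ByteBuffer.allocateDirect(capacity);
            return directBuf;
        }

        private static void free(ByteBuffer buf) {
            if (buf != null && buf.isDirect()) {
                sun.misc.Cleaner c = ((sun.nio.ch.DirectBuffer) buf).cleaner();
                if (c != null) {
                    c.clean();
                }
            }
        }
    }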
Yea, you're definitely on the right track. Have you considered systems
programming, Friso? :)
Hopefully should have a candidate patch to LZO later today.
-Todd
On Wed, Jan 12, 2011 at 1:20 PM, Friso van Vollenhoven <
fvanvollenho...@xebia.com> wrote:
> Hi,
> My guess is indeed that it has to do with using the reinit() method on
> compressors …
Hi,
My guess is indeed that it has to do with using the reinit() method on
compressors and making them long-lived instead of throwaway, combined with the
LZO implementation of reinit(), which magically causes NIO buffer objects not
to be finalized and, as a result, not to release their native allocations.
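To make that diagnosis concrete, the pattern being described looks roughly like this. A hedged reconstruction of mine, not the real LzoCompressor source:

    import java.nio.ByteBuffer;

    // A pooled, long-lived compressor whose reinit() allocates a fresh
    // direct buffer every time it is handed out again.
    final class LeakyCompressor {
        private ByteBuffer workBuf = ByteBuffer.allocateDirect(64 * 1024);

        void reinit() {
            // Bug pattern: the previous 64KB native allocation is dropped on
            // the floor. Its memory comes back only when the GC finalizes the
            // old ByteBuffer, which on a large, mostly idle heap can take a
            // very long time.
            workBuf = ByteBuffer.allocateDirect(64 * 1024);
        }
    }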
Hey all,
I will be looking into this today :)
-Todd
On Wed, Jan 12, 2011 at 11:08 AM, Stack wrote:
> 2011/1/12 Friso van Vollenhoven :
> > No, I haven't. But the Hadoop (mapreduce) LZO compression is not the
> problem. Compressing the map output using LZO works just fine. The problem
> is HBase LZO compression. The region server process is the one with the
> memory leak...
2011/1/12 Friso van Vollenhoven :
> No, I haven't. But the Hadoop (mapreduce) LZO compression is not the problem.
> Compressing the map output using LZO works just fine. The problem is HBase
> LZO compression. The region server process is the one with the memory leak...
>
(Sorry for dumb question …
No, I haven't. But the Hadoop (mapreduce) LZO compression is not the problem.
Compressing the map output using LZO works just fine. The problem is HBase LZO
compression. The region server process is the one with the memory leak...
Friso
On 12 jan 2011, at 12:44, Tatsuya Kawano wrote:
Hi,
Have you tried the ASF version of hadoop-core? (The one distributed with HBase
0.90RC.)
It doesn't call reinit() so I'm hoping it will just work fine with the latest
hadoop-lzo and other compressors.
Thanks,
--
Tatsuya Kawano (Mr.)
Tokyo, Japan
On Jan 12, 2011, at 7:51 PM, Friso van Vollenhoven wrote: …
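A hedged note on why that should help (my inference, not a statement about the exact hadoop-core source): if reinit() is never called, a pooled compressor just keeps the one direct buffer it allocated at construction, so reuse allocates nothing new. Sketch with made-up names:

    import java.nio.ByteBuffer;

    // Contrast with the leaky reinit() pattern: one direct buffer for the
    // compressor's whole lifetime.
    final class PooledCompressor {
        private final ByteBuffer workBuf = ByteBuffer.allocateDirect(64 * 1024);

        void reset() {
            workBuf.clear(); // resets position/limit only; no native allocation
        }
    }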
Once I have a moment to play with our dev cluster, I will give this another go.
Thanks,
Friso
On 12 jan 2011, at 12:05, Andrey Stepachev wrote:
> No, I use only the malloc env var, and I set it (as suggested before) in
> hbase-env.sh, and it looks like it eats less memory (in my case 4.7G vs.
No, I use only the malloc env var, and I set it (as suggested before) in
hbase-env.sh, and it looks like it eats less memory (in my case 4.7G vs. 3.3G
with a 2G heap).
2011/1/12 Friso van Vollenhoven
> Thanks.
>
> I went back to HBase 0.89 with 0.1 LZO, which works fine and does not show
> this issue.
Thanks.
I went back to HBase 0.89 with 0.1 LZO, which works fine and does not show this
issue.
I tried with a newer HBase and LZO version, also with the MALLOC... setting but
without max direct memory set, so I was wondering whether you need a
combination of the two to fix things (apparently n…
with MALLOC_ARENA_MAX=2
I checked -XX:MaxDirectMemorySize=256m before, but it doesn't affect anything
(not even OOM exceptions or the like).
But it looks like I have exactly the same issue. I have many 64MB anon memory
blocks (sometimes they are 132MB), and on heavy load I have rapidly growing …
Just to clarify: you fixed it by setting MALLOC_ARENA_MAX=? in hbase-env.sh?
Did you also use -XX:MaxDirectMemorySize=256m?
It would be nice to check that this is a different issue than the leakage with
LZO...
Thanks,
Friso
On 12 jan 2011, at 07:46, Andrey Stepachev wrote:
> My bad. All things work. Thanks to Todd Lipcon :)
My bad. All things work. Thanks to Todd Lipcon :)
2011/1/11 Andrey Stepachev
> I tried to set MALLOC_ARENA_MAX=2, but still the same issue as in the LZO
> problem thread. All those 65M blocks are there, and the JVM continues to eat
> memory on heavy write load. And yes, I use the "improved" kernel:
> Linux 2.6.34.7-0.5.
I tried to set MALLOC_ARENA_MAX=2, but still the same issue as in the LZO
problem thread. All those 65M blocks are there, and the JVM continues to eat
memory on heavy write load. And yes, I use the "improved" kernel:
Linux 2.6.34.7-0.5.
2011/1/11 Xavier Stevens
> Are you using a newer Linux kernel with the new and "improved" memory
> allocator?
Are you using a newer Linux kernel with the new and "improved" memory
allocator?
If so, try setting this in hadoop-env.sh:
export MALLOC_ARENA_MAX=4
Maybe start by setting it to 4. You can thank Todd Lipcon if this works
for you.
Cheers,
-Xavier
On 1/11/11 7:24 AM, Andrey Stepachev wrote:
> No, I don't use LZO. …
No, I don't use LZO. I even tried removing any native support (i.e. all .so
files from the class path) and using Java gzip. But nothing changed.
2011/1/11 Friso van Vollenhoven
> Are you using LZO by any chance? If so, which version?
>
> Friso
>
>
> On 11 jan 2011, at 15:57, Andrey Stepachev wrote:
>
> > After starting HBase in JRockit I found the same memory leakage.
Are you using LZO by any chance? If so, which version?
Friso
On 11 jan 2011, at 15:57, Andrey Stepachev wrote:
> After starting HBase in JRockit I found the same memory leakage.
>
> After the launch:
>
> Every 2.0s: date && ps --sort=-rss -eopid,rss,vsz,pcpu | head
> Tue Jan 11 16:49:31 2011
After starting HBase in JRockit I found the same memory leakage.
After the launch:
Every 2.0s: date && ps --sort=-rss -eopid,rss,vsz,pcpu | head
Tue Jan 11 16:49:31 2011
Tue Jan 11 16:49:31 MSK 2011
PID RSS VSZ %CPU
7863 2547760 5576744 78.7
JR dumps:
Total mapped 5576740KB (reserve…
No, I'm not using LZO on this host. Only Cloudera Hadoop 0.20.2+320 + HBase
0.89.20100830.
Digging through Google gives only hints that the JIT or something in the JVM
can eat memory, but nothing concrete.
pmap shows that some memory blocks grow in size... but what they are, I can't
imagine.
4010a000…
Hi Andrey,
Any chance you're using hadoop-lzo with CDH3b3? There was a leak in earlier
versions of hadoop-lzo that showed up under CDH3b3. You should upgrade to
the newest.
If that's not it, let me know, will keep thinking.
-Todd
On Thu, Dec 30, 2010 at 12:13 AM, Andrey Stepachev wrote:
> Additional information: …
Additional information:
ps shows that my HBase process eats up to 4GB of RSS.
$ ps --sort=-rss -eopid,rss | head | grep HMaster
PID RSS
23476 3824892
2010/12/30 Andrey Stepachev
> Hi All.
>
> After heavy load into HBase (single node, non-distributed test system) I got
> a 4GB process size for my HBase Java process.
Hi All.
After heavy load into HBase (single node, non-distributed test system) I got a
4GB process size for my HBase Java process.
On a 6GB machine there was no room for anything else (disk cache and so on).
Does anybody know what is going on, and how did you solve this? What heap
memory is set on your hosts?