We find that disk I/O is the major bottleneck.
Device:  rrqm/s   wrqm/s    r/s     w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm   %util
sda        1.00     0.00  85.21    0.00  20926.32    0.00    245.58     31.59  364.49  11.77  100.28
sdb        5.76  4752.88  53.13  131.08    10145.
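For context: extended device statistics like the table above typically come from the sysstat iostat tool, and a %util pinned near 100 together with an await of hundreds of milliseconds (as on sda here) means the device cannot keep up. A minimal way to collect such a report, assuming sysstat is installed:

# Extended per-device stats, one report every 2 seconds, 5 reports.
# Watch %util (device saturation) and await (average I/O wait in ms).
iostat -x 2 5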
Stefan Will wrote:
Yes, I think the JVM uses way more memory than just its heap. Now some of it
might be just reserved memory that is not actually used (not sure how to tell
the difference). There are also things like thread stacks, the JIT compiler
cache, direct NIO byte buffers, etc. that take up process memory in addition to
the Java heap. But none of that should imho add up to gigabytes...
-- Stefan
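A rough way to see where a daemon's memory actually goes, sketched here assuming a Linux box with the JDK tools on the PATH (the pgrep lookup of the TaskTracker pid is illustrative):

# Compare the process's virtual and resident sizes with its Java heap.
PID=$(pgrep -f TaskTracker | head -n 1)
pmap -x "$PID" | tail -n 1   # total virtual vs resident memory of the process
jmap -heap "$PID"            # configured and currently used heap (JDK tool)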
> From: zsongbo
> Reply-To:
> Date: Tue, 12 May 2009 20:06:37 +0800
> To:
> Subject: Re: How to do load control of MapReduce
>
> Yes, I also found that the TaskTracker should not use so much memory.
Yes, I also found that the TaskTracker should not use so much memory.
  PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+  COMMAND
32480 schubert  35  10 1411m 172m  9212 S    0  2.2  8:54.78  java
The previous 1GB is the default value; I just changed the heap of the TT to
384MB one hour ago.
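For reference, one persistent place for that change is conf/hadoop-env.sh; a minimal sketch, assuming the stock Hadoop start scripts, which pass HADOOP_TASKTRACKER_OPTS to the TaskTracker JVM:

# In conf/hadoop-env.sh: cap the TaskTracker heap at 384 MB, then restart
# the daemon. (With HotSpot, this later -Xmx overrides the default one
# derived from HADOOP_HEAPSIZE.)
export HADOOP_TASKTRACKER_OPTS="-Xmx384m"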
zsongbo wrote:
Hi Stefan,
Yes, 'nice' cannot resolve this problem.
Now, in my cluster, there is 8GB of RAM. My Java heap configuration is:
HDFS DataNode : 1GB
HBase-RegionServer: 1.5GB
MR-TaskTracker: 1GB
MR-child: 512MB (max child tasks is 6: 4 map tasks + 2 reduce tasks)
But the memory usage [...]
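For reference, those heaps alone commit 1 + 1.5 + 1 + 6 × 0.5 = 6.5 GB of the 8 GB, before counting per-JVM overhead (thread stacks, JIT cache, direct buffers) or the OS page cache, so the node is left with well under 1.5 GB of real headroom.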
> [...] your available RAM at all times. I'm actually having a hard time achieving
> this since the virtual memory usage of the JVM is usually way higher than
> the maximum heap size (see my other thread).
>
> -- Stefan
>
>
> > From: zsongbo
> > Reply-To:
> > Date: Tue, 12 May 2009 10:58:49 +0800
[...] your available RAM at all times. I'm actually having a hard time achieving
this since the virtual memory usage of the JVM is usually way higher than
the maximum heap size (see my other thread).
-- Stefan
> From: zsongbo
> Reply-To:
> Date: Tue, 12 May 2009 10:58:49 +0800
> To:
> Subject: Re: How to do load control of MapReduce
>
> Thanks Billy, I am trying 'nice', and will report the result later.
>
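A quick way to check how close a node is to that limit is to sum the resident set sizes of the local JVMs; a minimal sketch, assuming Linux procps tools:

# Total resident memory of all Java processes on this node, in kB.
ps -C java -o rss= | awk '{sum += $1} END {print sum " kB resident in JVMs"}'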
Thanks Billy, I am trying 'nice', and will report the result later.
On Tue, May 12, 2009 at 3:42 AM, Billy Pearson wrote:
> Might try setting the tasktracker's Linux nice level to say 5 or 10,
> leaving the dfs and hbase settings at 0
>
> Billy
> "zsongbo" wrote in message
news:fa03480d0905110549j7f09be13qd434ca41c9f84...@mail.gmail.com...
Might try setting the tasktracker's Linux nice level to say 5 or 10, leaving
the dfs and hbase settings at 0
Billy
"zsongbo" wrote in message
news:fa03480d0905110549j7f09be13qd434ca41c9f84...@mail.gmail.com...
Hi all,
Now, if we have a large dataset to process with MapReduce, MapReduce will
take as many machine resources as possible.
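For concreteness, two ways to apply Billy's suggestion, sketched under the assumption of the stock Hadoop daemon scripts (the pgrep lookup of the TaskTracker pid is illustrative):

# Renice a running TaskTracker to 10, leaving DataNode/HBase at 0.
renice 10 -p "$(pgrep -f TaskTracker | head -n 1)"
# Or set it before startup: hadoop-daemon.sh launches daemons under
# nice when HADOOP_NICENESS is exported in conf/hadoop-env.sh.
export HADOOP_NICENESS=10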
Hi all,
Now, if we have a large dataset to process with MapReduce, MapReduce will
take as many machine resources as possible.
So when one such big MapReduce job is running, the cluster becomes very busy
and almost cannot do anything else.
For example, we have an HDFS+MapReduce+HBase cluster [...]
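One era-appropriate load-control knob (hedged: these property names are from Hadoop 0.19/0.20 and may differ in other versions) is to cap the number of concurrent tasks each TaskTracker runs; lowering the caps below the thread's 4-map/2-reduce split throttles how much of the node MapReduce can take. This sketch just prints the XML to paste inside the <configuration> element of conf/hadoop-site.xml on each node; restart the TaskTrackers afterwards:

# Print example per-node task caps (half the thread's 4 map + 2 reduce)
# for conf/hadoop-site.xml, inside the <configuration> element.
cat <<'EOF'
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>
EOF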