Have you taken a jstack of the slow scans?
If so, can you pastebin the stack trace?
1.0.0 is quite old.
Any chance of upgrading to the 1.2 release?
Cheers
> On Oct 10, 2016, at 2:04 AM, 陆巍 wrote:
>
> Hi All,
>
> I met with a problem where the scan performance decreases
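The jstack suggestion above can be sketched as below. The `pid_of` helper and the three-samples-ten-seconds-apart cadence are illustrative assumptions, not something prescribed in this thread; `HRegionServer` is the process name a region server shows in `jps` output.

```shell
# Hypothetical helper: extract the pid of a named JVM from `jps` output.
pid_of() { awk -v name="$1" '$2 == name {print $1}'; }

# On a live node (commented out here), take a few samples while a slow
# scan is in flight, so the hot stacks can be compared across dumps:
#   RS_PID=$(jps | pid_of HRegionServer)
#   for i in 1 2 3; do jstack "$RS_PID" > "rs-stack-$i.txt"; sleep 10; done

# Demo of the helper against canned jps output:
printf '12345 HRegionServer\n6789 Jps\n' | pid_of HRegionServer   # prints 12345
```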
Hi All,
I met with a problem where scan performance decreases over time.
HBase connections are kept in a data access service (in Tomcat), and there are
table scan operations. The average cost of each scan batch (~10 parallel
scans) increases over time, as below:
day    avg. cost (ms)
1      56.213115
There's an HDFS bandwidth setting which is set to 10MB/s.
Way too low for even 1 GbE.
Have you modified this setting yet?
-Mike
On Nov 3, 2012, at 2:50 PM, David Koch ogd...@googlemail.com wrote:
Hello Ted,
We never initiate major compaction manually. I have not looked at I/O
balance
Where is this setting located?
Sent from my iPhone
On 5 Nov 2012, at 15:05, Michael Segel michael_se...@hotmail.com wrote:
There's an HDFS bandwidth setting which is set to 10MB/s.
Way too low for even 1 GbE.
Have you modified this setting yet?
-Mike
On Nov 3, 2012, at 2:50 PM, David
hdfs-site.xml
It's an HDFS setting that may impact the balancing of HBase as well.
(I'm sure someone can give a better response by looking at the code. )
On Nov 5, 2012, at 12:14 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
Where is this setting located?
Sent from my iPhone
On 5 Nov
There is a property, dfs.balance.bandwidthPerSec, in hdfs-site.xml:

<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>625</value>
  <description>
    Specifies the maximum amount of bandwidth that each datanode
    can utilize for the balancing purpose in term of
    the
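For anyone following along: the value of dfs.balance.bandwidthPerSec is in bytes per second, and it can also be changed at runtime via the HDFS admin CLI without editing hdfs-site.xml. A minimal sketch, assuming a ~100 MB/s cap purely as an illustration (this thread does not recommend a specific figure):

```shell
# dfs.balance.bandwidthPerSec is expressed in bytes/sec;
# compute a ~100 MB/s cap:
BYTES=$((100 * 1024 * 1024))
echo "$BYTES"   # prints 104857600

# On a live cluster (commented out here), apply it at runtime:
#   hdfs dfsadmin -setBalancerBandwidth "$BYTES"
```

The runtime value is not persisted; a datanode restart falls back to whatever hdfs-site.xml specifies.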
Hello,
Every now and then we need to flatten our cluster and re-import all data
from log files (changes in data format, etc.). Afterwards we notice a
significant increase in scan performance. As data is added and shuffled
around between region servers, performance goes down again over time (say a
Can you tell us how often you run major compaction after the import?
Have you noticed imbalanced read / write requests in the cluster? Meaning
a subset of region servers receives the bulk of the writes.
We do some manual movement of regions when the above happens.
Cheers
On Sat, Nov 3, 2012 at 8:12
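A sketch of what "run major compaction after the import" can look like from the HBase shell; the table name below is hypothetical:

```shell
TABLE=my_table   # hypothetical table name
CMD="major_compact '$TABLE'"
echo "$CMD"      # prints: major_compact 'my_table'

# On a live cluster (commented out here), feed it to the HBase shell:
#   echo "$CMD" | hbase shell
```

Major compaction rewrites each region's store files into one file per store, which restores data locality and drops deleted cells, so scheduling it after a bulk import is a common practice.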
Hello Ted,
We never initiate major compaction manually. I have not looked at I/O
balance between nodes in detail. We have noticed that after running for a
couple of weeks HBase seems to spend hours pushing blocks between nodes in
order to optimize things. We add data daily in one ~30gb push to
Have you looked at http://hbase.apache.org/book.html#performance ?
Thanks
On Sat, Nov 3, 2012 at 12:50 PM, David Koch ogd...@googlemail.com wrote:
Hello Ted,
We never initiate major compaction manually. I have not looked at I/O
balance between nodes in detail. We have noticed that after