There's also a ticket to separate out MR:
https://issues.apache.org/jira/plugins/servlet/mobile#issue/HBASE-11843
On Wednesday, November 5, 2014, Ted Yu wrote:
> See this JIRA:
> https://issues.apache.org/jira/browse/HBASE-11549
>
> Cheers
>
On Nov 5, 2014, at 4:58 AM, Tim Robertson wrote:
I think it may be a Thrift issue; have you tried playing with the connection
queues?
Set hbase.thrift.maxQueuedRequests to 0
From Varun Sharma:
"If you are opening persistent connections (connections that never close), you
should probably set the queue size to 0. Because those connections will
a
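For reference, that queue setting would go in hbase-site.xml on the host
running the Thrift server; the property name is from the thread above, the
snippet itself is just a sketch:

```xml
<!-- hbase-site.xml on the Thrift server host: a queue size of 0 makes the
     server reject new requests once all worker threads are busy, instead
     of queueing them behind long-lived persistent connections. -->
<property>
  <name>hbase.thrift.maxQueuedRequests</name>
  <value>0</value>
</property>
```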
I used hbase 0.94.18, hadoop 2.4.0 on AWS EMR m1.large with all default
heap size.
On Thu, Nov 6, 2014 at 10:11 PM, Ted Yu wrote:
> Can you provide a bit more information about your environment ?
>
> hbase release
> hadoop release
> hardware config
> heap size for the daemons
>
> Cheers
>
> On T
another thing to keep in mind is that each rename() on s3 is a copy
and since we tend to move files around our compaction is like:
- create the file in .tmp
- copy the file to the region/family dir
- copy the old files to the archive
..and an hfile copy is not cheap
Matteo
On Fri, Nov 7, 2014
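A minimal local simulation of why those moves hurt on S3: on HDFS or a POSIX
filesystem, rename() is a metadata-only operation, but an object store has no
rename, so a "move" degrades to a full copy plus a delete. This is not HBase
code; names and paths are illustrative only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: on S3 every "rename" rewrites the whole object, so each of the
// compaction moves above (.tmp -> family dir -> archive) costs O(file size).
public class S3RenameSketch {

    // What rename() effectively becomes on an object store.
    static void s3StyleRename(Path src, Path dst) throws IOException {
        Files.copy(src, dst);  // full copy: every byte of the HFile rewritten
        Files.delete(src);     // then the original is removed
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("compaction-sketch");
        Path tmp = dir.resolve("hfile.tmp");
        Files.write(tmp, new byte[] {1, 2, 3});

        Path fin = dir.resolve("hfile");
        s3StyleRename(tmp, fin);  // on HDFS this would be a cheap metadata op

        System.out.println("moved=" + Files.exists(fin)
                + " cleaned=" + Files.notExists(tmp));
    }
}
```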
And note this is any file, potentially, table descriptors, what have you.
S3 isn't a filesystem, we can't pretend it is one.
On Fri, Nov 7, 2014 at 10:13 AM, Andrew Purtell wrote:
> Admittedly it's been *years* since I experimented with pointing a HBase
> root at a s3 or s3n filesystem, but my
Admittedly it's been *years* since I experimented with pointing a HBase
root at a s3 or s3n filesystem, but my (dated) experience is it could take
some time for newly written objects to show up in a bucket. The write will
have completed and the file will be closed, but upon immediate open attempt
t
Thanks for the comment.
ALTER 'msg', { MEMSTORE_FLUSHSIZE => '256MB' }
Now I get ~256MB memstore flushes. Although this still hasn't
increased the write throughput..
This is the stable state request counts per RS :
http://postimg.org/image/gbb5nf6d1
Avg regions per node: ~100
The table is cons
Hi to all,
in the TableInputFormatBase there's a method that computes the splits
depending on the region start/end keys. I'd like to further split each split
so as to be able to assign work in the cluster more evenly when the regions
are not well balanced. Is that possible..? Probably not but it
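One common approach (a sketch, not a confirmed HBase API usage): subclass
TableInputFormatBase, call the parent's getSplits(), and cut each split's
[start, end) row-key range at a midpoint key. The key arithmetic itself is
self-contained, assuming row keys compare as unsigned big-endian bytes, as
HBase's do; class and method names here are illustrative.

```java
import java.math.BigInteger;
import java.util.Arrays;

// Sketch of the midpoint computation only; a real implementation would wrap
// each TableSplit with two splits covering [start, mid) and [mid, end).
public class SplitMidpoint {

    // Treat row keys as unsigned big-endian integers, right-padded with
    // zeros to a common length (a zero byte sorts first lexicographically),
    // and return their arithmetic midpoint as a candidate split key.
    static byte[] midpoint(byte[] start, byte[] end) {
        int len = Math.max(start.length, end.length);
        BigInteger a = new BigInteger(1, Arrays.copyOf(start, len));
        BigInteger b = new BigInteger(1, Arrays.copyOf(end, len));
        byte[] raw = a.add(b).shiftRight(1).toByteArray();
        byte[] out = new byte[len];  // normalize back to len bytes
        int src = Math.max(0, raw.length - len);
        int dst = len - (raw.length - src);
        System.arraycopy(raw, src, out, dst, raw.length - src);
        return out;
    }

    public static void main(String[] args) {
        // 'a' (0x61) and 'c' (0x63) -> midpoint 'b' (0x62)
        System.out.println(new String(midpoint("a".getBytes(), "c".getBytes())));
    }
}
```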
Hi,
There is no mistake in the basic configuration.
The cluster ran normally for a long time and stored a certain amount of data.
When I restart the hbase service, this kind of problem appears!
hanked...@sina.cn
From: Jean-Marc Spaggiari
Date: 2014-11-07 22:45
To: user
CC: yuzhihong
Subject: Re:
What are you hosts names and what is in your /etc/hosts file?
Can you dig, dig -x and ping all your hosts including the master?
Is the value returned by hostname mapped correctly to the IP?
JM
2014-11-07 9:37 GMT-05:00 hanked...@sina.cn :
> Hi,
>
> using hbase 0.96 and hadoop 2.3
> Master is
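The resolution checks JM asks about can be run as-is on each node, including
the master; nothing is hard-coded here, each node checks the name it reports
itself:

```shell
# Verify forward resolution of this node's own name via /etc/hosts + DNS;
# the name and IP printed should match what the other nodes see for it.
hostname
getent hosts "$(hostname)" \
  || echo "WARNING: $(hostname) does not resolve via /etc/hosts or DNS"
```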
Hi,
using hbase 0.96 and hadoop 2.3
The master shows no exception information
regionserver WARN logs:
2014-11-07 15:13:19,512 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O
error constructing remote block reader.
java.net.BindException: Cannot assign requested address
at sun.nio.
Hi,
My hadoop runs fine when I don't start the hbase service. And my network
is normal, I checked!
Now when I restart the hbase service, HDFS read timeouts occur!
need you help , Thanks!
hanked...@sina.cn
From: Jean-Marc Spaggiari
Date: 2014-11-07 20:57
To: user
Subject: Re
Please pastebin log from region server around the time it became dead.
What hbase / Hadoop version are you using ?
Anything interesting in master log ?
Thanks
On Nov 7, 2014, at 4:57 AM, Jean-Marc Spaggiari wrote:
> Hi,
>
> Have you checked that your Hadoop is running fine? Have you checked
Hi,
Have you checked that your Hadoop is running fine? Have you checked that
network between your servers is fine too?
JM
2014-11-07 5:22 GMT-05:00 hanked...@sina.cn :
> I've deployed a "2+4" cluster which has been running normally for a
> long time.
> The cluster has more than 40T of data
I've deployed a "2+4" cluster which has been running normally for a long
time.
The cluster has more than 40T of data. When I deliberately shut down the
hbase service and try to restart it, the regionserver dies.
The log of the regionserver shows that all the regions are opened. But in the