Would you please provide the log of the region server serving the 'busy' region:
regionName=test-table,doc-id-843162,1393341942533.8477b42b33d2fe9abb2b25a4e5e94b24.?
HBASE-10499 is an issue where a problematic region continuously throws RegionTooBusyException due to the fact that it's
I have a task that generates a lot of requests. I had the same issue with 0.94. I managed to solve it by letting my requests wait longer when the traffic is high and they have to wait (extending the wait time). I put the following properties in hbase-site.xml:
property
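The property list above is cut off in the archive; as a sketch only, the client-wait settings usually tuned for this symptom look like the following (the specific values here are illustrative assumptions, not taken from the original message):

```
<!-- Sketch only: typical client-wait tuning; values are illustrative. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value> <!-- RPC timeout in ms; default is 60000 -->
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>240000</value> <!-- overall per-operation timeout in ms -->
</property>
```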
Hi
We have made modifications to hbase-env.sh to enable JMX metrics for our HBase setup.
But changes made to hbase-env.sh are not reflected after restarting the Master and Region Servers.
We added a mkdir /home/XYZ command at the end of the file to test this.
HBase version: hbase-0.94.6-cdh4.5.0
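For reference, enabling JMX in hbase-env.sh typically looks like the lines below; this is a sketch of the stock template, and the port numbers are illustrative assumptions, not the poster's exact change:

```
# Sketch: typical JMX settings in hbase-env.sh (ports are illustrative)
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE \
  -Dcom.sun.management.jmxremote.port=10101"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE \
  -Dcom.sun.management.jmxremote.port=10102"
```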
This is what I get from HBase 0.94 running the same task that leads to org.apache.hadoop.hbase.RegionTooBusyException in HBase 0.96.1.1-hadoop2.
Sometimes I get the feeling that I might not be using HBase's full capacity, having some features unconfigured.
What could solve this issue?
WARN
0.94 doesn't throw RegionTooBusyException when the memstore exceeds blockingMemstore... it waits in the region server; that's why you get a TimeoutException on the client side. Nicolas said this in the mail above.
Maybe you can try some of the actions suggested in the mails above, such as splitting out more regions to
BTW: any chance you could provide the log of the region server that serves the problematic region for which RegionTooBusyException is thrown?
(hbase-0.96.1.1), thanks
From: shapoor [esmaili_...@yahoo.com]
Sent: 2014-02-26 18:30
To: user@hbase.apache.org
Subject: Re:
Hi Vikram,
I don't think CDH 4.5.0 uses this script to start HBase. You might want to
ask on the CDH mailing list to get a confirmation.
JM
2014-02-26 5:02 GMT-05:00 Vikram Singh Chandel vikramsinghchan...@gmail.com:
Hi
We have made modifications to hbase-env.sh to enable JMX metrics
HBASE-8755 introduced a new write thread model.
It is integrated into the recently released 0.98.0.
You can consider giving 0.98.0 a spin.
FYI
On Feb 26, 2014, at 1:03 AM, shapoor esmaili_...@yahoo.com wrote:
I have a task that generates a lot of requests. I had the same issue with 0.94. I
Thanks for all that detail. Re: updating documentation, it looks like there
is a ticket for that: https://issues.apache.org/jira/browse/HBASE-6192
My specific use case is to support secure multi-tenancy. It looks like
namespaces is the way to go, and security for them was added in
I was looking at HBASE-9206 : the last comment was 5 months ago.
On Wed, Feb 26, 2014 at 8:57 AM, Alex Nastetsky anastet...@spryinc.com wrote:
Thanks for all that detail. Re: updating documentation, it looks like there
is a ticket for that: https://issues.apache.org/jira/browse/HBASE-6192
My
Thanks Ted. See below for the entire exception message from region server log
file. BTW, I use hadoop 2.3.0 version
2014-02-25 15:42:42,620 ERROR [main] regionserver.HRegionServerCommandLine:
Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class
Can you check the version of hadoop-common jar in your classpath to see if
there is conflict ?
On Wed, Feb 26, 2014 at 9:06 AM, S. Zhou myx...@yahoo.com wrote:
Thanks Ted. See below for the entire exception message from region server
log file. BTW, I use hadoop 2.3.0 version
2014-02-25
Does that indicate to you an abandoned ticket?
I think that HBASE-8409 alone would satisfy my needs because it prevents other tenants from dropping and altering my tables (the W permission). I can live with users dropping and altering tables of other users in the same namespace.
Do you have
I haven't set the number of regions myself at the beginning. In 0.94, with a region size of 10 GB, I start with one region, and after around 250 GB of writes I see 60 regions running; somewhere around this point the timeout exception starts flying around.
java.util.concurrent.ExecutionException:
hadoop 2.3.0 uses hadoop-common-2.3.0
hbase 0.98 uses hadoop-common-2.2.0
On Wednesday, February 26, 2014 9:14 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you check the version of hadoop-common jar in your classpath to see if
there is conflict ?
On Wed, Feb 26, 2014 at 9:06 AM, S. Zhou
My row keys in this table are like: 1,MCA
2,BCA
3,Btech
4,Mtech
..
And when I perform a get/scan operation with the shell, I get everything as expected.
But with http://172.20.8.20:60010/table.jsp?name=StudentScoreCard, the start key and end key show totally different values, as below.
Is HBase
I think
HBase guarantees ACID semantics per row.
That's why all column families have to flush when any one of them hits the memstore limit.
On Tue, Feb 25, 2014 at 1:28 PM, Bharath Vissapragada bhara...@cloudera.com
wrote:
Hi Upendra,
Your argument is correct, especially when there is an uneven data
Please read http://hbase.apache.org/book.html#arch.catalog.meta
Read operation would be directed to the region(s) which contain the rows
you look for.
Cheers
On Wed, Feb 26, 2014 at 10:23 AM, Upendra Yadav upendra1...@gmail.com wrote:
My rowKey in this table is like : 1,MCA
2,BCA
3,Btech
bq. HBase guarantees ACID semantics per-row
ACID guarantees are at region level.
bq. That's why all CF have to flush when any one of them got memstore limit.
See this comment where LSN means log sequence number:
On Wed, Feb 26, 2014 at 10:18 AM, S. Zhou myx...@yahoo.com wrote:
hadoop 2.3.0 uses hadoop-common-2.3.0
hbase 0.98 uses hadoop-common-2.2.0
Yes.
The first thing you must do is match up the Hadoop jars in your HBase
installation with the version of Hadoop you are using, if it is different
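A hedged way to check for the mismatch on a running installation; the paths below are assumptions about a typical layout, adjust for your environment:

```
# Sketch: list the Hadoop jars each side ships (paths are illustrative)
ls $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar
ls $HBASE_HOME/lib/hadoop-common-*.jar
# If the versions differ, replace HBase's bundled Hadoop jars with the
# cluster's, e.g.:
# cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.3.0.jar $HBASE_HOME/lib/
```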
We are running hbase 0.94.2 on the hadoop 0.20 append version in production (yes, we have plans to upgrade hadoop). It's a 5-node cluster, plus a 6th node running just the NameNode and HMaster.
I am seeing frequent RS YouAreDeadExceptions. Logs are here:
http://pastebin.com/44aFyYZV
The RS log shows a
Submitted JIRA patch: https://issues.apache.org/jira/browse/HDFS-6022
(with test)
On Mon, Feb 24, 2014 at 12:16 PM, Jack Levin magn...@gmail.com wrote:
I will do that.
-Jack
On Mon, Feb 24, 2014 at 6:23 AM, Steve Loughran ste...@hortonworks.com
wrote:
that's a very old version of
I went over the patch for HBASE-8409 one more time.
I don't see a test case covering your scenario.
Cheers
On Wed, Feb 26, 2014 at 9:36 AM, Alex Nastetsky anastet...@spryinc.com wrote:
Does that indicate to you an abandoned ticket?
I think that HBASE-8409 alone would satisfy my needs because
Hi there,
On top of what Vladimir already said…
re: Table1: 80m records, say Author; Table2: 5k records, say Category
Just 80 million records? HBase tends to be overkill for relatively low data volumes.
But if you wish to proceed this path, to extend what was already said,
rather than
I haven't looked at the patch, just the ticket description. Here is an
excerpt:
Let's see an example of how privileges work with namespaces.
User Mike requests a namespace named hbase_perf from the hbase admin.
whoami: hbase
hbase shell namespace_create 'hbase_perf'
hbase shell grant
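The excerpt above is cut off; in the released 0.96+ shell the equivalent commands are spelled slightly differently. A sketch, assuming it is run as the hbase superuser (the user name 'mike' is illustrative):

```
create_namespace 'hbase_perf'
grant 'mike', 'W', '@hbase_perf'
```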
I created a table 'ted:t1' using user X in a secure cluster.
I then logged in as user hrt_1 and did the following:
hbase(main):007:0> user_permission 'ted:t1'
User                 Table,Family,Qualifier:Permission
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions
Thanks Ted.
Can you use user X to grant hrt_1 permission to create tables just in the
'ted' namespace (but not in the global namespace)?
I want a user to be able to create their own tables, but not drop the
tables of other users. If I can't have that, then I would settle for not
being able to
I tried that and stumbled into HBASE-10621
On Wed, Feb 26, 2014 at 2:48 PM, Alex Nastetsky anastet...@spryinc.com wrote:
Thanks Ted.
Can you use user X to grant hrt_1 permission to create tables just in the
'ted' namespace (but not in the global namespace)?
I want a user to be able to
Thanks, I am watching that issue now.
On Wed, Feb 26, 2014 at 5:51 PM, Ted Yu yuzhih...@gmail.com wrote:
I tried that and stumbled into HBASE-10621
On Wed, Feb 26, 2014 at 2:48 PM, Alex Nastetsky anastet...@spryinc.com
wrote:
Thanks Ted.
Can you use user X to grant hrt_1 permission
I put up a patch which I have tested locally.
On Wed, Feb 26, 2014 at 2:56 PM, Alex Nastetsky anastet...@spryinc.com wrote:
Thanks, I am watching that issue now.
On Wed, Feb 26, 2014 at 5:51 PM, Ted Yu yuzhih...@gmail.com wrote:
I tried that and stumbled into HBASE-10621
On Wed, Feb
Thanks. I am unfortunately on 0.96 still, but looking forward to having it
working in 0.98 when we upgrade.
Just to confirm, you are able to grant W permissions to hrt_1 in namespace
'ted', and then hrt_1 can create tables in namespace 'ted', but not in
other namespaces?
On Wed, Feb 26, 2014 at
The patch is a one-line fix in a ruby script which you can apply on your cluster.
That way you would be able to verify it yourself :-)
On Wed, Feb 26, 2014 at 3:51 PM, Alex Nastetsky anastet...@spryinc.com wrote:
Thanks. I am unfortunately on 0.96 still, but looking forward to having it
working in
bq. I haven't set the number of regions myself at the beginning.
If there is no pre-split, there is only one region per table from the beginning, so all the writes go to this single region... it is very possible to incur a SocketTimeoutException (0.94) or RegionTooBusyException (0.96+).
You can
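To illustrate the pre-split advice: a minimal sketch that computes evenly spaced split points, assuming hex-encoded row keys over a 32-bit key space; the table and family names in the comment are hypothetical:

```python
def split_points(n_regions, key_space=2**32):
    """Return n_regions - 1 evenly spaced split keys as 8-char hex strings."""
    step = key_space // n_regions
    return [format(i * step, "08x") for i in range(1, n_regions)]

# Feed the result to the HBase shell, e.g. for 4 regions:
#   create 'test-table', 'cf', SPLITS => ['40000000', '80000000', 'c0000000']
print(split_points(4))
```

With a pre-split like this, writes spread across all regions from the first put instead of hammering a single region until it splits.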
I am thinking of storing medium-sized objects (~1 MB) using HBase. The advantage of using HBase rather than HBase (storing pointers) + HDFS, in my mind, is:
data locality. When I want to run analytics, I will access these objects using an HBase scan, and HBase stores KVs in a sequential manner. If I
What type of analytics are you going to do on medium-sized objects (1 MB)?
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
From: Wei Tan [w...@us.ibm.com]
Sent: Wednesday, February
Hi Doug,
80m records were in the 3-year data set, whose HFile(s) size is around 47 GB.
We also have a 75-year dataset (3, 5, 7, 10, 25, 50, 75 years), and that has an HFile(s) size of around 2.7 TB for a single table.
We are now planning to use a Post Get Observer
that will write the get/read time