Hello everyone,
I am wondering if someone has experimentally determined how much a 64 MB
block of hfile will occupy once it is loaded in block cache. I suppose some
overhead of storing as java object.
Thanks
Abhishek
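For a rough feel (assumed overheads, not measured numbers): the dominant cost is the raw block bytes themselves, and the per-entry bookkeeping of an LRU-style cache is tiny by comparison. A back-of-envelope sketch in plain Java, with the per-entry overhead as an explicit assumption:

```java
public class BlockCacheEstimate {
    public static void main(String[] args) {
        long blockBytes = 64L * 1024 * 1024;  // the 64 MB block itself
        long arrayHeader = 16;                // assumed byte[] object header
        long perEntryOverhead = 200;          // assumed cache bookkeeping (key, LRU links, counters)
        long total = blockBytes + arrayHeader + perEntryOverhead;
        double overheadPct = 100.0 * (total - blockBytes) / blockBytes;
        System.out.printf("total=%d bytes, overhead=%.6f%%%n", total, overheadPct);
    }
}
```

Under those assumptions the overhead is a fraction of a percent; the real constants live in the LruBlockCache source, so measuring on your own heap is the way to confirm.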
Thanks @lars for the suggestions.
We will randomly pick some columns from MySQL, then compare them with their
values in HBase. Because the data is constantly growing, we will not
verify all of it.
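That spot-check can be sketched in plain Java, with Maps standing in for the MySQL and HBase reads (the actual client calls aren't shown in the thread):

```java
import java.util.*;

public class SpotCheck {
    // Compare a random sample of keys between two sources; returns mismatched keys.
    static List<String> verifySample(Map<String, String> mysql,
                                     Map<String, String> hbase,
                                     int sampleSize, long seed) {
        List<String> keys = new ArrayList<>(mysql.keySet());
        Collections.shuffle(keys, new Random(seed));
        List<String> mismatches = new ArrayList<>();
        for (String k : keys.subList(0, Math.min(sampleSize, keys.size()))) {
            if (!Objects.equals(mysql.get(k), hbase.get(k))) {
                mismatches.add(k);
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        Map<String, String> mysql = Map.of("r1", "a", "r2", "b", "r3", "c");
        Map<String, String> hbase = Map.of("r1", "a", "r2", "b", "r3", "X");
        System.out.println(verifySample(mysql, hbase, 3, 42L)); // flags r3
    }
}
```

The fixed seed makes a given run reproducible, which helps when re-checking a flagged row against a live, still-growing table.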
On Wed, Aug 13, 2014 at 1:14 PM, lars hofhansl la...@apache.org wrote:
Just in the interest of
I have an HBase table with more than 2G rows.
Every hour 5M~10M row ids come in, and I must get all the row info from the
HBase table.
But even when I use the batch call (1000 row ids as a list) as described here
Haven't tried yet
only one thread
10 region servers, total 2555 regions.
I am just new to HBase and not sure what exactly the block cache means. Here's
the configuration I can see from the CDH HBase master UI:
<name>hbase.rs.cacheblocksonwrite</name>
<value>false</value>
<source>hbase-default.xml</source>
Hi,
I want to develop a custom SplitPolicy for my HBase table. But when I use my
policy to create a new table, I get this exception: "Unable to load configured
region split policy"
I put MyPolicy.jar in the lib directory of HBase and use the following code to
assign it to the table.
Sorry, I found the reason. I forgot to restart the RegionServer...
-Original Message-
From: LEI Xiaofeng le...@ihep.ac.cn
Sent: Wednesday, August 13, 2014
To: user@hbase.apache.org
Cc:
Subject: how to develop a custom splitpolicy for hbase table
Hi,
I want to develop a custom SplitPolicy for my hbase
I'm trying to read specific HBase data and index it into Solr using a Groovy
script in the /update handler of the solrconfig file, but I'm getting the error
mentioned below.
I'm placing the same HBase jar that I'm running on into the Solr lib. Many
articles said
Workaround:
1. First I thought that the classpath has two
Can you show us the contents of the Solr lib and the classpath?
Thanks
On Aug 13, 2014, at 4:47 AM, Vivekanand Ittigi vi...@biginfolabs.com wrote:
I'm trying to read specific HBase data and index into solr using groovy
script in /update handler of solrconfig file but I'm getting the error
Hi Ted,
echo $CLASSPATH
/home/biginfolabs/BILSftwrs/hbase-0.94.10/conf
under /home/biginfolabs/BILSftwrs/hbase-0.94.10/conf, I have hbase-site.xml.
Actually I've made one more folder called custom-lib under
solr-4.2.0/example/lib, and this path is pointed to in solrconfig.xml using the
following
Is there anyone who can provide guidance on creating a RESTful interface to
connect a client app to an hbase datastore?
Sorry to cast the wide net...
Sincerely,
Sean
Like what Esteban said.
Try using more threads to query HBase. Start with 10 clients, each with 1K
gets per batch, and adjust those numbers to see the impact on
performance.
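The multi-client pattern JM describes can be sketched in plain Java; the fetchBatch function below is a stand-in for the actual batch Get call, since the client code isn't shown in the thread:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

public class ParallelBatchGet {
    // Split ids into batches and fetch them concurrently on a thread pool.
    static List<String> fetchAll(List<String> ids, int batchSize, int threads,
                                 Function<List<String>, List<String>> fetchBatch)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<List<String>>> futures = new ArrayList<>();
            for (int i = 0; i < ids.size(); i += batchSize) {
                List<String> batch = ids.subList(i, Math.min(i + batchSize, ids.size()));
                futures.add(pool.submit(() -> fetchBatch.apply(batch)));
            }
            List<String> results = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                results.addAll(f.get()); // block until each batch completes
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 5000; i++) ids.add("row" + i);
        // Stand-in fetch: echo each id back, as a real batch Get would return rows.
        List<String> rows = fetchAll(ids, 1000, 10, batch -> new ArrayList<>(batch));
        System.out.println(rows.size());
    }
}
```

Tuning batchSize and threads against your region server count is exactly the adjustment the advice above suggests; with 10 region servers, more than one client thread is needed to keep them all busy.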
Any reason why your block cache is disabled? (hfile.block.cache.size = 0)
JM
2014-08-13 5:23 GMT-04:00
bq. a package which hits my HBase.jar
Can you check the contents of the above jar to see if it contains
hbase-default.xml ?
Cheers
On Wed, Aug 13, 2014 at 5:49 AM, Vivekanand Ittigi vi...@biginfolabs.com
wrote:
Hi Ted,
echo $CLASSPATH
/home/biginfolabs/BILSftwrs/hbase-0.94.10/conf
under
I'm not seeing any hbase-default.xml since that jar is built using Maven.
If I had exported the same package (Runnable JAR) using the Eclipse IDE, I'd
have seen the hbase-default.xml file on opening a package which hits my
HBase.jar, but instead of exporting I'm building it with Maven and placing
the jar in
bq. im building it using maven
Maven may have included hbase-default.xml in your jar.
Can you pastebin the output of the following command ?
jar tvf your-jar | grep hbase
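Besides inspecting the jar, a quick programmatic check for whether hbase-default.xml is visible on the classpath is a plain resource lookup (standard Java, no HBase dependency):

```java
public class ClasspathCheck {
    // Returns true if the named resource is visible to this class's classloader.
    static boolean onClasspath(String resource) {
        return ClasspathCheck.class.getClassLoader().getResource(resource) != null;
    }

    public static void main(String[] args) {
        System.out.println("hbase-default.xml on classpath: "
                + onClasspath("hbase-default.xml"));
    }
}
```

Running this from the same classpath Solr uses would show whether a stray copy of hbase-default.xml (from the Maven build or any other jar) is being picked up.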
On Wed, Aug 13, 2014 at 7:21 AM, Vivekanand Ittigi vi...@biginfolabs.com
wrote:
Im not seeing any hbase-default.xml since
Hello there, I'm running a read/write benchmark on a huge dataset (Twitter posts)
for my school project.
The problem I'm dealing with is that the tests are going extremely slowly.
I don't know how to optimize the process. HBase is using only about 10% of RAM
and 40% of CPU.
I've been
Team,
I want to create a table with rowkey + timestamp in hbase shell. Is it
possible?
Regards,
Ravi
Can you post the client code you're using to read/write from HBase?
On Wed, Aug 13, 2014 at 11:21 AM, kacperolszewski kacperolszew...@o2.pl
wrote:
Hello there, I'm running a read/write benchmark on a huge data (tweeter
posts) for my school project.
The problem im dealing with is that the
rowkey gets involved when you insert / delete data.
At time of table creation, you specify column family settings.
Cheers
On Wed, Aug 13, 2014 at 6:48 AM, Ravi Kanth ravikanth.as...@gmail.com
wrote:
Team,
I want to create a table with rowkey + timestamp in hbase shell. Is it
possible?
Have you looked at the performance guidelines in our online book?
http://hbase.apache.org/book.html#performance
http://hbase.apache.org/book.html#casestudies.perftroub
On Wed, Aug 13, 2014 at 8:43 AM, Pradeep Gollakota pradeep...@gmail.com
wrote:
Can you post the client code you're using to
Hi Lei,
Any chance you could provide the value of hfile.block.cache.size from one
of the region servers? The HBase master disables the block cache (that's why
it shows 'programatically' as the source of the config).
cheers,
esteban.
--
Cloudera, Inc.
On Wed, Aug 13, 2014 at 6:41 AM,
Hello Sean,
Have you looked into the HBase wiki page for the REST server?
http://wiki.apache.org/hadoop/Hbase/Stargate
cheers,
esteban.
--
Cloudera, Inc.
On Wed, Aug 13, 2014 at 5:57 AM, Sean Kennedy s...@comcast.net wrote:
Is there anyone who can provide guidance on creating a RESTful
Another resource is the Javadoc for the rest server package:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/package-summary.html
On Wed, Aug 13, 2014 at 10:07 AM, Esteban Gutierrez este...@cloudera.com
wrote:
Hello Sean,
Have you looked into the HBase wiki page for the REST
Hi all,
We're running with Hadoop 1.0.4 and HBase 0.94.12 and thinking of upgrading
to Hadoop 2 but I'm not sure which is the latest HBase stable version 0.96
or 0.98 ?
Would you recommend upgrading straight to 0.98 ?
Thanks,
Amit.
Amit:
See http://www.us.apache.org/dist/hbase/stable/
0.98.5 was released this week.
On Wed, Aug 13, 2014 at 10:48 AM, Amit Sela am...@infolinks.com wrote:
Hi all,
We're running with Hadoop 1.0.4 and HBase 0.94.12 and thinking of upgrading
to Hadoop 2 but I'm not sure which is the latest
Apache HBase 0.98.5 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev
The latest stable version of HBase is 0.98.5.
The upgrade procedure for 0.94 → 0.96 can be applied in the exact same
manner to 0.94 → 0.98. There is no need to upgrade through 0.96 as an
intermediate step.
We discussed this recently and I expect we are going to stop supporting (as
a
<name>hfile.block.cache.size</name>
<value>0.0</value>
Yikes. Don't do that. :)
Even if your blocks are in the OS cache, upon each single Get, HBase needs to
re-allocate a new 64k block on the heap (including the index blocks).
If you see no chance that a working set of the data fits into the
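The allocation churn Lars describes adds up quickly. A rough sketch of the heap pressure, with the request rate as an assumption (plain arithmetic):

```java
public class AllocationChurn {
    public static void main(String[] args) {
        long blockSize = 64 * 1024;    // 64 KB HFile block re-allocated per Get
        long getsPerSecond = 10_000;   // assumed request rate
        long bytesPerSecond = blockSize * getsPerSecond;
        System.out.println("Heap allocation: "
                + bytesPerSecond / (1024 * 1024) + " MB/s");
    }
}
```

At an assumed 10K Gets/s, that is 625 MB/s of short-lived garbage on the heap before any other work, which is why a disabled block cache hurts even when the OS page cache holds the data.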
Hello-
We are running HBase v0.94.16 on an 8 node cluster.
We have a recurring problem w/ HBase clients hanging. In latest occurrence, I
observed the following sequence of events:
0) client plays w/ HBase for a long time w/o issue
1) client runs out of memory during HBase operation:
Hi,
I am working on https://issues.apache.org/jira/browse/STORM-444. The task is
very similar to https://issues.apache.org/jira/browse/OOZIE-961. Basically, in
Storm secure mode we would like to fetch the topology/job submitter user's
credentials on their behalf on our master node and auto
Hey Ted,
so this is a problem with the ZK client, it seems to not clean itself up
correctly upon receiving an exception at the wrong moment.
Which version of ZK are you using?
-- Lars
- Original Message -
From: Ted Tuttle t...@mentacapital.com
To: user@hbase.apache.org
@Ravi Do you mean using a key + timestamp as rowkey in HBase shell?
If so, you can `import java.text.SimpleDateFormat` to get the timestamp.
More detail on http://hbase.apache.org/book/shell_tricks.html.
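Outside the shell, building such a composite rowkey in client code is a common pattern; a plain-Java sketch (names illustrative; the reversed timestamp makes newer rows sort first in HBase's byte order):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeRowKey {
    // key bytes + (Long.MAX_VALUE - timestamp) so newer rows sort first.
    static byte[] rowKey(String key, long timestampMillis) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(k.length + Long.BYTES)
                .put(k)
                .putLong(Long.MAX_VALUE - timestampMillis)
                .array();
    }

    public static void main(String[] args) {
        byte[] rk = rowKey("user123", 1407945600000L); // assumed millis timestamp
        System.out.println("rowkey length: " + rk.length);
    }
}
```

With variable-length keys, a fixed-width or delimited key portion is needed so one user's rows never interleave with another's; the fixed 8-byte timestamp suffix takes care of the rest.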
On Wed, Aug 13, 2014 at 11:50 PM, Ted Yu yuzhih...@gmail.com wrote:
rowkey gets involved
Sorry, the real region server config is this:
<name>hfile.block.cache.size</name>
<value>0.25</value>
<source>hbase-site.xml</source>
leiwang...@gmail.com
From: Esteban Gutierrez
Date: 2014-08-14 01:05
To: user@hbase.apache.org
Subject: Re: Re: Any fast way to random access hbase data?
Hi Lei,
Any
Hi Lars-
We are running ZK 3.3.4, Cloudera cdh3u3, HBase 0.94.16.
Thanks,
Ted
On Aug 13, 2014, at 5:36 PM, lars hofhansl la...@apache.org wrote:
Hey Ted,
so this is a problem with the ZK client, it seems to not clean itself up
correctly upon receiving an exception at the wrong moment.
No Ted, I did not see hbase-default.xml after running the command.
I'm building with Maven using this command (mvn clean install); I guess everyone
does it this way.
Anyway, I'm attaching the jar and the Groovy script as well. My class is
com.search.ReadHbase.java.
-Vivek
On Wed, Aug 13, 2014 at 8:00
The SendThread stack trace doesn't look right. Do you have the client log?
(in case the ZK client code logged something there)
From the ZK code, it looks like ClientCnxn$SendThread.run should have caught
it (the throwable) and done the cleanup work, e.g. notified the main thread, so
that it can wake up from