Hi JD,
The system is under heavy load only via HBase (no MapReduce running).
Another thing is that we have hard latency constraints, so if this
points to a performance problem I would like to fix it. The GC seems
fine too.
I see around 300 messages like this in 10 hours of running, approximately.
Richard,
Let's see if I understand what you want to do...
You have some data and you want to store it in some table A.
Some of the records/rows in this table have a limited life span of 3 days,
others have a limited life span of 3 months. But both are the same records? By
this I mean that both
2011/12/22 庄阳 zhuangy...@asiainfo-linkage.com:
Hi,
How can I obtain the data in a cell using the API? If it is convenient for you,
please provide a demo for reference.
Thank you!
Have you seen this from the API javadoc:
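In case a concrete snippet helps alongside the javadoc, here is a minimal sketch of reading a single cell with the 0.92-era Java client. The table, row, family, and qualifier names are placeholders, not from the original question:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetCellExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "your-table");
        try {
            // Fetch one row, restricted to a single column.
            Get get = new Get(Bytes.toBytes("row1"));
            get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
            Result result = table.get(get);
            // Pull the cell value out as bytes, then decode it.
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
            if (value != null) {
                System.out.println(Bytes.toString(value));
            }
        } finally {
            table.close();
        }
    }
}
```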
Hi,
We are trying to use the aggregation functionality in HBase 0.92 and we have
managed to get the test code working using the following command:
java -classpath junit-4.10.jar:build/*:$HBASELIBS/* org.junit.runner.JUnitCore
org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
Closer
Have you loaded AggregateImplementation into your table?
Can you show us the contents of the following command in hbase shell:
describe 'your-table'
BTW are you using the tip of 0.92?
HBASE-4946 would be of help for dynamically loaded coprocessors which you
might use in the future.
Cheers
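For reference, a client-side sketch of invoking the aggregation coprocessor once AggregateImplementation is loaded on the table. The table and column names are made up, and the example assumes the column holds 8-byte-encoded longs (hence LongColumnInterpreter):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationExample {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggClient = new AggregationClient(conf);
        Scan scan = new Scan();
        // Restrict the scan to one column of long-encoded values (an assumption).
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
        // Both aggregates are computed server-side by the coprocessor.
        long rowCount = aggClient.rowCount(Bytes.toBytes("your-table"),
                new LongColumnInterpreter(), scan);
        Long max = aggClient.max(Bytes.toBytes("your-table"),
                new LongColumnInterpreter(), scan);
        System.out.println("rows=" + rowCount + " max=" + max);
    }
}
```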
On
Hi,
I am trying to do a bulk load into an HBase table with one column family,
using a custom mapper to create the Puts according to my needs. (Machine
setup at the end of the mail.)
Unfortunately, with our data it is a bit hard to presplit the tables
since the keys are not that predictable.
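For anyone following along, a custom mapper of this kind typically emits Puts for HFileOutputFormat. A hedged sketch, assuming CSV input of the form `rowkey,qualifier,value` and a column family named `cf` (both assumptions, not from the original mail):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BulkLoadMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    private static final byte[] FAMILY = Bytes.toBytes("cf");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Assumed input layout: rowkey,qualifier,value
        String[] fields = line.toString().split(",", 3);
        if (fields.length < 3) {
            return; // skip malformed lines
        }
        byte[] row = Bytes.toBytes(fields[0]);
        Put put = new Put(row);
        put.add(FAMILY, Bytes.toBytes(fields[1]), Bytes.toBytes(fields[2]));
        context.write(new ImmutableBytesWritable(row), put);
    }
}
```

The job would then be wired up with HFileOutputFormat.configureIncrementalLoad so the reducer sorts and partitions Puts to match the table's regions.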
If you move the region to another host, do you see the same perf? (Perhaps
some hardware issue?)
Done more testing today. It is not related to a particular region; it
happened today with another region on the same machine. Also it is not a
permanent issue: after some time I retried and the scan was fast.
Just an update on this thread:
JD told me via IRC that this problem happens because my HBase manages
Zookeeper and when I restart HBase, Zookeeper is restarted along with it
and the Zookeeper client has a bug that doesn't allow it to work after a
Zookeeper restart.
He suggested a workaround
Culvert was originally introduced at Hadoop Summit 2011, but recent updates
have made it very applicable to current systems. Recently, we added support
for Accumulo as well as upgraded HBase support to 0.92. Since Hadoop
Summit, there has also been significant code cleanup and we have added some small
We have a 6 node 0.90.3-cdh3u1 cluster. We have 8092 regions. I
realize we have too many regions and too few nodes…we're addressing
that. We currently have an issue where we seem to have lost region
data. When data is requested for a couple of our regions, we get
errors like the following on
See also...
http://hbase.apache.org/book.html#data_model_operations
On 12/22/11 9:46 AM, Stack st...@duboce.net wrote:
2011/12/22 庄阳 zhuangy...@asiainfo-linkage.com:
Hi,
How can I obtain the data in a cell using the API? If it is convenient for
you, please provide a demo for reference.
Thanks for the update, Jesse.
Let us know of any feature Culvert needs from HBase.
After cloning Culvert, I got:
[INFO] Culvert - Accumulo Integration FAILURE [0.431s]
[INFO]
[INFO] BUILD FAILURE
[INFO]
Wow, that's embarrassing - project not building...
It's because Accumulo's release is no longer deployed into the standard
Apache Maven repository. Maybe one of the Accumulo committers can shed some
light on where to find it?
I'll make some changes and have it at least compiling from the raw
Thanks for the hint. That works.
I had to modify culvert-accumulo/pom.xml so that it looks for
1.5.0-incubating-SNAPSHOT which was built by accumulo TRUNK.
On Thu, Dec 22, 2011 at 2:22 PM, Jesse Yates jesse.k.ya...@gmail.com wrote:
Wow, that's embarrassing - project not building...
It's
I plan to use HBase to store data that has a variable length lifespan
[...]
Indeed, the simplest approach is usually best.
The simplest way to manage automatic expiration of data over various lifetimes,
especially if there are only a few of them, like in your case (3 days versus 3
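The reply is cut off here in the archive. One common way to handle a small number of fixed lifetimes like this is a column family per lifetime, using HBase's built-in TTL so expiration happens automatically at compaction time. A sketch with the 0.90-era admin API; the table and family names are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TtlTableExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTableDescriptor htd = new HTableDescriptor("events");

        // Cells in this family expire after 3 days (TTL is in seconds).
        HColumnDescriptor shortLived = new HColumnDescriptor("short");
        shortLived.setTimeToLive(3 * 24 * 60 * 60);

        // Cells in this family expire after roughly 3 months (90 days).
        HColumnDescriptor longLived = new HColumnDescriptor("long");
        longLived.setTimeToLive(90 * 24 * 60 * 60);

        htd.addFamily(shortLived);
        htd.addFamily(longLived);
        new HBaseAdmin(conf).createTable(htd);
    }
}
```

Writers then pick the family matching the record's intended lifespan, and no manual deletion job is needed.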
+1
Tarball looks good to me.
Took it for a spin in local mode, created some tables, inserted some data, scanned
data, removed tables, etc.
-- Lars
From: Stack st...@duboce.net
To: Hbase-User user@hbase.apache.org
Sent: Friday, December 9, 2011 12:35 PM
Subject:
On Thu, Dec 22, 2011 at 4:23 PM, lars hofhansl lhofha...@yahoo.com wrote:
+1
Grand.
Vote passes.
Let me push out the release..
St.Ack
You've understood correctly, Michel, and thank you for your suggestions. I think
I'll take the second and manually do TTL.
Andrew - I somewhat over simplified my use case; happy to explain in full but
it's probably OTT. I am intrigued by your idea and certainly hadn't thought of
anything that
+1..
-----Original Message-----
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: Friday, December 23, 2011 6:14 AM
To: user@hbase.apache.org; lars hofhansl
Subject: Re: ANN: HBase 0.90.5RC0 available for download
On Thu, Dec 22, 2011 at 4:23 PM, lars hofhansl
We are using this version; it is fine.
-----Original Message-----
From: Ramkrishna S Vasudevan [mailto:ramkrishna.vasude...@huawei.com]
Sent: December 23, 2011 10:01
To: user@hbase.apache.org; 'lars hofhansl'
Subject: RE: ANN: HBase 0.90.5RC0 available for download
+1..
-----Original Message-----
From:
On Thu, Dec 22, 2011 at 11:44 AM, Jesse Yates jesse.k.ya...@gmail.com wrote:
Culvert was originally introduced at Hadoop Summit 2011, but recent updates
have made it very applicable to current systems. Recently, we added support
for Accumulo as well as upgraded HBase support to 0.92. Since
On Wed, Dec 21, 2011 at 12:14 PM, Steven Noels stev...@outerthought.org wrote:
Hi everybody,
if you use 'HBase' and 'Solr' in one sentence, Lily might be worth checking
out. It's a scalable data repository layering a high-level (i.e.
easy-to-use) data model + API on top of HBase, and
Hi,
I'm trying to set up HBase on an Ubuntu 11.04 virtual server using
jdk1.6.0_29, I've had it on another similar server as well as a desktop
machine and this is the first time I've seen this error and I can't find
anything helpful on the web.
Cheers,
Greg
root@ve:/usr/share/hbase-0.90.3/bin#
Hi Alex,
see http://hbase.apache.org/book.html#hadoop
Make sure you replace the hadoop.jar in the 0.90.4 directory with the
hadoop.jar from the HDFS distro you pick. Otherwise you will get
incompatibility exceptions about connecting to the file system.
On Wed, Dec 21, 2011 at 12:23 PM,