Hello,
My HBase version is 0.92.0. I find that when I use minor
compaction and major compaction to compact a table, there is no
difference: the minor compaction removes the deleted cells
and discards the versions exceeding the configured maximum, which should be
the task of a major compaction. I
Hi Sambit,
I think that you should add google guava jar to your job classpath.
Slim.
On 26 April 2012 at 10:50, Sambit Tripathy sambi...@gmail.com wrote:
Hi All,
Can anyone help me with this exception?
I have been trying to import data from csv files into HBase.
As per my understanding
Slim,
That exception is gone now after adding the Guava jar. (I wonder why we need
a Google Data Java Client!)
Well there is something more, I am getting the following exception now.
Exception in thread "main" java.lang.reflect.InvocationTargetException
at
On Thu, Apr 26, 2012 at 10:40 AM, Sambit Tripathy sambi...@gmail.com wrote:
As you use an HBase client in the importer, you need the ZooKeeper
dependency, so add it to the job classpath.
I think you should also add the HBase/ZooKeeper configuration files to your
classpath.
As for your question about Guava: it's used in the parser (the Guava Splitter).
Slim.
On 26 April 2012
Sambit,
Just a tip:
When using the hadoop executable to run HBase programs of any kind,
the right way is to do this:
HADOOP_CLASSPATH=`hbase classpath` hadoop jar <jar> <args>
This will ensure you run with all HBase dependencies loaded on the
classpath, for code to find its HBase-specific resources.
Hi there, as a sanity check with respect to writing, have you
double-checked this section of the RefGuide:
http://hbase.apache.org/book.html#perf.writing
... regarding pre-created regions and monotonically increasing keys?
Also as a sanity check refer to this case study as a diagnostic
OK...
5 machines...
Total cluster? Is that 5 DN?
Each machine: 1 quad-core CPU, 32 GB RAM, 7 x 600 GB drives (not sure what type).
So let's assume 1 control node running NN, JT, HM, ZK,
and 4 data nodes running DN, TT, RS.
We don't know your schema, row size, or network (10 GbE, 1 GbE, 100 Mb?).
We also don't
Hi,
I'm trying to filter an HBase table where the column names are like foo_1, foo_2,
foo_3...
A SQL equivalent query would be:
SELECT * FROM tablename WHERE column_name LIKE 'foo_%' AND value = 'value'
The difficulty is that I'm not able to filter on both column and value
together.
Devis,
Have you tried FilterList? I think combining a ColumnPrefixFilter and a ValueFilter
can implement that query.
Jieshan.
-Original Message-
From: Davis [mailto:davisabra...@gmail.com]
Sent: Thursday, April 26, 2012 9:42 PM
To: user@hbase.apache.org
Subject: Hbase filter with
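For reference, a minimal sketch of Jieshan's suggestion against the 0.92-era client API; the table, prefix, and value here are placeholders taken from the question, not a tested production snippet:

```java
import java.util.Arrays;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Rough equivalent of:
//   SELECT * FROM tablename WHERE column_name LIKE 'foo_%' AND value = 'value'
public class PrefixAndValueScan {
    public static Scan buildScan() {
        // Keep only columns whose qualifier starts with "foo_"...
        Filter prefix = new ColumnPrefixFilter(Bytes.toBytes("foo_"));
        // ...and only cells whose value equals "value".
        Filter value = new ValueFilter(CompareOp.EQUAL,
                new BinaryComparator(Bytes.toBytes("value")));
        // MUST_PASS_ALL means logical AND of the two filters.
        FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL,
                Arrays.asList(prefix, value));
        Scan scan = new Scan();
        scan.setFilter(filters);
        return scan;
    }
}
```

Note that ValueFilter is applied per cell, so rows whose other columns don't match are filtered at the cell level, not dropped as whole rows.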
I found http://hbase.apache.org/book/ops.monitoring.html but am confused
about the distinction between operationTooSlow and responseTooSlow. I
find the text "client operation" ambiguous; I am not sure whether that
means the side emitting this log entry is the client, and the slow
operation
Hi Jieshan
Thanks a lot for the reply.
But in this case I don't have the column name, since I'm using a ColumnPrefixFilter.
Is there a way to get the result without adding the particular column and column
family name to the scanner object?
You are right.
FWIW I cannot find operationTooSlow in the source code, not sure
what's going on there.
J-D
On Thu, Apr 26, 2012 at 10:55 AM, Mike Spreitzer mspre...@us.ibm.com wrote:
I think the 0.92 code has a way to promote minor compactions into major
ones; feel free to check out the code (it should also be visible
in your logs).
J-D
On Wed, Apr 25, 2012 at 11:48 PM, yonghu yongyong...@gmail.com wrote:
Mike,
Check logResponse method
of org.apache.hadoop.hbase.ipc.WritableRpcEngine.Server.
This piece in particular :
else if (params.length == 1 && instance instanceof HRegionServer
    && params[0] instanceof Operation) {
  // annotate the response map with operation details
Hey Davis, you can try this: get a full row using a row key, which you can
obtain by creating a scanner as follows:
Scan sc = new Scan();
ResultScanner rss = hTable.getScanner(sc);
for (Result r : rss) {
  byte[] rowkey = r.getRow();
}
Then, using this rowkey, you can create a Get:
Get g = new Get(rowkey);
then
Yes, as per Lars' book: Minor compactions can be promoted to major compactions
if the minor compaction would include all storefiles and there are fewer than the
configured maximum number of storefiles per compaction.
On 26 Apr 2012, at 20:54, Jean-Daniel Cryans jdcry...@apache.org wrote:
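The promotion rule described above can be sketched as a simple predicate. This is a toy illustration, not the actual HBase selection code (the real logic lives in the store's compaction-selection path, and the maximum is the hbase.hstore.compaction.max setting):

```java
public class CompactionPromotion {
    // Sketch of the rule: a minor compaction is promoted to a major one
    // when it would include every storefile of the store, and the total
    // file count does not exceed the configured per-compaction maximum.
    public static boolean promoteToMajor(int filesSelected,
                                         int totalStoreFiles,
                                         int maxFilesPerCompaction) {
        return filesSelected == totalStoreFiles
                && totalStoreFiles <= maxFilesPerCompaction;
    }

    public static void main(String[] args) {
        System.out.println(promoteToMajor(5, 5, 10));  // all files selected -> promoted
        System.out.println(promoteToMajor(5, 8, 10));  // some files excluded -> stays minor
    }
}
```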
There is also a description of the compaction file-selection algorithm in
here...
http://hbase.apache.org/book.html#regions.arch
(section 8.7.5.5)
On 4/26/12 5:28 PM, Robby robby.verkuy...@gmail.com wrote:
The main practical difference is that only a major compaction cleans out delete
markers.
Delete markers cannot be removed during a minor compaction since an affected
KeyValue could exist in an HFile that is not part of this compaction.
-- Lars
From: yonghu
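Lars' point can be illustrated with a toy model (plain Java, not HBase code): if a minor compaction that only touches the newer file dropped the delete marker, a deleted cell sitting in an untouched older file would wrongly reappear on read.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeleteMarkerSketch {
    // Toy model: each "storefile" maps rowkey -> value, and a delete
    // marker is modeled as the special value "DELETED".
    static final String TOMBSTONE = "DELETED";

    // Read path: newest file wins; a tombstone hides older values.
    public static String read(List<Map<String, String>> newestFirst, String row) {
        for (Map<String, String> file : newestFirst) {
            String v = file.get(row);
            if (v != null) return TOMBSTONE.equals(v) ? null : v;
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> older = new HashMap<>();
        older.put("row1", "v1");               // original cell, in a file NOT being compacted
        Map<String, String> newer = new HashMap<>();
        newer.put("row1", TOMBSTONE);          // delete marker, in the compacted file

        // Correct read: the marker hides the older cell.
        System.out.println(read(Arrays.asList(newer, older), "row1")); // prints null

        // If a minor compaction over only the newer file dropped the marker,
        // the older cell would wrongly come back:
        Map<String, String> wronglyCompacted = new HashMap<>(); // marker removed
        System.out.println(read(Arrays.asList(wronglyCompacted, older), "row1")); // prints v1
    }
}
```

This is why only a major compaction, which rewrites all storefiles of a store, can safely drop delete markers.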
We are transitioning from a BerkeleyDB+MySQL backend to testing HBase.
One of the toughest challenges we are facing is porting the interface
over.
All our clients were using the C/C++ APIs of BerkeleyDB and MySQL to write
data; is there a standard way to do that for HBase? The current method
we are
Please refer to http://wiki.apache.org/hadoop/Hbase/ThriftApi
You can download hbase source code and read the following:
./src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
./src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
Cheers
On Thu, Apr 26, 2012 at 7:47 PM,