Hi Ming,
HConnection connection = HConnectionManager.createConnection(conf);
HTableInterface table = connection.getTable(mytable);
table.get(...); // or table.put(...);
is the correct way to use it. However,
HConnectionManager.createConnection(conf) gives you a shared HConnection
which you can reuse.
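To make the threading pattern concrete, here is a minimal sketch of sharing one HConnection across threads, assuming the 0.98-era client API (HConnectionManager and HTableInterface were deprecated in later releases); the table name, row key, and thread count are placeholders, and it needs a running cluster plus the HBase client jars:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // One heavyweight HConnection, shared by all threads.
        final HConnection connection = HConnectionManager.createConnection(conf);
        Runnable worker = new Runnable() {
            public void run() {
                try {
                    // HTableInterface is lightweight but NOT thread-safe:
                    // each thread gets (and closes) its own instance.
                    HTableInterface table = connection.getTable("mytable");
                    try {
                        table.get(new Get(Bytes.toBytes("row1")));
                    } finally {
                        table.close();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        connection.close();
    }
}
```

The design point is the asymmetry: the HConnection is expensive (it holds ZooKeeper and region-location state) and is meant to be shared, while HTableInterface instances are cheap per-thread handles.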
Thank you Bharath,
This is a very helpful reply! I will share the connection between two threads.
Simply put, HTable is not safe for multi-threaded use, is this true? In a
multi-threaded program, one must use HConnectionManager.
Thanks,
Ming
-Original Message-
From: Bharath Vissapragada
Yes, that's correct. HTable is not thread-safe.
On Mon, Nov 24, 2014 at 2:55 PM, Liu, Ming (HPIT-GADSC) ming.l...@hp.com
wrote:
Is there any tool to draw data from Hbase to a dashboard like Kibana?
I have been looking, but I haven't found a tool that fits directly
with HBase for that purpose.
One option is to set the output buffer of your IDE large enough so that
test output is retained.
Another option, though tedious, is to issue a 'tail -f ' command and redirect
its output to a file while the test is running.
Cheers
On Mon, Nov 24, 2014 at 1:48 AM, Qiang Tian tian...@gmail.com wrote:
Hi,
I am designing my HBase table schema. I have two entities that are related
to each other in a nested structure. For example, consider two entities A
and B. Both of them are complex types.
Entity A contains one or more entity B values. Both entities have their own
tables. Each row of entity A
[ANN]: HBase-Writer 0.98.7 available for download
The HBase Writer Project is proud to announce the release of version
0.98.7-RELEASE of HBase Writer [1]. HBase Writer includes numerous fixes
for issues recently identified as well as a number of other enhancements
and changes. The notable
Hi,
In the preBatchMutate method of an Observer, I am trying to check whether a row,
say '123', exists and then delete that row. This row is different from the
current row, say '111', in the method, which has locked the current row.
But the Delete operation is waiting for the lock on '111' instead of '123'.
Row '123' may be in a different region from the one row '111' is in.
Cross region RPC should be avoided in coprocessor.
Does your application require that the deletion of row '123' be in the same
transaction as the mutation?
Cheers
On Nov 24, 2014, at 12:31 PM, tvraju tvvr...@gmail.com
Hi
How do I use the Thrift Java API to get the last row in the table?
Regards,
Néstor
Yes, you can. Did you try the scannerOpenWithScan() call? It takes a TScan
as input and you can build it with a filterString. The closest examples I could
find are [1] and [2], but I believe it's not difficult to extend these.
[1]
On Mon, Nov 24, 2014 at 6:06 PM, Bharath Vissapragada bhara...@cloudera.com
wrote:
You can use reverse scan. See the example
in
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java:
TScan reversedScan = new TScan();
reversedScan.setReversed(true);
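Putting those two lines to work, here is a sketch of fetching the last row over the Thrift1 interface, assuming the generated Hbase.Client API; the host, port, and table name are placeholders, and it needs a running HBase Thrift server:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.hadoop.hbase.thrift.generated.TRowResult;
import org.apache.hadoop.hbase.thrift.generated.TScan;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class LastRowExample {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));

        // A reversed scan walks the table from the end, so the first
        // result it returns is the last row of the table.
        TScan reversedScan = new TScan();
        reversedScan.setReversed(true);
        int scannerId = client.scannerOpenWithScan(
                ByteBuffer.wrap("mytable".getBytes("UTF-8")),
                reversedScan,
                new HashMap<ByteBuffer, ByteBuffer>());
        List<TRowResult> rows = client.scannerGetList(scannerId, 1);
        if (!rows.isEmpty()) {
            System.out.println(new String(rows.get(0).getRow(), "UTF-8"));
        }
        client.scannerClose(scannerId);
        transport.close();
    }
}
```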
Cheers
On Mon, Nov 24, 2014 at 5:21 PM, Néstor Boscán nesto...@gmail.com wrote:
Hi
How do
Hi,
I created a test table inside a single-node HBase (version 0.99.1) installation
using the local file system, then I disabled the table and rebooted the HBase
server. Now the enable table command hangs the hbase shell. Is this a bug?
create 'test', 'cf'
disable 'test'
reboot hbase
Please tell us what happened. Exceptions, error messages, anything in the logs,
anything on the overview pages, etc, etc?
-- Lars
From: guxiaobo1982 guxiaobo1...@qq.com
To: user user@hbase.apache.org
Sent: Monday, November 24, 2014 7:15 PM
Subject: Can't enable table after rebooting
Thanks Wilm,
Let me try to explain my scenario in more detail. Let me talk about two
specific entities, Jobs and Sources.
*Source* - A URL that is the source of some data. It also contains other
meta-info like description, type, etc. So the required columns are:
source_name, url, description, type.
Also, if you're relying on the local filesystem, what OS and filesystem?
--
Sean
On Nov 24, 2014 11:26 PM, lars hofhansl la...@apache.org wrote:
hi, all
I retested the YCSB data load, and here is a situation which may explain why
the load blocked.
I used too many threads to insert values, so the flush threads could not
effectively handle all the memstores, and the user9099 memstore was queued
last, waiting too long for a flush. I set
hbase.hstore.flusher.count to 20 (default value is 2), and ran YCSB
to load data
with 32 threads
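For reference, the setting discussed here is a region-server-side property in hbase-site.xml; the value 20 is just the one tried in this thread, and the right number depends on your ingest rate and disk throughput:

```xml
<!-- hbase-site.xml (region server): number of memstore flush threads.
     Default is 2; raised to 20 in this thread for a heavy YCSB load. -->
<property>
  <name>hbase.hstore.flusher.count</name>
  <value>20</value>
</property>
```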
Apologies for the late reply. Your change of configuration from 2 to 20 is
right in this case because your data ingest rate is high, I suppose.
Thanks for the reply.
Regards
Ram
On Tue, Nov
hi,Ram
After I modified hbase.hstore.flusher.count, it just improved the load; but
after one hour, the YCSB
load program is still blocked! Then I changed hbase.hstore.flusher.count to 40,
but it's the same as with 20.
On Nov 25, 2014, at 14:47, ramkrishna vasudevan
Hi
I have a cluster of 3 systems, each with 32GB RAM and a 1 TB HD. I have
clustered all three and am able to start and run Hadoop successfully.
I have installed HBase on the master node. Now I am trying to start
ZooKeeper in the cluster. When I start ZooKeeper and give the command
./zkServer.sh
Are you getting any exceptions in the log? Do you have a stack trace when
it is blocked?
On Tue, Nov 25, 2014 at 12:30 PM, louis.hust louis.h...@gmail.com wrote:
Yes, the stack trace is like below:
2014-11-25 13:35:40:946 4260 sec: 232700856 operations; 28173.18 current
ops/sec; [INSERT AverageLatency(us)=637.59]
2014-11-25 13:35:50:946 4270 sec: 232700856 operations; 0 current ops/sec;
14/11/25 13:35:59 INFO client.AsyncProcess: #14, table=usertable2,