Hi Jason,
AFAIK, you need to write custom serialization and deserialization code for
this kind of thing. For any primitive data type and some others like
BigDecimal, HBase has a utility class named
org.apache.hadoop.hbase.util.Bytes. You might take advantage of this class
in your code if you can.
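For illustration, a minimal sketch of round-tripping values through that
Bytes class (the values here are made up):

    import java.math.BigDecimal;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BytesRoundTrip {
        public static void main(String[] args) {
            // primitives and BigDecimal -> byte[] for storage in a cell
            byte[] countBytes = Bytes.toBytes(42L);
            byte[] priceBytes = Bytes.toBytes(new BigDecimal("19.99"));

            // byte[] -> values again on the read path
            long count = Bytes.toLong(countBytes);
            BigDecimal price = Bytes.toBigDecimal(priceBytes);
            System.out.println(count + " / " + price);
        }
    }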
On Tue, Sep 18, 2012 at 12:39 PM, Bryan Beaudreault
wrote:
> Looking closer at it I guess the flush and the IOException probably weren't
> related. So the multi call to delete must have failed at the client (which
> is good). It does seem very strange to me that the pattern always seemed
> to be
On Tue, Sep 18, 2012 at 11:37 AM, Bryan Beaudreault
wrote:
> We are running cdh3u2 on a 150 node cluster, where 50 are HBase and 100 are
> MapReduce. The underlying HDFS spans all nodes.
>
This is a 0.90.4 HBase and then some, Bryan?
What was the issue serving data that you refer to? What did
On Tue, Sep 18, 2012 at 10:43 PM, Jean-Daniel Cryans wrote:
> What's in the master's log?
>
> J-D
> Please find the attached master logs.
> On Tue, Sep 18, 2012 at 3:41 AM, Venkateswara Rao Dokku
> wrote:
> > Hi,
> > I am new to HBase & I wanted to cluster HBase on 2 nodes. I put one
> > of
Jieshan,
Thanks! HTablePool is used in my system.
Best,
Bing
On Thu, Sep 20, 2012 at 11:19 AM, Bijieshan wrote:
> >If it is not safe, it means locking must be set as what is
> >shown in my code, doesn't it?
>
> You should not use one HTableInterface instance in multiple threads ("Sharing
> one HTableInterface in multi-threads + Lock" will degrade the performance).
>If it is not safe, it means locking must be set as what is
>shown in my code, doesn't it?
You should not use one HTableInterface instance in multiple threads ("Sharing one
HTableInterface in multi-threads + Lock" will degrade the performance). There
are 2 options:
1. Create one HTableInterface instance per thread.
2. Use HTablePool and let each thread borrow an instance (see the sketch below).
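A minimal sketch of the pool option, assuming a 0.92+-era client where
close() returns the instance to the pool (table, family, and qualifier
names are invented):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.HTablePool;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PooledWriter {
        // One shared pool; each thread borrows its own HTableInterface,
        // so no locking is needed around puts.
        private static final HTablePool POOL =
            new HTablePool(HBaseConfiguration.create(), 10);

        public void write(byte[] row, byte[] value) throws Exception {
            HTableInterface table = POOL.getTable("mytable");
            try {
                Put put = new Put(row);
                put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), value);
                table.put(put);
            } finally {
                table.close(); // returns the instance to the pool
            }
        }
    }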
Sorry, I didn't keep the exceptions. I will post the exceptions if I get
them again.
But after putting "synchronized" on the writing methods, the exceptions are
gone.
I am a little confused. HTable must be the interface to write/read data
from HBase. If it is not safe, it means locking must be set as what is
shown in my code, doesn't it?
Yes. It should be safe. What you need to pay attention to is that HTable is not
thread-safe. What are the exceptions?
Jieshan
-----Original Message-----
From: Bing Li [mailto:lbl...@gmail.com]
Sent: Thursday, September 20, 2012 10:52 AM
To: user@hbase.apache.org
Cc: hbase-u...@hadoop.apache.org; Zhouxu
Dear Jieshan,
Thanks so much for your reply!
Now locking is not set on the reading methods in my system. It seems to be
fine with that.
But I noticed exceptions when no locking was put on the writing methods. If
multiple threads are writing to HBase concurrently, do you think it is safe
without locking?
I prefer the idea similar to "sensor+time+v"... The problem is the part of
"v". Is it the version number? Or some random number to distinguish the
different versions?
Jieshan
-----Original Message-----
From: Rita [mailto:rmorgan...@gmail.com]
Sent: Thursday, September 20, 2012 9:09 AM
To: user@
I have a similar situation. I have certain keys such that, if I didn't have
the timestamps as part of the key, I would have hundreds and even
thousands of duplicates.
However, I would recommend making sure the timestamp portion is fixed
width (it will guarantee that your keys for a particular sensor sort in time
order).
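To illustrate the fixed-width point with the "sensor+time" layout discussed
above (the 16-character sensor width is an arbitrary choice):

    import org.apache.hadoop.hbase.util.Bytes;

    public class RowKeys {
        // Pad the sensor id to a constant width, then append an 8-byte
        // big-endian timestamp; keys for one sensor then sort in time order.
        public static byte[] rowKey(String sensorId, long timestampMillis) {
            byte[] sensor = Bytes.toBytes(String.format("%-16s", sensorId));
            return Bytes.add(sensor, Bytes.toBytes(timestampMillis));
        }
    }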
You can avoid read & write running in parallel at your application level, if I
read your mail correctly. You can use ReentrantReadWriteLock if that is your
intention. But it's not recommended.
HBase has its own mechanism (MVCC) to manage read/write consistency. When we
start a scan,
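For reference, the application-level pattern being discouraged here would look
roughly like this (a sketch only):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class GuardedAccess {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        public void read() {
            lock.readLock().lock();   // shared: many readers at once
            try {
                // ... Get/Scan against HBase ...
            } finally {
                lock.readLock().unlock();
            }
        }

        public void write() {
            lock.writeLock().lock();  // exclusive: blocks readers and writers
            try {
                // ... Put/Delete against HBase ...
            } finally {
                lock.writeLock().unlock();
            }
        }
    }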
Got it. Thanks. That should have been obvious.
On Wed, Sep 19, 2012 at 2:16 AM, Yusup Ashrap wrote:
> hi Rita, check out this link.
>
> http://happybase.readthedocs.org/en/latest/api.html#happybase.Connection.create_table
>
> On Wed, Sep 19, 2012 at 7:58 AM, Rita wrote:
> > using happybase, the
You probably want to do a review of these chapters too...
http://hbase.apache.org/book.html#architecture
http://hbase.apache.org/book.html#datamodel
http://hbase.apache.org/book.html#schema
On 9/19/12 4:48 PM, "Pankaj Misra" wrote:
>Thank you so much Doug.
>
>You are right, there is only one region to start with, as I am not pre-splitting
re: "pseudo-distributed mode"
Ok, so you're doing a local test. The benefit of having multiple regions per
table spread across multiple RegionServers is that you can engage more of the
cluster in your workload. You can't really do that in a local test.
On 9/19/12 4:48 PM, "Pa
Thank you so much Doug.
You are right, there is only one region to start with, as I am not pre-splitting
them. So for a given set of writes, all are hitting the same region.
I will have the table pre-split as described, and test again. Will the number
of region servers also impact the writes per second?
Hi there,
You haven't described much about your environment, but there are two
things you might want to consider for starters:
1) Is the table pre-split? (I.e., if it isn't, there is one region)
2) If it is, are all the writes hitting the same region?
For other write tips, see this…
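A minimal sketch of pre-splitting at table-creation time (the table name,
family, and split points are made up; assumes a 0.92-era client API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplit {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            HTableDescriptor desc = new HTableDescriptor("mytable");
            desc.addFamily(new HColumnDescriptor("f"));

            // Nine split points -> ten regions, so writers hit several
            // regions from the start instead of just one.
            byte[][] splits = new byte[9][];
            for (int i = 1; i <= 9; i++) {
                splits[i - 1] = Bytes.toBytes(String.format("%02d", i));
            }
            admin.createTable(desc, splits);
            admin.close();
        }
    }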
Just to report back (in case someone else runs into similar issues
during install) - I noticed that one of my friends uses Sun's JDK (and
I was using OpenJDK). I then replaced my JDK and started a new install
with Hadoop 1.0.3 + HBase 0.94.1. Now it works on my MacBook!
Jason
On Wed, Sep 19, 2012 a
Dear All,
I have created 2 clients with multi-threading support to perform concurrent
writes to HBase, with the initial expectation that with multiple threads I should
be able to write faster. The clients that I created are using the native HBase API
and the Thrift API.
To my surprise, the performance
I think I should clarify my question - is there any existing tool to
help convert these types of nested entities to byte arrays so I can
easily write them to the column, or do I need to write my own
serialization/deserialization?
thanks!
Jason
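For what it's worth, one hand-rolled approach is to pack the entity's fields
with a DataOutputStream (a sketch only; the two field names here are guesses
based on the "user login" entity described further down in this digest):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class LoginInfo {
        long totalLogins;
        long lastLoginTs;

        // Pack the nested entity into one byte[] to store as a cell value.
        byte[] toBytes() throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(totalLogins);
            out.writeLong(lastLoginTs);
            out.flush();
            return bos.toByteArray();
        }

        // Reverse of toBytes() for the read path.
        static LoginInfo fromBytes(byte[] bytes) throws IOException {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            LoginInfo info = new LoginInfo();
            info.totalLogins = in.readLong();
            info.lastLoginTs = in.readLong();
            return info;
        }
    }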
On Wed, Sep 19, 2012 at 12:53 PM, Jason Huang wrote:
I just ran fsck and this is the result.
[root@node9-0 ~]# fsck -f /dev/sdb1
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5:
Hello,
I am designing an HBase table for user activity and I want to use a
nested entity to store the login information, as follows:
(1) Define a column qualifier named "user login" and use this as a nested entity;
(2) This nested entity will have the following attributes:
total # of logins
Did you notice the 'file:' scheme for your file?
Have you run fsck to see if your HDFS is healthy?
Cheers
On Wed, Sep 19, 2012 at 8:32 AM, Bai Shen wrote:
> It's the one from the cloudera repo. 0.92.1
>
> On Wed, Sep 19, 2012 at 10:48 AM, Ted Yu wrote:
>
> > Can you tell us which HBase ver
Thanks JD and Shumin for the responses.
I realized that this chain is getting longer and longer and I've tried
many different things in between. I will clean out all previous
installs and start a fresh one with the newest version, following your
instructions step by step. Hopefully that will work.
It's the one from the cloudera repo. 0.92.1
On Wed, Sep 19, 2012 at 10:48 AM, Ted Yu wrote:
> Can you tell us which HBase version you are using ?
>
> On Wed, Sep 19, 2012 at 7:27 AM, Bai Shen wrote:
>
> > I'm running Nutch 2 using HBase as my backend in local mode. Everything
> > seems to be
Hi Pankaj,
You reached the Hadoop user lists. Not everyone here may be an HBase
user. Moving this to the HBase user lists (user@hbase.apache.org).
Please use the right project's lists for best responses.
My reply inline.
On Wed, Sep 19, 2012 at 12:44 PM, Pankaj Misra
wrote:
> Hi All,
>
> I have
Can you tell us which HBase version you are using ?
On Wed, Sep 19, 2012 at 7:27 AM, Bai Shen wrote:
> I'm running Nutch 2 using HBase as my backend in local mode. Everything
> seems to be working correctly except when I run the readdb method. When I
> run readdb, I get the following stack tra
Interesting! Thanx for sharing.
I wonder if storing "minTs" as a delete marker (instead of deleting the
data) can help in your case. I.e., keep somewhere (in HBase/DB) the minTs
value for each part of deleted data and use it when reading data for the
user. When a user deletes a piece of data, you can s
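The read side of that idea could look roughly like this, using
Scan.setTimeRange to hide versions older than the stored minTs (a sketch;
nothing here beyond the minTs idea comes from the original mail):

    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class MinTsRead {
        // Only return cells written at or after minTs, so older versions
        // behave as logically deleted without issuing HBase Deletes.
        static void readVisible(HTable table, long minTs) throws Exception {
            Scan scan = new Scan();
            scan.setTimeRange(minTs, Long.MAX_VALUE);
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result r : scanner) {
                    // ... hand the visible rows back to the user ...
                }
            } finally {
                scanner.close();
            }
        }
    }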
I'm running Nutch 2 using HBase as my backend in local mode. Everything
seems to be working correctly except when I run the readdb method. When I
run readdb, I get the following stack trace.
2012-09-19 10:15:46,485 WARN mapred.LocalJobRunner - job_local_0001
org.apache.hadoop.hbase.client.Retri
Hi Alex:
we have a functionality that allows users to delete the data stored in
HBase, but once in a while users can call us to undelete certain data that
was deleted an hour/day ago. Since we run major compactions weekly, my
wishful thinking was that the data is still there and can be recovered.
Hi Jerry,
Just out of curiosity: what is your use case? Why do you want to do
that? To gain extra protection from software error, or something else?
Alex Baranau
--
Sematext :: http://sematext.com/ :: Hadoop - HBase - ElasticSearch - Solr
On Tue, Sep 18, 2012 at 6:32 PM, lars hofhansl wrote:
DoNotRetryIOException means that the error is considered permanent: it's
not a missing regionserver but, for example, a table that's not enabled.
I would expect a more detailed exception (a "caused by" or something similar).
If it's missing, you should have more info in the regionserver logs.
On Wed