Hi,
Have you taken a look at:
http://stackoverflow.com/questions/14451554/hbase-org-apache-hadoop-hbase-pleaseholdexception
this may simply be a hosts file issue. I have seen many startup issues come
down to assigning a hostname to the wrong IP. So if you could paste your
hosts file, that would help.
Hi,
I want to delete some rows that have a specified column value. What is the
fastest way to do that?
Thanks.
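One common approach (a sketch only; the table, family, qualifier, and value names below are hypothetical) is to scan with a SingleColumnValueFilter to find the matching rows and then delete them. For very large volumes, a MapReduce job or a coprocessor-side delete is usually faster than shell-driven deletes.

```shell
# HBase shell sketch -- 'mytable', 'cf', 'status', and 'obsolete' are hypothetical names.
# 1) Find the rows whose column matches the value:
scan 'mytable', {FILTER => "SingleColumnValueFilter('cf', 'status', =, 'binary:obsolete')"}
# 2) Delete each matching row (repeat per row key returned by the scan):
deleteall 'mytable', 'row-key-from-scan'
```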
Hi Lars,
What I am trying to do is an internal scan inside a coprocessor, and then
stream the kv buffer as a byte array to a separate process for processing.
I hit a snag on how to reconstruct the kv in the separate process from the
byte array, since I do not know the correct offsets.
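A minimal sketch of the offset arithmetic involved, using plain `java.nio` rather than the HBase classes. The layout assumed here is the commonly documented KeyValue wire format (4-byte key length, 4-byte value length, then key and value, where the key is row length, row, family length, family, qualifier, timestamp, and type); verify it against the `KeyValue` source for your HBase version before relying on it.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: serialize one cell in the assumed KeyValue layout, then parse it
// back using only lengths and offsets, as a separate process would have to.
public class KeyValueParseSketch {
    public static void main(String[] args) {
        byte[] row = "row1".getBytes(StandardCharsets.UTF_8);
        byte[] family = "cf".getBytes(StandardCharsets.UTF_8);
        byte[] qualifier = "q".getBytes(StandardCharsets.UTF_8);
        byte[] value = "hello".getBytes(StandardCharsets.UTF_8);
        long ts = 1234L;

        // Serialize: 4B key length | 4B value length | key | value,
        // key = 2B row len | row | 1B family len | family | qualifier | 8B ts | 1B type.
        int keyLen = 2 + row.length + 1 + family.length + qualifier.length + 8 + 1;
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + keyLen + value.length);
        buf.putInt(keyLen).putInt(value.length);
        buf.putShort((short) row.length).put(row);
        buf.put((byte) family.length).put(family).put(qualifier);
        buf.putLong(ts).put((byte) 4); // 4 = Put type code
        buf.put(value);

        // Parse it back from the raw bytes.
        ByteBuffer in = ByteBuffer.wrap(buf.array());
        int kLen = in.getInt();
        int vLen = in.getInt();
        short rowLen = in.getShort();
        byte[] rowOut = new byte[rowLen];
        in.get(rowOut);
        byte famLen = in.get();
        byte[] famOut = new byte[famLen];
        in.get(famOut);
        // Qualifier length is whatever remains of the key after the fixed parts.
        int qualLen = kLen - 2 - rowLen - 1 - famLen - 8 - 1;
        byte[] qualOut = new byte[qualLen];
        in.get(qualOut);
        long tsOut = in.getLong();
        byte type = in.get();
        byte[] valOut = new byte[vLen];
        in.get(valOut);

        System.out.println(new String(rowOut, StandardCharsets.UTF_8)); // row1
        System.out.println(new String(valOut, StandardCharsets.UTF_8)); // hello
    }
}
```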
To Ted,
--"Can you tell me why readings corresponding to different timestamps would
appear in the same row ?"
Does that mean the data versions that belong to the same row should at
least have the same timestamps?
For adding a row into HBase, I can use single Put instance, for example,
Put put = n
94.7
On Fri, Sep 27, 2013 at 2:21 PM, Vladimir Rodionov
wrote:
> Version?
>
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: vrodio...@carrieriq.com
>
>
> From: Jay Vyas [jayunit...@gmail.com]
> S
I don't think there's a CDH that includes Hadoop 1.2.1
So either your code is doing something slow or it's the reading itself. For
the latter, make sure you go through
http://hbase.apache.org/book.html#perf.reading, and we also recently had
this thread on the list where you can see some "live" performance numbers.
Version?
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
From: Jay Vyas [jayunit...@gmail.com]
Sent: Friday, September 27, 2013 11:14 AM
To: user@hbase.apache.org
Subject: Still not
A few days ago I pasted logs from my cluster, which won't let me create a
table:
http://stackoverflow.com/questions/18993200/hbase-cant-create-table-who-to-blame-hmaster-or-zookeeper
Even though HMaster is running according to jps, I get a
PleaseHoldException.
Hate to bump, but I think it would
Hi Jean,
HBase 0.94.6 and Hadoop 1.2.1, Cloudera distributions.
I in fact tried that out: in place of doing the Get operations, I created
stub data and returned that instead. It was practically the same speed.
Nothing changed. After 20 mins or so, when I check the job status, it
hardly reach
Your details are missing important bits like your configuration,
Hadoop/HBase versions, etc.
Doing those random reads inside your MR job, especially if they are reading
cold data, will indeed make it slower. Just to get an idea: if you skip
doing the Gets, how fast does it become?
J-D
On Fri, S
Hi everyone,
I have posted this question many times before, and I've given full details
on Stack Overflow:
http://stackoverflow.com/q/19056712/938959
Please, I need someone to guide me in the right direction here.
Help much appreciated!
--
Regards-
Pavan
That means that the master cluster isn't able to see any region servers in
the slave cluster... is cluster B up? Can you create tables?
J-D
On Fri, Sep 27, 2013 at 3:23 AM, Arnaud Lamy wrote:
> Hi,
>
> I tried to configure a replication with 2 boxes (a&b). A hosts hbase & zk
> and b only hbase
Hi,
I tried to configure replication with 2 boxes (A & B). A hosts HBase &
ZK, and B only HBase. A is on zk:/hbase and B is on zk:/hbase_b. I used the
start-hbase.sh script to start HBase, and I set
HBASE_MANAGES_ZK=false on both.
A is the master and B is the slave. I added a peer on A and when I list it
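For reference, a setup like the one described above is usually wired together roughly as follows in the 0.94-era releases (a sketch; the ZooKeeper host name is hypothetical, and the exact property names should be checked against the HBase Reference Guide for your version):

```shell
# In hbase-site.xml on BOTH clusters:
#   <property><name>hbase.replication</name><value>true</value></property>
# Cluster B additionally sets zookeeper.znode.parent to /hbase_b.
# Then, in the HBase shell on cluster A (the master):
add_peer '1', 'zkhost:2181:/hbase_b'
# And enable replication on the column family being replicated:
disable 'mytable'
alter 'mytable', {NAME => 'cf', REPLICATION_SCOPE => '1'}
enable 'mytable'
```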
Hi Amit,
Would you be able to open a ticket summarizing your findings? Can you
provide a sample project that demonstrates the behavior you're seeing? We
could use that to provide a fix and, I hope, some kind of unit or
integration test.
Thanks,
Nick
On Sun, Sep 22, 2013 at 6:10 AM, Amit Sela wrote:
Hi there,
Are you using the REST Gateway with JSON serialization? How are you forming
your queries? Do you use Jersey's "mapped" notation (with the '@' prepended
to attribute names)?
Please have a look at the recent comments [0] on HBASE-9435 and weigh in.
Thanks!
Nick
[0]:
https://issues.apach
Not sure I follow.
You have a single row with two columns?
In your scenario you'd see that supplier c has 15k iff you query the latest
data, which seems to be what you want.
Note that you could also query as of TS 4 (c:20k), TS3 (d:20k), TS2 (d:10k)
-- Lars
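The read semantics Lars describes can be modeled with a plain sorted map (this is a conceptual sketch, not the HBase API): each cell keeps multiple timestamped versions, a default read returns the newest one, and a time-range read returns the newest version at or before the requested timestamp. The values mirror the thread's example, {10K:1, 20K:3, 15K:5}, where 1, 3, 5 are timestamps.

```java
import java.util.TreeMap;

// Conceptual model of HBase versioned-cell reads using a sorted map:
// key = timestamp, value = cell value at that timestamp.
public class VersionedCellSketch {
    public static void main(String[] args) {
        TreeMap<Long, String> versions = new TreeMap<>();
        versions.put(1L, "10K");
        versions.put(3L, "20K");
        versions.put(5L, "15K");

        // Default read: the newest version wins.
        System.out.println(versions.lastEntry().getValue());    // 15K
        // Read "as of" TS 4: newest version with ts <= 4.
        System.out.println(versions.floorEntry(4L).getValue()); // 20K
        // Read "as of" TS 2: newest version with ts <= 2.
        System.out.println(versions.floorEntry(2L).getValue()); // 10K
    }
}
```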
Can you tell me why readings corresponding to different timestamps would
appear in the same row ?
Thanks
On Fri, Sep 27, 2013 at 8:57 AM, yonghu wrote:
> (1,3,5) are timestamp.
>
> regards!
>
> Yong
>
>
> On Fri, Sep 27, 2013 at 4:47 PM, Ted Yu wrote:
>
> > In {10K:1, 20K:3, 15K:5}, what does
(1,3,5) are timestamp.
regards!
Yong
On Fri, Sep 27, 2013 at 4:47 PM, Ted Yu wrote:
> In {10K:1, 20K:3, 15K:5}, what does the value (1, 3, 5) represent ?
>
> Cheers
>
>
> On Fri, Sep 27, 2013 at 7:24 AM, yonghu wrote:
>
> > Hello,
> >
> > In my understanding, the timestamp of each data versi
I will say yes. 128 MB is the max size, but only the real content is
flushed, and that's what is displayed. This value is memstoreSize.get().
JM
2013/9/26 aiyoh79
> Hi,
>
> http://pastebin.com/z9zPb49Y
>
> Looking at the log entries above, is it normal to have different size for
> memsize (164.8
In {10K:1, 20K:3, 15K:5}, what does the value (1, 3, 5) represent ?
Cheers
On Fri, Sep 27, 2013 at 7:24 AM, yonghu wrote:
> Hello,
>
> In my understanding, the timestamp of each data version is generated by Put
> command. The value of TS is either indicated by user or assigned by HBase
> itsel
Hello,
In my understanding, the timestamp of each data version is generated by the
Put command. The value of TS is either specified by the user or assigned by
HBase itself. If the TS is generated by HBase, it only records when (the
time point) that data version was generated (it has no meaning to the applica