Hi,
I am writing a Maven JUnit test for an HBase coprocessor. The problem is
that I want to write a JUnit test that deploys the cp jar into a cluster
and tests its function. However, the test phase runs before install, so I
cannot get a cp jar to deploy at that time.
Is this like a chicken-and-egg problem? An
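One way around this, as a sketch rather than a definitive answer: for the unit test, load the coprocessor class straight from the test classpath into an HBaseTestingUtility mini-cluster (no jar needed), and leave the jar-based deployment to an integration test that the failsafe plugin runs after the package phase. Table, family and class names below are made up.

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyCoprocessorTest {

  // Hypothetical observer standing in for the real coprocessor under test.
  public static class MyObserver extends BaseRegionObserver {
    static final AtomicInteger PUTS = new AtomicInteger();

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
        WALEdit edit, boolean writeToWAL) throws IOException {
      PUTS.incrementAndGet();
    }
  }

  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    UTIL.shutdownMiniCluster();
  }

  @Test
  public void observerSeesPut() throws Exception {
    HTableDescriptor desc = new HTableDescriptor("cp_test");
    desc.addFamily(new HColumnDescriptor("f"));
    // Load the coprocessor by class name; the class is already on the test
    // classpath, so no jar has to be built or deployed for this test.
    desc.addCoprocessor(MyObserver.class.getName());
    UTIL.getHBaseAdmin().createTable(desc);

    HTable table = new HTable(UTIL.getConfiguration(), "cp_test");
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);
    table.close();

    assertEquals(1, MyObserver.PUTS.get());
  }
}

Since the mini-cluster runs in the same JVM as the test, the static counter in the hypothetical observer is visible to the assertion.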
JM,
>100 rows from the 2nd region is using extra time and resources. Why
not ask for only the number of missing lines?
These are things that need to be controlled by the scanning app. It can
control the pagination well without using the PageFilter, I guess. What do you say?
-Anoop-
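A rough sketch of that kind of app-side paging, with made-up table and variable names: the caller remembers the last row key of the previous page and starts the next scan just after it, stopping as soon as the page is full, regardless of region boundaries.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class Pager {

  // Returns one page of rows, starting just after lastRowOfPreviousPage
  // (pass null for the first page).
  public static List<Result> nextPage(HTable table, byte[] lastRowOfPreviousPage,
      int pageSize) throws IOException {
    Scan scan = new Scan();
    if (lastRowOfPreviousPage != null) {
      // Start immediately after the last row that was already returned.
      scan.setStartRow(Bytes.add(lastRowOfPreviousPage, new byte[] { 0 }));
    }
    scan.setCaching(pageSize);
    List<Result> page = new ArrayList<Result>(pageSize);
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        page.add(r);
        if (page.size() >= pageSize) {
          break; // page is full, regardless of region boundaries
        }
      }
    } finally {
      scanner.close();
    }
    return page;
  }
}

The row key of the last Result in the returned page is what gets passed back in for the next call.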
___
So.
After many retries, repairs, etc, I was able to get rid of the
OP_READ_BLOCK exception.
But I still have this one in the hbck output:
13/01/30 21:42:38 WARN regionserver.StoreFile: Failed match of store
file name hdfs://node3:9000/hbase/.META./1028785192/.oldlogs/hlog.1341486023008
13/01/30 2
Congrats guys!!! This is something that was sorely missing in what I am
trying to build... will definitely try it out... just out of curiosity,
what kind of projects/tools at Salesforce use this library?
On Wed, Jan 30, 2013 at 5:55 PM, Huanyou Chang wrote:
> Great tool, I will try it later. th
Great tool, I will try it later. Thanks for sharing!
2013/1/31 Devaraj Das
> Congratulations, James. We will surely benefit from this tool.
>
> On Wed, Jan 30, 2013 at 1:04 PM, James Taylor
> wrote:
> > We are pleased to announce the immediate availability of a new open
> source
> > project, Ph
Hi,
When I do an hbase hbck, I'm getting this:
13/01/30 19:29:55 WARN hdfs.DFSClient: Failed to connect to
/192.168.23.7:50010, add to deadNodes and continue
java.io.IOException:
Got error for OP_READ_BLOCK, self=/192.168.23.7:57612,
remote=/192.168.23.7:50010, for file
/hbase/entry/df36c172b5b652
Cool. Will play a bit later on. Was waiting for it to appear.
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor wrote:
> We are pleased to announce the immediate availability of a new open source
> project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
> Salesforce.com. We put t
Congratulations, James. We will surely benefit from this tool.
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor wrote:
> We are pleased to announce the immediate availability of a new open source
> project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
> Salesforce.com. We put t
Hi All,
I am using HBase 0.92.1. I am trying to break the HBase bulk loading into
multiple MR jobs since I want to populate more than one HBase table from a
single csv file. I have looked into the MultiTableOutputFormat class but it
doesn't solve my purpose because it does not generate HFiles.
I modifie
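If it helps, a rough sketch of the one-job-per-table approach (mapper, table, family and column names are made up): each job writes HFiles for exactly one target table via HFileOutputFormat, and each output directory is then bulk loaded separately.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadTableA {

  // Hypothetical mapper: parses one CSV line and emits a Put for table A only.
  public static class CsvToTableAMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] cols = value.toString().split(",");
      byte[] row = Bytes.toBytes(cols[0]);
      Put put = new Put(row);
      put.add(Bytes.toBytes("f"), Bytes.toBytes("c1"), Bytes.toBytes(cols[1]));
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulkload-tableA");
    job.setJarByClass(BulkLoadTableA.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapperClass(CsvToTableAMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    // Sets the partitioner, reducer and HFileOutputFormat so the generated
    // HFiles line up with table A's current region boundaries.
    HTable table = new HTable(conf, "tableA");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    FileInputFormat.addInputPath(job, new Path(args[0]));    // the csv input
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HFile output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A second job with the same shape, just a different mapper and table, produces the HFiles for the other table; each output directory is then loaded with the completebulkload tool (LoadIncrementalHFiles).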
Wow.
Thanks so much for open sourcing this.
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor wrote:
> We are pleased to announce the immediate availability of a new open source
> project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
> Salesforce.com. We put the SQL back in th
Great stuff! I've been waiting for this. Congrats on open sourcing and
thanks for sharing!
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor wrote:
> We are pleased to announce the immediate availability of a new open source
> project, Phoenix, a SQL layer over HBase that powers the HBase use case
Congrats lads!
St.Ack
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor wrote:
> We are pleased to announce the immediate availability of a new open source
> project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
> Salesforce.com. We put the SQL back in the NoSQL:
>
> * Availab
We are pleased to announce the immediate availability of a new open
source project, Phoenix, a SQL layer over HBase that powers the HBase
use cases at Salesforce.com. We put the SQL back in the NoSQL:
* Available on GitHub at https://github.com/forcedotcom/phoenix
* Embedded JDBC driver imple
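To give a quick feel for the embedded JDBC driver, a minimal sketch; the ZooKeeper quorum (localhost) and the web_stat table are assumptions for illustration, not part of the announcement.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixExample {
  public static void main(String[] args) throws Exception {
    // The connection URL format is jdbc:phoenix:<zookeeper quorum>
    Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
    try {
      PreparedStatement ps =
          conn.prepareStatement("SELECT host, COUNT(*) FROM web_stat GROUP BY host");
      ResultSet rs = ps.executeQuery();
      while (rs.next()) {
        System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
      }
    } finally {
      conn.close();
    }
  }
}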
On Mon, Jan 28, 2013 at 12:14 PM, Jim Abramson <j...@magnetic.com> wrote:
> Hi,
>
> We are testing HBase for some read-heavy batch operations, and
> encountering frequent, silent RegionServer crashes.
'Silent' is interesting. Which files did you check? .log and the .out?
Nothing in t
If increasing the timeout reduces the magnitude of the error, then this is
probably it.
The solution is IMHO to introduce a nonce (probably internally generated by
the client) on non-idempotent operations to convert them into idempotent
ones. I know this has been discussed before, but I am not sure a
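To illustrate the idea, a toy sketch in plain Java (not HBase code) of how a client-generated nonce lets the server treat a retried, non-idempotent request as a no-op:

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class NonceDemo {

  // "Server" side: remember which nonces have already been applied.
  static final Set<Long> appliedNonces = new HashSet<Long>();
  static long counter = 0;

  static void serverIncrement(long nonce, long delta) {
    if (!appliedNonces.add(nonce)) {
      return;           // duplicate delivery of the same logical request: ignore it
    }
    counter += delta;   // applied exactly once
  }

  public static void main(String[] args) {
    long nonce = new Random().nextLong();  // client generates the nonce once per logical increment
    serverIncrement(nonce, 5);
    serverIncrement(nonce, 5);             // a retry after an RPC timeout reuses the same nonce
    System.out.println(counter);           // prints 5, not 10
  }
}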
So if this bug you mentioned (3787) is correct, there is no workaround.
Once you reach the 60 second timeout, you have no way of knowing whether the server
finished processing this Increment or not, so you won't know whether to resend it or
not.
On Jan 30, 2013, at 8:28 PM, Andrew Purtell wrote:
> This may
There is another option:
You could do a MapReduce job that, for each row from the main table, emits
all the times at which it would fall in the window of time.
For example, "event1" would emit {"10:06": event1}, {"10:05": event1} ...
{"10:00": event1} (also for "10:07" if you want to know those that happen
i
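A rough sketch of such a mapper, assuming each input line carries an event id and a start timestamp, and using a made-up window length of 7 minutes:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WindowExplodeMapper extends Mapper<LongWritable, Text, Text, Text> {

  private static final long WINDOW_MINUTES = 7;  // made-up window length, e.g. 10:00 .. 10:06

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Assumed input line format: eventId <TAB> startEpochMillis
    String[] parts = value.toString().split("\t");
    String eventId = parts[0];
    long startMinute = Long.parseLong(parts[1]) / 60000L;
    for (long m = startMinute; m < startMinute + WINDOW_MINUTES; m++) {
      // Key is the minute bucket; value is the event that falls into it,
      // so a reducer sees all events active in that minute together.
      context.write(new Text(Long.toString(m)), new Text(eventId));
    }
  }
}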
This may be an old one: https://issues.apache.org/jira/browse/HBASE-3787
On Wed, Jan 30, 2013 at 9:25 AM, Mesika, Asaf wrote:
> Hi,
>
> We ran the QA test again, this time with INFO messages enabled on the client
> side (HTable).
> We saw many retry attempts which failed on RPC timeouts (we use the
Hi,
We ran the QA test again, this time with INFO messages enabled on the client side
(HTable).
We saw many retry attempts which failed on RPC timeouts (we use the default of
60 seconds).
I guess when this error occurs, the increment shouldn't really happen, right?
This may explain the diff we see f
Hi,
I'm having the following issues with manually triggering major compaction on
selected regions via HBaseAdmin:
1. When I trigger major compaction on the first region, which does not
contain the key, it runs normally - I see a message in the logs ([..]Large
Compaction requested ... Because: User-tri
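For context, a minimal sketch of how a per-region major compaction is triggered through HBaseAdmin; the region name below is a placeholder (a real one would come from the web UI or .META.):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CompactRegion {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // majorCompact accepts either a table name or a region name; passing a
    // region name requests the major compaction for just that region.
    admin.majorCompact("mytable,startkey,1359576000000.abcdef1234567890abcdef1234567890.");
  }
}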
Sounds like if you had 1000 regions, each with 99 rows, and you asked
for 100, then you'd get back 99,000. My guess is that a Filter is
serialized once and sent successively to each region, and that
it isn't updated between regions. I don't think doing that would be too
easy.
Toby
On 1/30/13
Hi Anoop,
So does it mean the scanner can send back LIMIT*2-1 lines max? Reading
100 rows from the 2nd region is using extra time and resources. Why
not ask for only the number of missing lines?
JM
2013/1/30, Anoop Sam John :
> @Anil
>
> >I could not understand why it goes to multiple region
@Anil
>I could not understand why it goes to multiple regionservers in
parallel. Why can it not guarantee results <= page size (my guess: due to
multiple RS scans)? If you have used it then maybe you can explain the
behaviour?
A scan from the client side never goes to multiple RSs in parallel. Scan fr
Hi Rodrigo.
I am using a solution with 2 tables: one main and one as an index.
I have ~50 million records; in my case I need to scan the whole table, and as a
result I will have 50 million scans, which will kill performance.
Is there any other approach to model my use case using HBase?
Thanks
Oleg.
On Mo
Hi Mohammad,
You are most welcome to join the discussion. I have never used PageFilter
so I don't really have concrete input.
I had a look at
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/PageFilter.html
I could not understand why it goes to multiple regionservers in
parallel