+1 on the RC.
Checked the signature.
Downloaded the source, built and ran the testcases.
Ran Integration Tests with ACL and Visibility labels. Everything looks
fine.
Compaction, flushes, etc. too.
Regards
Ram
On Tue, Apr 1, 2014 at 2:14 AM, Elliott Clark wrote:
> +1
>
> Checked the hash
> Chec
Hi Manju,
If I am understanding correctly what you are trying to do, there is
currently no great way to achieve that with the existing Hive-HBase
integration. Of course you can read and write data to HBase like you
mentioned, but that is pretty much it. If you need more fine-grained
access like accessing
On Mon, Mar 31, 2014 at 4:06 PM, Rendon, Carlos (KBB) wrote:
> As far as I can tell the cache on write settings are more global than
> just the table in question.
>
> https://hbase.apache.org/book/config.files.html
>
>
>
That is correct. BlockCache is per-server currently. There are no
per-
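For reference, the cache-on-write knobs are server-wide settings in hbase-site.xml (key names as listed in the 0.9x-era configuration reference; verify names and defaults against your version):

```xml
<!-- hbase-site.xml: cache-on-write settings apply to the whole
     region server, not to an individual table -->
<property>
  <name>hbase.rs.cacheblocksonwrite</name>
  <value>true</value> <!-- cache data blocks as they are written -->
</property>
<property>
  <name>hfile.block.index.cacheonwrite</name>
  <value>true</value> <!-- cache non-root index blocks on write -->
</property>
<property>
  <name>hfile.block.bloom.cacheonwrite</name>
  <value>true</value> <!-- cache bloom filter blocks on write -->
</property>
```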
I usually look at the imports and use the IDE's native auto-suggest to
search Maven. For my examples I end up using hbase-client version
0.96.1-hadoop2.
Artem Ervits
Data Analyst
New York Presbyterian Hospital
- Original Message -
From: rakesh rakshit [mailto:ihavethepotent...@gmail.com]
Sent: T
As far as I can tell the cache on write settings are more global than just the
table in question.
https://hbase.apache.org/book/config.files.html
Is there a table-level option or API level option I’m not aware of?
-Carlos
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
On Mon, Mar 31, 2014 at 10:37 AM, Rendon, Carlos (KBB) wrote:
> Table does not exist yet. I'm expecting random access across the rowkey
> namespace. I also expect bursts of access to a given row, all of which will
> change its contents and also read it.
>
> My question is from reading here:
> http
Usually, to access HBase from Hive, you map the HBase table using
HBaseStorageHandler and specify the HBase table name in TBLPROPERTIES.
But my question is: I have to access HBase records directly.
INSERT OVERWRITE TABLE top_cool_hbase SELECT name, map(`date`,
cast(coolness as int)) FROM top_cool
+1
Checked the hash
Checked the tar layout.
Played with a single node. Everything seemed good after ITBLL
On Mon, Mar 31, 2014 at 9:23 AM, Stack wrote:
> +1
>
> The hash is good. Doc. and layout looks good. UI seems fine.
>
> Ran on small cluster w/ default hadoop 2.2 in hbase against a tip
Can you elaborate a little on what exactly you mean by "mounting"? At the
least, you will need to create an external table on top of the HBase table
to make its data queryable in Hive.
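A minimal sketch of such an external table (the column names are borrowed from the top_cool example elsewhere in this thread; the HBase table name and the column family "cf" are assumptions):

```sql
-- Hive external table mapped onto an existing HBase table.
-- ":key" binds the Hive 'name' column to the HBase rowkey;
-- "cf:" maps the whole column family 'cf' into a Hive map column.
CREATE EXTERNAL TABLE top_cool_hbase (name string, stats map<string,int>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:")
TBLPROPERTIES ("hbase.table.name" = "top_cool_hbase");
```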
On Mon, Mar 31, 2014 at 2:11 PM, Manju M wrote:
> Without mapping /mounting the hbase table , how can I access and query
>
Without mapping /mounting the hbase table , how can I access and query
hbase table ?
Table does not exist yet. I'm expecting random access across the rowkey
namespace. I also expect bursts of access to a given row, all of which will
change its contents and also read it.
My question is from reading here:
https://hbase.apache.org/book/regionserver.arch.html#block.cache
that block
+1
The hash is good. Doc. and layout looks good. UI seems fine.
Ran on small cluster w/ default hadoop 2.2 in hbase against a tip of the
branch hadoop 2.4 cluster. Seems to basically work (small big linked list
test worked).
TSDB seems to work fine against this RC.
I don't mean to be stealin
For #1, please take a look at the following method in HTable:
public boolean checkAndPut(final byte[] row,
    final byte[] family, final byte[] qualifier, final byte[] value,
    final Put put)
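The contract is: the Put is applied only if the cell's current value matches the expected value, atomically. As a rough in-memory sketch of those semantics (plain Java, not the HBase client API; all names here are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of checkAndPut semantics on an in-memory map: write newValue
// only if the cell currently holds the expected value (a null expected
// value means the cell must be absent). Returns whether the write happened.
public class CheckAndPutSketch {
    private final ConcurrentHashMap<String, String> cells = new ConcurrentHashMap<>();

    public boolean checkAndPut(String rowColumn, String expected, String newValue) {
        if (expected == null) {
            // Succeeds only when the cell did not exist yet.
            return cells.putIfAbsent(rowColumn, newValue) == null;
        }
        // Atomic compare-and-replace on the current value.
        return cells.replace(rowColumn, expected, newValue);
    }

    public String get(String rowColumn) {
        return cells.get(rowColumn);
    }

    public static void main(String[] args) {
        CheckAndPutSketch t = new CheckAndPutSketch();
        System.out.println(t.checkAndPut("r1/cf:q", null, "v1")); // cell absent: succeeds
        System.out.println(t.checkAndPut("r1/cf:q", "v0", "v2")); // mismatch: fails
        System.out.println(t.checkAndPut("r1/cf:q", "v1", "v2")); // match: succeeds
    }
}
```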
For #2, can you clarify your goal ?
Java API provides stronger capability compared to she
We have configured HBase 0.94.1 in pseudo-distributed mode with Hadoop
1.0.3, and it's working fine.
We have several queries:
1. How to perform a search operation with a particular value and compare
it within HBase?
i.e. we stored value XYZ in HBase; now, the next time before storing a new
value in HBa
Hi Aishwarya,
You can pass multiple column families also.
Ex: -Dimporttsv.columns=HBASE_ROW_KEY,cf1:c1,cf2:c2
The link below provides more information:
http://hbase.apache.org/book/ops_mgt.html#importtsv
If you want to divide the table into regions, you can create the table
with split keys.
http
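As an illustration of pre-splitting, here is a hypothetical helper that computes evenly spaced split keys over a two-byte rowkey prefix; in a real program these byte[][] keys would be passed to HBaseAdmin.createTable(desc, splits), which this sketch does not do:

```java
// Hypothetical helper: evenly spaced split keys for pre-splitting a table
// whose rowkeys begin with a two-byte unsigned prefix (0x0000-0xFFFF).
public class SplitKeys {
    // For numRegions regions we need numRegions - 1 boundary keys.
    public static byte[][] evenSplits(int numRegions) {
        byte[][] splits = new byte[numRegions - 1][];
        int range = 0x10000; // size of the two-byte prefix space
        for (int i = 1; i < numRegions; i++) {
            int boundary = (int) ((long) range * i / numRegions);
            splits[i - 1] = new byte[] { (byte) (boundary >>> 8), (byte) boundary };
        }
        return splits;
    }

    public static void main(String[] args) {
        for (byte[] k : evenSplits(4)) {
            // Mask to 0..255 so bytes >= 0x80 print correctly.
            System.out.printf("%02x%02x%n", k[0] & 0xFF, k[1] & 0xFF);
        }
    }
}
```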
Thank you for the input.
On Sun, Mar 30, 2014 at 8:10 PM, Vladimir Rodionov
wrote:
> It can be viable approach if you can keep replication lag under control.
>
> > I'm not sure how the java api deals with reading from a region server
> that
> > is in the process of failing over? Is there a way t
+1
Checked signature.
Ran unit test.
Tested per cell acl feature
Looks good
Anoop
On Monday, March 31, 2014, Andrew Purtell wrote:
> +1
>
> Unit test suite passes 100% 25 times out of 25 runs.
>
> Cluster testing looks good with LoadTestTool, YCSB, ITI, and ITIBLL.
>
> An informal performance te
+1
Unit test suite passes 100% 25 times out of 25 runs.
Cluster testing looks good with LoadTestTool, YCSB, ITI, and ITIBLL.
An informal performance test on a small cluster comparing 0.98.0 and 0.98.1
indicates no serious perf regressions. See email to dev@ titled "Comparison
between 0.98.0 and