Ok, I got it. Thank you!
2014-11-03 2:20 GMT+03:00 Sean Busbey bus...@cloudera.com:
On Sun, Nov 2, 2014 at 5:09 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. context.write(hbaseKey, put); //Exception here
I am not an MRUnit expert, but as long as you call the following method
prior to the
Hi
I would like to filter rows that contain a specific value at a specific
{family, qualifier}.
For example, if my table contains the following lines, the cells are of the
form {fam, qual, val}
Row 1 : {fam-1, col0, val1}, {fam-1, col1, val11}
Row 2 : {fam-1, col0, val11}, {fam-1, col1, val21}
Would SingleColumnValueFilter serve your need ?
Cheers
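[Editor's note] A minimal sketch of that suggestion against the example rows above, assuming the 1.x-era client API; the table name "t1" and the target value "val11" are placeholders, not from the thread:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScvfExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      // Keep only rows whose {fam-1, col0} cell equals "val11" (matches Row 2 above)
      SingleColumnValueFilter filter = new SingleColumnValueFilter(
          Bytes.toBytes("fam-1"), Bytes.toBytes("col0"),
          CompareOp.EQUAL, Bytes.toBytes("val11"));
      // Without this, rows that lack the column entirely would also be returned
      filter.setFilterIfMissing(true);
      Scan scan = new Scan();
      scan.setFilter(filter);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```

Note `setFilterIfMissing(true)`: the filter's default is to let through rows that do not contain the column at all, which is rarely what one wants here.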
On Mon, Nov 3, 2014 at 7:28 AM, Sznajder ForMailingList
bs4mailingl...@gmail.com wrote:
Hi
I would like to filter rows that contain a specific value at a specific
{family, qualifier}.
For example, if my table contains the following
Thanks a lot!
Benjamin
On Mon, Nov 3, 2014 at 5:31 PM, Ted Yu yuzhih...@gmail.com wrote:
Would SingleColumnValueFilter serve your need ?
Cheers
On Mon, Nov 3, 2014 at 7:28 AM, Sznajder ForMailingList
bs4mailingl...@gmail.com wrote:
Hi
I would like to filter rows that
Hello,
We have a requirement to determine whether a PUT will create a new row or
update an existing one. I looked at using preBatchMutate in a co-processor and
have the code below.
A few things I need to ask:
1) Is there a more efficient way of doing this?
2) Will region.getClosestRowBefore() add
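[Editor's note] One possible sketch, assuming the 0.98-era coprocessor API: hook `prePut` (per mutation) rather than `preBatchMutate`, and issue an existence-only Get against the region so no cell values are actually read. Whether this beats `getClosestRowBefore()` is something to measure; the class name and what you do with the flag are hypothetical:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class NewRowDetector extends BaseRegionObserver {
  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> e,
                     Put put, WALEdit edit, Durability durability) throws IOException {
    Get get = new Get(put.getRow());
    // Existence check only: the server answers true/false without returning cells
    get.setCheckExistenceOnly(true);
    Result r = e.getEnvironment().getRegion().get(get);
    boolean isNewRow = r.getExists() == null || !r.getExists();
    // ... act on isNewRow (e.g. tag the mutation, bump a counter) ...
  }
}
```

The existence-only Get still pays the cost of a point read per Put, so it adds latency on the write path; it is just cheaper than a full Get because no values are shipped.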
Hi Ted
Any update on this error? I tried pseudo-distributed mode but I still have the
error:
hbase(main):001:0> create 't1','c1'
ERROR: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
Here is some help for this command:
Creates a table. Pass a table name, and a set of column
Here is the method in JniBasedUnixGroupsMapping which appears in stack
trace:
native static void anchorNative();
It is a native method.
Which hadoop release are you using ? How did you install it ?
Cheers
On Mon, Nov 3, 2014 at 9:25 AM, beeshma r beeshm...@gmail.com wrote:
Hi Ted
Any
Hey folks,
How do I remove a dead region server? I manually failed over the HBase
master but it is still appearing in the master UI and also in the status
command that I run.
Thanks,
Nishan
Nishanth,
In my experience the only way I have been able to clear the dead region
servers is to restart the master daemon.
-Pere
On Mon, Nov 3, 2014 at 9:49 AM, Nishanth S nishanth.2...@gmail.com wrote:
Hey folks,
How do I remove a dead region server? I manually failed over the HBase
Thanks Pere. I just did that and the dead region server is still showing
up in the master UI as well as in the status command. I have replication
turned on in HBase and am seeing a few issues. Below is the stack trace I am seeing:
2014-11-03 18:31:00,215 WARN
St.Ack,
I think you're side stepping the issue concerning schema design.
Since HBase isn't my core focus, I also have to ask: since when have heap sizes
over 16GB been the norm?
(Really, 8GB seems to be quite a large heap size...)
On Oct 31, 2014, at 11:15 AM, Stack st...@duboce.net wrote:
Hi,
I am implementing disaster recovery for our HBase cluster and had one quick
question about import/export of the s3n file system.
I know that ExportTable can be given a start time and end time enabling
incremental backups. My question is how to properly store these incremental
backups on
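[Editor's note] For reference, the built-in Export MapReduce job accepts optional versions, start time, and end time arguments (epoch milliseconds), which is what makes the incremental window possible. A sketch computing a 24-hour window; the table name and s3n bucket are placeholders, and the command is only echoed here:

```shell
# Usage: hbase org.apache.hadoop.hbase.mapreduce.Export \
#          <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
END=$(( $(date +%s) * 1000 ))          # now, in epoch milliseconds
START=$(( END - 24 * 3600 * 1000 ))    # 24 hours earlier
# Drop the leading 'echo' to actually run this on a cluster with s3n credentials configured
echo hbase org.apache.hadoop.hbase.mapreduce.Export \
  mytable "s3n://backup-bucket/mytable/$START-$END" 1 "$START" "$END"
```

Encoding the window into the output directory name, as sketched here, makes it easy to see which increments exist and to replay them in order with Import.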
There are many blog posts and articles about people turning to 16GB+
heaps since Java 7 and the G1 collector became mainstream. We run with a 25GB
heap ourselves with very short GC pauses using a mostly untuned G1
collector. Just one example is the excellent blog post by Intel,
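[Editor's note] A "mostly untuned G1" setup like the one described usually amounts to a small hbase-env.sh fragment; the 25GB size matches the figure above, while the pause target is an illustrative value, not from the thread:

```shell
# hbase-env.sh fragment: large fixed-size heap with G1, per the discussion above.
# Requires a Java 7+ JVM; G1 generally needs no further tuning to start with.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms25g -Xmx25g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100"
```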
Bryan,
I wasn’t saying St.Ack’s post wasn’t relevant, but that it’s not addressing the
easiest thing to fix: schema design.
IMHO, that’s shooting one’s self in the foot.
You shouldn’t be using versioning to capture temporal data.
On Nov 3, 2014, at 1:54 PM, Bryan Beaudreault
Hi,
I am using the HBase MultiTableInputFormat to compare 2 tables: Table1 (7
million rows) and Table2 (30 million rows).
In the driver, I am passing two scans (without any filters). In my mapper I
am doing a compare and writing the summary in the reducer.
Any settings specific to this scenario that might speed up
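[Editor's note] A driver sketch under the assumptions above (two plain scans, one per table). The mapper is a stub; the scan settings shown (larger caching, block cache disabled) are the usual knobs that speed up full-table MapReduce scans:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class CompareDriver {

  // Placeholder; the real mapper would emit a comparable key per row
  static class CompareMapper extends TableMapper<ImmutableBytesWritable, Result> { }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "compare-tables");
    job.setJarByClass(CompareDriver.class);

    List<Scan> scans = new ArrayList<Scan>();
    for (String t : new String[] { "Table1", "Table2" }) {
      Scan s = new Scan();
      s.setCaching(500);       // fewer RPC round trips per mapper
      s.setCacheBlocks(false); // don't churn the region server block cache
      // MultiTableInputFormat reads the target table from this scan attribute
      s.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(t));
      scans.add(s);
    }
    TableMapReduceUtil.initTableMapperJob(scans, CompareMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
    // ... set the reducer and output format, then job.waitForCompletion(true) ...
  }
}
```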
Hi,
What do you mean by auth in SQL? It supports SPNEGO in case you are
interested.
On Mon, Nov 3, 2014 at 12:16 PM, Margusja mar...@roo.ee wrote:
Hi
I am looking for solutions where users will be authenticated against SQL
(for example, Oracle) before using the HBase REST interface.
Is there any best
Hi Pere and Nishanth,
In the master branch I developed a bash script for the same problem. Its name
is considerAsDead.sh [1]. It marks the server as dead and starts the recovery process.
[1] https://github.com/apache/hbase/blob/master/bin/considerAsDead.sh
Talat
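[Editor's note] Invocation, per the script linked in [1]; it is run from the HBase install directory on a node that can reach the cluster, and the hostname below is a placeholder:

```shell
# Mark the region server on the given host as dead and kick off recovery
./bin/considerAsDead.sh --hostname dead-rs-host.example.com
```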
On Nov 3, 2014 8:32 PM, Nishanth S
Hi
In one old project where usernames and passwords are in an RDB, we need to
authenticate users from the RDB before they can go via REST to HBase.
So the first thing was Knox.
Best regards, Margus Roo
skype: margusja
phone: +372 51 48 780
web: http://margus.roo.ee
On 04/11/14 04:02, Bharath