Thanks for the help.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/HBase-prioritizing-writes-over-reads-tp4042838p4043876.html
Sent from the HBase User mailing list archive at Nabble.com.
Hi
Say I have four fields for one record: id, status, targetid, and count.
Status is on or off, targetid can reference another id, and count
records the number of on statuses across all targetids from the same id.
A record can be added, deleted, or updated to change the
Btw, is it possible, or has it ever been done in practice, to implement
something like PutAndGet, which puts in a new row and returns the old row
back to the client?
That would help a lot for my case.
Oh, I realized it is better named GetAndMutate: do the Mutate
anyway, but return the
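For what it's worth, HBase 0.94 offers checkAndPut (compare-then-put), but it returns only a boolean, not the old row. The sketch below only illustrates the semantics the poster is asking for; the class and method names are mine, not an HBase API, and a plain Java map stands in for the table.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a "GetAndMutate": store the new row and hand the
// previous row back to the caller in one atomic step per key.
public class GetAndMutateSketch {
    private final Map<String, String> table = new ConcurrentHashMap<String, String>();

    // Store newValue under rowKey and return the value it replaced (null if none).
    public String getAndMutate(String rowKey, String newValue) {
        return table.put(rowKey, newValue); // Map.put already returns the old value
    }

    public static void main(String[] args) {
        GetAndMutateSketch t = new GetAndMutateSketch();
        System.out.println(t.getAndMutate("id1", "status=on"));  // null: row was absent
        System.out.println(t.getAndMutate("id1", "status=off")); // status=on: old row returned
    }
}
```

In real HBase, getting these semantics server-side would need a coprocessor or a row lock around a Get followed by a Put; the map version above only shows the intended contract.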
Hi,
I'm trying to understand the append operation introduced in 0.94.
Is there documentation or a specification somewhere?
The javadoc for Append does not provide any detail.
The HBase book doesn't mention this operation.
I found some fragmented info in several JIRA issues, but nothing
I have just gone through the code and will try answering your questions.
From what I currently understand, this operation allows me to append bytes
to an existing cell.
Yes
Does this append by creating a new cell with a new timestamp?
Yes
Does this update the cell while maintaining its timestamp?
No.
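The answers above can be pictured with a toy model. The class below is not an HBase type; it just models a single cell's version stack to show the behavior described: the appended result is written as a new cell version with a new timestamp, and the previous version keeps its original timestamp.

```java
import java.util.Collections;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of one cell's versions, ordered newest-first like HBase keeps them.
public class AppendSemantics {
    // timestamp -> value, largest (newest) timestamp first
    final NavigableMap<Long, String> versions =
            new TreeMap<Long, String>(Collections.<Long>reverseOrder());

    // Append suffix to the newest value and store the result at newTs;
    // the older version is left untouched, with its own timestamp.
    public void append(String suffix, long newTs) {
        String current = versions.isEmpty() ? "" : versions.firstEntry().getValue();
        versions.put(newTs, current + suffix);
    }

    public static void main(String[] args) {
        AppendSemantics cell = new AppendSemantics();
        cell.append("foo", 1L);
        cell.append("bar", 2L);
        System.out.println(cell.versions); // {2=foobar, 1=foo}
    }
}
```

Note how version 1 still reads "foo" after the second append: the update did not rewrite the existing cell in place.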
Can you give more details on the prioritization?
I assume you were talking about user tables.
Cheers
On May 7, 2013, at 11:58 PM, kzurek kzu...@proximetry.pl wrote:
Hi all,
I'm trying to scan my HBase table to get only rows that are missing some
qualifiers.
I read that for getting rows with specific qualifiers I should use
something like:
List<Filter> list = new ArrayList<Filter>(2);
Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes(fam1),
Forgot to mention: Hadoop 1.0.4, HBase 0.94.2.
On Wed, May 8, 2013 at 4:52 PM, Amit Sela am...@infolinks.com wrote:
Hi all,
I'm trying to scan my HBase table to get only rows that are missing some
qualifiers.
I read that for getting rows with specific qualifiers I should use
something
I think you can implement your own filter that overrides this method:
public void filterRow(List<KeyValue> ignored) throws IOException {
When certain qualifiers don't appear in the List, you can remove all the
KVs from the passed List.
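The suggestion above can be sketched as follows. Plain strings stand in for KeyValue qualifiers here; the real filter would extend FilterBase and return true from hasFilterRow(), and the qualifier names are illustrative. As written it drops rows that are missing a required qualifier; invert the condition to keep only such rows instead.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Model of a filterRow(List<KeyValue>) override: clearing the list skips the row.
public class RequiredQualifiersFilter {
    private final Set<String> required;

    public RequiredQualifiersFilter(Set<String> requiredQualifiers) {
        this.required = requiredQualifiers;
    }

    // If any required qualifier is absent from this row's KVs, remove them all,
    // which causes the scan to skip the row entirely.
    public void filterRow(List<String> rowQualifiers) {
        if (!rowQualifiers.containsAll(required)) {
            rowQualifiers.clear();
        }
    }

    public static void main(String[] args) {
        RequiredQualifiersFilter f = new RequiredQualifiersFilter(
                new HashSet<String>(Arrays.asList("q1", "q2")));
        List<String> row = new ArrayList<String>(Arrays.asList("q1", "q3"));
        f.filterRow(row);
        System.out.println(row.isEmpty()); // true: q2 was missing, so the row is skipped
    }
}
```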
Cheers
On Wed, May 8, 2013 at 7:00 AM, Amit Sela
Thanks Anoop!
For reference:
- Applying an append containing multiple KeyValues for the same
family/qualifier results in all the values but the last one being discarded.
- Applying an append on a column already containing versions more recent
than the current time apparently discards all
Thanks for the offer Lars! I haven't made much progress speeding things up.
I finally put together a test program that populates a table that is similar to
my production dataset. I have a readme that should describe things, hopefully
enough to make it usable. There is code to populate a test
M7 is not Apache HBase, or any HBase. It is a proprietary NoSQL datastore
with (I gather) an Apache HBase compatible Java API.
As for running HBase on EC2, we recently discussed some particulars, see
the latter part of this thread: http://search-hadoop.com/m/rI1HpK90gu where
I hijack it. I
To add to what Andy said - the key to getting HBase running well in AWS is:
1. Choose the right instance types. I usually recommend the HPC
instances or now the high storage density instances. Those will give
you the best performance.
2. Use the latest Amzn Linux AMIs and the latest HBase and
With respect to EMR, you can run HBase fairly easily.
You can't run MapR with HBase on EMR; stick with Amazon's release.
And you can run it, but you will want to know your tuning parameters up front
when you instantiate it.
Mike Segel
On May 8,