Thanks everybody, much appreciated!
On 4/20/13 5:40 AM, "varun kumar" wrote:
>+1
>
>
>On Sat, Apr 20, 2013 at 1:23 PM, Ravindranath Akila <
>ravindranathak...@gmail.com> wrote:
>
>> +1
>>
>> R. A.
>> On 20 Apr 2013 12:07, "Viral Bajaria" wrote:
>>
>> > +1!
>> >
>> >
>> > On Fri, Apr 19, 20
Here you have several examples:
http://hbase.apache.org/book/mapreduce.example.html
http://sujee.net/tech/articles/hadoop/hbase-map-reduce-freq-counter/
http://bigdataprocessing.wordpress.com/2012/07/27/hadoop-hbase-mapreduce-examples/
http://stackoverflow.com/questions/12215313/load-data-into-hbas
Hello:
I'm working on a project, and I'm using HBase to store
the data. I have this method that works fine, but not with the performance
I'm looking for, so what I want is to do the same thing using MapReduce.
public ArrayList findZ(String z) throws IOException {
ArrayList rows = new Arra
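For illustration, here is a minimal, self-contained sketch of the per-row filtering such a scan performs, using a plain in-memory map as a stand-in for the HBase table (the name findZ comes from the question above; everything else is assumed). In a MapReduce job, this same check would live in a TableMapper's map() method, run in parallel over each region's rows:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FindZSketch {
    // Scan every row and keep the keys whose column value equals z; this is
    // the per-row check a mapper would perform over its slice of the table.
    public static List<String> findZ(Map<String, String> table, String z) {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<String, String> e : table.entrySet()) {
            if (z.equals(e.getValue())) {
                rows.add(e.getKey());
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        Map<String, String> table = new LinkedHashMap<>(); // stand-in for the HBase table
        table.put("row1", "a");
        table.put("row2", "z");
        table.put("row3", "z");
        System.out.println(findZ(table, "z")); // prints [row2, row3]
    }
}
```

The links above show the real thing: TableMapReduceUtil wires a Scan to a TableMapper so this filtering runs region-parallel instead of in a single client-side loop.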
Varun:
Thanks for trying out HBASE-8354.
Can you move the text in the Environment section of HBASE-8389 to the Description?
If you have a patch for HBASE-8389, can you upload it?
Cheers
On Sun, Apr 21, 2013 at 10:38 AM, Varun Sharma wrote:
> Hi Ted, Nicholas,
>
> Thanks for the comments. We found
Hi Ted, Nicholas,
Thanks for the comments. We found some issues with lease recovery and I
patched HBASE-8354 to ensure we don't see data loss. Could you please look
at HDFS-4721 and HBASE-8389?
Thanks
Varun
On Sat, Apr 20, 2013 at 10:52 AM, Varun Sharma wrote:
> The important thing to note i
HBase serves the purpose if you use HDFS underneath HBase, as it will
distribute the data, and you can also write HBase MapReduce code in addition
to the HBase APIs. Please check the following links for HBase MapReduce coding...
http://hbase.apache.org/book/mapreduce.example.html
http://hbase.apache.org/book
HBase relies heavily on HDFS features.
HBase also supports running MapReduce jobs.
You can find examples in these places (0.94 codebase):
./security/src/test/java/org/apache/hadoop/hbase/mapreduce
./src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce
./src/main/java/org/apache/hadoop/hbase
Hello Rami,
HBase is not built on top of Hadoop. HDFS is not a must, but it
provides you a better storage option (courtesy of HDFS's distributed
storage, scalability, etc.). You could use it with other filesystems as well, even
with your local FS. And you could definitely use MR jobs to efficiently
han
Since HBaseConfiguration extends Configuration, can you utilize this method
from Configuration?
public void setClassLoader(ClassLoader classLoader) {
Thanks
On Sun, Apr 21, 2013 at 10:00 AM, Amit Sela wrote:
> Hi all,
>
> I'm trying to run an HBase client from an OSGI environment and for tha
I have additional question:
HBase is built on top of Hadoop.
Does HBase use only Hadoop's HDFS, or does it use the MapReduce engine as well?
Thanks a lot!
From: Rami Mankevich
Sent: Tuesday, April 09, 2013 8:52 PM
To: 'user@hbase.apache.org'
Cc: 'Andrew Purtell'
Subject: RE: Hbase question
First of
Hi all,
I'm trying to run an HBase client from an OSGI environment and for that I
need to set the Configuration classLoader.
In Configuration (Hadoop) itself, there is a method for that, but since
HBaseConfiguration.create() is static, the only solution I found was:
Thread.currentThread().setContext
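A common workaround in OSGi (shown here as a pure-Java sketch, not HBase-specific code) is to swap the thread context classloader only around the call that needs it, then restore the previous one in a finally block:

```java
import java.util.function.Supplier;

public class ContextClassLoaderSwap {
    // Runs the given action with the supplied classloader installed as the
    // thread context classloader, restoring the previous one afterwards.
    public static <T> T withContextClassLoader(ClassLoader cl, Supplier<T> action) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(cl);
        try {
            return action.get();
        } finally {
            t.setContextClassLoader(previous); // always restore, even on exception
        }
    }

    public static void main(String[] args) {
        // In OSGi, cl would be the bundle's classloader; here we just use our own.
        ClassLoader cl = ContextClassLoaderSwap.class.getClassLoader();
        String result = withContextClassLoader(cl, () -> "ok");
        System.out.println(result); // prints ok
    }
}
```

The HBaseConfiguration.create() call (or the Configuration.setClassLoader suggestion above) would go inside the supplier, so the swap never leaks to unrelated code on the same thread.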
Time is relative.
What does the timestamp mean?
Sounds like a simple question, but its not. Is it the time your application
says they wrote to HBase? Is it the time HBase first gets the row? Or is it the
time that the row was written to the memstore?
Each RS has its own clock in addition to y
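To illustrate why the per-RS clocks matter, here is a small self-contained sketch (the skew values are invented for the example, and this is not HBase code): two region servers with skewed clocks can assign server-side timestamps that invert the order the writes actually happened in:

```java
public class ClockSkewDemo {
    // Simulated region-server clock: true time plus a fixed skew in ms.
    public static long serverTime(long trueTimeMs, long skewMs) {
        return trueTimeMs + skewMs;
    }

    public static void main(String[] args) {
        long t1 = 1000; // first write, lands on RS A (clock 50 ms fast)
        long t2 = 1010; // second write, lands on RS B (clock 30 ms slow)
        long tsA = serverTime(t1, +50); // 1050
        long tsB = serverTime(t2, -30); // 980
        // The later write got the smaller timestamp, so a read of the
        // "latest" version would return the older value. Supplying
        // timestamps from a single client clock avoids this inversion.
        System.out.println(tsA > tsB); // prints true: order inverted
    }
}
```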
I think Yun wants some global timestamp, not unique ids.
This is doable, technically. However, I'm not sure what the
performance requirement is.
Thanks,
Jimmy
On Sun, Apr 21, 2013 at 9:22 AM, kishore g wrote:
> Its probably not practical to do this for every put. Instead each client
> can get a chun
It's probably not practical to do this for every put. Instead, each client
can get a chunk of ids and use it for every put. Each chunk of ids will
be mutually exclusive and monotonically increasing. You need to know that
there can be holes in ids and ids will not be ordered by timestamp within
a s
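A minimal sketch of this chunking scheme (all names here are hypothetical; in a real deployment the shared counter would be a single HBase counter cell bumped with incrementColumnValue): each client grabs a disjoint chunk of ids from the shared counter and hands them out locally, so only one remote call is needed per CHUNK_SIZE puts:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ChunkIdAllocator {
    static final int CHUNK_SIZE = 1000;
    // Stand-in for the shared counter (in HBase: one counter cell,
    // incremented atomically by CHUNK_SIZE per chunk fetch).
    static final AtomicLong globalCounter = new AtomicLong();

    private long next = 0;
    private long limit = 0;

    // Returns the next id, fetching a fresh chunk when the current one is
    // exhausted. Chunks are mutually exclusive across allocators and ids
    // increase monotonically per allocator, but holes appear whenever a
    // client discards the unused tail of its chunk.
    public synchronized long nextId() {
        if (next >= limit) {
            limit = globalCounter.addAndGet(CHUNK_SIZE);
            next = limit - CHUNK_SIZE;
        }
        return next++;
    }

    public static void main(String[] args) {
        ChunkIdAllocator a = new ChunkIdAllocator();
        ChunkIdAllocator b = new ChunkIdAllocator();
        System.out.println(a.nextId()); // 0
        System.out.println(a.nextId()); // 1
        System.out.println(b.nextId()); // 1000: b's chunk is disjoint from a's
    }
}
```

This also shows the caveat above concretely: b's ids start at 1000 even though b may issue its puts before a finishes, so ids do not track wall-clock order across clients.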
Hi, Ted and JM, thanks for the nice introduction. I have read the Omid paper,
which looks like it uses a centralized party to do the coordination and achieves 72K
transactions per sec. And it does much more work than just assigning
timestamps, and I think it implicitly justifies the usage of a global tim
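For reference, the core of such a centralized timestamp oracle is tiny. This is only a sketch of the idea, not Omid's actual implementation (its real oracle also persists state and batches allocations for that throughput): one atomic counter hands out strictly increasing, globally unique timestamps:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TimestampOracle {
    private final AtomicLong last = new AtomicLong();

    // Every caller, on any thread, observes a strictly increasing,
    // globally unique timestamp.
    public long next() {
        return last.incrementAndGet();
    }

    public static void main(String[] args) {
        TimestampOracle oracle = new TimestampOracle();
        long a = oracle.next();
        long b = oracle.next();
        System.out.println(b > a); // prints true
    }
}
```

The single counter is exactly the centralized party discussed above: it makes ordering trivial but puts every transaction on one hot path, which is why its throughput (not correctness) is the question.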
Hi, no it's not important, only the stops are.
On Sun, Apr 21, 2013 at 3:34 AM, Ted Yu wrote:
> Thanks for sharing the information below.
>
> How do you plan to store time (when the bus gets to each stop) in the row ?
> Or maybe it is not of importance to you ?
>
> On Sat, Apr 20, 2013 at 2:24
You can use MultiRowMutationEndpoint for atomic operations on multiple rows (within
the same region).
On Sun, Apr 21, 2013 at 5:55 AM, Ted Yu wrote:
> Here is code from 0.94 code base:
>
> public void mutateRow(final RowMutations rm) throws IOException {
> new ServerCallable(connection, tableName, r