P.S.
Can you summarize HBaseWD in your blog?
That is on my todo list! You pushed it higher up the priority list ;)
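For anyone following the thread who hasn't seen HBaseWD: as I understand it, the core idea is to prepend a small deterministic prefix to sequential row keys so that writes spread across several regions instead of hammering one region server, with reads fanning out one scan per prefix and merging the results. A toy Python sketch of that idea (bucket count, key format, and function names are illustrative, not the library's actual Java API):

```python
import zlib

BUCKETS = 8  # illustrative bucket count; the real library makes this configurable

def distributed_key(original: str) -> str:
    # A deterministic prefix spreads sequential keys across
    # BUCKETS distinct lexicographic ranges (regions).
    bucket = zlib.crc32(original.encode()) % BUCKETS
    return f"{bucket}:{original}"

def original_key(dist: str) -> str:
    # Strip the prefix to recover the application-level key.
    return dist.split(":", 1)[1]

# Monotonically increasing keys no longer sort into one hot range:
keys = [f"2011-05-19-{i:04d}" for i in range(6)]
dist = sorted(distributed_key(k) for k in keys)
# A full read fans out over every bucket prefix and merges:
assert sorted(original_key(k) for k in dist) == keys
```

The trade-off is that a single range scan on the original key space becomes one scan per bucket on the distributed key space, which the library handles for you.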
On Thu, May 19, 2011 at 6:50 AM, Weishung Chung weish...@gmail.com wrote:
I have another question about option 2. It seems like I need ... cells of it or add new cells.
Please let me know if you have more Qs!
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop - HBase
On Wed, May 18, 2011 at 1:19 AM, Weishung Chung weish...@gmail.com wrote:
I have another question. For overwriting, do I need to delete the existing one before re-writing it?
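On the overwrite question: in HBase a Put on the same row/column simply writes a new version of the cell, and a Get returns the newest version by default, so no explicit Delete is needed before re-writing. A toy Python model of that versioning behavior (not the HBase client API; class and method names are made up for illustration):

```python
import time

class VersionedStore:
    """Toy model of HBase-style cell versioning: each put appends a
    (timestamp, value) version; get returns the newest one."""

    def __init__(self):
        self.cells = {}  # (row, column) -> [(timestamp, value), ...]

    def put(self, row, column, value, ts=None):
        ts = time.time() if ts is None else ts
        self.cells.setdefault((row, column), []).append((ts, value))

    def get(self, row, column):
        versions = self.cells.get((row, column))
        return max(versions)[1] if versions else None  # newest timestamp wins

store = VersionedStore()
store.put("row1", "cf:qual", "old", ts=1)
store.put("row1", "cf:qual", "new", ts=2)  # overwrite: no Delete needed first
assert store.get("row1", "cf:qual") == "new"
```

Old versions hang around until compaction discards them according to the column family's version settings, which is why a plain overwrite is cheap.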
On Sat, May 14, 2011 at 10:17 AM, Weishung Chung weish...@gmail.com wrote:
Yes, it's simple yet useful. I am integrating it. Thanks a lot :)
On Fri, May 13, 2011 at 3:12 PM, Alex Baranau wrote:
... (https://github.com/sematext/HBaseWD/issues/1) this doesn't affect stability and/or major functionality.
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop - HBase
On Fri, May 13, 2011 at 10:45 AM, Weishung Chung weish...@gmail.com wrote:
What's the status on this package? Is it mature enough?
I am using it in my project; I tried out the write method yesterday and am going to incorporate it into the read method tomorrow.
On Wed, May 11, 2011 at 3:41 PM, Alex Baranau alex.barano...@gmail.com wrote:
The start/end rows may be written twice.
Dear fellow HBase developers,
Could someone educate me and let me know how to figure out the number of disk seeks involved in a range search (startRow to endRow specified in a Scan)? Also, could anyone give me the details of all the steps involved once the Scan for range retrieval is called? I know
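While the exact seek count depends on how many store files and index blocks the region servers have to consult, the key-space side of a range Scan is simple to model: row keys are kept in lexicographic order, startRow is inclusive, and stopRow is exclusive. A toy Python sketch of that lookup (illustrative only, not HBase internals):

```python
import bisect

def range_scan(sorted_rows, start_row, stop_row):
    # Binary-search the sorted key space: startRow is inclusive,
    # stopRow exclusive (matching Scan's contract).
    lo = bisect.bisect_left(sorted_rows, start_row)
    hi = bisect.bisect_left(sorted_rows, stop_row)
    return sorted_rows[lo:hi]

rows = ["user01", "user02", "user05", "user09", "user12"]
print(range_scan(rows, "user02", "user09"))  # -> ['user02', 'user05']
```

In the real system the sorted key space is split across regions, so a wide range can touch several regions, and within each region every store file overlapping the range contributes its own seeks.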
Hey my fellow Hadoop/HBase developers,
I just came across this Google compression/decompression package yesterday; could we make good use of this compression scheme in Hadoop? It's written in C++ though.
http://code.google.com/p/snappy/
I haven't looked close
-- Forwarded message --
From: Weishung Chung weish...@gmail.com
Date: Tue, Mar 22, 2011 at 11:31 AM
Subject: Re: File formats in Hadoop
To: Vivek Krishna vivekris...@gmail.com
Cc: u...@hbase.apache.org, common-u...@hadoop.apache.org,
qwertyman...@gmail.com, Doug Cutting cutt