Perhaps. I figured it would be easier to go mapreduce -> hbase
instead of mapreduce -> output file in hdfs -> load output file into
hbase as a separate job. Then again, for performance, maybe hdfs ->
hbase is better than mapreduce -> hbase and I should just plan to do
that instead.
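For the direct mapreduce -> hbase path discussed above, a minimal sketch using the newer org.apache.hadoop.hbase.mapreduce API might look like the following. The table name "mytable", family "cf", qualifier "val", and the tab-separated input layout are placeholders for illustration, not details from this thread.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class HdfsToHBase {

  // Mapper: parse "rowkey<TAB>value" lines from the HDFS input file.
  static class LineMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      String[] parts = line.toString().split("\t", 2);
      if (parts.length == 2) {
        context.write(new Text(parts[0]), new Text(parts[1]));
      }
    }
  }

  // Reducer: turn each key/value into a Put written directly to the table.
  static class PutReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
    @Override
    protected void reduce(Text rowKey, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      for (Text v : values) {
        Put put = new Put(Bytes.toBytes(rowKey.toString()));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("val"), Bytes.toBytes(v.toString()));
        context.write(new ImmutableBytesWritable(put.getRow()), put);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();
    Job job = new Job(conf, "hdfs-to-hbase");
    job.setJarByClass(HdfsToHBase.class);
    job.setMapperClass(LineMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // Wires up TableOutputFormat so reduce output goes straight into HBase.
    TableMapReduceUtil.initTableReducerJob("mytable", PutReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

TableMapReduceUtil.initTableReducerJob() wires TableOutputFormat into the job, so each Put emitted by the reducer goes straight to the region servers rather than to an output file in HDFS that has to be loaded separately.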
Either works for me. What about others?
St.Ack
On Thu, Feb 11, 2010 at 2:33 PM, Andrew Purtell wrote:
> March 8 is ok -- afternoon/evening.
>
> - Andy
>
>> From: Stack
>> Can we do March 8th? I can't do March 9th.
>> St.Ack
+1 for March 8th evening post 6pm
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Thu, Feb 11, 2010 at 2:33 PM, Andrew Purtell wrote:
> March 8 is ok -- afternoon/evening.
>
> - Andy
>
> > From: Stack
> > Can we do March 8th? I can't do March 9th.
March 8 is ok -- afternoon/evening.
- Andy
> From: Stack
> Can we do March 8th? I can't do March 9th.
> St.Ack
>
> On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote:
> > Hi all,
> >
> > Trend Micro would like to host HUG9 at our offices in Cupertino:
On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne wrote:
> I was under the impression that you could read from/write to an hbase table
> from within a mapreduce job. Import and Export look like methods for
> reading HDFS files into hbase and dumping hbase into an HDFS file.
>
Yes. Isn't that wh
Can we do March 8th? I can't do March 9th.
St.Ack
On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote:
> Hi all,
>
> Trend Micro would like to host HUG9 at our offices in Cupertino:
>
> http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.
Sounds good to me.
J-D
On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote:
> Hi all,
>
> Trend Micro would like to host HUG9 at our offices in Cupertino:
>
>
> http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.0625,-95.677068&sspn=37.1
You can find plenty of examples in nutchbase:
http://github.com/apache/nutch/tree/nutchbase
On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne wrote:
> I was under the impression that you could read from/write to an hbase table
> from within a mapreduce job. Import and Export look like methods for
> reading HDFS files into hbase and dumping hbase into an HDFS file.
Hi all,
Trend Micro would like to host HUG9 at our offices in Cupertino:
http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.0625,-95.677068&sspn=37.136668,65.214844&ie=UTF8&hq=&hnear=10101+N+De+Anza+Blvd,+Cupertino,+Santa+Clara,+California+9
I was under the impression that you could read from/write to an hbase
table from within a mapreduce job. Import and Export look like
methods for reading HDFS files into hbase and dumping hbase into an
HDFS file.
On Feb 11, 2010, at 12:25 PM, Guohua Hao wrote:
> Hello there,
> Did you take a look at the Import and Export classes under package
> org.apache.hadoop.hbase.mapreduce?
Hello there,
Did you take a look at the Import and Export classes under package
org.apache.hadoop.hbase.mapreduce? They are mostly using new APIs in my
mind. Correct me if I am wrong.
Thanks,
Guohua
On Thu, Feb 11, 2010 at 2:13 PM, David Hawthorne wrote:
> I'm looking for some examples for reading data out of hbase for use
> with mapreduce and for inserting data into hbase from a mapreduce
> job.
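For the other direction, reading an HBase table as MapReduce input with that newer org.apache.hadoop.hbase.mapreduce API, a minimal row-counting sketch might look like this; the table name "mytable" and family "cf" are placeholders.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class HBaseRowCount {

  // Each map() call receives one row: its key plus the matching Result.
  static class CountMapper extends TableMapper<NullWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
        throws IOException, InterruptedException {
      context.getCounter("hbase", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();
    Job job = new Job(conf, "hbase-row-count");
    job.setJarByClass(HBaseRowCount.class);

    // The Scan controls which families/columns each map() call sees.
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));
    scan.setCaching(500);  // fetch rows in batches instead of one RPC per row

    TableMapReduceUtil.initTableMapperJob(
        "mytable", scan, CountMapper.class,
        NullWritable.class, NullWritable.class, job);

    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

initTableMapperJob() sets up TableInputFormat, which creates roughly one map task per region and hands the mapper one (row key, Result) pair per row of the scan.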
I'm looking for some examples for reading data out of hbase for use
with mapreduce and for inserting data into hbase from a mapreduce
job. I've seen the example shipped with hbase, and, well, it doesn't
exactly make things click for me. It also looks like it's using the
old API, so maybe
Sorry guys, this is not doable for 0.20.4, the RPC protocol would have
to change and make previous clients incompatible. I really wish it
was otherwise.
2010/2/11 Marc Limotte:
> There's a patch in JIRA to support MultiGet for hbase-0.20.3. See
> http://issues.apache.org/jira/browse/HBASE-1845
On Wed, Feb 10, 2010 at 7:16 AM, Bruno Dumon wrote:
> Hi,
>
> I would like a filter that accepts rows as long as the first X bytes
> of the row key are less than or equal to a certain byte array.
>
Would an inclusivestoprow filter work for you where the stoprow is the
'certain byte array'?
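To illustrate the suggestion, a small sketch assuming a table named "mytable" and a boundary key of "abc"; whether the inclusive-stop semantics actually cover the 'first X bytes of the row key' requirement is exactly the question being asked above.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "mytable");

    byte[] boundary = Bytes.toBytes("abc");  // the 'certain byte array'
    Scan scan = new Scan();
    // Unlike Scan.setStopRow(), which stops *before* the stop row,
    // InclusiveStopFilter also returns the row that equals the boundary.
    scan.setFilter(new InclusiveStopFilter(boundary));

    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result row : scanner) {
        System.out.println(Bytes.toString(row.getRow()));
      }
    } finally {
      scanner.close();
    }
  }
}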
There's a patch in JIRA to support MultiGet for hbase-0.20.3. See
http://issues.apache.org/jira/browse/HBASE-1845
There is sample code in the unit tests which are part of the patch (see
HTableTest).
Marc
2010/2/9
> Hi,
>
> Does HBase 0.20.3 support multiget?
> How can I use it? Any sample code?
On Thu, Feb 11, 2010 at 9:59 AM, Boris Aleksandrovsky wrote:
> I noticed that compaction reduced the
> size of the region, but did not reduce the number of regions.
>
That's right. Once made, there is no going back currently, not unless
you run a manual merge of regions (see the Merge tool under
org.apache.hadoop.hbase.util).
Inline.
J-D
On Thu, Feb 11, 2010 at 9:59 AM, Boris Aleksandrovsky wrote:
> Hi guys,
>
> We have a table which stored previously uncompressed data which we changed
> to store GZ-compressed data. We performed a compaction on that table which
> shrank its size three-fold. However, I noticed that compaction reduced the
> size of the region, but did not reduce the number of regions.
Hi guys,
We have a table which stored previously uncompressed data which we changed
to store GZ-compressed data. We performed a compaction on that table which
shrank its size three-fold. However, I noticed that compaction reduced the
size of the region, but did not reduce the number of regions.
Please see this note:
http://hadoop.apache.org/hbase/docs/r0.20.3/api/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
Below you are setting the CLASSPATH in launching shell but it may not
be set for the context the task is running in.
St.Ack
On Wed, Feb 10, 2010 at 5:51 PM, Alex