Re: request for mapreduce with hbase examples

2010-02-11 Thread Stack
On Thu, Feb 11, 2010 at 3:07 PM, David Hawthorne wrote: > Perhaps.  I figured it would be easier to go mapreduce -> hbase instead of > mapreduce -> output file in hdfs -> load output file into hbase as a > separate job.  Then again, for performance, maybe hdfs -> hbase is better > than mapreduce -

Re: request for mapreduce with hbase examples

2010-02-11 Thread David Hawthorne
Perhaps. I figured it would be easier to go mapreduce -> hbase instead of mapreduce -> output file in hdfs -> load output file into hbase as a separate job. Then again, for performance, maybe hdfs -> hbase is better than mapreduce -> hbase and I should just plan to do that instead. On
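
A minimal sketch of the direct mapreduce -> hbase write path being discussed, assuming the org.apache.hadoop.hbase.mapreduce API shipped with 0.20; the table name "mytable", family "cf", and the tab-separated input format are placeholders, not anything from the thread:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class HdfsToHBase {

  // Map-only job: each input line becomes one Put, written straight to the table,
  // skipping the intermediate "output file in hdfs" step.
  static class LineToPutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    protected void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
      // Hypothetical line format: rowkey<TAB>value
      String[] fields = line.toString().split("\t", 2);
      if (fields.length < 2) return;
      byte[] row = Bytes.toBytes(fields[0]);
      Put put = new Put(row);
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("value"), Bytes.toBytes(fields[1]));
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();
    Job job = new Job(conf, "hdfs-to-hbase");
    job.setJarByClass(HdfsToHBase.class);
    job.setMapperClass(LineToPutMapper.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // Wires up TableOutputFormat so the Puts emitted by the mapper land in "mytable".
    TableMapReduceUtil.initTableReducerJob("mytable", IdentityTableReducer.class, job);
    job.setNumReduceTasks(0);  // no shuffle needed; mapper output goes straight to the table
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}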

Re: thinking about HUG9

2010-02-11 Thread Stack
Either works for me. What about others? St.Ack On Thu, Feb 11, 2010 at 2:33 PM, Andrew Purtell wrote: > March 8 is ok -- afternoon/evening. > >   - Andy > > > >> From: Stack >> Can we do March 8th?  I can't do March 9th. >> St.Ack >> >> On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote: >>

Re: thinking about HUG9

2010-02-11 Thread Amandeep Khurana
+1 for March 8th evening post 6pm Amandeep Khurana Computer Science Graduate Student University of California, Santa Cruz On Thu, Feb 11, 2010 at 2:33 PM, Andrew Purtell wrote: > March 8 is ok -- afternoon/evening. > > - Andy > > > > > From: Stack > > Can we do March 8th? I can't do March

Re: thinking about HUG9

2010-02-11 Thread Andrew Purtell
March 8 is ok -- afternoon/evening. - Andy > From: Stack > Can we do March 8th? I can't do March 9th. > St.Ack > > On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote: > > Hi all, > > > > Trend Micro would like to host HUG9 at our offices in Cupertino: > > > http://maps.google.com/map

Re: request for mapreduce with hbase examples

2010-02-11 Thread Stack
On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne wrote: > I was under the impression that you could read from/write to an hbase table > from within a mapreduce job.  Import and Export look like methods for > reading HDFS files into hbase and dumping hbase into an HDFS file. > Yes. Isn't that wh

Re: thinking about HUG9

2010-02-11 Thread Stack
Can we do March 8th? I can't do March 9th. St.Ack On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote: > Hi all, > > Trend Micro would like to host HUG9 at our offices in Cupertino: > > http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.

Re: thinking about HUG9

2010-02-11 Thread Jean-Daniel Cryans
Sounds good to me. J-D On Thu, Feb 11, 2010 at 12:43 PM, Andrew Purtell wrote: > Hi all, > > Trend Micro would like to host HUG9 at our offices in Cupertino: > > > http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.0625,-95.677068&sspn=37.1

Re: request for mapreduce with hbase examples

2010-02-11 Thread Ted Yu
You can find plenty of examples in nutchbase: http://github.com/apache/nutch/tree/nutchbase On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne wrote: > I was under the impression that you could read from/write to an hbase table > from within a mapreduce job. Import and Export look like methods f

thinking about HUG9

2010-02-11 Thread Andrew Purtell
Hi all, Trend Micro would like to host HUG9 at our offices in Cupertino: http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=10101+North+De+Anza+Blvd,+Cupertino,+CA&sll=37.0625,-95.677068&sspn=37.136668,65.214844&ie=UTF8&hq=&hnear=10101+N+De+Anza+Blvd,+Cupertino,+Santa+Clara,+California+9

Re: request for mapreduce with hbase examples

2010-02-11 Thread David Hawthorne
I was under the impression that you could read from/write to an hbase table from within a mapreduce job. Import and Export look like methods for reading HDFS files into hbase and dumping hbase into an HDFS file. On Feb 11, 2010, at 12:25 PM, Guohua Hao wrote: Hello there, Did you take

Re: request for mapreduce with hbase examples

2010-02-11 Thread Guohua Hao
Hello there, Did you take a look at the Import and Export classes under the package org.apache.hadoop.hbase.mapreduce? They mostly use the new API, as far as I can tell. Correct me if I am wrong. Thanks, Guohua On Thu, Feb 11, 2010 at 2:13 PM, David Hawthorne wrote: > I'm looking for some examples for re

request for mapreduce with hbase examples

2010-02-11 Thread David Hawthorne
I'm looking for some examples for reading data out of hbase for use with mapreduce and for inserting data into hbase from a mapreduce job. I've seen the example shipped with hbase, and, well, it doesn't exactly make things click for me. It also looks like it's using the old API, so maybe
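
For the read side, a hedged sketch using the newer org.apache.hadoop.hbase.mapreduce classes (TableMapper plus TableMapReduceUtil); the table name, column family, and the count-per-value output are placeholder assumptions for illustration:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class CountValuesFromHBase {

  // A TableMapper receives one (row key, Result) pair per row of the scanned table.
  static class ValueCountMapper extends TableMapper<Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    protected void map(ImmutableBytesWritable row, Result result, Context context)
        throws IOException, InterruptedException {
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("value"));
      if (value != null) {
        context.write(new Text(Bytes.toString(value)), ONE);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();
    Job job = new Job(conf, "count-values");
    job.setJarByClass(CountValuesFromHBase.class);

    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));
    scan.setCaching(500);  // fetch rows in batches to keep the scan from being one-row-per-RPC

    // Sets up TableInputFormat, the scan, and the mapper's input/output types.
    TableMapReduceUtil.initTableMapperJob("mytable", scan,
        ValueCountMapper.class, Text.class, IntWritable.class, job);

    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileOutputFormat.setOutputPath(job, new Path(args[0]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}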

Re: Multiget support in HBase 0.20.3?

2010-02-11 Thread Ryan Rawson
Sorry guys, this is not doable for 0.20.4: the RPC protocol would have to change, which would break compatibility with existing clients. I really wish it were otherwise. 2010/2/11 Marc Limotte : > There's a patch in JIRA to support MultiGet for hbase-0.20.3. See > http://issues.apache.org/jira/browse/HBASE-1845
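
Until batched gets land in the RPC, a client can only approximate multiget by issuing one round trip per row. A hedged sketch against the 0.20.3 client API; the table, family, and row keys are placeholders:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiGetWorkaround {
  // Fetches several rows by issuing one Get per row -- correct, just not
  // batched into a single RPC the way HBASE-1845 would allow.
  public static List<Result> getAll(HTable table, List<byte[]> rows) throws IOException {
    List<Result> results = new ArrayList<Result>(rows.size());
    for (byte[] row : rows) {
      Get get = new Get(row);
      get.addFamily(Bytes.toBytes("cf"));  // placeholder column family
      results.add(table.get(get));         // one round trip per row
    }
    return results;
  }

  public static void main(String[] args) throws IOException {
    HTable table = new HTable(new HBaseConfiguration(), "mytable");
    List<byte[]> rows = new ArrayList<byte[]>();
    rows.add(Bytes.toBytes("row1"));
    rows.add(Bytes.toBytes("row2"));
    for (Result r : getAll(table, rows)) {
      // Missing rows come back as empty Results, so guard before printing.
      if (r.getRow() != null) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    }
  }
}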

Re: filtering on a prefix of the row key

2010-02-11 Thread Stack
On Wed, Feb 10, 2010 at 7:16 AM, Bruno Dumon wrote: > Hi, > > I would like a filter that accepts rows as long as the first X bytes > of the row key are less than or equal to a certain byte array. > Would an inclusivestoprow filter work for you where the stoprow is the 'certain byte array'? > I h
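
A hedged sketch of the inclusive-stop-row approach Stack suggests, using the 0.20 client API and the InclusiveStopFilter class; the prefix bytes and the 0xFF padding width are assumptions about the key layout, not part of the thread:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixBoundedScan {
  public static void main(String[] args) throws IOException {
    HTable table = new HTable(new HBaseConfiguration(), "mytable");

    // The "certain byte array" bound on the first X bytes of the row key.
    byte[] bound = Bytes.toBytes("abc");

    // Pad the bound with 0xFF so every row whose first X bytes are <= the bound
    // sorts at or before the stop row. The padding length must cover the real
    // key width; adjust it to your schema.
    byte[] stopRow = new byte[bound.length + 8];
    System.arraycopy(bound, 0, stopRow, 0, bound.length);
    for (int i = bound.length; i < stopRow.length; i++) {
      stopRow[i] = (byte) 0xFF;
    }

    Scan scan = new Scan();
    // Unlike Scan.setStopRow, InclusiveStopFilter includes the stop row itself.
    scan.setFilter(new InclusiveStopFilter(stopRow));
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    } finally {
      scanner.close();
    }
  }
}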

Re: Multiget support in HBase 0.20.3?

2010-02-11 Thread Marc Limotte
There's a patch in JIRA to support MultiGet for hbase-0.20.3. See http://issues.apache.org/jira/browse/HBASE-1845 There is sample code in the unit tests which are part of the patch (see HTableTest). Marc 2010/2/9 > Hi, > > Does HBase 0.20.3 support multiget? > How can I use it? Any sample co

Re: compaction does not reduce the number of regions

2010-02-11 Thread Stack
On Thu, Feb 11, 2010 at 9:59 AM, Boris Aleksandrovsky wrote: > I noticed that compaction reduced the > size of the region, but did not reduce the number of regions. > That's right. Once made, there is currently no going back unless you run a manual merge of regions (see the Merge tool under

Re: compaction does not reduce the number of regions

2010-02-11 Thread Jean-Daniel Cryans
Inline. J-D On Thu, Feb 11, 2010 at 9:59 AM, Boris Aleksandrovsky wrote: > Hi guys, > > We have a table which stored previously uncompressed data which we changed > to store GZ-compressed data. We performed a compaction on that table which > shrank its size three-fold. However, I noticed that co

compaction does not reduce the number of regions

2010-02-11 Thread Boris Aleksandrovsky
Hi guys, We have a table that previously stored uncompressed data, which we changed to store GZ-compressed data. We performed a compaction on that table, which shrank its size three-fold. However, I noticed that compaction reduced the size of the region, but did not reduce the number of regions. M

Re: TableMapper ClassNotFound

2010-02-11 Thread Stack
Please see this note: http://hadoop.apache.org/hbase/docs/r0.20.3/api/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath Below you are setting the CLASSPATH in the launching shell, but it may not be set in the context the tasks run in. St.Ack On Wed, Feb 10, 2010 at 5:51 PM, Alex
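
One hedged alternative to editing HADOOP_CLASSPATH on every node, as the linked note describes, is to ship the HBase (and ZooKeeper) jars with the job via the DistributedCache. The HDFS jar paths and versions below are placeholders; the jars must already have been copied into HDFS:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class ShipHBaseJars {
  public static Job createJob() throws Exception {
    Configuration conf = new HBaseConfiguration();

    // Placeholder HDFS paths: copy the jars into HDFS first (e.g. with `hadoop fs -put`).
    // Task JVMs will then have them on their classpath, so classes like TableMapper
    // resolve at runtime instead of throwing ClassNotFoundException.
    DistributedCache.addFileToClassPath(new Path("/jars/hbase-0.20.3.jar"), conf);
    DistributedCache.addFileToClassPath(new Path("/jars/zookeeper-3.2.2.jar"), conf);

    // Create the Job after modifying conf, since Job copies the Configuration.
    Job job = new Job(conf, "job-with-hbase-on-classpath");
    job.setJarByClass(ShipHBaseJars.class);
    // ... the rest of the job setup (mapper, input/output) goes here ...
    return job;
  }
}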