Hi:
I installed HBase 0.19.3 and Hadoop 0.19.1. I tried to run the BulkImport
example from http://wiki.apache.org/hadoop/Hbase/MapReduce and got the
following error:
"org.apache.hadoop.hbase.MasterNotRunningException: localhost:6"
From the error message, it looks like Hadoop looks at
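For what it's worth, a rough sketch of where that address comes from (assuming
HBase 0.19, where the client locates the master via the hbase.master property in
hbase-site.xml rather than through ZooKeeper; the table name and port below are
placeholders): a malformed or truncated value for that property, or an
hbase-site.xml missing from the MapReduce job's classpath, would surface as a
broken host:port in exactly this exception.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class MasterCheck {
  public static void main(String[] args) throws Exception {
    // Reads hbase-default.xml and hbase-site.xml from the classpath.
    HBaseConfiguration conf = new HBaseConfiguration();

    // In 0.19 this should be a host:port pair such as "localhost:60000";
    // whatever ends up here is the address the client tries to contact.
    System.out.println("hbase.master = " + conf.get("hbase.master"));

    // Override explicitly to rule out a bad config file (placeholder values).
    conf.set("hbase.master", "localhost:60000");

    // Opening a table throws MasterNotRunningException if the master
    // cannot be reached at that address.
    HTable table = new HTable(conf, "myTable");
    System.out.println("Master reachable, table opened.");
  }
}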
HBase 0.20.0 is available for download:
http://hadoop.apache.org/hbase/releases.html
The Release Notes are available here: http://su.pr/2sjrkf
This HBase is faster, slimmer, sweeter smelling, and more robust than
previous versions. We recommend that all users upgrade to this release.
HBase 0.20.0
The default out-of-the-box configuration of HBase does not require Hadoop and
stores data temporarily in my /tmp directory. This is a great way to
quickly create a dev environment for newbies.
I guess that if I have to run both HBase and Hadoop in at least
pseudo-distributed mode, then I need to reco
Where do you store your HBase data? Aren't you using Hadoop and HDFS?
If you want to run MR jobs over data stored in HBase, you would need a
Hadoop instance...
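To check quickly where a given setup is keeping its data, a minimal sketch
(assuming the HBase 0.20 client jars and your hbase-site.xml are on the
classpath; the class name is just a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration;

public class PrintRootDir {
  public static void main(String[] args) {
    // Picks up hbase-default.xml and hbase-site.xml from the classpath.
    HBaseConfiguration conf = new HBaseConfiguration();

    // The out-of-the-box standalone default is a file:// URL under /tmp;
    // a (pseudo-)distributed setup points this at HDFS,
    // e.g. hdfs://localhost:9000/hbase.
    System.out.println("hbase.rootdir = " + conf.get("hbase.rootdir"));
  }
}

If that prints a file:// URL, the data is sitting on the local filesystem
rather than on HDFS.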
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Tue, Sep 8, 2009 at 12:52 PM, Keith Thomas wrote:
I have had a great time working through the awesome HBase 0.20.0 client API
as I write my first web app with data persisted by HBase on HDFS. However,
the time has come to write my first MapReduce job for use by the web app.
Until now I've been starting HBase with 'start-hbase.sh', but I see that
If I/we can get around to HBASE-1002 it may be substantially easier to use
filters.
- Andy
From: stack
To: hbase-user@hadoop.apache.org
Sent: Tuesday, September 8, 2009 11:18:34 AM
Subject: Re: Any basic documentation about using filters with hbase?
I'm
On Fri, Sep 4, 2009 at 2:55 PM, Roldano Cattoni wrote:
> Hi all
>
> I am new to the Thrift-generated Python API to HBase.
> Is there a method to get the list of all the row keys of a given table?
> I wouldn't like to scan all the columns to get them.
> Or should I store them separately to get the
I'm afraid that there is a paucity of documentation on filters (to be
addressed in upcoming releases):
http://people.apache.org/~stack/hbase-0.20.0-candidate-3/docs/api/org/apache/hadoop/hbase/filter/package-summary.html#package_description
The principles, though, need little by way of documentation.
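For a concrete illustration of the idea, a minimal sketch against the 0.20
Java client (rather than the Thrift gateway) that lists row keys without
pulling whole rows back to the client; the table name is a placeholder, and
FirstKeyOnlyFilter is one reasonable choice since it returns only the first
KeyValue of each row:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ListRowKeys {
  public static void main(String[] args) throws Exception {
    HBaseConfiguration conf = new HBaseConfiguration();
    HTable table = new HTable(conf, "mytable");   // placeholder table name

    // The filter runs server side, so only one KeyValue per row travels
    // over the wire instead of every column of every row.
    Scan scan = new Scan();
    scan.setFilter(new FirstKeyOnlyFilter());

    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        System.out.println(Bytes.toString(result.getRow()));
      }
    } finally {
      scanner.close();
    }
  }
}

Swapping in another filter from the package linked above is just a different
argument to setFilter().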
+1
I checked docs and have been putting load on it all w/e. This version goes
further than any previous (and any previous candidate -- OOME'd w/ 1400
regions in a heap of 1.4G, ~700M 1k records into 3 servers).
St.Ack
On Tue, Sep 8, 2009 at 8:16 AM, Jonathan Gray wrote:
> +1 on releasing 0.20.0 RC3 as 0.20.0
+1 on releasing 0.20.0 RC3 as 0.20.0
Have been running RC2 plus two patches from RC3 in production and
development for a while now without issue.
JG
stack wrote:
The third hbase 0.20.0 release candidate is available for download:
http://people.apache.org/~stack/hbase-0.20.0-candidate-3/
Hello, can someone point me to good documentation with examples explaining
the principle of using filters on HBase in a MapReduce program?
Thank you
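Since the write-ups are thin at the moment, here is a rough sketch of one way
to do it with the 0.20 org.apache.hadoop.hbase.mapreduce API: attach the
filter to the Scan you hand to TableMapReduceUtil, and it is applied server
side before rows ever reach the mapper. The table name, key prefix, and
output path below are placeholder assumptions.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FilteredRowKeys {

  // Emits the row key of every row that made it through the server-side filter.
  static class RowKeyMapper extends TableMapper<Text, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result values, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(row.get())), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    HBaseConfiguration conf = new HBaseConfiguration();
    Job job = new Job(conf, "filtered-row-keys");
    job.setJarByClass(FilteredRowKeys.class);

    // The filter rides along inside the Scan: only rows whose key starts
    // with the (placeholder) prefix "user_" are handed to the mapper.
    Scan scan = new Scan();
    scan.setFilter(new PrefixFilter(Bytes.toBytes("user_")));

    TableMapReduceUtil.initTableMapperJob("mytable", scan, RowKeyMapper.class,
        Text.class, NullWritable.class, job);

    job.setNumReduceTasks(0);   // map-only job; mapper output goes straight to files
    FileOutputFormat.setOutputPath(job, new Path("/tmp/filtered-row-keys"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}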