Is it possible to run Hadoop streaming jobs with HBase as the target?
-Jack
Hey,
We're just looking into ways to run multiple instances/versions of HBase for
testing/development and were wondering how other people have gone about
doing this.
If we used just one hadoop cluster then we can have a different paths / user
for each hbase instance, and then have a s
Hello, does anyone have the hbase_hive handler working with HBase 0.89?
We only have the 0.20.6 handler working well, but not 0.89.
Thanks.
-Jack
Hi
I have written a small program to connect to HBase, but it gets stuck halfway
and just sits in a wait state.
Here is the code:
System.out.println("Start");
HBaseConfiguration config = new HBaseConfiguration();
config.clear();
config.set("hbase.zookeeper.quorum"
Hi All,
I'm learning how to add a secondary index to my HTable by following the
instructions from:
http://rajeev1982.blogspot.com/2009/06/secondary-indexes-in-hbase.html
Could someone please tell me where I can download the HBase contrib package for
the hbase-0.20.0-transactional.jar?
Hi, are there any features for controlling client access to my
HBase cluster, like some authorization, users, or passwords?
Right now I control access to my servers with iptables;
is there a better way?
Thanks.
f one but there might be. Unfortunately I'm booted into
Windows (ugh) at the moment so don't have the source code handy ;-) Maybe
someone else can answer this.
I have seen archived discussions which refer to RECORD vs BLOCK compression,
but I don't see those options in later version
Hi,
I want to do some research on HBase in applications. There are only 2
open-source projects listed on the HBase "powered by" page, and only one is available.
Could you recommend some actual, available applications of HBase? Let me know
how HBase is used and how it performs.
Thanks
Sindy
Hi,
we're investigating different scalable data stores. My concern with
HBase at this moment is
the poor support for Ruby (we've got a large Rails app). Some insight into how
others are interfacing
with HBase through Ruby would be much appreciated. (Do you use Thrift, or av
HBase does require ZooKeeper. However, you can have HBase manage ZooKeeper for you, or you can
manage ZooKeeper yourself. For the size of your cluster (small), I would just let HBase manage
ZooKeeper. If I'm not mistaken, the standard startup scripts that come with Apache HBase will star
Yes, it will start ZooKeeper by default; else you can start it manually too.
See if the link below is helpful for you:
http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/ to set up HBase.
Vivek
-Original Message-
From: Jeff Whiting [mailto:je...@qualtrics.com]
Sent: Tuesday
Hi Anze,
In a word, yes - 0.20.4 is not that stable in my experience, and
upgrading to the latest CDH3 beta (which includes HBase 0.89.20100924)
should give you a huge improvement in stability.
You'll still need to do a bit of tuning of settings, but once it's
well tuned it should be ab
Hi Anze,
Our production cluster uses HBase 0.20.6 and HDFS (CDH3b2), and we worked
on stability for about a month. Here are some issues we have met that may be helpful to
you.
HDFS:
1. HBase files have a shorter life cycle than map-red files; sometimes there are
many blocks that should be deleted, and we should tunin
HBase is not designed or well tested for production or stability on 2 nodes.
It will work on 2 nodes, but do not expect good performance or stability.
What is the hardware configuration and daemon setup on this cluster of 2 nodes?
How many cores, spindles, RAM, heap sizes etc... And you have
Hey Baggio,
Looks like you've done some good analysis. Much of what you've mentioned under
HBase is in the works (multi-thread compactions, distributed log splitting,
HBCK tool).
I would definitely recommend upgrading to 0.90 when it is released, there are
some good fixes
Some comments inline in the below.
On Mon, Dec 13, 2010 at 8:45 AM, baggio liu wrote:
> Hi Anze,
> Our production cluster used HBase 0.20.6 and hdfs (CDH3b2), and we work
> for stability about a month. Some issue we have been met, and may helpful to
> you.
>
Thanks for writ
Hi,
does https://issues.apache.org/jira/browse/HBASE-3334
currently prevent me from building 0.90-rc1 against the 0.20-append trunk,
or can I just replace the hadoop.jar and be done?
Christian
On Dec 13, 2010, at 6:44 PM, Stack wrote:
>>Beside upon, in production cluster, dat
Hi,
I am trying to setup replication for my HBase clusters. I have two
small clusters for testing each with 4 machines. The setup for the two
clusters is identical. Each machine runs a DataNode, and
HRegionServer. Three of the machines run a ZK peer and one machine
runs the HMaster and NameNode
Just replace the jar.
St.Ack
On Mon, Dec 13, 2010 at 10:45 AM, Christian van der Leeden
wrote:
> Hi,
>
> does https://issues.apache.org/jira/browse/HBASE-3334
> prevent me currently from using 0.20-append trunk with 0.90-rc1
> to build or can I just replace the hadoop.
olja.net]
Sent: Monday, December 13, 2010 2:41 AM
To: user@hbase.apache.org
Subject: HBase stability
Hi all!
We have been using HBase 0.20.4 (cdh3b1) in production on 2 nodes for a
few
months now and we are having constant issues with it. We fell over all
standard traps (like "Too many
r 4GB+ (we can't go that far)
- more nodes would help with stability issues
@Jonathan: yes, we are using 2 nodes that run both Hadoop (namenode, sec.
namenode, datanodes, jobtracker, tasktrackers) and Hbase. The reason is that
performance-wise we don't need more than that yet, but w
010 at 8:45 AM, baggio liu wrote:
> > Hi Anze,
> > Our production cluster used HBase 0.20.6 and hdfs (CDH3b2), and we work
> > for stability about a month. Some issue we have been met, and may helpful
> to
> > you.
> >
>
> Thanks for writing back to the lis
Please see comment inline. :D
2010/12/14 Stack
> Some comments inline in the below.
>
> On Mon, Dec 13, 2010 at 8:45 AM, baggio liu wrote:
> > Hi Anze,
> > Our production cluster used HBase 0.20.6 and hdfs (CDH3b2), and we work
> > for stability about a month.
ache you have and
the faster your reads will be).
> - more nodes would help with stability issues
>
> @Jonathan: yes, we are using 2 nodes that run both Hadoop (namenode, sec.
> namenode, datanodes, jobtracker, tasktrackers) and Hbase. The reason is that
> performance-wise we don
.Ack
>
>
> On Tue, Dec 14, 2010 at 1:44 AM, Stack wrote:
>
>> Some comments inline in the below.
>>
>> On Mon, Dec 13, 2010 at 8:45 AM, baggio liu wrote:
>> > Hi Anze,
>> > Our production cluster used HBase 0.20.6 and hdfs (CDH3b2), and we
very mechanism. But some
> IOException is not fatal, in our branch, we add retry mechanism in common fs
> operation, such as exist().
>
In my experience this hasn't been a problem - most operations that
fail would not have succeeded with a retry. But a patch would be
interesting.
>
>
our branch, we add a time-out to close idle connection.
> And in long term, we can re-use connection between DFSClient and datanode.
> (may be this kind of re-use can be fulfill by RPC framework)
>
The above sounds great. So, the connection is reestablished
automatically by DF
More row is better. We don't exactly support get_slice, but we have
filters that let you choose which columns based on various predicates.
See:
http://hbase.apache.org/docs/r0.89.20100924/apidocs/org/apache/hadoop/hbase/filter/package-summary.html
On Tue, Dec 14, 2010 at 6:38 PM, King
in connection manage.
6.
>> Very similar to that. I don't usually tune MaxTenuringThreshold,
GCTimeRatio or soft reference LRU. Class unloading isn't particularly
necessary in HBase. The CMS settings look about right - I generally
recommend between 70-80%.
:D, this most
Hi all,
I have an exception with the HBase client. Here is my code:
HTable client = new HTable(conf, this.table);
Put put = new Put(rowid);
put.add(cf, columnkey, columnval);
client.put(put);
client.close();
The data is put fine, but sometimes the client raises this error:
10/12/30 11:48:33 INFO
Granted, HDFS is the default HBase FS (and I also understand KFS, which is
another GFS-like FS).
Here is my question: is a GFS-like FS the best match for HBase? A GFS-like FS is
good for streaming-type operations, and it's definitely a good fit for
MapReduce. But for the access pattern of HBase,
Forward order only.
On Jan 4, 2011 6:17 PM, "King JKing" wrote:
> Dear all,
>
> Does HBase support scan by both reverse order and normal order?
>
> Thank a lot for support.
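Since scans are forward-order only in this version, the usual workaround is to write each row a second time under a byte-wise complemented key, so that a forward scan over the mirrored keys returns rows in descending key order. A minimal sketch of the key transform (helper names are illustrative and fixed-length keys are assumed; this is not an HBase API):

```java
class MirrorKey {
    // Complement every byte so that unsigned lexicographic order is
    // inverted: if a sorts before b as a row key, mirror(a) sorts after
    // mirror(b). Assumes fixed-length keys (prefix keys would need padding).
    static byte[] mirror(byte[] key) {
        byte[] out = new byte[key.length];
        for (int i = 0; i < key.length; i++) {
            out[i] = (byte) (0xFF - (key[i] & 0xFF));
        }
        return out;
    }

    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

Scanning the mirrored copies in normal forward order then yields the original rows in reverse key order.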
On Jan 5, 2011 8:39 PM, "King JKing" wrote:
> Dear all,
>
> I use default configuration of HBase. That is 1000MB.
>
> hbase.regionserver.global.memstore.upperLimit
> 0.4
>
> hbase.regionserver.global.memstore.lowerLimit
> 0.35
>
> But the memory used
ackground to explain?
Thanks,
Sean
On Thu, Jan 6, 2011 at 12:07 AM, Kevin Apte <
technicalarchitect2...@gmail.com> wrote:
> On Jan 5, 2011 8:39 PM, "King JKing" wrote:
> > Dear all,
> >
> > I use default configuration of HBase. That is 1000MB.
>
I have already used the Hadoop Eclipse plug-in and hbase explorer,
but I'd like to find an HBase client that is like a SQL client.
If you're interested, check http://toadforcloud.com/thread.jspa?threadID=30532 ;
there is also an Eclipse plugin.
Someday I'll write an HBase client using Swing or SWT. I'll sh
Due to the fairly heavy setup and maintenance requirements of Hadoop and
HBase clusters, we've been discussing the possibility of namespacing
access to HBase to support multiple dev, QA, etc. environments.
Is there any built-in support for this?
What have other people done a
Hello.
First of all, thanks so much to the HBase members;
you have helped me a lot.
I wrote a simple JDO module for HBase newbies. I hope this module can help
them.
Some features:
- simple JDO (uses reflection)
- HTable pool (I already know about HTablePool in HBase)
- simple query classes (insert
> Hi, does anyone know of any implementation of GeoIndexing on
> HBase as of yet?
Given the lack of responses, I think not.
> If not I was thinking of writing one using CoProcessors to
> increment the substrings of a GeoHash to help with "number
> of neighbors" and
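The increment-the-substrings idea can be sketched without any HBase code: for each point, bump a counter for every prefix of its geohash, so shorter prefixes aggregate over wider areas. (Class and method names here are illustrative; in HBase each prefix would be a row key and the bump an Increment on that row.)

```java
import java.util.HashMap;
import java.util.Map;

class GeoHashCounters {
    private final Map<String, Long> counters = new HashMap<>();

    // Increment one counter per geohash prefix; coarser prefixes
    // accumulate counts from all points that share them.
    void add(String geohash) {
        for (int len = 1; len <= geohash.length(); len++) {
            counters.merge(geohash.substring(0, len), 1L, Long::sum);
        }
    }

    // "Number of neighbors" within the cell named by this prefix.
    long neighbors(String prefix) {
        return counters.getOrDefault(prefix, 0L);
    }
}
```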
HDFS does not provide for keyed access to data, nor column oriented access
when only a subset of related data is needed. Also, HDFS is a write-once
file system while hbase provides random updates.
On Mon, Feb 7, 2011 at 4:51 AM, som_shekhar wrote:
>
> Hi All,
> I am new to Hbase, c
On Mon, Feb 7, 2011 at 7:01 AM, Ted Dunning wrote:
> HDFS does not provide for keyed access to data, nor column oriented access
> when only a subset of related data is needed. Also, HDFS is a write-once
> file system while hbase provides random updates.
Note that the write-once as
Hi,
My HBase setup was running fine for a couple of months, and all of a sudden
the following issue has cropped up: the master shuts down immediately
after startup. The Hadoop datanode is running fine and HDFS status is
healthy. Any ideas on what could be happening and steps on how to fix
Thanks for the above fact.
When a node is added or fails, the distribution of data on Hadoop
has to be done manually. I have read somewhere that this is why
HBase or NoSQL comes into the picture.
Can you please give some more details?
Ted Dunning-2 wrote:
>
> HDFS do
Well, you don't have to rebalance your HDFS cluster every time you add
or remove a node. You can let the HDFS NameNode manage data placement,
and this usually works fine. So this won't be a big difference between
HDFS and HBase.
Like others said, HDFS is designed for high throughput, an
You should run hadoop-20-append or cdh3 and run hbase 0.90.1 which is
set to be released next week.
-ryan
On Fri, Feb 11, 2011 at 8:12 AM, Joseph Coleman
wrote:
> Hello if I am going to run hadoop 0.20.2 what version should I use for Hbase
> that is compatable?
>
What Ryan said. Before you start, check out the requirements section
in the manual. It has a section on Hadoop versions:
http://hbase.apache.org/notsoquick.html
St.Ack
On Fri, Feb 11, 2011 at 9:36 AM, Ryan Rawson wrote:
> You should run hadoop-20-append or cdh3 and run hbase 0.90.1 which
Hi
We are thinking of implementing HBase on top of our data processing pipeline (Hadoop)
and I was curious whether there are guidelines on memory needs and the number of region
servers recommended based on the size of the grid/volume of data, etc.
Any thoughts here would be appreciated; I would be interested
Hi,
I am a newbie to Hbase and am testing on a small 3 node cluster running
Hadoop 0.20.2 and Hbase 0.89.
Is there a limit in Hbase on how many versions of a cell one can keep
record of under a given column family?
I understand that each column family can have its own rules but was
Hi,
I deployed HBase on a 5-node Amazon EC2 cluster successfully, and it was
working fine.
The next day when I logged on, I changed the configuration files (regionservers
list, slaves, masters, dfs rootdir, etc.)
in the hadoop and hbase /conf directories for the new IP addresses, which are
dynamic on EC2.
the
I am using HBase as the backend for a service. I want to somehow cache
the connection to HBase so each request doesn't need to pay the cost
of making the connection. I am already caching the HTable object; is
that enough, or is there a better way? And how long can the connection
be held onto? Thanks!
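The caching pattern being asked about can be sketched generically; the HTable construction is stood in for by a factory, and all names here are illustrative rather than any HBase API:

```java
import java.util.function.Supplier;

// Lazily creates and caches an expensive resource (such as an HTable or
// an HBase connection) so every request reuses the same instance.
class CachedResource<T> {
    private final Supplier<T> factory;
    private T cached;

    CachedResource(Supplier<T> factory) {
        this.factory = factory;
    }

    synchronized T get() {
        if (cached == null) {
            cached = factory.get();  // pay the construction cost only once
        }
        return cached;
    }
}
```

Note that HTable itself was not thread-safe in these versions, which is why HTablePool exists; a cached instance per thread (or a pool) is the safer variant for a multithreaded service.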
Hi,
I am getting the following error when trying to run an import job using
hadoop with the importtsv tool. My HADOOP_CLASSPATH is set to the
following in hadoop-env.sh:
export
HADOOP_CLASSPATH=/usr/lib/hbase-0.90.0/lib/hbase-0.90.0.jar:/usr/lib/hbase-0.90.0/lib/zookeeper-3.3.2.jar:/usr
Hi,
I am new to HBase. Please suggest some links to learn HBase apart from
hbase.apache.org.
--
With Regards,
Jr.
hi,
I am new to hbase and hadoop. Anyhow, I have succeeded in setting up a
hadoop cluster which consists of 3 machines. Now I need some help with
building up the database. I have a table "comments" containing the fields
1) user id
2) comments
3) comments on comments (which can be more than
What you describe is more like the rsync tool, which isn't what HBase
replication is doing at all. Replication works with log shipping, and
only copies data when it reads it from a log, there's no proactive
thread that checks for differences between two clusters and that
copies the miss
, Mark Kerzner wrote:
> Yes, J-D, your understanding of my understanding is correct.
> The two would actually be the same if all new records in one HBase could be
> copied to the other HBase through log shipping. That is assuming that the
> two databases never get records with the sam
Dear all,
I want to move my HBase data to another directory. How can I do that?
Thanks a lot for the support.
Hi,
Is the table rename not supported in Hbase 0.90.0 at the moment?
I tried using the rename_table.rb in the bin directory but it returned
with the following errors:
./rename_table.rb abc xyz
./rename_table.rb: line 36: include: command not found
./rename_table.rb: line 37: import: command
Hi,
I have a few basic questions related to the HBase shell. Please help me out
with these issues.
1. When I start the HBase shell, I am not getting the HBase prompt.
2. When I enter a command incorrectly, the shell closes abnormally.
3. When I use the backspace key, the shell also closes abnormally.
4. Is there
Hi Joseph,
You are talking about a full distributed setup - just all with single
nodes? So your ZooKeeper is started and maintained by you as well
separately? If so, then sure you can run it on your own. Well, even
with HBase you can run this on your own using the supplied version
that comes with
I really like the way HBase manages zookeeper. It seems much more intuitive to
me than the native zookeeper configuration. For my cluster I use zookeeper for
a couple of different tasks (like hbase, solr cloud, and other home grown
things). I manage Zookeeper using a slightly different
Thanks, I am only doing 3 servers total for running through the setup, for
comfort's sake, before my production gear gets here. I am looking at a
single master HDFS server that will do AvatarNode if I can figure it out, then
a 10-node data cluster for HDFS and HBase and a 3-node cluster for
Zookeeper
es before my production gear gets here. I am looking at a
> single Master HDFS server will do Avatar if I can figure it out. The have
> a 10 node data cluster for HDFS and Hbase and a 3 node cluster for
> Zookeeper. Because at some point we way me 20 plus data nodes by years
> end.. Or
Hi Joseph,
As Dave says, you could always use HBase to manage ZooKeeper. If you
need it for other things as well, and the AvatarNode is one of those,
then you have to make sure you set
HBASE_MANAGES_ZK=false
in the hbase-env.sh but still use the HBase scripts to start and stop
the stand-alone
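For reference, the hbase-env.sh fragment for an externally managed quorum looks like this (a config sketch; verify against your own hbase-env.sh):

```sh
# hbase-env.sh: tell HBase's start/stop scripts not to manage ZooKeeper
export HBASE_MANAGES_ZK=false
```

With this set, the HBase scripts start and stop only the master and region servers, and you point hbase.zookeeper.quorum in hbase-site.xml at your own ensemble.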
Hi,
What's the best place to learn about HBase replication?
I found http://hbase.apache.org/book/cluster_replication.html , but note how
there is only a link there, and that link points to a 404.
Thanks,
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Hadoop - HBase
H
Hi,
I am very new to HBase; I just got it installed. I have been looking
for sample code to get started doing simple things like inserting,
searching, and updating a record. Could someone point me to such examples? My
searches on Google show a couple of sample code links, but they are all
broken links
Hi,
I tried to use my hbase-default.xml from 0.89 with my new 0.90.1
installation. I get a message stating "hbase-default.xml seems to be
from an old version of hbase(null), this version is 0.90.1."
But 0.90.1 doesn't seem to ship with an hbase-default.xml file (at
*<http://hbase.apache.org/docs/r0.20.6/api/org/apache/hadoop/hbase/client/transactional/package-summary.html#package_description>
*Is there any example about transaction in hbase?
begin transaction
Put p1=new Put("
Hi,
We insert ~10 million rows per day into HBase.
We are using HBase 0.20.3 and are going to move to 0.90.1.
Currently we delete rows in such a way:
public void delete(HTable t, byte[] row) throws IOException {
    Delete del = new Delete(row);
    t.delete(del);
}
Hi,
Since HBase has a mechanism to replicate edit logs to another HBase cluster, I
was wondering if people think it would be possible to implement HBase=>Hive
replication? (and really make the destination pluggable later on)
I'm asking because while one can integrate Hive and HBase by
t;, {NAME => 'f1'}, {NAME => 'f2', REPLICATION_SCOPE => '1'}
>>>
>>> When you write, always write to 1 family (and that family is different
>>> depending on the cluster you're on). When you read, always get the
>>> data
Hi,
I am new to HBase and trying to write my first POJO to access an HBase table.
Please bear with my query; it seems very simple, but I am not able to
find the answer myself. I am using the sample code from the API:
Configuration config = HBaseConfiguration.create();
HTable table = new HTable(config
Can anybody help me with coding PHP-HBase using the Stargate interface?
I have posted queries about this in many forums; unfortunately, no replies
yet. Nobody seems interested in PHP!
--
Hi, we started our tests on a cluster (HBase 0.90.1, hadoop-append).
I set HBASE_HEAPSIZE to 4000m in hbase-env.sh and got 3 processes, each of which
has a 4000m heap.
My questions are:
1) What is the way to set the heap size separately for these processes, in case
I want to give ZooKeeper less
We want to insert into HBase on a daily basis (HBase 0.90.1, hadoop-append).
Currently we have ~10 million records per day. We use map/reduce to prepare the
data, and we write it to HBase in chunks (5000 puts per chunk).
The whole process takes 1h 20 minutes. Some tests verified that
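The chunked-write loop described here can be sketched generically; the actual HBase flush (HTable#put(List) plus flushCommits) is stood in for by a callback, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Buffers items and hands them off in fixed-size chunks, the same shape
// as writing puts to HBase 5000 at a time.
class ChunkedWriter<T> {
    private final int chunkSize;
    private final Consumer<List<T>> flusher;
    private final List<T> buffer = new ArrayList<>();

    ChunkedWriter(int chunkSize, Consumer<List<T>> flusher) {
        this.chunkSize = chunkSize;
        this.flusher = flusher;
    }

    void write(T item) {
        buffer.add(item);
        if (buffer.size() >= chunkSize) {
            flush();
        }
    }

    // Call once more at the end of the job to push the final partial chunk.
    void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```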
Is there a reason you are not using a recent version of 0.90?
On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott wrote:
> We are using Hbase 0.89.20100924+28, r
>
No, map-reduce is not really necessary to add so few rows.
Our internal tests repeatedly load 10-100 million rows without much fuss.
And that is on clusters ranging from 3 to 11 nodes.
On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott wrote:
> Is the only way to upload (say 1,000,000 rows) via map
This rate is dramatically slower than I would expect. In our tests, a single
insertion program
has trouble inserting more than about 24,000 records per second, but that is
because we
are inserting kilobyte values and the network interfaces are saturated at
this point. These
tests are being done us
1 March 2011 20:20
To: user@hbase.apache.org
Cc: Stuart Scott
Subject: Re: HBase Stability
No, map-reduce is not really necessary to add so few rows.
Our internal tests repeatedly load 10-100 million rows without much
fuss. And that is on clusters ranging from 3 to 11 nodes.
On Mon, Mar 2
Have you seen Todd Lipcon's post on MSLABs?
http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/
This is a new feature in 0.90.1 that prevents memory fragmentation during write
loads. You do have to explicitly enable th
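If I have the property name right, enabling it in hbase-site.xml looks like the fragment below (treat the name as something to verify against the 0.90.1 defaults):

```xml
<!-- hbase-site.xml: enable MemStore-Local Allocation Buffers -->
<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>
```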
ovide? The servers are 4-5 years old,
just running on IDE 750gb drives (for testing) with 2-4 gb of RAM in
each of the 16 servers.
We only have a 100MB network at present.
Should Hbase work on this level of hardware? Up to 100 million rows. We
are obviously happy for it to take a while but it
You will need about 2G per datanode jvm and 4G per region-server jvm
process. Plus another 500-750M for the operating system itself.
That 2-4G DRAM per machine is too low for HBase and the Datanode to function
well together (2G is extremely low to the point of unusable). I am assuming
of course
table#setAutoFlush(false) ?
--- On Mon, 3/21/11, Buttler, David wrote:
> From: Buttler, David
> Subject: RE: HBase Stability
> To: "user@hbase.apache.org"
> Date: Monday, March 21, 2011, 1:46 PM
> Have you seen Todd Lipcon's post on
> MSLAB's?
What you are asking for is a secondary index, and it doesn't exist at
the moment in HBase (let alone REST). Googling a bit for "hbase
secondary indexing" will show you how people usually do it.
J-D
On Thu, Mar 24, 2011 at 6:18 AM, sreejith P. K. wrote:
> Is it possible using
Hello,
We are looking into HBase replication to separate our client-facing HBase
cluster from the one we need to run analytics against (likely heavy MR jobs +
potentially big scans).
1. How long does it take for edits to be propagated to a slave cluster?
As far as I understand from
s a secondary index, and it doesn't exist at
> the moment in HBase (let alone REST). Googling a bit for "hbase
> secondary indexing" will show you how people usually do it.
>
> J-D
>
> On Thu, Mar 24, 2011 at 6:18 AM, sreejith P. K.
> wrote:
> > Is it possi
There is no native support for secondary indices in HBase (currently).
You will have to manage it yourself.
St.Ack
On Thu, Mar 24, 2011 at 10:47 PM, sreejith P. K. wrote:
> I have tried secondary indexing. It seems I miss some points. Could you
> please explain how it is possible
I need to use secondary indexing too, hopefully this important feature
will be made available soon :)
Sent from my iPhone
On Mar 25, 2011, at 12:48 AM, Stack wrote:
There is no native support for secondary indices in HBase (currently).
You will have to manage it yourself.
St.Ack
On Thu
that the
secondary table is mutated in the same atomic transaction. Since HBase only
has row-level locks, this can't be guaranteed across tables.
The situation is not hopeless, because in many cases you don't need to have
perfectly consistent data and can afford to wait for cleanup t
in InnoDB to implement
secondary indexes, but I know that the B-Tree is the primary limitation when it
comes to scalability and the main reason why NoSQL systems have discarded B-Trees.
But it would be super nice to be able to build a secondary index without using
another secondary table in HBase.
I am not
I added pointer to below into our book as 'intro to secondary indexing
in hbase'.
St.Ack
On Fri, Mar 25, 2011 at 8:39 AM, Buttler, David wrote:
> Do you know what it means to make secondary indexing a feature? There are
> two reasonable outcomes:
> 1) adding ACID semantic
Ugh. Redo. I added pointer to David Butler's response above as an
intro to secondary indexing issues in hbase.
St.Ack
On Fri, Mar 25, 2011 at 10:09 AM, Stack wrote:
> I added pointer to below into our book as 'intro to secondary indexing
> in hbase'.
> St.Ack
>
> O
case that is somewhat specialized. Hence,
you see that people who really care about secondary indexes / transactional HBase
have separate packages. They probably don't do the job as well as is ideally
possible by rolling the code into HBase proper, but on the other hand, neither
do they in
But now you are talking about greatly increasing the
> complexity of the codebase for a use case that is somewhat specialized.
> Hence, you see that people who really care about secondary indexes /
> transaction hbase have separate packages. The probably don't do the job as
> wel
Hi guys,
On what factors does HBase read latency primarily depend? What would be the
approx theoretical limit for read latency in v0.90.1 on a cluster of 7 nodes
(16 core/16 GB RAM on 5 machines and 36 GB on the other two)? I have an
application where I generate around 1000 rows/s to be input
Hello,
Does anyone know of any case studies where HBase is used in production for
large data volumes (including big files/documents on the scale of a few
KBs to 100 MB stored in rows) while giving subsecond responses to online queries?
Thanks and Regards,
Shantian
Hi there everybody-
Just thought I'd let everybody know about this... Stack and I have been
working on updating the HBase book and porting portions of the very-out-of-date
HBase wiki to the HBase book. These two pages...
http://wiki.apache.org/hadoop/Hbase/DesignOverview
http://wiki.apach
Hi,
I need some help with a schema design on HBase.
I have 5 dimensions (Time, Site, Referrer, Keyword, Country).
My row key is Site+Time.
Now I want to answer questions like: what is the top Referrer by Keyword
for a site over a period of time?
Basically I want to cross all the
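One common sketch for that kind of Site+Time row key, assuming a fixed-width numeric site id and storing the timestamp reversed so a forward scan within one site returns the newest rows first (the layout and names are illustrative, not a required HBase format):

```java
import java.nio.ByteBuffer;

class SiteTimeKey {
    // Row key = 4-byte site id + 8-byte reversed timestamp. Keeping the
    // site id fixed-width makes keys sort first by site, then by time.
    static byte[] key(int siteId, long timestampMillis) {
        return ByteBuffer.allocate(12)
                .putInt(siteId)
                .putLong(Long.MAX_VALUE - timestampMillis)  // newest first
                .array();
    }

    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

A period-of-time query for one site then becomes a single contiguous scan between two such keys.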
Google will give you what you're asking for.
Look for how Facebook is using HBase for messages. Also look for how
we have been using HBase at StumbleUpon for 2 years now and for both
live and batch queries. Numbers are usually included in the decks.
J-D
On Wed, Apr 6, 2011 at 2:18 PM, Sha
Hi folks,
I recently upgraded to hbase 0.90.2 that runs with hadoop 0.20.1. And
I got the following errors in the hbase logs:
2011-04-09 02:28:02,429 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server localhost/127.0.0.1:2181
2011-04-09 02:28:02,430 WARN
I have a weird issue starting HBase server. I am using HBase-0.90.2. I set my
root.dir under $HBASE_HOME/conf/hbase-site.xml. Then when I try starting HBase
using bin/start-hbase.sh, I get the following error message :
2011-04-11 13:57:56,578 INFO org.apache.zookeeper.ClientCnxn: Opening socket
We're new to hbase, but somewhat familiar with the core concepts associated
with it. We use mysql now, but have also used cassandra for portions of our
code. We feel that hbase is a better fit because of the tight integration
with mapreduce and the proven stability of the underlying hadoop s