HBase 0.19.3 is now available for download:
http://www.apache.org/dyn/closer.cgi/hadoop/hbase/
This release addresses 14 issues found since the release of 0.19.2.
See the release notes for details: http://tinyurl.com/qcd4dg
We recommend that all users upgrade to this version of HBase.
Thanks to all wh
Hadoop Fans,
Lately, we've been spending a lot of time on the East Coast, and one thing
is clear: Hadoop is everywhere.
Hadoop usage on the East Coast tends to be slightly different. There are
still web companies with armies of tech gurus, but there are also many
"regular" industries and enterpri
On Tue, May 26, 2009 at 11:28 PM, Ryan Rawson wrote:
> Hi all,
>
> With HBASE-1304, it's time to normalize and review our filter API.
>
> Here are a few givens:
> - all calls must be byte[] buffer, int offset, int length
> - maybe we can have calls for KeyValue (which encodes all parts of the key &
Thanks for the system info. CPU and RAM resources look good.
> >
> > Can you consider adding additional nodes to spread the load on DFS?
> >
> Yes. If that will help. Right now I'm not seeing any splits happening, so
> I don't know how much adding more boxes will help. It seems to not be
>
Answers in-line.
Alex
On Wed, May 27, 2009 at 6:50 AM, Patrick Angeles wrote:
> Hey all,
>
> I'm trying to find some up-to-date hardware advice for building a Hadoop
> cluster. I've only been able to dig up the following links. Given Moore's
> law, these are already out of date:
>
>
> http://mai
Thank you, that did work.
So I can connect while logged into the HBase master; all that remains is to
achieve a connection from a remote machine. I think I'll try running my Java
program from the HBase master as opposed to from a remote machine and see if
that works.
Jean-Daniel Cryans-2 wrote:
>
Andrew Purtell-2 wrote:
>
> Also the program that is pounding the cluster with inserts? What is the
> hardware spec of those nodes? How many CPUs? How many cores? How much RAM?
>
I'm currently running the client loader program from my local box.
2 Duo CPU P8400 @ 2.26GHz, 3.48GB of RA
The decommissioning process in Hadoop takes a little while; I think the
balancing bandwidth has a lot to do with it.
But to stop a region server you should be able to run
bin/hbase-daemon.sh stop regionserver
on the server you want to stop. As long as it is not the master, then
you can d
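The stop-then-decommission sequence described above can be sketched as follows. This is illustrative only: the install paths are assumptions, and the `run` wrapper just prints each command instead of executing it, so the sketch is safe to run anywhere.

```shell
# Hedged sketch: stop a region server, then decommission its datanode.
# HBASE_HOME/HADOOP_HOME are placeholder install paths.
HBASE_HOME=${HBASE_HOME:-/opt/hbase}
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}

# Dry-run wrapper: prints the command rather than running it.
run() { echo "would run: $*"; }

# 1. On the node being removed, stop the region server (never the master).
run "$HBASE_HOME/bin/hbase-daemon.sh" stop regionserver

# 2. On the namenode, after adding the host to the exclude file,
#    tell HDFS to re-read it and begin decommissioning the datanode.
run "$HADOOP_HOME/bin/hadoop" dfsadmin -refreshNodes
```

Drop the `run` wrapper to execute the commands for real; the decommission itself can still take a while, as noted above.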
You should just subscribe to the mailing list and directly write to it
http://hadoop.apache.org/hbase/mailing_lists.html#Users ;)
If you can get the file, it means it works. You could also check whether any
datanode is registered with the web UI at port 50070.
60020 is the region server port; is there anything wrong
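As a quick generic way to see whether anything is listening on one of these ports, a sketch using bash's `/dev/tcp` redirection (bash-only; the host and port arguments are placeholders, not values from this thread):

```shell
# Minimal port-reachability check. Prints "open" if something accepts a TCP
# connection on host:port, "closed" otherwise. Requires bash for /dev/tcp.
check_port() {
  local host="$1" port="$2"
  # Opening fd 3 on /dev/tcp/host/port succeeds only if the port accepts;
  # the subshell closes the descriptor again on exit.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example: check the namenode web UI on the local host.
check_port 127.0.0.1 50070
```

The same check works for 60020 (region server RPC) or 50070 (namenode web UI) and avoids needing telnet installed.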
J-D:
Thank you very much for your reply. I added "hbase-user@hadoop.apache.org" to
the "send to" of this email, so I think it will be seen by the mailing list.
I did find that the datanode's log file shows it does not work, but after I
solved the datanode problem, HBase still does not work.
is hadoop dir, I run
I use:
hadoop: hadoop-0.19.1
hbase: hbase-0.19.2
On Wed, May 27, 2009 at 8:42 PM, Puri, Aseem wrote:
> Andy,
> It seems there is a version mismatch. Which Hadoop and HBase
> versions are you using?
>
> Thanks & Regards
>
> Aseem Puri
>
> -Original Message-
> From: jdcry...@gmail.c
60010 is the default port for the web ui
60030 is the default port for the region servers
If you tried "telnet localhost 6" and it failed, then that's
probably because you specified "hbase01:6" in your hbase-site, so
it was bound to whichever interface that resolves to (most probably not
localhost).
I tried:
- http://192.168.25.49:60010 from the browser, and it works
- telnet 192.168.25.49 60010, and it connects
- setting hbase.master to 192.168.25.49:60010, which throws an error in the
console (see below) but nothing in the log files.
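For reference, a complete hbase.master entry in hbase-site.xml takes this shape. This is a sketch: the value below assumes the default master RPC port 60000 rather than 60010, which (as noted elsewhere in the thread) is the web UI port, and pointing hbase.master at the web UI port could be the source of the console error.

```xml
<property>
  <name>hbase.master</name>
  <!-- master RPC address: host:port, default RPC port 60000 (not the
       60010 web UI port) -->
  <value>192.168.25.49:60000</value>
</property>
```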
If I log in to the hbase master machine and try to telnet to localh
Yeah for some reason it wasn't there yesterday. Here is part of the log from
one of the datanodes/region servers that went down:
==
2009-05-27 06:19:51,884 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResp
Well the IP I was talking about is 192.168.25.49 so try
http://192.168.25.49:60010
J-D
On Wed, May 27, 2009 at 12:16 PM, gcr44 wrote:
>
> When I connected via http, I used the same IP from the same client machine as
> with telnet. This works from the browser: http://hbase1:60010 as does
> http:/
When I connected via http, I used the same IP from the same client machine as
with telnet. This works from the browser: http://hbase1:60010 as does
http://hbase1:60030.
For now, I'm stumped.
Jean-Daniel Cryans-2 wrote:
>
> If you aren't able to telnet, then this is a network issue. Can you
>
If you aren't able to telnet, then this is a network issue. Can you
figure why you can't telnet? When you connected via http, did you use
the same IP address and was it from the same client machine? Did you
try setting hbase1.qn-niat.net:6 as your hbase.master?
J-D
On Wed, May 27, 2009 at 11:
No, I am not able to telnet either. However, when I try to connect via http
on port 60010, I see a web page that says, "Master:
hbase1.qn-niat.net:6", which looks to me like the master is up and
running correctly. Also, the log files appear not to contain any errors.
Could the problem be in
Hello,
I just needed to remove 8 machines from our 33 node cluster. (hadoop-0.19.1,
hbase-0.19.2)
The first 4 nodes I decommissioned in hadoop with the exclude-file and hbase
running. After 4 hours I gave up and stopped the machines by hand (they
never finished the decommissioning in progress phase
Hey all,
I'm trying to find some up-to-date hardware advice for building a Hadoop
cluster. I've only been able to dig up the following links. Given Moore's
law, these are already out of date:
http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200811.mbox/%3ca47c361b-d19b-4a61-8dc1-41d4c097
Andy,
It seems there is a version mismatch. Which Hadoop and HBase versions
are you using?
Thanks & Regards
Aseem Puri
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel
Cryans
Sent: Wednesday, May 27, 2009 5:32 PM
To: hbase-user@
Go to the hosts where you think you run DataNodes and check what is set
for "HADOOP_LOG_DIR" in
$HADOOP_HOME/conf/hadoop-env.sh
(It is "$HADOOP_HOME/logs" by default.)
If there is no log like
"$HADOOP_LOG_DIR/hadoop-user-datanode-hostname.log"
in that dir I'd assume you are not running D
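To make that check concrete, a small sketch. The glob follows the default `hadoop-<user>-datanode-<hostname>.log` naming; the helper name and the directory argument are illustrative, and the pattern may need adjusting if HADOOP_IDENT_STRING is customized.

```shell
# Hedged sketch: list any datanode log files in a given log directory.
# No output means no datanode has ever logged there, which suggests the
# datanode is not running on that host.
find_datanode_logs() {
  local dir="$1"
  ls "$dir"/hadoop-*-datanode-*.log 2>/dev/null
}

# Example: find_datanode_logs "$HADOOP_HOME/logs"
```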
Hi Norbert,
If you like you could check this out
http://www.larsgeorge.com/2009/05/hbase-schema-manager.html
I created a tool for exactly that purpose, i.e. create and maintain
tables across different clusters.
Lars
Norbert Antunes wrote:
The first option worked.
Thanks
On Tue, May 26,
Xudong Du,
While I'm happy to answer HBase questions, I prefer to do that on
the HBase users mailing list for everyone's benefit.
Your problem seems to be that the HDFS Namenode is running but you
don't have any datanode, so the file was created in the namespace but
you can't read it. Make sur