Anyone else seeing these on startup of a job in fresh update from this
morning (Revision: 382439)?
060302 170405 Server handler 0 on 50050 call error: java.io.IOException:
java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
at
org.apache.hadoop.mapred.Tas
jobtracker.jsp shows the list of TaskTrackers over the course of a job -- in a
HADOOP-16-like manner -- shrinking. No errors show in jobdetails.jsp to
explain the disappearances. I see this behavior in a fresh svn pull from
yesterday morning. I also retried post a Doug commit of a HADOOP-
eems to lose tasks
at a faster rate now. 35 slaves. 4 tasks per node. Will try with fewer
tasks per host.
St.Ack
Doug
stack wrote:
jobtracker.jsp shows the list of TaskTrackers over the course of a
job -- in a HADOOP-16 -like manner -- shrinking. No errors show in
the jobdetails.
Doug Cutting wrote:
stack wrote:
ipc timeout is an hour.
FYI, you should no longer need to set this so high. I've left it at
the default for recent runs. But I doubt that is what is causing your
difficulties...
Doug
Is there a configurable timeout that says how long jobtrackers wa
Doug Cutting wrote:
stack wrote:
Is there a configurable timeout that says how long jobtrackers wait
on communique from tasktrackers?
It looks like it's my job that's the problem. I've moved to new hardware
and OS, and the /tmp dir is of a smaller size, over-filling with temporary
files a
Yesterday I wrote of a hung rack. In that case -- after consultation w/
DC -- it looked like the map output files had been removed prematurely
for some unknown reason. Subsequently, the rack is stuck spinning
eternally with reduce tasks trying and failing to pick up removed map
output parts (Looks
Do you think things are not working because you see a
NotServingRegionException in the log? These exceptions in the log are
'normal' (See HADOOP-1724 for discussion on throwing exceptions as
part of 'normal' operation). Otherwise, your log looks fine to me.
10G for a heap in my experience is ove
Holger Stenzhorn wrote:
> Since I am just a beginner at Hadoop and Hbase I cannot really tell whether
> an exception is a "real" one or just a "hint" - but exceptions always look
> scary... :-)
Yeah. You might add your POV to the issue.
> Anyways, when I did cut the allowed heap for the server
c is light and your requests are read-only, there is the HQL
page in the master's webui.
If hbase had a REST interface, hadoop-2068, would that work for you?
St.Ack
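(For a flavor of it -- a hypothetical sketch only, since HADOOP-2068 is
still open: the endpoint path below is guessed from the
/api/<table>/row/<row> shapes that show up later in this thread, and the
host and port are assumptions, not a committed interface.)

  // Hypothetical REST fetch against a not-yet-built hbase REST interface.
  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class RestRowGet {
    public static void main(String[] args) throws Exception {
      // Path and port are assumptions; see HADOOP-2068 for the real design.
      URL url = new URL("http://master:60010/api/mytable/row/myrow");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("GET");
      BufferedReader in = new BufferedReader(
          new InputStreamReader(conn.getInputStream()));
      for (String line; (line = in.readLine()) != null;) {
        System.out.println(line);  // dump the raw response body
      }
      in.close();
    }
  }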
Billy wrote:
Can you show me an example of how that would be done with the command line?
"Michael Stack" <
Your DFS is healthy? This seems odd: "File
/tmp/hadoop-kcd/hbase/hregion_TestTable,2102165,6843477525281170954/info/mapfiles/6464987859396543981/data could
only be replicated to 0 nodes, instead of 1;" In my experience, IIRC,
it means no datanodes running.
(I just tried the PE from TRUNK and
Usually the most recent one is the one you want. Be careful though. The
last patch listed at the head of the JIRA issue may not be the most
recent if the names of attached patches are not all the same.
St.Ack
Billy wrote:
When an issue has more than one patch and I want to test, do I need to ap
t;> -Xuebing Yan
>>
>> -Original Message-
>> From: Kareem Dana [mailto:[EMAIL PROTECTED]
>> Sent: November 16, 2007 9:32
>> To: hadoop-user@lucene.apache.org
>> Subject: Re: HBase PerformanceEvaluation failing
>>
>> My DFS appears healthy. After the PE fails, the
Hey Billy.
Looks like we need to make it so the regionservers show more info on
regions (smile). The first and last keys in a table are null. Your jpg
is showing two regions with the null end key. That's kinda odd. Want to
make an issue and add logs? I'll take a look (Include the EOF excepti
They have not yet been added (See HADOOP-1608. It's nearly there).
St.Ack
张茂森 wrote:
> Hi all:
>
> This page http://wiki.apache.org/lucene-hadoop/Hbase/HbaseShell introduces
> the ‘Algebraic Query Commands’, but I can’t find them in my HBASE shell.
>
>
>
>
>
> Hbase Shell, 0.0.2 version.
>
> C
What Jim just said, but it looks to me like Text is doing the wrong
thing. When you ask it its length, it returns the byte buffer capacity
rather than how many bytes are in use. It says length is 16 but there
are only 15 characters in your test string, UTF-8'd or not.
St.Ack
Jason Grey wro
On review, after a kick from JK, I'm mistaken in the below. I wasn't
using Text.getLength, I was doing Text.getBytes().length. Please ignore
below.
St.Ack
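(For anyone following along, a minimal sketch of the distinction:
Text.getLength() is the count of valid bytes, while getBytes() hands back
the whole backing buffer, which can be bigger than that after reuse.)

  import org.apache.hadoop.io.Text;

  public class TextLengthDemo {
    public static void main(String[] args) {
      Text t = new Text("a longer string");   // 15 bytes of UTF-8
      t.set("hi");                            // reuses the backing buffer
      System.out.println(t.getLength());      // 2: bytes actually in use
      System.out.println(t.getBytes().length); // >= 2: buffer capacity
      // Copying out exactly the valid bytes avoids the gotcha:
      byte[] valid = new byte[t.getLength()];
      System.arraycopy(t.getBytes(), 0, valid, 0, valid.length);
    }
  }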
stack wrote:
What Jim just said, but it looks to me like Text is doing the wrong
thing. When you ask it its length, it return
te them
when I started over but I will take a look and add an issue with what I
got.
Billy
"stack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Hey Billy.
Looks like we need to make it so the regionservers show more info on
regions (smile). The first and l
Fixed. Sorry about that.
St.Ack
P.S. Be sure to check out the hbase unit tests if you're looking for
more on how to write hbase-ise.
Billy wrote:
Can we get someone who has written java apps that work with hbase update the
example on this page
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ
TestTableMapReduce.class but it's a little overly complex for me
to start with.
Billy
"stack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Fixed. Sorry about that.
St.Ack
P.S. Be sure to check out the hbase unit tests if you're looking for more
on how to write hba
release with no modifications and included hbase.
Each machine has 2GB of local storage space, 400MB RAM, 256MB swap.
Let me know if anymore information will be helpful. Thanks for looking
at the logs.
- Kareem
On Nov 20, 2007 10:01 AM, stack <[EMAIL PROTECTED]> wrote:
May I see your logs Kareem?
is CORRUPT
On Nov 24, 2007 6:21 PM, Ted Dunning <[EMAIL PROTECTED]> wrote:
I think that stack was suggesting an HDFS fsck, not a disk level fsck.
Try [hadoop fsck /]
On 11/24/07 4:09 PM, "Kareem Dana" <[EMAIL PROTECTED]> wrote:
I do not have root
t;>> MISSING 1 blocks of total size 0 B.
>>>
>>> /tmp/hadoop-kcd/hbase/hregion_TestTable,12612310,1652062411016999689/info/mapf
>>> iles/5071453667327337040/data:
>>> MISSING 1 blocks of total size 0 B.
>>> .
>>> /tmp/hadoo
essing? (Memory? Data nodes? Is it a timing issue?)
Thanks Dhurba,
St.Ack
> Thanks,
> dhruba
>
> -Original Message-
> From: stack [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 26, 2007 9:21 AM
> To: hadoop-user@lucene.apache.org
> Subject: Re: Re: HBase Per
Anyone have insight on the following message from a near-TRUNK namenode log?
2007-11-26 01:16:23,282 WARN dfs.StateChange - DIR*
NameSystem.startFile: failed to create file
/hbase/hregion_-1194436719/oldlogfile.log for DFSClient_610028837 on
client 38.99.77.80 because current leaseholder is t
at
org.apache.hadoop.hbase.HRegionServer$Compactor.run(HRegionServer.java:378)
Am I supposed to retry?
Will that even make a difference? Every ten minutes or so, the client will
retry getting a reader on either the same file or another and gets the same error.
Thanks for any insight,
St.Ack
stack
Reading down through the dfs code, at least the
AlreadyBeingCreatedException should be getting retried. Why am I seeing
it then in my code (And what to do about the below IOE that has no
matching complaint in the namenode)?
Thanks,
St.Ack
stack wrote:
Here's more:
2007-11-27 05:00:2
Try SUN's JDK. You are using the default gcj java on your, I presume,
red hat 7 linux install. It looks like it might have encoding issues.
St.Ack
P.S. IIRC, this question has been answered already on this list. Also,
nutchwax has its own list that would be more appropriate to questions of
(and from the
AlreadyBeingCreatedException with no matching namenode complaint in the
below)?
Thanks,
St.Ack
stack wrote:
Reading down through the dfs code, at least the
AlreadyBeingCreatedException should be getting retried. Why am I
seeing it then in my code (And what to do about the
Billy wrote:
Ok I see the two bugs you said we were waiting on have been committed to
the trunk. What else are we waiting on to be able to do map reduce with
hbase?
Those are all the issues that I currently know about. Let us know if
you run into anything.
St.Ack
Billy
"
Billy:
I just committed Edward Yoon's fix for the below. Also, regards your
comments in HADOOP-2068, try recent TRUNK. HADOOP-2315 should fix the
issue you reported with slashes in rows.
(Stay tuned to this channel for more on answers/fixes to your queries
posted to HADOOP-2068...)
Yours
Billy (HADOOP-2138). Once implemented, I
imagine that it would somehow all be just hidden from you and the
hosting regionserver would be doing its best to scan data off an adjacent
datanode -- if one were running on the same machine.
St.Ack
Billy
"stack" <[EMAIL PROTECTED]> wrote in mess
Try giving your hbase master more memory. See item 3 in the FAQ:
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3.
St.Ack
Thiago Jackiw wrote:
I've just checked out hadoop from svn trunk, 'ant' compiled it by
following the instructions on the wiki and I'm getting a strange
behavior when going t
Do you have more than that for a stack trace, Billy?
St.Ack
Billy wrote:
2007-12-05 19:06:08,401 WARN rest:
/api/webdata/row/com.ifrance.retrophotos|\|:http/:
java.lang.NullPointerException
Wonder what these could be caused by
Billy
Hey Billy:
The region assignment algorithm is kinda basic. For sure needs work
but it works well enough for the time being. I'd say your bad
distribution has to do w/ the low number of regionservers. Try putting
a regionserver on your master machine and see how the distribution is then.
Did you update your hbase and use data written by a previous version of
hbase? If so, I'd guess the EOF is because of the incompatible changes
listed in CHANGES.txt. Otherwise, anything earlier in the log
complaining of failed writes to hdfs?
St.Ack
Billy wrote:
Any idea if this can be fix
See first item in the FAQ: http://wiki.apache.org/lucene-hadoop/Hbase/FAQ.
St.Ack
ma qiang wrote:
Hi colleague,
After reading the API docs about hbase, I don't know how to
manipulate hbase using the Java API. Would you please send me some
examples?
Thank you!
Ma Qiang
Departmen
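(Not the sender's code -- just a rough sketch of a single insert and read,
assuming the contrib-era HTable client API with startUpdate/put/commit;
that API has been in flux, so verify names against your version's javadoc.)

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HTable;
  import org.apache.hadoop.io.Text;

  public class HBaseHello {
    public static void main(String[] args) throws Exception {
      // Table name and column family here are illustrative.
      HTable table = new HTable(new HBaseConfiguration(), new Text("test"));
      // Insert/update: open a row update, put cells, commit.
      long lockid = table.startUpdate(new Text("row1"));
      table.put(lockid, new Text("colfamily:qualifier"), "value".getBytes());
      table.commit(lockid);
      // Read the cell back.
      byte[] cell = table.get(new Text("row1"), new Text("colfamily:qualifier"));
      System.out.println(new String(cell));
    }
  }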
See if last item in FAQ fixes your issue Billy:
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ
St.Ack
Billy wrote:
I have tried to load hbase several times and it always keeps failing
2007-12-18 14:21:45,062 FATAL org.apache.hadoop.hbase.HRegionServer: Replay
of hlog required. Forcing server resta
Billy wrote:
Hbase splits at the row key level, so what happens if I have a row
that's larger than the max region size set in the conf?
My guess is that a row > configured region size would not be split.
I have one row that has been split into many smaller regions; I'm just checking
if
Or see http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#1
St.Ack
Peter Boot wrote:
look in src\contrib\hbase\src\test\org\apache\hadoop\hbase\TestHBaseCluster.java
On Dec 18, 2007 7:51 PM, ma qiang <[EMAIL PROTECTED]> wrote:
Hi colleague,
After reading the API docs about hbase,I don't k
I
did not know if the master held parent:child data for the regions. I guess
if it did know the start row key:parent:child col then it could handle
splits of a large row.
"stack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Billy wrote:
Hbase does split on a
Hey Billy:
Master itself should use little memory and, though it is not out of the
realm of possibilities, it should not have a leak.
Are you running with the default heap size? You might want to give it
more memory if you are (See
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3 for how).
Billy wrote:
1. is it working?
Not implemented yet.
2. If so does it force all items into memory or does it just cache and roll
out old data as new is requested?
To be determined: the first implementation will probably do the latter rather
than the former.
St.Ack
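(The "cache and roll out old data" behavior described above is basically
LRU eviction. An illustrative Java sketch of that policy -- not hbase
code:)

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Evicts the least-recently-used entry once capacity is exceeded.
  public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
      super(16, 0.75f, true);  // access-order: gets refresh recency
      this.capacity = capacity;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      return size() > capacity;  // roll out old data as new is requested
    }
  }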
"There are also core design flaws. For example, they use threaded
IO...This just won’t scale."
FYI, Kevin, hbase puts up non-blocking server sockets to field client
and intra-server communications (It uses Hadoop RPC). Clients of
Hadoop's DFS -- e.g. mapreduce jobs, hbase, etc. -- use blockin
Lars George wrote:
Hi,
I have inserted about 3.5m documents in a single two column table in
HBase running on 32 nodes. So far I was able to insert most data, but
with the last million or so I am stuck with this error:
org.apache.hadoop.hbase.WrongRegionException: Requested row out of
range
If possible, please move to TRUNK instead. Most of the below have been
addressed there (I can send you a patch if you want to run hbase TRUNK
on hadoop 0.15.x).
Further comments inline below:
Lars George wrote:
Hi Stack,
Yes, it happens every time I insert particular rows. Before it would
Lars George wrote:
Hi Stack,
Can and will do, but does that make the error go away, i.e.
automagically fix it? Or is it broken and nothing can be done about it
now?
Your current install is broken. We could try spending time getting it
back into a healthy state, but TRUNK is more robust than
Lars George wrote:
Eeek, means what? Replace jar files with the trunk version and then
reformat the whole Hadoop dfs (since I only have HBase on top of it)
and then reimporting all 4.5m documents? What are my chances that
there are more profound changes coming that require me to do that
again i
stack wrote:
Regards your having to do this again in the near future, hopefully not...
Related, see http://www.nabble.com/Question-for-HBase-users-tc14607732.html.
St.Ack
Lars George wrote:
Hi,
I have two questions for the mapred BuildTableIndex classes folks.
First, if I have 40 servers with about 32 regions per server, what
would I set the mappers and reducers to?
Coarsely, make as many maps as you have total regions (Assuming
TableInputFormat is in the mix
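(A sketch of the arithmetic, assuming the 0.15-era JobConf API;
setNumMapTasks is only a hint to the framework, and the 40/32 figures
are just the numbers from the question above.)

  import org.apache.hadoop.mapred.JobConf;

  public class IndexJobSetup {
    public static JobConf configure() {
      JobConf job = new JobConf();  // plus input/output formats, classes, etc.
      job.setNumMapTasks(40 * 32);  // ~ one map per region (a hint only)
      job.setNumReduceTasks(40);    // e.g. one reduce per server
      return job;
    }
  }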
regionservers will shut themselves down if they are unable to contact
the master. Can you figure what the master was doing such that it
became non-responsive during this time?
St.Ack
Billy wrote:
I've been getting these errors from time to time, seems like when the region
servers are under load
time
even the one on the same node as the master, so that rules out network/switch
problems. If it was the master then all the region servers would go down at
about the same time.
Billy
"stack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
regionservers will sh
Doug Cutting wrote:
stack wrote:
ipc timeout is an hour.
FYI, you should no longer need to set this so high. I've left it at
the default for recent runs. But I doubt that is what is causing your
difficulties...
I just did the same. A high ipc timeout is not a good idea as the hung
conne
e list can
comment why?
060308 053947 task_r_b2avsx java.io.IOException: Cannot obtain
additional block for file
/user/stack/nara/outputs/segments/20060307210958/crawl_fetch/part-00028/.data.crc
060308 053947 task_r_b2avsx at
org.apache.hadoop.ipc.Client.call(Client.java:301)
060308 053947 tas
far as its tasktracker is
concerned, the task_m_5g00f1 completed fine:
060308 112952 parsing file:/0/hadoop/nara/app/runtime-conf/hadoop-site.xml
060308 112952 task_m_5g00f1 1.0%
/user/stack/nara/outputs/segments/2006030721095
20:22400+3200
060308 112953 Task task_m_5g00f1 is done.
...
/user/stack/e04/outputs/segments/20060322213322/crawl_parse/part-00019
java.io.IOException: Cannot obtain additional block for file
/user/stack/e04/outputs/segments/20060322213322/crawl_parse/part-00019
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:160)
at
e working but there seems to be a
correlation.
Thanks,
St.Ack
Michael Stack wrote:
Why would a lightly loaded nameserver w/ no other emissions on a
seemingly healthy machine have trouble allocating blocks in a job that
is almost done?
From the nameserver log:
060323 173126 Server ha
Hey Michael:
I'll be at SIGIR on the Thursday attending the OSIR workshop.
Would be good to meet.
St.Ack
Michael Cafarella wrote:
Hi everyone,
Are you planning on coming to SIGIR in Seattle? If enough people are,
perhaps we can
have a Nutch/Hadoop get-together.
I'm in Seattle, so if enough pe
eException:
org.apache.hadoop.dfs.LeaseExpiredException: No lease on
/user/stack/nla/2005-outputs/segments/20060920054847-nla2005/crawl_fetch/part-00018/data
at
org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:454)
at org.apache.hadoop.dfs.NameNode.addBlock(Nam
:29:27 WARN mapred.TaskTracker: Error
running child
2006-09-17 22:29:12,854 INFO org.apache.hadoop.mapred.TaskRunner:
task_0001_r_32_3 org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.dfs.LeaseExpiredException: No lease on
/user/stack/nla/2005-outputs/segments/20060915202116-nla2
I'm trying to use the s3 filesystem that was recently added to hadoop
TRUNK.
If I set fs.default.name to be s3://AWS_IDENTIFIER:[EMAIL PROTECTED]/
so I can run mapreduce jobs that get and set directly from S3, I get the
following complaint:
java.io.IOException: Cannot create file
/mapred/sy
Tom White wrote:
I've raised a Jira: http://issues.apache.org/jira/browse/HADOOP-857.
I'll take a look at it.
Tom
Thanks (Your supposition in 857 looks right).
Other things I notice are that the '-rmr'/'-lsr' options don't act as
'expected'. It's a little confusing. Should the 'hadoop fs' to
Bryan A. P. Pendleton wrote:
S3 has a lot of somewhat weird limits right now, which make some of this
tricky for the common case. Files can only be stored as a single s3
object
if they are less than 5gb, and not 2gb-4gb in size, for instance.
Perhaps an implementation could throw an exception f
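(The suggested guard might look something like this -- a sketch only,
with the size limits taken verbatim from the note above:)

  import java.io.IOException;

  public class S3SizeCheck {
    private static final long GB = 1024L * 1024 * 1024;

    // Reject sizes that can't be stored as a single S3 object:
    // >= 5gb, or in the 2gb-4gb range (per the limits quoted above).
    public static void checkSingleObjectSize(long size) throws IOException {
      if (size >= 5 * GB || (size >= 2 * GB && size <= 4 * GB)) {
        throw new IOException("Cannot store " + size
            + " bytes as a single S3 object");
      }
    }
  }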
Tom White wrote:
Other things I notice are that the '-rmr'/'-lsr' options don't act as
'expected'. It's a little confusing. Should the 'hadoop fs' tool's return
from rmr/lsr rather be that the action is not supported, rather than
'success', if, say, I try to remove a 'directory' from S3? (See below
Doug Cutting wrote:
Michael Stack has some experience tracking down problems with flaky
memory. Michael, did you use a test program to validate the memory on
a node?
One of the lads at the Archive used to run CTCS,
http://sourceforge.net/projects/va-ctcs/. It was good for weeding out
bad
HBase is another application that needs write-append.
Every HBase update is written both to a RAM-based and a file-system-based
log. On a period the RAM-based log is flushed to the filesystem. The
RAM-based log and its flushes are used in fielding queries.
The sympathetic file-system-based log i
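(A toy sketch of that write path -- illustrative only, not hbase code:
the update goes to the durable log first, then to the RAM copy that
fields queries, so the RAM state can be replayed from the log after a
crash.)

  import java.io.FileWriter;
  import java.io.IOException;
  import java.util.TreeMap;

  public class ToyWal {
    private final FileWriter log;                     // file-system-based log
    private final TreeMap<String, String> memcache =  // RAM-based copy
        new TreeMap<String, String>();

    public ToyWal(String logPath) throws IOException {
      this.log = new FileWriter(logPath, true);       // append mode
    }

    public synchronized void update(String row, String value)
        throws IOException {
      log.write(row + "\t" + value + "\n");
      log.flush();               // durable before the RAM copy changes
      memcache.put(row, value);  // the RAM copy serves reads
    }

    public synchronized String get(String row) {
      return memcache.get(row);
    }
  }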
hank williams wrote:
I have been meaning to ask a similar question, but perhaps a bit more
broadly about the status and anticipated timeline for hbase. I am
curious if the effort is purely individual or if there is any
corporate push (for example from powerset) or if it is just a personal
project
Peter W. wrote:
Hi,
Are there any HBase samples out there not using Junit?
Not currently. The unit tests are a good source for figuring how to
manipulate hbase. What else do you need?
I would like to:
a. create a master server, region and table descriptor.
Do you mean in code or on the co
total 24
-rw-rw-r-- 1 manager manager 6036 Jun 4 12:02 hbase
-rw-rw-r-- 1 manager manager 1603 Jun 4 12:02 hbase-config.sh
I was also curious if 127.0.0.1 can be a valid HServerAddress and is
logging with HLog required.
Thanks,
Peter W.
On Jun 27, 2007, at 12:50 PM, Michael Stack wrote:
Anyone have any pointers debugging why an odd HDFS close is failing?
Here is the exception I'm getting.
2007-08-22 21:45:21,459 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 9000, call
complete(/bfd/hadoop-stack-data/tmp/hbase/compaction.tmp/hregion_hbaserepository,,8918388410463499185/repo/mapfiles/-1/data,
DFSClient_1857293290) from 208.76.44.139:52301: error:
java.io.IOException: Unknown file:
/bfd/hadoop-stack-data/tmp/hbase/compaction.tmp
also from Namenode log. Before that line
there should have been a message that adds a block to this file in the
same log. I think you should file a Jira for this. It is probably related
to HADOOP-999.
Raghu.
Michael Stack wrote:
Thanks for responding Raghu.
The exception in my previous mail I
You might try backing out the HADOOP-1708 patch. It changed the test
guarding the log message you report below.
St.Ack
C G wrote:
Further experimentation, again single node configuration on a 4way 8G machine
w/0.14.0, trying to copyFromLocal 669M of data in 5,000,000 rows I see this in
the
My mistake...
St.Ack
Doug Cutting wrote:
Michael Stack wrote:
You might try backing out the HADOOP-1708 patch. It changed the test
guarding the log message you report below.
HADOOP-1708 isn't in 0.14.0.
Doug
shutdown sequence?) Logs are in $HADOOP_HOME/logs. Look at the
hbase-USERID-master-*log content. Might help if you up the log level to
DEBUG (add the line 'log4j.logger.org.apache.hadoop.hbase.HMaster=DEBUG'
to $HADOOP_HOME/conf/log4j.properties). Stack traces are also useful
figuring where the programs are hung (Send a 'kill -QUIT
The current nutchwax release is built against hadoop 0.9.2 (See the
nutchwax home page). I'm guessing from the log below that you are
trying to launch the nutchwax jar on a hadoop cluster whose version is
in advance of 0.9.2.
St.Ack
P.S. The nutchwax mailing list is probably a better place fo
Does this just-created page help?
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#preview
(If not, add questions to the page and we'll answer them).
St.Ack
Billy Pearson wrote:
Can someone give an example of basic commands to hbase?
Examples in Java, basic scripts (insert, update, delete, select)
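(Pending the FAQ growing full examples, a rough sketch of a 'select'-style
scan, assuming the contrib-era scanner API -- obtainScanner and
HScannerInterface are from that period and may differ in your version:)

  import java.util.TreeMap;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HScannerInterface;
  import org.apache.hadoop.hbase.HStoreKey;
  import org.apache.hadoop.hbase.HTable;
  import org.apache.hadoop.io.Text;

  public class ScanTable {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(new HBaseConfiguration(), new Text("test"));
      // Scan one column family from the start of the table.
      HScannerInterface scanner = table.obtainScanner(
          new Text[] { new Text("colfamily:") }, new Text(""));
      HStoreKey key = new HStoreKey();
      TreeMap<Text, byte[]> columns = new TreeMap<Text, byte[]>();
      while (scanner.next(key, columns)) {
        System.out.println(key.getRow() + ": " + columns.size() + " cells");
        columns.clear();  // the map is reused across rows
      }
      scanner.close();
    }
  }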
Looks like a key was written at indexing time w/ 'illegal' characters.
You are using gcj for your VM. Did you index using this machine? What
happens if you deploy your tomcat w/ the SUN JVM?
St.Ack
P.S. As Dennis suggests, nutchwax mailing list is a more appropriate
location for this question
The NutchWAX mailing list is here:
http://archive-access.sourceforge.net/projects/nutch/mail-lists.html.
The most current release of NutchWAX runs on hadoop 0.9.2 according to
the home page. As Dennis below suggests, a common problem is trying to
use NutchWAX with a too-recent version of hado
Did you start hbase?
% $HBASE_HOME/bin/start-hbase.sh
Are both the hbase master and regionservers up and running?
Look in the hbase logs (Default location is $HADOOP_HOME/logs). Check
the hbase master log. It should be scanning the catalog regions named
-ROOT- and .META. on a period. If it
(Ignore my last message. I had missed your back and forth with Edward).
Regards step 3. below, you are starting both mapreduce and dfs daemons.
You only need dfs daemons to run hbase, so you could do
./bin/start-dfs.sh instead.
Are you using hadoop 0.14.x? (It looks like it is, going by the co
Should we be making a runnable hbase at $HADOOP_HOME/build/contrib/hbase
Dennis?
If you run the package target from $HADOOP_HOME/build.xml, under
$HADOOP_HOME/build it makes a hadoop-X.X.X directory. In its src, there
is a contrib/hbase with the lib, bin, etc. You can run hbase from here.
Andrzej Bialecki wrote:
That's excellent news - I just looked at the code, I think it would
require only minimal tweaks to use it together with other Hadoop
services running in "local" mode - e.g. it would be more convenient to
have the MiniHBaseCluster (or its modified version, let's call it
Andrzej Bialecki wrote:
If I'm not mistaken, there is no way right now to use HBase in a
"local" mode similar to the Hadoop "local" mode, where we don't have
to start any daemons and all necessary infrastructure runs inside a
single JVM. What would it take to implement such mode? Would it
requ
want I can
modify the ant script for this?
Dennis Kubes
Michael Stack wrote:
Should we be making a runnable hbase at
$HADOOP_HOME/build/contrib/hbase Dennis?
If you run the package target from $HADOOP_HOME/build.xml, under
$HADOOP_HOME/build it makes a hadoop-X.X.X directory. In its src,
there is
3:45 +0800
From: [EMAIL PROTECTED]
To: hadoop-user@lucene.apache.org
Subject: Re: A basic question on HBase
Dear edward yoon & Michael Stack,
After using the hadoop branch-0.15, hbase runs correctly.
Thank you very much!
Best wishes,
Bin YANG
On 10/19/07, Bin YANG wrote:
T
Josh Wills wrote:
...So it seems to me that the right thing to do is to up the
hbase.client.retries.number parameter to 10, from 5. This could be a
function of how I'm just using the one region server in this test--
w/more servers, the master could just redirect the new row to another
server, an
Hope you don't mind my saving below into the wiki here:
http://wiki.apache.org/lucene-hadoop/Hbase/10Minutes.
St.Ack
Dennis Kubes wrote:
I had a somewhat difficult time figuring out how to get hbase
started. In the end, it was pretty simple. Here are the steps:
1. Download hadoop from svn
Andrzej Bialecki wrote:
Michael Stack wrote:
...
Otherwise, I"m thinking we'd move MiniHBaseCluster from src/test to
src/java so it makes it into the hadoop-*-hbase.jar renaming it
LocalHBaseCluster to follow the LocalJobRunner precedent. We should
also make the hbase.master val
Regards 1. in the below, the HBase store is closer to the A format described
below.
A table has column families. Each column family is written to an
HStore. An HStore has HStoreFiles (~SSTables in bigtable-speak).
HStoreFiles are hadoop MapFiles where the key is
row/columnname/timestamp (See HS
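(Since HStoreFiles are plain hadoop MapFiles, the container format is easy
to see in isolation. A sketch writing and reading a generic MapFile -- the
row/column/timestamp string keys here only mimic the real HStoreKey class:)

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.io.MapFile;
  import org.apache.hadoop.io.Text;

  public class MapFileDemo {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      // Write a little sorted map (keys must arrive in sorted order).
      MapFile.Writer writer =
          new MapFile.Writer(conf, fs, "demo.map", Text.class, Text.class);
      writer.append(new Text("row1/colfam:qual/1"), new Text("value1"));
      writer.append(new Text("row2/colfam:qual/1"), new Text("value2"));
      writer.close();
      // Read it back with a random-access lookup.
      MapFile.Reader reader = new MapFile.Reader(fs, "demo.map", conf);
      Text val = new Text();
      reader.get(new Text("row2/colfam:qual/1"), val);
      System.out.println(val);  // value2
      reader.close();
    }
  }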
Bin YANG wrote:
Hi,
I am confused about some things in HBase.
1. All data is stored in HDFS. Data is served to clients by
HRegionServers. Is it allowed that the tablet T is on machine A, and
served by an HRegionServer running on machine B?
Yes, tablet T may be hosted in HDFS on machine A but
Hey Jonathan.
From the below, the regionserver looks to have reported into the master
fine and even gotten instruction that it should deploy the -ROOT- region
but then when master tried to talk back later, it couldn't. I have seen
this previously when hosts were confused on how to reach each ot
Thanks for the detail Holger. Helps.
Reading it, it looks like the cluster hasn't started up properly; the
NoSuchElementException would seem to indicate that the basic startup
deploying the catalog meta tables hasn't happened or has gotten mangled
somehow. What's in your hbase master log file
-ROOT-,,0,
startKey: <>, server: 192.168.18.7:4535} complete
2007-11-01 21:07:03,873 INFO org.apache.hadoop.hbase.HMaster: all meta
regions scanned
Michael Stack wrote:
Thanks for the detail Holger. Helps.
Reading it, it looks like the cluster hasn't started up properly; the
NoSuch
$Server.call(RPC.java:379)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
Any suggestions?
- Jonathan
On Thu, 2007-11-01 at 13:43 -0700, Michael Stack wrote:
Hey Jonathan.
From the below, the regionserver looks to have reported into the master
fine and even gotten instruction that it
file:/// filesystem instead
of distributed file system.
St.Ack
- J
On Fri, 2007-11-02 at 08:27 -0700, Michael Stack wrote:
The complaint below is from HDFS. Would seem to indicate that you do
not have any data nodes running ('...could only be replicated to 0
nodes'). Did
has
been doing a bunch of work trying to get the same to happen on cygwin
(10Minutes has the latest fruit of his work).
As requested by Michel Stack I also attach my log (turned to debug)
when starting the server.
Thanks for the log. Comments interlaced.
Cheers,
Holger
07/11/02 13:40
p\hadoop-holste".
Also, I tried to do the very same (as described in my initial
mail) on Ubuntu also *without* specifying "hadoop.tmp.dir".
There it works without problems...
Yeah. I know things work out-of-the-box on linux/macosx. Jim has
been doing a bunch of work trying
ctually be grabbing my hostname and
doing a reverse lookup on it and then trying to connect to the IP
returned.
Anyway, I was able to get everything running by mapping localhost to my
actual IP in my /etc/hosts file.
Thanks,
- Jonathan
On Thu, 2007-11-01 at 13:43 -0700, Michael Stack wrote:
H
Your program looks innocuous enough. Does attaching w/ jmap and
getting a '-histo' dump tell you anything? Have you tried upping your
JVM heap size (HADOOP_OPTS)? There's a lot going on if you have the FS,
MR, and HBase all up and running in the one JVM.
St.Ack
Holger Stenzhorn wrote:
Hi,