hey
I'm trying to run: HADOOP_HOME/hadoop jar
HBASE_HOME/hbase-0.20.0-alpha-test.jar sequentialWrite 2
and get the following exception:
Exception in thread "main" java.lang.SecurityException: class
"org.apache.hadoop.hbase.HConstants"'s signer information does not match
signer information of o
.Ack
>
>
> On Tue, Aug 4, 2009 at 9:41 AM, llpind wrote:
>
>>
>> hey
>>
>> I'm trying to run: HADOOP_HOME/hadoop jar
>> HBASE_HOME/hbase-0.20.0-alpha-test.jar sequentialWrite 2
>>
>> and get the following exception:
>>
hey,
I'm having problems starting up this version at all.
Here is what I've done so far:
1. changed data dir to a new location, and port for HDFS.
2. formatted namenode
3. started dfs
4. setup zookeeper properties in hbase-site.xml
5. started hbase
it prints out normal output (starting mas
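For anyone following the steps above, a minimal hbase-site.xml for step 4 might look like the sketch below. The property names are real HBase 0.20 settings, but the host names and paths are placeholders, not values from this thread:

```xml
<!-- Hypothetical sketch: hosts and paths are placeholders. -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode-host:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1,zk2,zk3</value>
  </property>
</configuration>
```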
Hey onur,
Yeah, I already had that set to a location. I tried changing it and
restarting... but still see the same problem.
onur.aktas wrote:
>
>
> Hi,
>
> Can you please add the following to your conf/core-site.xml in Hadoop
> directory and try if it solves your problem?
> Change "/tmp/hado
Yeah.
Hadoop version 0.20.0
HBase version 0.20.0 RC1
onur.aktas wrote:
>
>
> You also use it with Hadoop 0.20, right?
>
>> Date: Wed, 5 Aug 2009 12:01:35 -0700
>> From: sonny_h...@hotmail.com
>> To: hbase-user@hadoop.apache.org
>> Subject: RE: hbase 0.20.0 Release Candidate 1 available fo
Does anyone know what could be causing this?
I've tried changing various properties, etc.; everything seems to give me the
same error. I don't even need to migrate my data, I just want to get a
clean version going.
Could my old HDFS be causing problems? I even changed the data directories,
and re
:48:43,026 INFO
org.apache.hadoop.hbase.master.RegionServerOperation: region set as
unassigned: -ROOT-,,0
any ideas?
llpind wrote:
>
> Does anyone know what could be causing this?
>
> I've tried changing various properties etc. everything seems to give me
> the same error
FYI for others. After setting the 'hfile.block.cache.size' property it
works. It was never required in previous versions, and isn't mentioned on
the "Getting Started" page. I think we need to add this if it is in fact a
requirement.
Thanks
--
View this message in context:
http://www.nabble.c
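For anyone hitting the same thing, the property goes in hbase-site.xml. The value below is only an illustration (a fraction of the heap given to the block cache), not a recommendation from this thread:

```xml
<property>
  <name>hfile.block.cache.size</name>
  <!-- fraction of heap used for the block cache; illustrative value -->
  <value>0.2</value>
</property>
```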
As some of you know, I've been playing with HBase on/off for the past few
months.
I'd like your take on some cluster setup/configuration setting that you’ve
found successful. Also, any other thoughts on how I can persuade usage of
HBase.
Assume: Working with ~2 TB of data. A few very tall tab
> -----Original Message-----
> From: llpind [mailto:sonny_h...@hotmail.com]
> Sent: T
Ryan & Eric thanks for your input.
Currently we're experimenting on a small cluster (5-8 nodes). I'm trying to
get statistics from a cluster of this size in order to estimate what impact
adding more nodes will have.
This is proving to be a hard task, since it's difficult with such a small number
of
Thanks Stack.
I will try mapred with more clients. I tried it without mapred using 3
clients doing Random Write operations; here was the output:
09/08/12 09:22:52 INFO hbase.PerformanceEvaluation: client-0 Start
randomWrite at offset 0 for 1048576 rows
09/08/12 09:22:52 INFO hbase.PerformanceEvaluati
page up on wiki. SequentialWrite was about same as
> RandomWrite. Check out the stats on hw up on that page and description of
> how test was set up. Can you figure where its slow?
>
> St.Ack
>
> On Wed, Aug 12, 2009 at 10:10 AM, llpind wrote:
>
>>
>> Thanks Stack
s? I'm unsure if this is a result of having a
small cluster? Please advise...
stack-3 wrote:
>
> Yeah, seems slow. In old hbase, it could do 5-10k writes a second going
> by
> performance eval page up on wiki. SequentialWrite was about same as
> RandomWrite. Check out the st
ted
> were
> for writes?
>
> St.Ack
>
>
> On Wed, Aug 12, 2009 at 1:15 PM, llpind wrote:
>
>>
>> Not sure why my performance is so slow. Here is my configuration:
>>
>> box1:
>> 10395 SecondaryNameNode
>> 11628 Jps
>> 10131 NameNode
s that could hurt bad. I'd avoid doing massive
> map-reduces with a large intermediate output on these machines.
>
> -ryan
>
> On Tue, Aug 11, 2009 at 4:14 PM, llpind wrote:
>>
>> Thanks for the link. I will keep that in mind.
>>
>> Yeah 256MB isn't m
HBASE-1603
Has this been fixed in .20? Thanks.
stack-3 wrote:
>
> On Wed, Aug 12, 2009 at 8:58 AM, llpind wrote:
>
>>
>> Playing with the HBase perfomanceEval Class, but it seems to take a long
>> time to run “sequentialWrite 2” (~20 minutes). If I simply emulat
', VERSIONS => '3', COMPRESSION => 'NONE', TTL => '2147483647',
BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}
===
Ryan Rawson wrote:
>
> Absolutely not. VM = low performance, no good.
>
What if you have a box with a lot of RAM and split it into VMs,
allocating enough RAM for each?
Let's say you have a box with 32GB of RAM and you put two VMs on it,
allocating 16GB each... will that be slow too?
ple
;
> Why do I think VM is low performance? I could ask you, why do you
> think that Virtualizing is as fast as native?
>
>
> On Wed, Aug 19, 2009 at 2:33 PM, llpind wrote:
>>
>>
>> Ryan Rawson wrote:
>>>
>>> Absolutely not. VM = low performance,
w to me writing too. Let me take a
>> look
>> St.Ack
>>
>>
>> On Thu, Aug 13, 2009 at 10:06 AM, llpind wrote:
>>
>>>
>>> Okay I changed replication to 2. and removed "-XX:NewSize=6m
Hey,
I'm trying to move a relational model to HBase, and would like some input.
Suppose i have constant stream of documents coming in, and I'd like to parse
these by a single word.
It makes sense to have this word as my rowkey, but I need a way to handle
duplicate word text. Kind of a dicition
>
>
> -----Original Message-----
> From: llpind [mailto:sonny_h...@hotmail.com]
> Sent: Monday, August 24, 2009 1:37 PM
> To: hbase-user@hadoop.apache.org
> Subject: HBase data model question
>
>
> Hey,
>
> I'm trying to move a relational model to HBase, and
Thanks, I think that's a good starting point. Along the lines I was thinking,
but I couldn't figure out how to get all for a given lemma (not by doc id,
WP). Looking at scanners again to see if I can pull that off.
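One way to pull every row for a given lemma with a composite key like lemma|docid is a range scan bounded by the key prefix. A minimal sketch of computing the exclusive stop row by incrementing the last byte of the prefix (the helper name is mine, not part of the HBase API):

```java
import java.util.Arrays;

public class PrefixRange {
    // Returns the smallest byte[] strictly greater than every key that
    // starts with 'prefix', by incrementing the last non-0xFF byte and
    // truncating everything after it.
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return new byte[0]; // all bytes were 0xFF: scan to end of table
    }

    public static void main(String[] args) {
        byte[] start = "electronics|".getBytes();
        byte[] stop = stopRowForPrefix(start);
        // '|' is 0x7C, so the stop row ends in '}' (0x7D)
        System.out.println(new String(stop)); // electronics}
    }
}
```

With these bounds you would hand `start` and `stop` to the scanner's start/stop row settings, so the scan covers exactly the keys sharing the lemma prefix.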
Can someone please point me to a XML input format example. I'm using .20
code. Thanks
Hello all,
Our company has been looking into Hadoop & HBase, and has decided to put up
a test cluster. I've got Hadoop (0.19.1) with HBase cluster up and running
on 4 boxes. Currently we store our data in an Oracle database; I'd like
ideas on how I can model a specific set of tables into HBase
Thanks. That's cool, I'm interested in indexes. Here is a classic
student/course example:
RDBMS
TBL_STUDENT: student_id, student_name, student_address
TBL_COURSES: course_id, student_id, course_type
TBL_COURSE_TYPES: course_type, course_desc
1st shot at HBase (1 HBase table):
Key: ST: exampl
Thanks Amandeep for your thoughts.
I mentioned the million range simply for this particular test case. Once we
are up and running and satisfied with the performance, we will start scaling
towards the target system. The target system will be somewhere between 2-20
terabytes (As far as I know)
Hey all,
I'm loading data from a DB into HBase. I have a single Java process
iterating over a ResultSet. After about 10,000 rows I do a BatchUpdate.
I've changed the heap size of both Hadoop & HBase to 2000.
Setup: 0.19.1. 1 box with master and secondary. 3 boxes with
HRegionServer.
Pro
Yes, as more data is loaded the 'used heap' fluctuates. (Note: box 1 now
fluctuates around 200, box 2 is still around 40; 3/3/2 regions.)
Thanks, I will update the 'hbase.regionserver.hlog.blocksize' property. Will I
need to restart the job if I turn on debugging?
The job is still going (@
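For reference, that property also lives in hbase-site.xml; the value below is only an illustration (64MB, the common HDFS block size at the time), not a value suggested in this thread:

```xml
<property>
  <name>hbase.regionserver.hlog.blocksize</name>
  <!-- 64MB; illustrative value only -->
  <value>67108864</value>
</property>
```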
Okay. Thanks.
This exception was thrown two times so far in the java process:
2009-05-15 12:42:28,080 DEBUG [main]
client.HConnectionManager$TableServers(966): Reloading region XX,,XX
location because regionserver didn't accept updates; tries=0 of max=10,
waiting=2000ms
2009-05-15 12:42:30,080
Hey All,
Finally finished. But didn't complete (~ between 24,770,000 - 25,760,000
records processed). Got the following exception before java upload app
died:
org.apache.hadoop.hbase.client.NoServerForRegionException: No server address
listed in .META. for region
I believe it has to do with "
@Ryan
Thanks. How do I turn off autoflushing? Also, do you have an example of
map/reduce which uses a ResultSet?
Yes. For a given row key, I have potentially millions of columns, so I was
doing 10K cols per row in the batch update.
Thanks. How can I handle millions of columns? .20? If not, how can I
handle a data model similar to:
http://www.nabble.com/HBase-Data-Model-td23511426.html
I see similar behavior in my small cluster. (1 master, 3 datanodes)
I am also planning on trying this RC version. I've tried various
configurations, and I continue to lose Regions with intensive writes. I
really hope something like this will work, because we are starting to
consider other opti
All writes seem to go to a single Region. I set 'hbase.hregion.max.filesize'
to 64MB. Here is the log from the Region that failed:
3125568883
2009-05-23 20:38:26,370 WARN org.apache.hadoop.hbase.util.Sleeper: We slept
55828ms, ten times longer than scheduled: 3000
2009-05-23 20:38:37,551 WARN
> On Sat, May 23, 2009 at 2:17 PM, llpind wrote:
>>
>> I see similar behavior in my small cluster. (1 master, 3 datanodes)
>>
>> I am also planning on trying this RC version. I've tried various
>> configurations, and I continue to lose Regions with inte
. "region loss". You might give it a go?
> St.Ack
>
> On Sun, May 24, 2009 at 10:33 AM, llpind wrote:
>>
>> Hey Stack, I'm using 0.19.1. Also, would like to know if I should check
>> out
>> the latest and try that or try the RC you mentioned above.
All,
I tailed one of the region servers that seems to be handling most of the
load. Here is what I saw:
2009-05-26 10:00:21,015 INFO org.apache.hadoop.hdfs.DFSClient: Could not
complete file
/hbase/log_192.168.240.175_1243356336827_60020/hlog.dat.1243357169473
retrying...
2009-05-26 10:00:21,4
hbase/log_192.168.240.175_1243356336827_60020/hlog.dat.1243358798947
=
stack-3 wrote:
>
> On Tue, May 26, 2009 at 8:41 AM, llpind wrote:
>>
>> Haven't tried the RC yet, but regions do get lost when i do inte
Finally failed between 7M-8M records. Below is the last tail output. The
other two region servers don't have much activity in the logs, but I can post
those if necessary.
===
2009-05-26 10:28:06,550 WARN org.apache.hadoop.hdfs.DFSClient: Error
Re
Yeah. DFS stays healthy. One of the nodes dies during massive load. I've
made the following changes:
1. upped FDs
2. block size to default. The only properties set in hbase-site.xml are
'hbase.rootdir' and 'hbase.master'.
3. turned debugging on for hbase and hbase dfs.
4. set auto flushing to fals
Here is the client output (it's still going. PROCESSING* messages are mine
which show after 1M records):
09/05/26 12:11:29 INFO ipc.HBaseClass: Retrying connect to server:
/192.168.240.175:60020. Already tried 0 time(s).
09/05/26 12:11:31 INFO ipc.HBaseClass: Retrying connect to server:
/192.168.
Here is process info after 6M:
HADOOP:
==
Cluster Summary
101 files and directories, 589 blocks = 690 total. Heap Size is 25.12 MB /
2.6 GB (0%)
Configured Capacity : 550.01 GB
DFS Used : 1.65 GB
Non DFS Used : 28.53
Where is the datanode log? The logs I have above are from the datanode
(hbase/logs). I don't see any logs from hadoop/logs directory.
Yeah for some reason it wasn't there yesterday. Here is part of the log from
one of the datanodes/region servers that went down:
==
2009-05-27 06:19:51,884 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResp
Andrew Purtell-2 wrote:
>
> Also the program that is pounding the cluster with inserts? What is the
> hardware spec of those nodes? How many CPUs? How many cores? How much RAM?
>
I'm currently running the client loader program from my local box.
2 Duo CPU P8400 @ 2.26GHz, 3.48GB of RA
Thanks stack. I'm upgrading, is there any reason I shouldn't just move to
.2?
stack-3 wrote:
>
> HBase 0.19.3 is now available for download:
> http://www.apache.org/dyn/closer.cgi/hadoop/hbase/
>
> This release addresses 14 issues found since the release of 0.19.2.
> See the release notes fo
> .2? You mean hbase 0.19.2? 0.19.3 is superior to hbase 0.19.2 in
> that it has extra bug fixes (see below for detail).
> St.Ack
>
>
> On Thu, May 28, 2009 at 9:38 AM, llpind wrote:
>>
>> Thanks stack. I'm upgrading, is there any reason I shouldn't
Thanks for all the help thus far. :)
stack-3 wrote:
>
> Are things working any better for you llpind?
> St.Ack
>
> On Thu, May 28, 2009 at 11:15 AM, llpind wrote:
>
>>
>> Sorry I didn't make that clear. I meant HBase version 0.20.1. It may be
>> ea
Just having trouble connecting to the giga network from my client upload
program (port 6). I can run the program from one of the Linux boxes.
llpind wrote:
>
> Hey Stack. Have not gotten to the data load yet. I added 5 more boxes
> (making 8 datanodes/region servers and 1 mast
Hey All,
I'm new to map/reduce & HBase. Sorry if this has been asked before. I
would like to run a map/reduce job on a Hadoop (0.19.1)/Hbase (0.19.3)
cluster. I have attached the modified version of SampleUploader &
DBInputFormat.
When I run the uploader program from my windows box (within
Here are the changes I've made.
- Moved to Gigabit ethernet
- upgraded to HBase 0.19.3
- Run upload program from master box (instead of local)
- Added 5 more nodes (Now w/8 region servers and 1 master. same boxes hold
hadoop datanodes).
The upload got much further (45M+ of 58M), but I still l
Andrew Purtell-2 wrote:
>
> Should we make the first entry on the Troubleshooting page a question
> about if HDFS is deployed on a gigabit network or not?
>
It would surely help me.
I'm unable to connect my client to the giga network now. Do I have to port
forward each IP/port?
Andrew Purtell-2 wrote:
>
> Is the patch for HADOOP- 4681 applied? See
> https://issues.apache.org/jira/browse/HADOOP-4681
>
>- Andy
>
>
I'm on Hadoop 0.19.1. It appears it has been applied to 0.19.2.
Where can I download 0.19.2?
This may have been asked before, but I was unable to find by googling.
Is there a way in HBase to get columns based on AND criteria?
e.g. give me all columns where row key = 'key1' AND row key = 'key2' etc.
Basically a type of where clause.
If not supported by API, how would I go about desig
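Absent server-side support for this kind of AND, one fallback is to fetch each row separately and intersect the column sets client-side. A pure-Java sketch of the intersection step (the maps stand in for fetched rows; all names here are mine, not HBase API):

```java
import java.util.*;

public class ColumnIntersect {
    // Given one column->value map per fetched row, returns the set of
    // column names present in every row.
    static Set<String> commonColumns(List<Map<String, String>> rows) {
        if (rows.isEmpty()) return Collections.emptySet();
        Set<String> common = new TreeSet<>(rows.get(0).keySet());
        for (Map<String, String> row : rows.subList(1, rows.size())) {
            common.retainAll(row.keySet()); // keep only columns seen so far
        }
        return common;
    }

    public static void main(String[] args) {
        Map<String, String> key1 = new HashMap<>();
        key1.put("colFam1:a", "1");
        key1.put("colFam1:b", "2");
        Map<String, String> key2 = new HashMap<>();
        key2.put("colFam1:b", "3");
        key2.put("colFam1:c", "4");
        System.out.println(commonColumns(Arrays.asList(key1, key2))); // [colFam1:b]
    }
}
```

This scales poorly for wide rows, which is why pushing the predicate into a server-side filter (as the reply below suggests) is preferable when it fits.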
he/hadoop/hbase/filter/RowFilterInterface.html
>
> http://hadoop.apache.org/hbase/docs/r0.19.0/api/org/apache/hadoop/hbase/client/HTable.html#getScanner(byte[][],%20byte[],%20long,%20org.apache.hadoop.hbase.filter.RowFilterInterface)
>
> JG
>
> On Wed, June 3, 2009 3:52 pm, llpind
I'm doing an insert operation using the Java API.
When inserting data where the rowkey changes often, the inserts seem to go
really slow.
Is there another method for doing inserts of this type (instead of
BatchUpdate)?
Thanks
ointoh.blogspot.com/2009/01/performance-of-hbase-importing.html
>
> -ryan
>
>
> On Sat, Jun 6, 2009 at 4:55 PM, llpind wrote:
>
>>
>> I'm doing an insert operation using the java API.
>>
>> When inserting data where the rowkey changes often, it seems
nd be sure to turn off auto flushing and use a
> reasonably
> sizable commit buffer. 1-12MB is probably ideal.
>
> i can push a 20 node cluster past 180k inserts/sec using this.
>
> On Sat, Jun 6, 2009 at 5:51 PM, llpind wrote:
>
>>
>> Thanks Ryan, well done.
>
size..?
Thanks.
stack-3 wrote:
>
> On Sat, Jun 6, 2009 at 6:13 PM, llpind wrote:
>
>>
>>
>> And it's inserting 1M in about 1 minute+ . Not the best still.
>
>
> What you looking for performance-wise?
>
> Is your cluster working for you now?
stack-3 wrote:
>
>
> Row key changes frequently?
>
> You mean you are filling rows with lots of columns and while you hold to a
> single row, insert is fast?
>
> Your uploader encounters keys randomly or are they sorted?
>
>
Yes. Given a large dataset with frequently changing row keys, t
Hi Erik,
Yes that sounds good. The type of calls I was looking for in the API.
Erik Holstad wrote:
>
> Hi Ilpind!
>
> On Mon, Jun 8, 2009 at 8:45 AM, llpind wrote:
>
>>
>> The insert works well for when I have a row key which is constant for a
>> long
Hey all,
I'm getting started with map/reduce jobs and figured I'd try the
PerformanceEvaluation test program first.
After adding HBase jar files to HADOOP_CLASSPATH in hadoop-env.sh, I issue
the following command:
hadoop19/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation
sequentialWrite 4
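That command assumes hadoop can see the HBase classes. A sketch of the classpath setup (the paths are placeholders, not taken from this thread):

```shell
# Placeholder install locations; adjust to wherever HBase actually lives.
HBASE_HOME=/opt/hbase-0.19.3
export HADOOP_CLASSPATH="$HBASE_HOME/hbase-0.19.3.jar:$HBASE_HOME/conf"
echo "$HADOOP_CLASSPATH"
# then: hadoop19/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 4
```

Including the conf directory matters so the MR tasks can find hbase-site.xml, not just the classes.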
hild.main(Child.java:158)
stack-3 wrote:
>
> Look at why the job failed in the jobtracker UI -- usually on port 50030.
> Looks like your job launched fine. You have the conf dir on the
> HADOOP_CLASSPATH so the MR job can find hbase?
> St.Ack
>
> On Tue, Jun 9, 2009 at
stack-3 wrote:
>
> You have the conf dir on the
> HADOOP_CLASSPATH so the MR job can find hbase?
> St.Ack
>
>
Yeah. I have conf in my HADOOP_CLASSPATH on all boxes in the cluster. I
have a shared directory for all slave boxes which holds the hadoop config.
Each box has hbase 0.19.3.
Ack
>
> On Tue, Jun 9, 2009 at 10:05 AM, llpind wrote:
>
>>
>> Hmm...jobtracker UI has the following:
>>
>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
>> contact
>> region server 192.168.0.195:60020 for region TestTable,,
like so:
hadoop19/bin/hadoop ColumnCountMapReduce inputTableName
stack-3 wrote:
>
> Thats a CLASSPATH issue. For sure the hbase jars are out on all nodes of
> your cluster? For sure the paths are correct?
> St.Ack
>
> On Tue, Jun 9, 2009 at 10:28 AM, llpind wrote:
>
this environment variable? (if you are using this
> variable name).
> St.Ack
>
> On Tue, Jun 9, 2009 at 10:47 AM, llpind wrote:
>
>>
>> Yeah that was my first thought. rowcounter and PerformanceEvaluation
>> work.
>> here is my HADOOP_CLASSPATH:
>> ==
I made an executable job with all the jars in a lib folder with each listed
in the MANIFEST file under Class-Path. Looks like this:
Manifest-Version: 1.0
Class-Path: lib/ojdbc14.jar lib/commons-cli-2.0-SNAPSHOT.jar lib/comm
ons-httpclient-3.0.1.jar lib/commons-loggin-1.0.4.jar lib/commons-l
Hey Andy,
Yeah I eventually did that instead. It just bothers me why it doesn't work
otherwise. Anyway, thanks to all
Andrew Purtell-2 wrote:
>
> For what it's worth, I deploy HBase jars into the Hadoop library directory
> so I don't have to deal with this. Way back I had classpath problems
}
c.collect(k, bu);
}
}
It runs the map/reduce, but I get nothing in my output table.
Thanks.
llpind
m1:",
>
> I do not think the * works like an all option; just leave it blank (colFam1:)
> and it will give all results
>
> Billy
>
>
> "llpind" wrote in
> message news:23952252.p...@talk.nabble.com...
>>
>> Hi again,
>>
>> I need some
{COLUMNS => ['c1', 'c2'], LIMIT => 10, \
> STARTROW => 'xyz'}
>
> so something like
> scan 'tablename', {COLUMNS => ['col1']}
>
> That will spit out data if there is any
> I think you might be able
Also,
I think what we want is a way to wildcard everything after colFam1: (e.g.
colFam1:*). Is there a way to do this in HBase?
This is assuming we don't know the column names and want them all.
llpind wrote:
>
> Thanks.
>
> Yea I've got that colFam for sure in the HBase
:84:in `main'
from /home/hadoop/hbase193/bin/../bin/hirb.rb:346:in `scan'
===
Is there an easy way around this problem?
Billy Pearson-2 wrote:
>
> Yes that's what scanners are good for they will return all t
in 0.20, but I don't know if it ended up in
> 0.19.x anywhere.
>
>
> On Wed, Jun 10, 2009 at 2:14 PM, llpind wrote:
>
>>
>> Okay, I think I got it figured out.
>>
>> although when scanning large row keys I do get the following exception:
>
cambridgemike wrote:
>
>
> -tried moving hbase-0.19.2.jar to the hadoop/lib folder of all the slave
> machines.
>
>
Hmm, that's weird. Moving the hbase jars solved my issue. Go to the job
tracker UI, look at which machine is throwing the exception, and make
sure you have the hbase jars in
n our problem in more detail.
stack-3 wrote:
>
> On Wed, Jun 10, 2009 at 4:52 PM, llpind wrote:
>
>>
>> Thanks. I think the problem is I have potentially millions of columns.
>>
>
>> where a given RowResult can hold millions of columns to values. That
Sorry, I forgot to mention: the overflow then spills into new row keys per
10,000 column entries (or some other split number).
llpind wrote:
>
>
> When is the plan for releasing .20? This particular issue is really
> important to us.
>
> Stack, I also have another que
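The spill scheme described above (a new row key every N columns) can be sketched as a small key helper. The separator and names are mine, and the split size is whatever fits the data:

```java
public class ShardedKey {
    // Maps (logicalRow, columnIndex) to a physical row key so that each
    // physical row holds at most 'perRow' columns; shard 0, 1, 2, ...
    static String shardKey(String logicalRow, long columnIndex, long perRow) {
        return logicalRow + "#" + (columnIndex / perRow);
    }

    public static void main(String[] args) {
        System.out.println(shardKey("ELECTRONICS|tv", 0, 10000));     // ELECTRONICS|tv#0
        System.out.println(shardKey("ELECTRONICS|tv", 25000, 10000)); // ELECTRONICS|tv#2
    }
}
```

Reading a logical row back then means scanning the key range sharing the logicalRow prefix, since the shards sort adjacently.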
good idea but you might be able to redesign your layout of
> the table
> using a different key than the current one; worth brainstorming.
>
> Billy
>
>
>
> "llpind" wrote in message
> news:23975432.p...@talk.nabble.com...
>
> Sorry I forgot to mentio
> Good luck!
>
> On Jun 11, 2009 11:44 AM, "Billy Pearson"
> wrote:
>
> That might be a good idea but you might be able to redesign your layout of
> the table
> using a different key than the current one; worth brainstorming.
>
> Billy
>
>
>
counter = 0;
}
}
==
What am I doing wrong?
The extract is simply getting the TYPE|VALUE only.
What exactly do I have in the Iterator at this point?
Thanks
llpind wrote:
>
> If i have a tall table, what is returned i
s the value. The in
> the reduce you count how many values there are, then do the batch update
> as
> you have below.
>
>
>
> On Fri, Jun 12, 2009 at 10:04 AM, llpind wrote:
>
>>
>> I believe my map is collecting per row correctly, but reduce doesn't
Sweet thanks stack. I'll be upgrading as well. My client program takes far
too long to simply open a scanner. This problem appears to have been
addressed in .20 (https://issues.apache.org/jira/browse/HBASE-1118).
In order to skip reloading the data I do the following:
1. Shutdown hadoop/hbas
9.x datafiles yet. That is due for the RC a
> few weeks away.
>
> If you can reload your data, that'd be best for now.
>
> Good luck!
> -ryan
>
> On Wed, Jun 17, 2009 at 2:34 PM, llpind wrote:
>
>>
>> Sweet thanks stack. I'll be upgrading as w
Could the .19 install be messing something up? Thanks
stack-3 wrote:
>
> On Wed, Jun 17, 2009 at 2:34 PM, llpind wrote:
>
>>
>> Sweet thanks stack. I'll be upgrading as well. My client program takes
>> far
>> too long to simply open a scanner. This problem
Thanks, I did finally get it going, but only when I moved to a different
master box. Kind of weird; must have been something quirky in my config.
When I start HBase .20, everything appears to start up, but I can't access
the Web UI, and when I do stop-hbase.sh it just hangs with "stopping
maste
Hey all,
HBase appears to startup, but i'm unable to get to UI and I get the
following error when trying to create table in shell:
java.io.IOException:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
at
org.apache.hadoop.
. When I get a
bit of time, I will try to dive into the HBase code and try to help out as
well. Will keep you posted. :)
stack-3 wrote:
>
> On Thu, Jun 18, 2009 at 2:35 PM, llpind wrote:
>
>>
>> Thanks I did finally get it going, but only when I moved to a diffe
Hey All,
I've got HBase 0.20.0 Alpha installed and running. When i write a class,
and try to run it, I get:
09/06/22 10:39:48 FATAL zookeeper.ZooKeeperWrapper: Fail to read properties
from zoo.cfg
java.io.IOException: zoo.cfg not found
at
org.apache.hadoop.hbase.zookeeper.HQuorumPeer.pa
Yeah, I tried putting it in the executable jar, still the same exception. Might
be a Java issue. I've got it in the src folder. I tried putting it in the same
package as the main class as well.
Erik Holstad wrote:
>
> Hi Ilpind.
>
> The jar that you are running, does it have access to the to zoo.cfg?
That was the issue. I had it pointing directly to zoo.cfg. Thanks all
stack-3 wrote:
>
> You have your conf directory on your CLASSPATH?
> St.Ack
>
> On Mon, Jun 22, 2009 at 11:11 AM, llpind wrote:
>
>>
>> Yea I tried putting it in the executable jar, s
Is there an example of map/reduce using .20 yet?
With a Map/Reduce program that used to work, I now get the following error:
09/06/23 09:31:28 INFO mapred.JobClient: Running job: job_200906221630_0003
09/06/23 09:31:29 INFO mapred.JobClient: map 0% reduce 0%
09/06/23 09:37:57 INFO mapred.JobClient:
One other question: do I list all my servers in the zoo.cfg? I'm not sure what
role ZooKeeper plays in map/reduce; please explain.
p.mapred.TaskRunner: Runnning
cleanup for the task
Looks like the ScannerCallable is coming into getRegionServerWithRetries as
Null?
stack-3 wrote:
>
> On Tue, Jun 23, 2009 at 10:07 AM, llpind wrote:
>
>>
>> 1 other question. Do I list all my servers
> This looks like HBASE-1560:
> https://issues.apache.org/jira/browse/HBASE-1560
>
> - Andy From: llpind
>
> To: hbase-user...
> Sent: Tuesday, June 23, 2009 12:04:59 PM
> Subject: Re: Running programs under HBase 0.20.0 alpha
>
> Oka
The map/reduce job finished.
The input table has ~ 50 million records with row key in form
(TYPE|VALUE|ID), the output table held row key (TYPE|VALUE) with a single
column family which held a value of count (the # of IDs in TYPE|VALUE).
Here is the output for the job:
09/06/24 09:10:44 INFO
Hey,
I'm doing the following to get a scanner on a tall table:
Scan linkScan = new Scan();
linkScan.addColumn(Bytes.toBytes("type"), Bytes.toBytes("ELECTRONICS"));
ResultScanner scanner = tblEntity.getScanner(linkScan);
for (Result linkRowResult : scanner) {
String row = Bytes.toString(linkRowResult.getRow());
toBytes("ELECTRONICS"));
linkScan.setStartRow (Bytes.toBytes(e + "|"));
linkScan.setStopRow (Bytes.toBytes(e + " "));
ResultScanner scanner = tblEntity.getScanner(linkScan);
for (Result linkRowResult : scanner ) {
String row = Bytes.toString(linkRow