Thanks, Stack.
I sent out the draft before the mail was finished, sorry.
On Wed, Feb 24, 2010 at 2:31 PM, Stack wrote:
> What version are you on? There is no hbase.master in hbase 0.20.x.
> It's a vestige of 0.19 HBase. Now you specify the zookeeper ensemble
> you want to connect to. It knows who the master is.
Thanks Paul!
- Original Message
> From: Paul Smith
> To: hbase-user@hadoop.apache.org; hbase-...@hadoop.apache.org
> Sent: Tue, February 23, 2010 10:29:00 PM
> Subject: Hbase has been Mavenized
>
> Just a quick cross post to mention that Hbase trunk has now been migrated to
> a Maven build system.
What version are you on? There is no hbase.master in hbase 0.20.x.
It's a vestige of 0.19 HBase. Now you specify the zookeeper ensemble
you want to connect to. It knows who the master is.
Regarding multi-master HBase, see
http://wiki.apache.org/hadoop/Hbase/MultipleMasters (linked off the main
wiki page).
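For example, a client's hbase-site.xml then only needs to name the
ensemble; a minimal sketch (the hostnames below are placeholders):

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>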
St.Ack
Just a quick cross post to mention that Hbase trunk has now been migrated to a
Maven build system.
You can find some getting started info on the wiki here:
http://wiki.apache.org/hadoop/Hbase/MavenPrimer
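For anyone wanting to try the new build, the standard Maven goals should
be all you need (take the exact goals as my assumption; the primer above
is the authoritative reference):

  mvn clean install          # compile everything and run the test suite
  mvn -DskipTests install    # build without running the tests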
Over time, some more tweaks and directory changes may be flushed out.
If you have any questions
Hello, everyone.
I found that we can leave hbase.master unset in hbase-site.xml and
still have an HBase cluster running OK.
Is there any mechanism, like auto-election, that makes one node the master?
--
best wishes.
steven
Vincent,
I don't expect that either; can you give us more info about your test
environment?
Thx,
J-D
On Tue, Feb 23, 2010 at 10:39 AM, Vincent Barat
wrote:
> Hello,
>
> I did some testing to figure out which compression algo I should use for my
> HBase tables. I thought that LZO was a good candidate, but it appears that
> it is the worst one.
Well, the cells themselves should not be too big: just a few Strings
(URL length) or ints at most per cell.
It's just that there could be 10M (or maybe even 100M) cells per row.
I'm on the latest 0.20.3.
I'll try to find the big record as you suggested earlier and see what
it looks like.
Thanks
On Tue, Feb 23, 2010 at 10:40 AM, Bluemetrix Development
wrote:
>
> If this is the case though, how big is too big?
Each cell and its coordinates are read into memory. If there is not
enough memory, then OOME.
> Or does it depend on my disk/memory resources?
> I'm currently using dynamic column qualifiers,
Ok, this probably explains it.
I've been trying large sets of data, so I'm sure there are some HFiles
that are too big.
Data is all test data, so no worries about missing or corrupt data for now.
If this is the case though, how big is too big? Or does it depend on my
disk/memory resources?
I'm currently using dynamic column qualifiers,
Hello,
I did some testing to figure out which compression algo I should use
for my HBase tables. I thought that LZO was a good candidate, but
it appears that it is the worst one.
I used one table with 2 families and 10 columns. Each row has a
total of 200 to 400 bytes.
Here are my results:
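For reference (these are not Vincent's numbers), compression is chosen
per column family; a minimal HBase shell sketch, with made-up table and
family names:

  create 'mytable', {NAME => 'cf1', COMPRESSION => 'LZO'},
                    {NAME => 'cf2', COMPRESSION => 'GZ'}

Note that LZO additionally requires the native LZO libraries to be
installed on every node before the regionservers can open those families.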
Looks like you have a corrupt record (a record that has had a bit
flipped or so) and you are trying to allocate memory to accommodate this
oversized record, or else you managed to write something really big out
to an hfile (the hfile doesn't look that big though).
Try iterating over the file and see what it looks like.
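If your 0.20 build ships the HFile command-line inspector (I believe it
does, but treat the exact flags as an assumption), something like the
following will print each key/value plus the file metadata so you can
spot the oversized record:

  hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p -m \
    -f hdfs:///hbase/UserData_0216/1765145465/<family>/<storefile>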
Thanks.
I've tried heap at both 1G and 2G for both hadoop and hbase and got
the same results either way.
Here's the lsr:
http://pastebin.com/YXcsSFc4
On Tue, Feb 23, 2010 at 12:09 PM, Jean-Daniel Cryans
wrote:
> Please run an HDFS lsr on /hbase/UserData_0216/1765145465/ and pastebin
> the result.
Please run an HDFS lsr on /hbase/UserData_0216/1765145465/ and pastebin
the result.
Also consider using a bigger heap size than 1GB (change that in hbase-env.sh).
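Concretely, that would be something like (the heap size below is just an
example):

  # recursive listing of the region's files
  hadoop fs -lsr /hbase/UserData_0216/1765145465/

  # in conf/hbase-env.sh, give the daemons a bigger heap, in MB
  export HBASE_HEAPSIZE=2000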
J-D
On Tue, Feb 23, 2010 at 7:34 AM, Bluemetrix Development
wrote:
> Hi,
> When trying to restart HBase, I'm getting the following in the regionservers:
Hi,
When trying to restart HBase, I'm getting the following in the regionservers:
http://pastebin.com/GPw6yt2G
and cannot get HBase fully restarted.
I'm on the latest version 0.20.3.
Where would I start digging to see what is causing this?
Thanks
I have got it running stably under OpenSolaris, but I have no
benchmarks for it, so I cannot help you there.
One thing to note is that some parts of the Hadoop infrastructure have
native C implementations which would need to be compiled on Solaris
to gain the performance benefits they provide.
Hi,
Is there any way I can get the max value through an index column,
like SQL: Select Max(column1) from tableName where indexColumn = 'value1'?
Thanks
Fleming Chiu(邱宏明)
707-6128
y_823...@tsmc.com
Go meat-free on Mondays to save the Earth (Meat Free Monday Taiwan)
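HBase 0.20 has no server-side MAX(), so one approach is to scan the rows
that match the index column and keep the maximum on the client. A rough
sketch, where the family name 'f' and the assumption that column1 holds
8-byte longs are both made up:

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.CompareFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MaxColumnValue {
    public static void main(String[] args) throws IOException {
      HTable table = new HTable(new HBaseConfiguration(), "tableName");
      Scan scan = new Scan();
      // fetch only the two columns involved
      scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("indexColumn"));
      scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("column1"));
      // server-side filter: keep rows where indexColumn == 'value1'
      // (rows that lack indexColumn are not filtered out by default)
      scan.setFilter(new SingleColumnValueFilter(
          Bytes.toBytes("f"), Bytes.toBytes("indexColumn"),
          CompareFilter.CompareOp.EQUAL, Bytes.toBytes("value1")));
      long max = Long.MIN_VALUE;
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          byte[] v = r.getValue(Bytes.toBytes("f"), Bytes.toBytes("column1"));
          if (v != null) {
            max = Math.max(max, Bytes.toLong(v)); // assumes 8-byte long values
          }
        }
      } finally {
        scanner.close();
      }
      System.out.println("max = " + max);
    }
  }

If the table was built with the indexed contrib, its scanner could
presumably narrow the rows first, but the client-side max step stays
the same.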