Hey St.Ack
Thanks for the clarifications.
For 4, replay of the log: (please correct me if I'm wrong)
So the RS will:
a. split the log via HLogSplitter, concurrently writing the log content to
other log files under each region,
b. replay those smaller logs into its own memstore and own logs (is it done
when the
[Apparently I'm a little late with this]
+1 from me, with a +1 for a soon-following 0.94.1 release.
On a 5-node cluster of vanilla hbase-0.94.0rc3, recompiled for and running on
top of cdh4b2's variant of hadoop 0.23.1:
- Ran TestLoadAndVerify (bigtop), TestAcidGuarantees, hbck, and
PerformanceEvaluation
See Inline..
Regards
Ram
-Original Message-
From: Jonathan Hsieh [mailto:j...@cloudera.com]
Sent: Wednesday, May 16, 2012 9:35 PM
To: dev@hbase.apache.org; lars hofhansl
Subject: Re: ANN: The third hbase 0.94.0 release candidate is available
for download
[Apparently I'm a little
On Wed, May 16, 2012 at 12:25 AM, Mikael Sitruk mikael.sit...@gmail.com wrote:
Hey St.Ack
Thanks for the clarifications.
For 4, replay of the log: (please correct me if I'm wrong)
So the RS will:
a. split the log via HLogSplitter, concurrently writing the log content to
other log files under each region,
(Moved this conversation off the vote thread)
On Sat, May 12, 2012 at 3:14 PM, Mikael Sitruk mikael.sit...@gmail.com wrote:
So in case an RS goes down, the master will split the log and reassign the
regions to other RSs, then each RS will replay the log; during this step the
regions are
Hi
We (including the test team here) tested the 0.94 RC and carried out various
operations on it:
Puts, Scans, and all the restart scenarios (using kill -9 as well). Even the
encoding features were tested, and we carried out our basic test scenarios.
Seems to work fine.
Did not test rolling restart with 0.92. By
Hi
One small observation after giving +1 on the RC.
The WAL compression feature causes OOMEs and Full GCs.
The problem is, if we have 1500 regions I need to create recovered.edits
for each of the regions (I don't have much data in the regions, ~300MB).
Now when I try to build the
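(A rough back-of-the-envelope of why the split can blow up memory when every
region gets its own compressed recovered.edits writer. The dictionary capacity
and per-entry byte counts below are illustrative assumptions, not measured
HBase values.)

public class WalCompressionMemorySketch {
    public static void main(String[] args) {
        long regions = 1500;           // from the scenario above
        long dictEntries = 32 * 1024;  // assumed dictionary slots per writer
        long bytesPerEntry = 256;      // assumed key bytes + object overhead
        long perWriter = dictEntries * bytesPerEntry;   // ~8 MB per open writer
        long total = regions * perWriter;               // ~11 GB if all are open
        System.out.printf("per-writer ~%d MB, worst case ~%d GB%n",
            perWriter / (1024 * 1024), total / (1024L * 1024 * 1024));
    }
}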
Thanks for sharing this information, Ramkrishna.
Dictionary WAL compression makes replication not functional - see details
in https://issues.apache.org/jira/browse/HBASE-5778
I would vote for the removal of Dictionary WAL compression until we make it
more robust and much less memory-hungry.
It's off by default. I'd say we just call it an experimental feature in the
release notes.
Are you saying we should have another RC?
There was other stuff that went into 0.94 after I cut the RC, so that would
potentially need to stabilize if I cut a new RC now.
-- Lars
On Mon, May 14, 2012 at 8:15 AM, lars hofhansl lhofha...@yahoo.com wrote:
It's off by default. I'd say we just call it an experimental feature in the
release notes.
+1 for calling it experimental in notes and docs, and not removing it.
Replication was in an experimental state for quite some
+1 on adding release notes. A new RC is not required, and my intention was not
to ask for a new RC either. Just documentation on this would be enough.
Regards
Ram
-Original Message-
From: Todd Lipcon [mailto:t...@cloudera.com]
Sent: Monday, May 14, 2012 8:48 PM
To: dev@hbase.apache.org;
OK, I'll change my tactic :)
If there are no -1's by Wed, May 16th, I'll release RC4 as 0.94.0.
-- Lars
- Original Message -
From: Stack st...@duboce.net
To: lars hofhansl lhofha...@yahoo.com
Cc: dev@hbase.apache.org dev@hbase.apache.org
Sent: Friday, May 11, 2012 10:39 PM
Subject:
Thanks for the clarifications St.Ack.
Still I have some questions regarding point 3 in the scenario discussed - when
a region is offline, does it mean that client operations are not possible on it
(not even reads)?
In case a second master is up (in an environment with multiple masters), I
presume all this occurs
On Sat, May 12, 2012 at 10:14 AM, Mikael Sitruk mikael.sit...@gmail.com wrote:
Thanks for the clarifications St.Ack.
Still I have some questions regarding point 3 in the scenario discussed - when
a region is offline, does it mean that client operations are not possible on it
(not even reads)?
Correct.
In
Hi St.Ack
You asked for it :-)
So in case an RS goes down, the master will split the log and reassign the
regions to other RSs, then each RS will replay the log; during this step the
regions are unavailable, and clients will get exceptions.
1. How will the master choose an RS to assign a region to?
2.
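(For illustration, a toy sketch of the split-then-replay flow described above.
It uses plain Java collections rather than the real HLogSplitter/HRegion
classes; the WalEdit/ReplaySketch names are made up for the example.)

import java.util.*;

// A single WAL edit, tagged with the region it belongs to (illustrative only).
class WalEdit {
    final String region, row, value;
    WalEdit(String region, String row, String value) {
        this.region = region; this.row = row; this.value = value;
    }
}

public class ReplaySketch {
    public static void main(String[] args) {
        // 1. The dead RS's single WAL: edits from many regions interleaved.
        List<WalEdit> wal = Arrays.asList(
            new WalEdit("regionA", "row1", "v1"),
            new WalEdit("regionB", "row9", "v2"),
            new WalEdit("regionA", "row3", "v3"));

        // 2. "Split": group the edits per region, the way the splitter writes
        //    recovered.edits files under each region's directory.
        Map<String, List<WalEdit>> recoveredEdits = new HashMap<>();
        for (WalEdit e : wal) {
            recoveredEdits.computeIfAbsent(e.region, k -> new ArrayList<>()).add(e);
        }

        // 3. "Replay": each newly assigned RS applies its region's recovered
        //    edits to the memstore before bringing the region online.
        for (Map.Entry<String, List<WalEdit>> entry : recoveredEdits.entrySet()) {
            SortedMap<String, String> memstore = new TreeMap<>();
            for (WalEdit e : entry.getValue()) {
                memstore.put(e.row, e.value);
            }
            System.out.println(entry.getKey() + " memstore after replay: " + memstore);
        }
    }
}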
Stack hi
Sorry for not being precise enough.
The point is that I'm trying to check the impact of HA scenarios. One of
them is when the master goes down.
It is true that the Master is not in the critical path of read/write
unless (please correct me if I'm wrong):
1. New clients are trying to
On Fri, May 11, 2012 at 3:24 AM, Mikael Sitruk mikael.sit...@gmail.com wrote:
Sorry for not being precise enough.
The point is that I'm trying to check the impact of HA scenarios. One of
them is when the master goes down.
It is true that the Master is not in the critical path of read/write
Thanks Stack.
So that's two +1 (mine doesn't count I guess). And no -1.
I talked to Ram offline, and we'll fix HBase with Hadoop 2.0.0 in a 0.94 point
release.
I would like to see a few more +1's before I declare this the official 0.94.0
release.
Thanks.
-- Lars
On Fri, May 11, 2012 at 10:26 PM, lars hofhansl lhofha...@yahoo.com wrote:
Thanks Stack.
So that's two +1 (mine doesn't count I guess). And no -1.
Why doesn't yours count? Usually the RM's does, if they +1 it. So,
that'd be 3x +1 plus a non-binding +1.
I talked to Ram offline, and we'll fix
Hi Devs
I discussed this with Lars, thought it would be better to get the opinion of
the dev list. It is regarding the 0.94 RC
HBASE-5964 seems to be needed if we want to run with 0.94.0 and hadoop 2.0
(latest). Also the Guava jar related issue HBASE-5955 may also be needed
i.e HBASE-5739 patch
In my opinion this can be handled in a point release. It's important though not
critical, and it's not new functionality.
Best regards,
- Andy
On May 10, 2012, at 8:41 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Hi Devs
I discussed this with Lars, thought it would be
On Thu, May 10, 2012 at 8:41 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
But the older version has a file handle leak on the DN side, for which Todd has
provided a fix on the hadoop side. Refer to HDFS-3359. We can easily reproduce
this problem if we have a multiple column family table
Hi
I will let you know which build exactly produced that problem. When we
upgraded the build to that of May 8th or May 7th (the exact date I forgot), we
did not get that problem.
But one thing is that flushes were very frequent and the system was heavily loaded.
Regards
Ram
From: t...@cloudera.com
On Tue, May 1, 2012 at 4:26 PM, lars hofhansl lhofha...@yahoo.com wrote:
The third 0.94.0 RC is available for download here:
http://people.apache.org/~larsh/hbase-0.94.0-rc3/
(My gpg key is available from pgp.mit.edu. Key id: 7CA45750)
I'm +1 on this RC going out as 0.94.0. On the hadoop
Stack, do you have a latency graph for the time the RS and HMaster were
down? (Did you see a big variance in latency?)
BTW, is this an MR/scan test, or do you also have updates and deletes?
Thanks
Mikael.S
On Fri, May 11, 2012 at 7:15 AM, Stack st...@duboce.net wrote:
On Tue, May 1, 2012 at 4:26
On Thu, May 10, 2012 at 10:04 PM, Mikael Sitruk mikael.sit...@gmail.com wrote:
Stack, do you have a latency graph for the time the RS and HMaster were
down? (Did you see a big variance in latency?)
Not sure I follow Mikael. The master is not in the read/write path so
its restart wouldn't
Gentle reminder to please provide your votes.
(and actually this is the 4th RC, rather than the third)
-- Lars
From: lars hofhansl lhofha...@yahoo.com
To: hbase-dev dev@hbase.apache.org
Sent: Tuesday, May 1, 2012 4:26 PM
Subject: ANN: The third hbase 0.94.0 release
Probably not. The on-cluster numbers are just so close that it looks
like the only large differences in perf were on standalone installs. Since
everything that talks about standalone says it's not to be used
as a basis for performance evaluation, I think things are fine. 0.94 looks
14 machines
1 Master (nn 2nn jt hmaster)
13 slaves (dn tt rs) - 4 HDDs each, with 2x Quad Core Intel w/HT
Sampling of Configs:
-Xmx10G -XX:CMSInitiatingOccupancyFraction=75 -XX:NewSize=256m
-XX:MaxNewSize=256m
hbase.hregion.memstore.flush.size = 2147483648
hbase.hregion.max.filesize = 2147483648
http://www.scribd.com/eclark847297/d/92715238-0-94-0-RC3-Cluster-Perf
On Fri, May 4, 2012 at 7:42 PM, Ted Yu yuzhih...@gmail.com wrote:
0.94 also has LoadTestTool (from FB)
I have used it to do some cluster load testing.
Just FYI
On Fri, May 4, 2012 at 3:14 PM, Elliott Clark
Sorry, everything is in elapsed time, as reported by "Elapsed time in
milliseconds". So higher is worse.
The standard deviation on 0.92.1 writes is 4,591,384 so Write 5 is a little
outside of 1 std dev. Not really sure what happened on that test, but it
does appear that PE is very noisy.
On Mon,
Elliot, any plan on running the same on 0.90.x?
Enis
On Mon, May 7, 2012 at 11:07 AM, Elliott Clark ecl...@stumbleupon.comwrote:
Sorry, everything is in elapsed time, as reported by "Elapsed time in
milliseconds". So higher is worse.
The standard deviation on 0.92.1 writes is 4,591,384 so
So I got 0.94.0rc3 up on a cluster and tried to break it, killing masters and
killing RSs. Everything seems good. hbck reports everything is good. And
all my reads succeed.
I'll post cluster benchmark numbers once they are done running. Should
only be a couple more hours of pe runs.
Looks great
Hi guys
Looking at the posted slides/pictures for the benchmark, the
following intrigues me:
1. The recordcount is only 100,000
2. workloada is: read 50%, update 50%, with a zipfian distribution - even with
a 5M operation count, the same keys are updated again and again.
3. heap size 10G
Therefore it
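(To put a number on point 2 above - a minimal sketch of how concentrated a
zipfian access pattern is over 100,000 keys. The zipfian exponent of 0.99 is
an assumption based on the usual YCSB default, and the class name is made up.)

public class ZipfConcentrationSketch {
    public static void main(String[] args) {
        int records = 100_000;
        double s = 0.99;                 // assumed zipfian constant
        int hotSet = records / 100;      // the hottest 1% of keys
        double total = 0, hot = 0;
        for (int rank = 1; rank <= records; rank++) {
            double w = 1.0 / Math.pow(rank, s);
            total += w;
            if (rank <= hotSet) hot += w;
        }
        double share = hot / total;
        // With 5M operations, roughly this fraction lands on the top 1% of keys.
        System.out.printf("top 1%% of keys get ~%.0f%% of ops (~%.1fM of 5M)%n",
            share * 100, share * 5.0);
    }
}

So with only 100,000 records, a large share of the 5M updates lands on a small
hot set - consistent with the concern that the same keys are updated again and again.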
I agree it was just a micro benchmark, with no guarantee that it relates to
the real world. With it just being standalone, I didn't think anyone should take
the numbers as 100% representative. Really I was just trying to shake out
any weird behaviors and the fact that we got a big speed up was
I ran some tests of local filesystem YCSB. I used the 0.90 client for
0.90.6. For the rest of the tests I used 0.92 clients. The results are
attached.
0.90 -> 0.94.0RC3: 13% faster
0.92 -> 0.94.0RC3: 50% faster
This seems to be a pretty large performance improvement. I'll run some
tests on a
Elliot:
Thanks for the report.
Can you publish results somewhere else?
Attachments were stripped off.
On Wed, May 2, 2012 at 2:59 PM, Elliott Clark ecl...@stumbleupon.comwrote:
I ran some tests of local filesystem YCSB. I used the 0.90 client for
0.90.6. For the rest of the tests I used 0.92
Sure, sorry about that.
http://imgur.com/waxlS
http://www.scribd.com/eclark847297/d/92151092-Hbase-0-94-0-RC3-Local-YCSB-Perf
On Wed, May 2, 2012 at 3:01 PM, Ted Yu yuzhih...@gmail.com wrote:
Elliot:
Thanks for the report.
Can you publish results somewhere else?
Attachments were stripped
I am surprised to see 0.92.1 exhibit such an unfavorable performance profile.
Let's see whether cluster testing gives us similar results.
On Wed, May 2, 2012 at 3:07 PM, Elliott Clark ecl...@stumbleupon.comwrote:
Sure, sorry about that.
http://imgur.com/waxlS
+1 from me, I took it for a spin on the local filesystem with some YCSB
load.
Here is my signature on the non-secure tarball.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
iEYEABECAAYFAk+guTIACgkQXkPKua7Hfq9YSQCeMnCQ4XFqLjw+PF8IXNPDug+t
h90AoJ+q4YSg4JbfiCmaXenadWSRU1of
=CdfZ
Thanks Todd.
I agree with doing source code releases going forward.
For that, would it be sufficient to just vote against an SVN tag?
Tarballs can then be pulled straight from that tag.
-- Lars
- Original Message -
From: Todd Lipcon t...@cloudera.com
To: dev@hbase.apache.org; lars
I hate to do this but I think this sinks this rc:
https://issues.apache.org/jira/browse/HBASE-5861
Jon.
On Wed, Apr 18, 2012 at 4:20 PM, lars hofhansl lhofha...@yahoo.com wrote:
The third 0.94.0 RC is available for download here:
http://people.apache.org/~larsh/hbase-0.94.0-rc2/
I signed
On Mon, Apr 23, 2012 at 10:46 AM, Jonathan Hsieh j...@cloudera.com wrote:
I hate to do this but I think this sinks this rc:
https://issues.apache.org/jira/browse/HBASE-5861
Party-pooper!
I put up some questions in the issue Jon.
Good on you,
St.Ack
No problem. :)
Better to find it now than later. I'll go through the fixed 0.94.1 issues and
pull them back into 0.94.0.
Once we have this fixed I'll tag a new release.
-- Lars
- Original Message -
From: Jonathan Hsieh j...@cloudera.com
To: dev@hbase.apache.org; lars hofhansl