Yep. That's also why I've been doing 0.94 releases all this time. 0.92 had a
no-downtime path to 0.94, and 0.96 had a no-downtime path to 0.98, so both
could be EOL'ed with relatively little annoyance. 0.94 is different: going to
0.96 or later (including 0.98) is a big change and requires downtime.
Nope :( Replication uses RPC and that was changed to protobufs. AFAIK snapshots
can also not be exported between 0.94 and 0.98. We have a really shitty story
here.
From: Sean Busbey
To: user
Sent: Monday, December 15, 2014 5:04 PM
Subject: Re: 0.94 going forward
Replication from 0.94 to 0.96+ won't work out of the box unless you use a
bridge: HBASE-9360. Maybe before EOL'ing 0.94 we should ship it as part of
0.94.
cheers,
esteban.
--
Cloudera, Inc.
On Mon, Dec 15, 2014 at 5:23 PM, Bijieshan wrote:
Thanks, Lars. We have customers still using 94. It is indeed stable now.
Jieshan.
From: Sean Busbey [bus...@cloudera.com]
Sent: Tuesday, December 16, 2014 9:04 AM
To: user
Subject: Re: 0.94 going forward
Does replication and snapshot export work from 0.94.6+ to a 0.96 or 0.98
cluster?
Presuming it does, shouldn't a site be able to use a multiple-cluster setup
to do a cutover of a client application?
That doesn't help with needing downtime to do the eventual upgrade, but
it mitigates the impact.
Which is why I feel that a lot of customers are still on 0.94. Pretty much
trapped unless you want to take downtime for your site. Any type of
guidance would be helpful. We are currently in the process of designing our
own system to deal with this.
On Mon, Dec 15, 2014 at 4:47 PM, Andrew Purtell
Zero downtime upgrade from 0.94 won't be possible. See
http://hbase.apache.org/book.html#d0e5199
On Mon, Dec 15, 2014 at 4:44 PM, Jeremy Carroll wrote:
Looking for guidance on how to do a zero downtime upgrade from 0.94 -> 0.98
(or 1.0 if it launches soon). As soon as we can figure this out, we will
migrate over.
On Mon, Dec 15, 2014 at 1:37 PM, Esteban Gutierrez
wrote:
I am trying to import data into an HBase table and tried the following as an
example:
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,b,c datatsv hdfs://data.tsv - this command
complains about data.tsv not existing in HDFS, but when I run hadoop fs -ls, I
do see it there.
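If the file really is in HDFS, the likely culprit is the URI form: a bare
`hdfs://data.tsv` makes the client parse `data.tsv` as the NameNode authority
rather than as a path. A hedged sketch of the usual invocation, using
hypothetical paths (shown rather than run here, since it needs a live
HDFS/HBase cluster):

```shell
# Hypothetical file locations; the table name "datatsv" and the column spec
# come from the question above. Note the plain HDFS path instead of the
# malformed "hdfs://data.tsv" URI.
hadoop fs -mkdir -p /user/me/import
hadoop fs -put data.tsv /user/me/import/data.tsv
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,b,c \
  datatsv /user/me/import/data.tsv
```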
Minor correction:
HConnectionFactory is in the upcoming 1.0 release.
How often is HConnectionManager.createConnection() called?
Cheers
On Mon, Dec 15, 2014 at 3:36 PM, Nick Dimiduk wrote:
Meta is getting pegged? Sounds like your client applications are not being
friendly. Are you reusing cluster configurations? You should have one per
process for its lifetime. Basically, how often are you calling
HConnectionFactory.createConnection() ?
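Nick's advice, sketched as a pattern rather than real API usage (a stub class
stands in for the actual HConnection, since the real one needs hbase-client on
the classpath and a live cluster): create one connection per process at
startup, share it, and close it only at shutdown, instead of calling
createConnection() per request.

```java
// One-connection-per-process pattern. "Connection" here is a stub standing in
// for org.apache.hadoop.hbase.client.HConnection; the structure is the point.
public class ConnectionHolder {
    static final class Connection implements AutoCloseable {
        @Override public void close() {}
    }

    // Shared handle created once and reused for the process lifetime.
    // Per-request createConnection() calls are what peg hbase:meta and leak
    // ZooKeeper sessions ("too many connections ... max is 60").
    private static final Connection SHARED = new Connection();

    public static Connection get() {
        return SHARED;
    }

    public static void main(String[] args) {
        // Every caller sees the same instance.
        System.out.println(get() == get()); // prints "true"
    }
}
```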
On Mon, Dec 15, 2014 at 12:30 PM, Pere Kyle wrote:
Hi Lars,
Thanks for bringing this up for discussion. From my experience I can tell that
0.94 is very stable, but that shouldn't be a blocker to considering EOL'ing it.
Are you considering any specific timeframe for that?
thanks,
esteban.
--
Cloudera, Inc.
On Mon, Dec 15, 2014 at 11:46 AM, Koert Kuipe
This is the catalog table that stores information about all the other tables.
It is normal for reads on this table to be high while a cluster is recovering,
as all the table and region information is being loaded into the region servers.
http://hbase.apache.org/book/arch.catalog.html
-Pere
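For the curious, the catalog table can be inspected directly from the HBase
shell (shown rather than run, since it needs a live cluster; note that on
0.94 the table is named `.META.`, while 0.96+ renamed it `hbase:meta`):

```shell
# Scan the catalog table to see region boundaries and server assignments.
# Use '.META.' on 0.94, 'hbase:meta' on 0.96 and later.
echo "scan '.META.', {LIMIT => 5}" | bin/hbase shell
```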
On Mon, Dec 15, 2014 at 12:21 P
hbase:meta,,1.1588230740620282632 <-- This is the offending table. I
currently do not know what this means yet, but my basic understanding is that
it is probably an index of sorts.
Jon
--
was: 200-300 ms per request
now: <80 ms per request
(request = full trip from servlet to HBase and back to response)
2014-12-15 22:40 GMT+03:00 lars hofhansl :
given that CDH4 is hbase 0.94 i don't believe nobody is using it. for our
clients the majority is on 0.94 (versus 0.96 and up).
so i am going with 1), it's very stable!
On Mon, Dec 15, 2014 at 1:53 PM, lars hofhansl wrote:
Excellent! Should be quite a bit faster too.
-- Lars
From: Serega Sheypak
To: user
Cc: lars hofhansl
Sent: Monday, December 15, 2014 5:57 AM
Subject: Re: HConnectionManager leaks with zookeeper: too many
connections from /my.tomcat.server.com - max is 60
Over the past few months the rate of change in 0.94 has slowed
significantly.
0.94.25 was released on Nov 15th, and since then we have had only 4 changes.
This could mean one of two things: (1) 0.94 is very stable now, or (2) nobody
is using it (at least nobody is contributing to it anymore).
If anyb
On January 15th, we're meeting at AppDynamics in San Francisco. We have
some nice talks lined up [1]. On Feb 17th, let's meet around Strata+Hadoop
World in San Jose. If you are interested in hosting or speaking, write the
organizers.
Thanks,
St.Ack
1. http://www.meetup.com/hbaseusergroup/events/
Sounds like your access patterns are not balanced well; you have a
hotspot. Have a look at the metrics emitted from that machine. They will tell
you which region is winning the popularity contest.
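One hedged way to eyeball those metrics, assuming the region server's info
server is on its default 0.94-era port (60030) and using a placeholder
hostname (shown rather than run, since it needs the live cluster):

```shell
# The region server status page and, on builds that ship the JMX servlet,
# the /jmx endpoint expose per-region request counts; "rs1.example.com"
# is a hypothetical hostname for the hot machine.
curl -s http://rs1.example.com:60030/jmx | grep -i readRequestsCount
```

If the JMX servlet isn't available on your build, the rs-status web page on
the same port shows per-region request counts in its regions table.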
On Monday, December 15, 2014, uamadman wrote:
> I've done a bit of digging and hope someone can shed
I've done a bit of digging and hope someone can shed some light on my
particular issue. One and Only One of my region servers after each restart
is randomly "Plagued" with a single maxed out CPU-Core and a Read Request
chart registering around 40k read requests per second. The remaining 13
dance ar
Hi, the problem is gone.
I did what you said :)
Thanks!
2014-12-13 22:38 GMT+03:00 Serega Sheypak :
> Great, I'll refactor the code. and report back
>
> 2014-12-13 22:36 GMT+03:00 Stack :
>>
>> On Sat, Dec 13, 2014 at 11:33 AM, Serega Sheypak <
>> serega.shey...@gmail.com>
>> wrote: