Jabr observed:
> adding HA to a legacy application after the fact is a lot like adding
> security to an application after it's been developed, instead of
> addressing security as part of the application development process.
Very astute. Wouldn't it be nice if there were some OpenStack-like HA
fram
As I see it, the problem is that we still treat each and every HA cluster
as a unique snowflake and build the whole thing from scratch.
But while there are many aspects to HA that are dependent on the particular
set of services and applications being run, there are enough commonalities
that it shou
Tom Metro wrote:
I think much of the rest ends up being carefully developed in-house
configurations that haven't been shared back with the community.
That's because an HA configuration is unique to the services and
applications it's wrapped around. I can share how I implemented a given
HA clu
Rich Braun wrote:
> My goal is simple: mash the power button or yank the network cable from
> either of these machines, and have all the apps still running. Then plug the
> machine back in and have all state restored to full redundancy without having
> to type any commands.
>
> For me, the use-c
On Mon, Mar 31, 2014 at 5:33 PM, Tom Metro wrote:
> It does seem like every application has its own unique approach to
> clustering.
>
or, for legacy applications, their own assumptions that need to be worked
around with kludges to repackage for HA.
--
Bill
@n1vux bill.n1...@gmail.com
Rich Braun wrote:
> The problem with [most] services is they either don't have clustering
> capability or are a true pain to set up for clustering. (Think postfix local
> delivery, think Jira, think MythTV's backend.)
True. This thread never really explored this aspect of Rich's question.
It doe
Hello
I am new to Linux (I do not know it well yet).
I have a USB-to-RJ11 USB phone-line modem, a
Zoom 3095.
The CD that came with the modem contains Linux/Ubuntu software, but the
instructions are vague and too complex for me.
I am interested in getting a caller-ID program
to show the phone number of an incoming call.
I am looking for
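Many voice modems can be told to report formatted caller ID with the
Hayes-style command AT+VCID=1; after that, lines like `NMBR = 6175551212`
appear on the serial port between RINGs (for a USB modem the device is
typically something like /dev/ttyACM0 -- check dmesg; the exact device name
is an assumption here). Reading the port needs a serial library or a tool
like the ncid package, but parsing the reported lines is simple. A minimal
sketch, assuming the standard `KEY = value` caller-ID format:

```python
def parse_caller_id(lines):
    """Parse formatted caller-ID lines like 'NMBR = 6175551212' into a dict."""
    info = {}
    for line in lines:
        # Caller-ID fields look like 'DATE = 0331'; RING lines have no '='.
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip()
    return info

# Example of what a modem emits between the first and second RING:
sample = ["RING", "DATE = 0331", "TIME = 1742", "NMBR = 6175551212"]
assert parse_caller_id(sample)["NMBR"] == "6175551212"
```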
On Mon, Mar 31, 2014 at 4:06 PM, Richard Pieri wrote:
> How hard could it be? Really hard. Designing and building reliable HA
> clusters from scratch is one of the hardest things a sysadmin can be called
> upon to do.
Yup. Very tough for legacy apps not designed for anything fancier than
reboot
Bill Ricker wrote:
(Split-brain is why I've avoided remote auto-restart. If you need
distributed HA, you need to architect for hot-hot distributed
load-balancing -- not easily retrofitted to monolithic legacy apps!)
This is a lot of why there's no such thing as a turnkey HA cluster
installatio
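Quorum is the standard answer to split-brain: a node only takes over a
resource when it, plus the peers it can still reach, form a strict majority
of the cluster. This is also why a third node matters so much: two nodes
can never tell "peer died" from "link died". A minimal sketch of the rule
(illustrative only, not any particular cluster stack's API):

```python
def has_quorum(reachable_peers: int, cluster_size: int) -> bool:
    """True if this node plus the peers it can reach form a strict majority."""
    return (reachable_peers + 1) * 2 > cluster_size

# A 2-node cluster that loses its link has no quorum on either side:
assert has_quorum(0, 2) is False
# With 3 nodes, the partition holding 2 of 3 wins; the lone node stands down:
assert has_quorum(1, 3) is True
assert has_quorum(0, 3) is False
```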
On Mon, Mar 31, 2014 at 11:03 AM, Richard Pieri wrote:
> Bill Ricker wrote:
>
>> I've seen a big-name commercial block-replication solution duplicate
>> trashed data to the cold spare ... wasn't pretty !
>>
>
> Another great example of how replication is not backup.
Exactly.
Extra copies of blo
On 03/31/2014 03:00 PM, Rich Braun wrote:
For me, the use-case for HA technology really isn't just about
designing around failure. The more-important thing for me as a
weekend-hobbyist user is being able to take something down, mess with
it/upgrade it/overhaul it for a few hours or days, and pu
Kent Borg wrote regarding HA:
> ... But that just saves you from losing
> your primary hardware in a flood, fire, theft, etc.
>
> There are more ways for things to go wrong. The software may have a
> bug that messes up your data, or a human may fat-finger a command ...
For me, the use-case for
> ma...@mohawksoft.com wrote:
>> OK, that's a pretty stupid thing to do. Who would do that? That's the
>
> DRBD does precisely this.
That will teach me to come in mid-thread. Yes, I have looked at that
before. That isn't a backup, per se; that's an HA fail-over mechanism.
In the case of "A" being
ma...@mohawksoft.com wrote:
OK, that's a pretty stupid thing to do. Who would do that? That's the
worst of both worlds. Not only are you backing up EVERY block, you aren't
even preserving old data. Hell, you aren't even excluding uninitialized
disk blocks. So, even if
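For reference, DRBD replicates at the block layer, one resource per
replicated device, which is exactly why a corrupted write replicates too.
A minimal two-node resource stanza looks roughly like this (DRBD 8.x
syntax; device names, hostnames, and addresses are placeholders):

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on alpha { address 10.0.0.1:7789; }
    on bravo { address 10.0.0.2:7789; }
}
```

Nothing in that configuration knows or cares whether the blocks being
mirrored are good data -- which is the whole "replication is not backup"
point.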
John Abreau wrote:
I believe you missed Rich's point. He's not talking about advances in
solving the problem, he's talking about advances in making it easier and
less expensive to deploy the solution.
We reached the point of least cost and least effort several decades ago.
--
Rich P.
On Mon, Mar 31, 2014 at 12:04 PM, Richard Pieri wrote:
> You don't see advancement in HA clustering because we reached the pinnacle
> over 30 years ago. It's a well-understood problem with literally decades of
> history backing up a handful of best practices. Anything new is just a
> specific imp
Daniel Feenberg wrote:
would be pretty straightforward. I don't know why client-side HA
features have never shown up in standards since DNS was defined, but
they haven't.
Because at the time DNS became a standard a typical "client" was
something like a VT-100 or DECserver wired to the highly a
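Clients can already get a crude form of HA by iterating over every address
a name resolves to, the way getaddrinfo-based code is supposed to. A sketch
of that loop (not Feenberg's proposal; the injectable `connect` parameter
is purely for illustration and testing):

```python
import socket

def connect_first_available(host, port, connect=None, timeout=3.0):
    """Try each address the name resolves to; return the first live connection."""
    if connect is None:
        connect = lambda addr: socket.create_connection(addr, timeout)
    last_err = None
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            return connect(sockaddr[:2])
        except OSError as err:
            last_err = err  # dead address: fall through to the next one
    raise last_err or OSError("no addresses for %s" % host)
```

Publishing several A records and letting clients fail over like this is
about as far as "client-side HA" got in the standards; everything fancier
(SRV records, happy eyeballs) came much later and is still spottily used.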
> ma...@mohawksoft.com wrote:
>> I currently work at a fairly high end deduplicated backup/recovery
>> system company. In a deduplicated system, a "new" backup should not
>> ever be able to trash an old backup. Period. Only "new" data is added
>> to a deduplicated pool and old references a
On Mon, 31 Mar 2014, Rich Braun wrote:
Edward Ned Harvey wrote:
Hehhehe - No. The goal is mash the power button, with the results described
above, while using only 2 servers and free software. ;-)
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to
Rich Braun wrote:
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to pay the electric bill running a third server.
You mean like Red Hat Cluster Suite and Pacemaker?
You don't see advancement in HA clustering because we reached the
pinnacle over 30 yea
ma...@mohawksoft.com wrote:
I currently work at a fairly high end deduplicated backup/recovery system
company. In a deduplicated system, a "new" backup should not ever be able
to trash an old backup. Period. Only "new" data is added to a deduplicated
pool and old references are untouched. Old dat
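The invariant described here -- new backups only add chunks, existing
chunks and references are immutable -- is the essence of content-addressed
storage. A toy sketch of the idea (SHA-256 keying and the class shape are
illustrative assumptions, not any vendor's design):

```python
import hashlib

class DedupStore:
    """Content-addressed chunk store: a chunk, once written, is never modified."""

    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> immutable chunk bytes

    def backup(self, chunks):
        """Store a backup as a list of chunk references; only new data is added."""
        refs = []
        for data in chunks:
            key = hashlib.sha256(data).hexdigest()
            self.chunks.setdefault(key, data)  # never overwrite an existing chunk
            refs.append(key)
        return refs

store = DedupStore()
old = store.backup([b"alpha", b"beta"])
store.backup([b"alpha", b"beta-corrupted"])  # a later, bad backup
# The old backup still resolves to its original data:
assert [store.chunks[r] for r in old] == [b"alpha", b"beta"]
```

Because a "new" backup can only insert chunks under fresh digests, even a
thoroughly corrupted later backup cannot reach back and trash an older one.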
> Bill Ricker wrote:
>> I've seen a big-name commercial block-replication solution duplicate
>> trashed data to the cold spare ... wasn't pretty !
>
> Another great example of how replication is not backup.
I call FUD! That is more of an example of how a bad program can corrupt data.
I currently
On 03/31/2014 11:03 AM, Richard Pieri wrote:
Bill Ricker wrote:
I've seen a big-name commercial block-replication solution duplicate
trashed data to the cold spare ... wasn't pretty !
Another great example of how replication is not backup.
Or, another way of looking at it: a demonstration t
Edward Ned Harvey wrote:
> Hehhehe - No. The goal is mash the power button, with the results described
> above, while using only 2 servers and free software. ;-)
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to pay the electric bill running a third ser
Bill Ricker wrote:
I've seen a big-name commercial block-replication solution duplicate
trashed data to the cold spare ... wasn't pretty !
Another great example of how replication is not backup.
--
Rich P.
___
Discuss mailing list
Discuss@blu.org
> From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss-bounces+blu=nedharvey@blu.org] On Behalf Of Rich Braun
>
> My goal is simple: mash the power button or yank the network cable from
> either of these machines, and have all the apps still running. Then plug the
> machine bac