Re: [GENERAL] need for in-place upgrades

2003-09-22 Thread Andrew Sullivan
On Sat, Sep 20, 2003 at 04:54:30PM -0500, Ron Johnson wrote:
 Sure, I've seen expensive h/w flake out.  It was the 8 or 9 times
 in a row that confused me.

You need to talk to people who've had Sun Ex500s with the UltraSPARC
II built with the IBM e-cache modules.  Ask 'em about the reliability
of replacement parts.

A

-- 

Andrew Sullivan 204-4141 Yonge Street
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M2P 2A8
 +1 416 646 3304 x110


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] need for in-place upgrades

2003-09-20 Thread Christopher Browne
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Ron Johnson) would write:
 On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
 On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
 
  So instead of 1TB of 15K fiber channel disks (and the requisite 
  controllers, shelves, RAID overhead, etc), we'd need *two* TB of
  15K fiber channel disks (and the requisite controllers, shelves,
  RAID overhead, etc) just for the 1 time per year when we'd upgrade
  PostgreSQL?
 
 Nope.  You also need it for the time when your vendor sells
 controllers or chips or whatever with known flaws, and you end up
 having hardware that falls over 8 or 9 times in a row.

 

This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would never mislead anyone, either.  I'm sure I got a full 8
hours sleep last night.  I'm sure of it...
-- 
cbbrowne,@,cbbrowne.com
http://www3.sympatico.ca/cbbrowne/finances.html
XML combines all the inefficiency of text-based formats with most of the  
unreadability of binary formats :-)  -- Oren Tirosh



Re: [GENERAL] need for in-place upgrades

2003-09-20 Thread Christopher Browne
[EMAIL PROTECTED] (Marc G. Fournier) writes:
 On Thu, 18 Sep 2003, Andrew Sullivan wrote:

 On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
 
  I thought we were talking about upgrades here?

 You do upgrades without being able to roll back?

 Hadn't thought of it that way ... but, what would prompt someone to
 upgrade, then use something like erserver to roll back?  All I can
 think of is that the upgrade caused a lot of problems with the
 application itself, but in a case like that, would you have the time
 to be able to 're-replicate' back to the old version?

Suppose we have two dbs:

  db_a - Old version
  db_b - New version

Start by replicating db_a to db_b.

The approach would presumably be that at the time of the upgrade, you
shut off the applications hitting db_a (injecting changes into the
source), and let the final set of changes flow thru to db_b.

That brings db_a and db_b to having the same set of data.

Then reverse the flow, so that db_b becomes master, flowing changes to
db_a.  Restart the applications, configuring them to hit db_b.

db_a should then be just a little bit behind db_b, and be a recovery
plan in case the new version played out badly.  
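
The switchover described above can be sketched roughly as follows.  This
is only an illustration: the replication commands themselves are
tool-specific (erserver, etc.) and are left as comments, the host and
table names are placeholders, and only the psql invocations are real.

```shell
#!/bin/sh
# Sketch of the upgrade switchover: db_a (old version) -> db_b (new version).
# Hosts, database names, and the sanity-check table are hypothetical.

# 1. Stop the applications so no new writes reach db_a
#    (site-specific: stop app servers, revoke connections, etc.)

# 2. Let the final set of changes flow from db_a to db_b, then do a
#    quick sanity check that the two databases carry the same data:
psql -h old_host -d db_a -c "SELECT count(*) FROM some_table"
psql -h new_host -d db_b -c "SELECT count(*) FROM some_table"

# 3. Reverse the replication flow so db_b is now the master feeding
#    changes back to db_a (tool-specific step).

# 4. Reconfigure the applications to hit db_b and restart them.
```

If step 3 is cheap (and it should cost no more than the replication that
kept db_b current), db_a stays a near-realtime fallback.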

That's surely not what you'd _expect_; the point of the exercise was
for the upgrade to be an improvement.  But if something Truly Evil
happened, you might have to fall back to db_a.  And when people are
talking about risk management, and ask what you do if Evil Occurs,
this is the way the answer works.

It ought to be pretty cheap, performance-wise, to do things this way,
certainly not _more_ expensive than the replication was to keep db_b
up to date.
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www.ntlug.org/~cbbrowne/oses.html
Rules of  the Evil Overlord  #149. Ropes supporting  various fixtures
will not be  tied next to open windows  or staircases, and chandeliers
will be hung way at the top of the ceiling.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] need for in-place upgrades

2003-09-20 Thread Ron Johnson
On Fri, 2003-09-19 at 06:37, Christopher Browne wrote:
 [EMAIL PROTECTED] (Ron Johnson) wrote:
  On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
  On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
  
   So instead of 1TB of 15K fiber channel disks (and the requisite 
   controllers, shelves, RAID overhead, etc), we'd need *two* TB of
   15K fiber channel disks (and the requisite controllers, shelves,
   RAID overhead, etc) just for the 1 time per year when we'd upgrade
   PostgreSQL?
  
  Nope.  You also need it for the time when your vendor sells
  controllers or chips or whatever with known flaws, and you end up
  having hardware that falls over 8 or 9 times in a row.
 
  
 
 This of course never happens in real life; expensive hardware is
 _always_ UTTERLY reliable.
 
 And the hardware vendors all have the same high standards as, well,
 certain database vendors we might think of.
 
 After all, Oracle and MySQL AB would surely never mislead their
 customers about the merits of their database products any more than
 HP, Sun, or IBM would about the possibility of their hardware having
 tiny flaws.  

Well, I use Rdb, so I wouldn't know about that!

(But then, it's an Oracle product, and runs on HPaq h/w...)

 And I would /never/ claim to have lost sleep as a result of flakey
 hardware.  Particularly not when it's a HA fibrechannel array.  I'm
 /sure/ that has never happened to anyone.  [The irony here should be
 causing people to say ow!]

Sure, I've seen expensive h/w flake out.  It was the 8 or 9 times
in a row that confused me.

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

The difference between drunken sailors and Congressmen is that 
drunken sailors spend their own money.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] need for in-place upgrades (was Re: State of

2003-09-15 Thread Christopher Browne
In the last exciting episode, [EMAIL PROTECTED] (Ron Johnson) wrote:
 On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
 http://spectralogic.com discusses how to use their hardware and
 software products to do terabytes of backups in an hour.  They sell a
 software product called Alexandria that knows how to (at least
 somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
 systems.  (When I was at American Airlines, that was the software in
 use.)

 HP, Hitachi, and a number of other vendors make similar hardware.

 You mean the database vendors don't build that parallelism into
 their backup procedures?

They don't necessarily build every conceivable bit of possible
functionality into the backup procedures they provide, if that's what
you mean.

Of the systems mentioned, I'm most familiar with SAP's backup
regimen; if you're using it with Oracle, you'll use tools called
brbackup and brarchive, which provide a _moderately_ sophisticated
scheme for dealing with backing things up.

But if you need to do something wild, involving two nearby servers,
each with 8 tape drives, that are used to manage backups for a whole
cluster of systems, including a combination of OS backups, DB backups,
and application backups, it's _not_ reasonable to expect one DB
vendor's backup tools to be totally adequate to that.

Alexandria (and similar software) certainly needs tool support from DB
makers to allow them to intelligently handle streaming the data out of
the databases.

At present, this unfortunately _isn't_ something PostgreSQL does, from
two perspectives:

 1.  You can't simply keep the WALs and reapply them in order to bring
 a second database up to date;

 2.  A pg_dump doesn't provide a way of streaming parts of the
 database in parallel, at least not if all the data is in
 one database.  (There's some nifty stuff in eRServ that
 might eventually be relevant, but probably not yet...)

There are partial answers:

 - If there are multiple databases, starting multiple pg_dump
   sessions provides some useful parallelism;

 - A suitable logical volume manager may allow splitting off
   a copy atomically, and then you can grab the resulting data
   in strips to pull it in parallel.

Life isn't always perfect.
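
The first partial answer above can be sketched from the shell: since a
single pg_dump can't stream one database in parallel, you run one
pg_dump per database concurrently.  The database names here are
placeholders for illustration; -Fc (custom-format archive) and -f
(output file) are real pg_dump options.

```shell
#!/bin/sh
# Parallel dumps: one pg_dump per database, all running concurrently.
# "sales", "inventory", "archive" are hypothetical database names.
for db in sales inventory archive; do
    pg_dump -Fc "$db" -f "/backup/$db.dump" &   # run in the background
done
wait   # block until every background dump has finished
```

The degree of parallelism is then bounded by the number of databases,
not by the size of any one of them, which is why the single-database
case still needs the volume-manager trick.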

 Generally, this involves having a bunch of tape drives that are
 simultaneously streaming different parts of the backup.
 
 When it's Oracle that's in use, a common strategy involves
 periodically doing a hot backup (so you can quickly get back to a
 known database state), and then having a robot tape drive assigned
 to regularly push archive logs to tape as they are produced.

 Rdb does the same thing.  You mean DB/2 can't/doesn't do that?

I haven't the foggiest idea, although I would be somewhat surprised if
it doesn't have something of the sort.
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://www.ntlug.org/~cbbrowne/wp.html
Rules of  the Evil Overlord #139. If  I'm sitting in my  camp, hear a
twig  snap, start  to  investigate, then  encounter  a small  woodland
creature, I  will send out some scouts  anyway just to be  on the safe
side. (If they disappear into the foliage, I will not send out another
patrol; I will break out napalm and Agent Orange.)
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] need for in-place upgrades (was Re: State of

2003-09-14 Thread Ron Johnson
On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
 After a long battle with technology, [EMAIL PROTECTED] (Martin Marques), an 
 earthling, wrote:
  On Sun 14 Sep 2003 12:20, Lincoln Yeoh wrote:
  At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
[snip]
 Certainly there are backup systems designed to cope with those sorts
 of quantities of data.  With 8 tape drives, and a rack system that
 holds 200 cartridges, you not only can store a HUGE pile of data, but
 you can push it onto tape about as quickly as you can generate it.
 
 http://spectralogic.com discusses how to use their hardware and
 software products to do terabytes of backups in an hour.  They sell a
 software product called Alexandria that knows how to (at least
 somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
 systems.  (When I was at American Airlines, that was the software in
  use.)

HP, Hitachi, and a number of other vendors make similar hardware.

You mean the database vendors don't build that parallelism into
their backup procedures?

 Generally, this involves having a bunch of tape drives that are
 simultaneously streaming different parts of the backup.
 
 When it's Oracle that's in use, a common strategy involves
 periodically doing a hot backup (so you can quickly get back to a
 known database state), and then having a robot tape drive assigned to
 regularly push archive logs to tape as they are produced.

Rdb does the same thing.  You mean DB/2 can't/doesn't do that?

[snip]
 None of this is particularly cheap or easy; need I remind gentle
 readers that if you can't afford that, then you essentially can't
 afford to claim High Availability?

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

(Women are) like compilers. They take simple statements and 
make them into big productions.
Pitr Dubovitch


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match