Scott Carey wrote:
>
> On Aug 10, 2010, at 11:38 AM, Karl Denninger wrote:
>
> .....
>>
>> Most people who will do this won't reload it after a crash.  They'll
>> "inspect" the database and say "ok", and put it back online.  Bad
>> Karma will ensue in the future.
>
> Anyone going with something unconventional better know what they are
> doing and not just blindly plug it in and think everything will be OK.
>  I'd never recommend unconventional setups to a user who wasn't an
> expert and didn't understand the tradeoffs.
True.
>>
>> Incidentally, that risk is not theoretical either (I know about this
>> one from hard experience.  Fortunately the master was still ok and I
>> was able to force a full-table copy.... I didn't like it as the
>> database was a few hundred GB, but I had no choice.)
>
> Been there with 10TB on hardware that should have been perfectly
> safe.  Five days of copying, and wishing that pg_dump supported LZO
> compression, so that the dump side had a chance of keeping up with
> the much faster restore side while still compressing enough to save
> copy bandwidth.
Pipe it through ssh -C
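A minimal sketch of that pipe (host names and database names here are placeholders, not from the thread); ssh -C applies gzip-style compression to everything crossing the wire, so a plain-format dump shrinks considerably in transit:

```shell
# Hypothetical hosts/databases.  Dump in plain SQL format locally and
# restore remotely; -C tells ssh to compress the stream on the wire.
pg_dump sourcedb | ssh -C user@standby.example.com "psql -d targetdb"
```

Note that this only pays off for plain-format dumps; a custom-format dump (-Fc) is already compressed, so ssh -C would gain little there.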

PS: This works for SLONY and Bucardo too - set up a tunnel and then
change the port temporarily.    This is especially useful when the DB
being COPY'd across has big fat honking BYTEA fields in it, which
otherwise expand about 400% - or more - on the wire.
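The tunnel-and-port trick above can be sketched like this (the port numbers, user, and host are illustrative assumptions, not from the post):

```shell
# Forward local port 55432 to Postgres (5432) on the master; -C
# compresses the tunneled traffic, -N skips running a remote shell.
ssh -C -N -L 55432:localhost:5432 user@master.example.com &

# Then temporarily point the Slony/Bucardo subscriber at
# localhost:55432 so the initial COPY runs through the compressed
# tunnel instead of a raw connection.
```

Once the initial sync finishes, switch the subscriber back to the real port and tear down the tunnel.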

-- Karl


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance