At 07:58 13/07/00 +1000, Giles Lean wrote:
>
>I recommend you compress the whole stream, not the pieces. Presumably
>you can determine the size of the pieces you're backing up, and ending
>with a .tar.gz (or whatever) file is more convenient to manage than a
>.tar file of compressed pieces unless
Philip Warner wrote:
> will send the schema to stdout
> Is that sufficient? Or are you strictly interested in the text output side
> of things?
Strictly interested in the text output side of things, for various
not-necessarily-good reasons (:-)).
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
At 10:38 12/07/00 -0400, Lamar Owen wrote:
>
>If we simply know that the backup cannot be sent to psql, but a
>deshar-ed version can have the schema sent to psql, would that
>ameliorate most concerns?
>
In the current version
pg_restore --schema
will send the schema to stdout.
Is that sufficient? Or are you strictly interested in the text output side
of things?
At 15:25 12/07/00 +0100, Peter Mount wrote:
>No he didn't; I've just been sort of lurking on this subject ;-)
>
>Actually, tar files are simply a small header, followed by the file's
>contents. To add another file, you simply write another header, and
>contents (which is why you can cat two tar files together).
If anyone can send me a nice interface for reading and writing a tar file
from C, I'll do it. I just don't have the inclination to learn about tar
internals at the moment. By 'nice' I mean that I would like:
I don't know the details of the API, but the NetBSD pax code handles
tar formats
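The header-plus-contents layout Peter describes can be sketched in C. This is a minimal illustration assuming the POSIX ustar format, not pg_dump code or the NetBSD pax API; `tar_append` is a hypothetical helper name, and error handling is omitted:

```c
/* Minimal sketch of the tar mechanics described above, assuming the
 * POSIX ustar layout: each member is a 512-byte header followed by its
 * data padded to a 512-byte boundary -- which is why two archives can
 * simply be concatenated.  tar_append() is a hypothetical helper. */
#include <stdio.h>
#include <string.h>

struct tar_header {              /* exactly 512 bytes, all-char, no padding */
    char name[100];
    char mode[8];
    char uid[8];
    char gid[8];
    char size[12];               /* octal ASCII */
    char mtime[12];              /* octal ASCII */
    char chksum[8];
    char typeflag;
    char linkname[100];
    char magic[6];               /* "ustar" + NUL, at offset 257 */
    char version[2];             /* "00" */
    char uname[32];
    char gname[32];
    char devmajor[8];
    char devminor[8];
    char prefix[155];
    char pad[12];
};

static void tar_append(FILE *out, const char *name,
                       const char *data, unsigned long len)
{
    struct tar_header h;
    const unsigned char *p = (const unsigned char *) &h;
    unsigned int sum = 0;
    unsigned long i;

    memset(&h, 0, sizeof(h));
    strncpy(h.name, name, sizeof(h.name) - 1);
    sprintf(h.mode, "0000644");
    sprintf(h.uid, "0000000");
    sprintf(h.gid, "0000000");
    sprintf(h.size, "%011lo", len);
    sprintf(h.mtime, "%011o", 0);
    h.typeflag = '0';            /* regular file */
    memcpy(h.magic, "ustar", 6);
    memcpy(h.version, "00", 2);

    /* The checksum is computed with the chksum field set to spaces. */
    memset(h.chksum, ' ', sizeof(h.chksum));
    for (i = 0; i < sizeof(h); i++)
        sum += p[i];
    sprintf(h.chksum, "%06o", sum);  /* six octal digits, NUL, then space */
    h.chksum[7] = ' ';

    fwrite(&h, sizeof(h), 1, out);
    fwrite(data, 1, len, out);
    for (i = len; i % 512 != 0; i++) /* pad data block out with NULs */
        fputc('\0', out);
}
```

A "nice" interface of the kind asked for would presumably wrap something like this behind open/append/close calls, keeping the 512-byte blocking details hidden from pg_dump proper.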
At 15:32 12/07/00 +0100, Peter Mount wrote:
>Which is why having them on stdout is still a nice option to have. You can
>pipe the lot through your favourite compressor (gzip, bzip2 etc) and
>straight on to tape, or whatever.
Well, the custom format does that; it also does compression and can go to stdout.
Philip Warner wrote:
> At 10:13 12/07/00 -0400, Lamar Owen wrote:
> >Philip Warner wrote:
>> I'll obviously need to be passed a directory/file location for the script
>> since I can't pipe separate files to stdout.
> >
> >uuencode the blobs, perhaps, using a shar-like format?
> For the human
Which is why having them on stdout is still a nice option to have. You can
pipe the lot through your favourite compressor (gzip, bzip2 etc) and
straight on to tape, or whatever.
I don't know why you would want them as separate files - just think what
would happen to directory search times!!
How
On Thu, Jul 13, 2000 at 12:17:28AM +1000, Philip Warner wrote:
> At 14:58 12/07/00 +0100, Peter Mount wrote:
> >Why not have it using something like tar, and the first file being stored in
> >ascii?
> >
> >That way, you could extract easily the human readable SQL but still pipe the
> >blobs to stdout.
Subject: Re: [GENERAL] RE: [HACKERS] pg_dump & blobs - editable dump?
At 10:13 12/07/00 -0400, Lamar Owen wrote:
>Philip Warner wrote:
>> My guess is that this will involve a plain text schema dump, followed by
>> all BLOBs in separate files, and a script to load them. To implement this
>> I'll obviously need to be passed a directory/file location for the script
>> since I can't pipe separate files to stdout.