On 2014-06-03 11:42:49 -0400, Tom Lane wrote:
> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2014-06-03 11:04:58 -0400, Tom Lane wrote:
> >> My point is that having backups crash on an overflow doesn't really seem
> >> acceptable.  IMO we need to reconsider the basebackup protocol and make
> >> sure we don't *need* to put values over 4GB into this field.  Where's the
> >> requirement coming from anyway --- surely all files in PGDATA ought to be
> >> 1GB max?
> 
> > Fujii's example was logfiles in pg_log. But we also allow the segment
> > size to be changed via a configure flag, so we should either support
> > that or remove the ability to change the segment size...
> 
> What we had better do, IMO, is fix things so that we don't have a filesize
> limit in the basebackup format.

Agreed. I am just saying that we need to either support that case *or*
remove the configurations that can generate such large files. The
former is clearly preferable, since other files can grow that large as
well.

> After a bit of googling, I found out that
> recent POSIX specs for tar format include "extended headers" that among
> other things support member files of unlimited size [1].  Rather than
> fooling with partial fixes, we should make the basebackup logic use an
> extended header when the file size is over INT_MAX.

That sounds neat enough. I guess we'd still want to add code that
errors out on larger files in <= 9.4? Rough sketches of both pieces
below.
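
For the pax route, something like the below is what I'd imagine.
Entirely untested, and the surrounding tar buffer handling is elided;
the point is just the record format. Per POSIX.1-2001 each extended
header record is "<len> <key>=<value>\n", where <len> counts the whole
record including its own digits, so it has to be computed to a fixed
point:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/*
 * Build a pax "size" record for an oversized member into buf (assumed
 * to be large enough).  Returns the record length.
 */
static int
pax_size_record(char *buf, size_t buflen, uint64_t filesize)
{
	char		value[24];
	int			total,
				prev;

	snprintf(value, sizeof(value), "%llu", (unsigned long long) filesize);

	/* first guess ignores the length digits; iterate until stable */
	total = strlen(value) + strlen(" size=\n");
	do
	{
		prev = total;
		total = snprintf(buf, buflen, "%d size=%s\n", prev, value);
	} while (total != prev);

	return total;
}

The caller would then emit a 512-byte header block with typeflag 'x'
and this record's length in its size field, the record itself padded to
a 512-byte boundary, and finally the regular member header with its
size field clamped.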

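In the back branches I'd just refuse to archive such files outright,
along these lines in basebackup.c (the constant name is invented; the
ustar size field holds 11 octal digits, so 8GB - 1 is the hard limit):

/* largest member size representable in 11 octal digits */
#define MAX_TAR_MEMBER_FILELEN	(((int64) 1 << 33) - 1)

	if (statbuf.st_size > MAX_TAR_MEMBER_FILELEN)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("archive member \"%s\" too large for tar format",
						filename)));
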
Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

