On Sat, 2002-08-10 at 00:25, Mark Kirkwood wrote:
> Ralph Graulich wrote:
>
> > Hi,
> >
> > just my two cents' worth: I like having the files sized in a way I can
> > handle them easily with any UNIX tool on nearly any system. No matter
> > whether I want to cp, tar, dump, dd, cat or gzip the file: just keep it
> > at a maximum size below any limits, handy for handling.
>
> Good point... however I was thinking that being able to dump the entire
> database without resorting to "gzips and splits" was handy...
>
> > For example, Oracle suggests it somewhere in their documentation, to
> > keep datafiles at a reasonable size, e.g. 1 GB. Seems right to me;
> > never had any problems with it.
>
> Yep, fixed or controlled sizes for data files is great... I was thinking
> about databases rather than data files (although I may not have made
> that clear in my mail)
I'm actually amazed that Postgres isn't already using large file support, especially for tools like dump. I do recognize the need to keep files manageable in size, but my sizing needs may differ from yours. It seems like it would be a good thing to enable, leaving the actual sizing as a matter for the DBA to handle. After all, even if I'm trying to keep my dumps at around 1GB, I'd probably be okay with a dump of 1.1GB too. To me, that just seems more flexible.

Greg
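The "gzips and splits" workaround mentioned above can be sketched as a shell pipeline. The paths below are placeholders, and a generated file stands in for the dump so the sketch is runnable as-is; in practice the input would come from something like `pg_dump mydb`:

```shell
# Stand-in for pg_dump output (in practice: pg_dump mydb > /tmp/dump.sql)
head -c 100000 /dev/urandom > /tmp/dump.sql

# Compress and chop into pieces small enough for any file-size limit
gzip -c /tmp/dump.sql | split -b 32768 - /tmp/dump.gz.

# Reassemble and decompress; split's suffixes (aa, ab, ...) sort correctly
cat /tmp/dump.gz.* | gunzip > /tmp/restored.sql

# Verify the round trip was lossless
cmp /tmp/dump.sql /tmp/restored.sql && echo OK
```

The same pipeline works for a real dump: pipe `pg_dump` straight into `gzip | split` to avoid ever writing a single oversized file.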