Andrew Sullivan wrote:
On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
I'm actually amazed that postgres isn't already using large file
support. Especially for tools like dump.
Except it would only cause confusion if you ran such a program on a
system that didn't itself have
On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:
Are there any filesystems in common use (not including Windows ones) that
don't support file sizes larger than 32 bits?
Linux (ext2) I know supports at least 2TB by default (2^32 x 512 bytes),
probably much more. What about the BSDs? XFS? etc.
On Mon, 2002-08-12 at 21:07, Peter Eisentraut wrote:
This is not the only issue. You really need to check all uses of off_t
(for example printf("%ld", off_t) will crash) and all places where off_t
should have been used in the first place. Furthermore you might need to
replace ftell() and
On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:
Andrew Sullivan wrote:
On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
I'm actually amazed that postgres isn't already using large file
support. Especially for tools like dump.
Except it would only cause confusion if
On Tue, Aug 13, 2002 at 08:02:05AM -0500, Larry Rosenman wrote:
On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:
Other operating systems where 64 bit file access can be disabled or
unconfigured require more care - possibly (sigh) 2 binary RPMS with a
distinctive 32 and 64 bit label
Looking at how to deal with this, is the following going to be
portable?:
in pg_dump/Makefile:
CFLAGS += -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
in pg_dump.h:
#ifdef _LARGEFILE_SOURCE
#define FSEEK fseeko
#define FTELL ftello
#define
On Tue, 2002-08-13 at 14:26, Zeugswetter Andreas SB SD wrote:
Looking at how to deal with this, is the following going to be
portable?:
No, look at the int8 code to see how to make it portable.
On AIX e.g it is %lld and long long int.
OK. %lld is usable by glibc, so amend %Ld to %lld.
Oliver Elphick [EMAIL PROTECTED] writes:
Looking at how to deal with this, is the following going to be
portable?:
#define OFF_T_FORMAT "%Ld"
That certainly will not be. Use INT64_FORMAT from pg_config.h.
typedef long int OFF_T;
Why not just use off_t? In both cases?
On Tue, 2002-08-13 at 15:23, Tom Lane wrote:
typedef long int OFF_T;
Why not just use off_t? In both cases?
The offset in fseek()'s prototype is long int; I had assumed that off_t was not
defined if _LARGEFILE_SOURCE was not defined.
--
Oliver Elphick [EMAIL PROTECTED]
Oliver Elphick [EMAIL PROTECTED] writes:
On Tue, 2002-08-13 at 15:23, Tom Lane wrote:
Why not just use off_t? In both cases?
The offset in fseek()'s prototype is long int; I had assumed that off_t was not
defined if _LARGEFILE_SOURCE was not defined.
Oh, you're right. A quick look at HPUX shows
Oliver Elphick wrote:
On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:
Ext2/3 should be okay. XFS (very sure) and JFS (reasonably sure)
should also be okay...IIRC. NFS and SMB are probably problematic, but I
can't see anyone really wanting to do this.
Hmm. Whereas I can't see many people
On Tue, 2002-08-13 at 17:11, Rod Taylor wrote:
I wouldn't totally discount using NFS for large databases. Believe it or
not, with an Oracle database and a Network Appliance for storage, NFS is
exactly what is used. We've found that we get better performance with a
(properly tuned) NFS
Oliver Elphick [EMAIL PROTECTED] writes:
But large file support is not really an issue for the database itself,
since table files are split at 1Gb. Unless that changes, the database
is not a problem.
I see no really good reason to change the file-split logic. The places
where the backend
On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
I see no really good reason to change the file-split logic. The places
where the backend might possibly need large-file support are
* backend-side COPY to or from a large file
I _think_ this causes a crash. At least, I
On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
On a system where building with large-file support is reasonably
standard, I agree that PG should be built that way too. Where it's
not so standard, I agree with Andrew Sullivan's concerns ...
What do you mean by standard? That only
On Tue, Aug 13, 2002 at 06:45:59PM +0100, [EMAIL PROTECTED] wrote:
support isn't compiled, I didn't see one occurring from any application
having the support, but not the filesystem. (Your "not so standard"
Wrong. The symptom is _exactly the same_ if the program doesn't have
the support, the
On Tue, 2002-08-13 at 12:45, [EMAIL PROTECTED] wrote:
On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
On a system where building with large-file support is reasonably
standard, I agree that PG should be built that way too. Where it's
not so standard, I agree with Andrew
On Tue, 2002-08-13 at 12:04, Tom Lane wrote:
On a system where building with large-file support is reasonably
standard, I agree that PG should be built that way too. Where it's
not so standard, I agree with Andrew Sullivan's concerns ...
Agreed. This is what I originally asked for.
Greg
On Tue, Aug 13, 2002 at 02:09:07PM -0400, Andrew Sullivan wrote:
On Tue, Aug 13, 2002 at 06:45:59PM +0100, [EMAIL PROTECTED] wrote:
support isn't compiled, I didn't see one occurring from any application
having the support, but not the filesystem. (Your "not so standard"
Wrong. The
Tom Lane writes:
The prototype for fseek() is long int; I had assumed that off_t was not
defined if _LARGEFILE_SOURCE was not defined.
All that _LARGEFILE_SOURCE does is make fseeko() and ftello() visible on
some systems, but on some systems they should be available by default.
Oh, you're
Tom Lane writes:
* postmaster log to stderr --- does this fail if log output
exceeds 2G?
That would be an issue of the shell, not the postmaster.
--
Peter Eisentraut [EMAIL PROTECTED]
On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
I'm actually amazed that postgres isn't already using large file
support. Especially for tools like dump.
Except it would only cause confusion if you ran such a program on a
system that didn't itself have largefile support.
On Mon, 2002-08-12 at 09:39, Andrew Sullivan wrote:
On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
I'm actually amazed that postgres isn't already using large file
support. Especially for tools like dump.
Except it would only cause confusion if you ran such a program
On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:
If by turn...on, you mean recompile, that's a horrible idea IMO.
Ah. Well, that is what I meant. Why is it horrible? PostgreSQL
doesn't take very long to compile.
I guess what I'm trying to say here is, it's moving the
On Monday 12 August 2002 11:30 am, Andrew Sullivan wrote:
The problem is not just a system-level one, but a filesystem-level
one. Enabling 64 bits by default might be dangerous, because a DBA
might think "oh, it supports largefiles by default" and therefore not
notice that the filesystem
On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
The problem is not just a system-level one, but a filesystem-level
one. Enabling 64 bits by default might be dangerous, because a DBA
might think "oh, it supports largefiles by default" and therefore not
notice that the filesystem
On Mon, 2002-08-12 at 10:30, Andrew Sullivan wrote:
On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:
If by turn...on, you mean recompile, that's a horrible idea IMO.
Ah. Well, that is what I meant. Why is it horrible? PostgreSQL
doesn't take very long to compile.
On Mon, 2002-08-12 at 11:04, Andrew Sullivan wrote:
On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
keep discussing the issues involved, and I'll see what comes of it. I don't
have any direct experience with the largefile support, and am learning as I go
with this.
I do
On Mon, 2002-08-12 at 16:44, Lamar Owen wrote:
Interesting point. Before I could deploy RPMs with largefile support by
default, I would have to make sure it wouldn't silently break anything. So
keep discussing the issues involved, and I'll see what comes of it. I don't
have any direct
On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:
Many reasons. A DBA is not always the same thing as a developer (which
means it's doubtful he's even going to know about needed options to pass
-- if any).
This (and the upgrade argument) are simply documentation issues.
If
On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
And, what if he just remounted it read only. Mistakes will happen.
That doesn't come across as being a strong argument to me. Besides,
it's doubtful that a filesystem is going to be remounted while it's in
use. Which means,
On Mon, 2002-08-12 at 11:40, Andrew Sullivan wrote:
On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:
Many reasons. A DBA is not always the same thing as a developer (which
means it's doubtful he's even going to know about needed options to pass
-- if any).
This (and
On Mon, 2002-08-12 at 11:48, Andrew Sullivan wrote:
On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
[snip]
There are, in any case, _lots_ of problems with these large files.
All of those are SA issues.
So is compiling the software correctly, if the distinction has
Oliver Elphick writes:
One person said:
However compiling with largefile support will change the size
of off_t from 32 bits to 64 bits - if postgres uses off_t or
anything else related to file offsets in a binary struct in one
of the database files you will break stuff
On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
The problem is not just a system-level one, but a filesystem-level
one. Enabling 64 bits by default might be dangerous, because a DBA
might think "oh, it supports largefiles by default" and therefore not
notice that the
On Mon, 2002-08-12 at 18:41, Martijn van Oosterhout wrote:
On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
The problem is not just a system-level one, but a filesystem-level
one. Enabling 64 bits by default might be dangerous, because a DBA
might think oh, it supports
On Sat, 2002-08-10 at 00:25, Mark Kirkwood wrote:
Ralph Graulich wrote:
Hi,
just my two cents worth: I like having the files sized in a way I can
handle them easily with any UNIX tool on nearly any system. No matter
whether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it
On Fri, 2002-08-09 at 06:07, Lamar Owen wrote:
On Thursday 08 August 2002 05:36 pm, Nigel J. Andrews wrote:
Matt Kirkwood wrote:
I just spent some of the morning helping a customer build Pg 7.2.1 from
source in order to get Linux largefile support in pg_dump etc. They
possibly would
On Fri, 9 Aug 2002, Helge Bahmann wrote:
As far as I can make out from the libc docs, largefile support is
automatic if the macro _GNU_SOURCE is defined and the kernel supports
large files.
Is that a correct understanding? or do I actually need to do something
special to ensure that
Ralph Graulich wrote:
Hi,
just my two cents worth: I like having the files sized in a way I can
handle them easily with any UNIX tool on nearly any system. No matter
whether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at
a maximum size below any limits, handy for handling.