------------------------------------------------
On Tue, 19 Aug 2003 14:20:47 -0700, Rich Parker <[EMAIL PROTECTED]> wrote:


Hi,
I have been watching the thread about File::Copy. I ran into an issue in the Linux environment that raises a serious question: MAX file size. Keep in mind the server is running RH 7.0; we also have 7.2 Enterprise Server, and we pay for support. But even RH support says it can't handle files in excess of 2GB (approx). Whether I used tar, gzip, or most any other tool, I found that the target file comes out at only 1.8GB instead of the much larger source, in our case 16GB. This was on a "/mnt" device, not a local disk. So the copy (tar in this case) was from one "/mnt" device to another, and it did not matter whether I used tar, cp, mv, or a Perl program: same problem.
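For what it's worth, the Perl attempt was essentially just a bare File::Copy call, along the lines of this sketch (the paths here are made up):

  use strict;
  use warnings;
  use File::Copy;

  # Hypothetical paths; the real ones were shares mounted under /mnt.
  my $src = '/mnt/winnt/bigdump.tar';
  my $dst = '/mnt/backup/bigdump.tar';

  # copy() returns false on failure, and $! then carries the OS error,
  # e.g. "File too large" (EFBIG) if the 2GB offset limit is hit,
  # instead of leaving a silently truncated target.
  copy($src, $dst) or die "copy $src -> $dst failed: $!";

At least that way the failure is loud instead of just a quietly short file.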


Everyone I talked to about this in the various "groups" only said "rebuild the kernel with 64-bit support," but this is on an Intel box (32-bit?). Have any of YOU seen this problem? I can't be the only person dealing with large files. Ideas?? How does this stand in later releases??



I am no kernel hacker, so take what I say with a grain of salt. The large file limit has to do with the addressable space on the disk: to support over 2 gigs you need more "bits" to produce longer addresses, which I believe is why they suggested you add 64-bit support. It's been a while since I was doing kernel builds, but I thought there was a specific switch for "large file support". I also thought that was specifically to support partitions larger than 2 GB, not files themselves, but maybe they are one and the same.

Now, you mention that the file is 1.8 GB. Is that a raw byte count or a human-readable figure, i.e. is that with 1 KB = 1000 bytes or 1024 bytes? If the 1.8 is the 1024-based figure, the byte count is bigger than it looks, and a copy that dies at that size is consistent with running up against the 2 GB boundary.
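To make the boundary concrete, here is the arithmetic as a quick sketch, assuming the classic limit of a signed 32-bit file offset:

  # Signed 32-bit off_t tops out at 2**31 - 1 bytes.
  my $limit = 2**31 - 1;
  printf "limit = %d bytes = %.2f GiB = %.2f GB (decimal)\n",
      $limit, $limit / 2**30, $limit / 1e9;
  # Prints: limit = 2147483647 bytes = 2.00 GiB = 2.15 GB (decimal)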

I am not sure about copy; theoretically it should work if the file can be addressed completely. A move won't work across file system boundaries anyway, nor will a 'rename' in Perl. Again, because Perl is talking to the underlying kernel, theoretically you need large file support in the kernel first, but then you *ALSO* need it in the 'perl' (not Perl) executable. For instance, perl -V will have something near the bottom like:

Compile-time options: ... USE_LARGE_FILES ...

I am also not a Perl internals hacker, so I don't know exactly what that option changes, but I suspect you need it if you go the Perl script route.
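If you would rather check from inside a script than eyeball the perl -V output, the Config module exposes the same build settings; a small sketch:

  use strict;
  use warnings;
  use Config;

  # 'uselargefiles' is 'define' when perl was built with large file
  # support (the USE_LARGE_FILES compile-time option from perl -V).
  my $lfs = (defined $Config{uselargefiles}
             && $Config{uselargefiles} eq 'define') ? 'yes' : 'no';
  print "large file support: $lfs\n";

  # lseeksize/lseektype show how wide perl's file offsets really are;
  # you want 8 bytes here, not 4.
  print "offset width: $Config{lseeksize} bytes ($Config{lseektype})\n";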

To my knowledge this has been fixed in the 2.4 and newer kernels (are you running 2.2?), or it was fixed by default somewhere in the jump from RH 7.x to RH 8.0.

Maybe one of the real gurus can provide better explanation/help...

In any case you may get better help asking on a Linux kernel list...

http://danconia.org


You have a very good point. I've seen that "LARGE_FILES" thing in the setup; however, the people at Red Hat said not to do that, but rather to wait for the next release of the 2.4 kernel. At that time (about 6 months ago) 2.4 was real "buggy" according to them. Yet the current advertised release of RH is 9.0! Which makes me wonder about it: the stuff you can pay "support" for is way back on the release scale.

Here at work we also have an S/390 running VM, and I've been trying to get the powers that be to allow me to use Linux and all of the things that go with it, gee, like Perl, but it has been a real uphill battle. If any of you can give me a GREAT reason to help me convince them, then I'm all ears. I can see the "bennies" of having a whole bunch of servers on ONE box, but it's very difficult to get them to the next step: $30K for TCP/IP for VM, which we would need. And then that 2GB limit hits me square in the face again.

To answer your question about the 1.8: YES, when I use ANY piece of software, or do an ls, for example, it only shows 1.8GB, while on the WinNT machine where the file sits, it shows 16GB. It didn't matter which piece of software or what command I was using. I don't think I would see this if I were using Perl in a Win32 arena, but with all of the troubles I had pushing huge amounts of SQL data through the CGI interface, I had to abandon Win32 for the more stable and less buggy Linux, and then I ran into the 2GB limit. Looks like WE have to wait until the Enterprise edition gets the newer kernel, agreed? But I HATE waiting... Call me impatient...
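Maybe a direct check from Perl would settle whether it's the tools or the system; something like this sketch (path made up, and I gather it needs a large-file-enabled perl to give a straight answer on the big files):

  use strict;
  use warnings;

  # Hypothetical path; substitute the real file on the /mnt share.
  my $file = '/mnt/winnt/bigdump.tar';

  # -s returns the size via stat(). Without large file support the
  # stat() itself should fail on a >2GB file (EOVERFLOW) rather than
  # quietly reporting a wrong number.
  my $size = -s $file;
  defined $size
      ? printf "%s: %d bytes (%.2f GiB)\n", $file, $size, $size / 2**30
      : die "stat $file failed: $!";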

Thanks...



--
Rich Parker


--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


