Do you 'refresh' your tables before each test? If so, is it possible that
you're writing over the same disk space each time (which would point to a
drive surface problem)? Is it possible that the larger runs of inserts are
using more RAM than the smaller tests, and you're hitting a bad memory
chip?

Are you running the tests at full speed, and are your drives set up to
handle a high volume of writes? (Are they striped, or are you writing to a
single disk?)
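
Since CHECK TABLE comes back clean while a plain SELECT still returns error
127 (which, if memory serves, perror reports as "Record-file is crashed"),
it may be worth forcing a thorough check and a rebuild of the big table.
A rough sketch, assuming the table is MyISAM (the 3.23 default); the name
import_data is only a placeholder for your real table:

    CHECK TABLE import_data EXTENDED;   # thorough check of data and index files
    REPAIR TABLE import_data;           # rebuild the .MYD and .MYI files

(Drop the EXTENDED option if your server doesn't accept it.) The same thing
can be done offline with 'myisamchk -r' against the table's .MYI file in the
database directory, but only with mysqld shut down or the table flushed and
locked, so the two don't trample each other.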

----- Original Message -----
From: "Larry Hotchkiss" <[EMAIL PROTECTED]>
To: "'MySQL List'" <[EMAIL PROTECTED]>
Sent: Monday, May 21, 2001 10:05 AM
Subject: table corruption during large import.


> I am having a heck of a time here. I'm running RH 7.0, kernel 2.2.16, on
> a PII 300 with 128 MB of RAM, MySQL 3.23.32, PHP 4.0.4. I have a DB set
> up that has 4 tables. Three of the four only have a couple thousand
> records, but one table will have somewhere between 80k and maybe as many
> as a million records.
>
> The DB is a backend for a dynamic site and I have scripts to allow the
> uploading of files. These files are CSV and are handled by PHP scripts.
> Every night I plan on having a cron job run a script to copy the files
> to a working directory, delete the originals, and then process the files
> one at a time, line by line, inserting them into the table in question.
> During testing I am using a data file that has 481 records. To mimic a
> real scenario, I simply copy the file, giving it a bunch of new names,
> but the content is identical. Then I run my script to import the data.
> Everything seems fine up to about 200k records. After that things seem
> to go awry. At about 400k records the import runs fine, no errors, and
> running CHECK TABLE shows no problems; however, if I try to do a select,
> I get an error 127. If I up the import to over 1 million records for
> testing purposes, my script actually aborts with error 127 and CHECK
> TABLE reports corruption.
>
> I thought perhaps it was a bug, so I updated via the RH RPM to MySQL
> 3.23.36. I still have the same problem, and my performance on small
> tables seems greatly reduced. Does anyone have any idea why, at moderate
> record counts, I get this error even though CHECK TABLE reports no
> problems? And why does my script abort with a corrupt table when I break
> 1 million records?
>
> P.S. I am nowhere close to OS file limitations; at 1.1 million records,
> my table is about 60 MB.
>
> ---------------------------------
> Larry Hotchkiss
>
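
One more thought on the import itself: inserting the CSV rows one at a time
from PHP means hundreds of thousands of separate INSERT statements per run.
It might be worth letting the server parse the files instead with LOAD DATA
INFILE, which is much faster for bulk loads. Only a sketch, with placeholder
file and table names, and assuming plain comma-separated, newline-terminated
rows:

    LOAD DATA INFILE '/tmp/working/datafile.csv'
        INTO TABLE import_data
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n';

The file has to be readable by the user mysqld runs as (or use LOAD DATA
LOCAL INFILE from the client). The nightly cron script can still do the
copy/delete housekeeping and just loop over the files, issuing one LOAD
DATA per file.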


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
