On May 16, 2004, at 1:15 AM, Ron Gilbert wrote:
I have a table that is:
CREATE TABLE GPSData (
  ID int(10) unsigned NOT NULL auto_increment,
  Lat decimal(9,5) default '0.0',
  Lon decimal(9,5) default '0.0',
  TDate datetime default NULL,
  PRIMARY KEY (ID),
  UNIQUE KEY ID (ID)
);
Ron Gilbert <[EMAIL PROTECTED]> writes:
>It currently takes 15 or 20 minutes to run through 10K to 20K GPS track
>logs. This seems too long to me. I took out the INSERTs just to
>make sure it wasn't my PHP scripts, and they run in a few seconds
>without the MySQL calls.
Doing a lot of inserts one statement at a time is slow. Batch them: use
multi-row INSERT syntax, or wrap the whole load in a single transaction so
the commit overhead is paid once rather than once per row.
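A self-contained sketch of that batching advice. I'm using Python's sqlite3 here only so the example runs anywhere; the same principle applies to MySQL, where you would wrap the loop in START TRANSACTION / COMMIT or use multi-row INSERT. The table and row counts are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GPSData (ID INTEGER PRIMARY KEY, Lat REAL, Lon REAL)")

# A synthetic 10K-point track, standing in for the parsed GPS log.
rows = [(i, 40.0 + i * 0.00001, -105.0) for i in range(10000)]

# One autocommitted statement per row is the slow path: every row pays
# the full commit cost. Batching all rows into a single transaction
# does that work once.
with conn:  # opens a transaction, commits on exit
    conn.executemany("INSERT INTO GPSData (ID, Lat, Lon) VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM GPSData").fetchone()[0]
print(count)  # 10000
```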
> data points. I don't want duplicate entries, mostly due to sections of
> the log accidentally being uploaded twice. I am currently doing a
> check for each point to see whether it is already in the table.
Ok, so it is EXACTLY the same data that might be inserted twice?
- Make a UNIQUE index for the relevant column(s) that uniquely identify a
record.
- Use "INSERT IGNORE" so rows that would duplicate the unique key are
silently skipped instead of raising an error.
At 17:25 -0600 9/23/02, Derek Scruggs wrote:
> Where might I find information about optimizing inserts to MySQL tables.
In Paul DuBois's excellent "MySQL" from New Riders, there is a section about
loading data efficiently in which he talks a little about inserts. In a
nutshell, LOAD DATA is faster than INSERT, and the fewer the indexes a table
has during the load, the faster the rows go in.
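A sketch of the "fewer indexes during the load" half of that advice, again in self-contained sqlite3: drop the secondary index, bulk-load, then rebuild the index once at the end. On MySQL the analogous moves are ALTER TABLE ... DISABLE KEYS / ENABLE KEYS (MyISAM) or simply creating indexes after a LOAD DATA INFILE; the table and data below are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GPSData (Lat REAL, Lon REAL, TDate TEXT)")
conn.execute("CREATE INDEX idx_tdate ON GPSData (TDate)")

rows = [(40.0 + i * 1e-5, -105.0, f"2004-05-16 01:{i % 60:02d}:00")
        for i in range(5000)]

# Drop the secondary index, bulk-load, then rebuild it: one index build
# over the full table is cheaper than 5000 incremental index updates.
conn.execute("DROP INDEX idx_tdate")
with conn:
    conn.executemany("INSERT INTO GPSData VALUES (?, ?, ?)", rows)
conn.execute("CREATE INDEX idx_tdate ON GPSData (TDate)")

count = conn.execute("SELECT COUNT(*) FROM GPSData").fetchone()[0]
print(count)  # 5000
```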