I have a table defined as follows:

CREATE TABLE GPSData (
  ID int(10) unsigned NOT NULL auto_increment,
  Lat decimal(9,5) default '0.00000',
  Lon decimal(9,5) default '0.00000',
  TDate datetime default NULL,
  PRIMARY KEY (ID),
  UNIQUE KEY ID (ID),
  KEY ID_2 (ID)
) TYPE=MyISAM;

When I insert a GPS log into the table, it contains around 10,000 to 20,000 data points. I don't want duplicate entries, which mostly come from sections of the log accidentally being uploaded twice. I am currently doing a SELECT on Lat, Lon, and TDate to see if the point already exists, and if not, I do an INSERT. Is this the fastest way? I realize I probably only need to check TDate, not Lat and Lon. Is it better to make TDate UNIQUE and let the INSERT fail? Should TDate have an INDEX? Would it be faster if TDate were stored as UNIXTIME rather than MySQL time?
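To be concrete, the UNIQUE-key alternative I have in mind would look something like the untested sketch below (the index name and sample values are made up, and I'm assuming INSERT IGNORE is the right way to let duplicates fail silently):

```sql
-- Untested sketch: enforce uniqueness on the timestamp column,
-- then rely on the index to reject re-uploaded log sections.
ALTER TABLE GPSData ADD UNIQUE KEY uniq_tdate (TDate);

-- Each point becomes a single statement, with no prior SELECT;
-- INSERT IGNORE turns the duplicate-key error into a silent skip.
INSERT IGNORE INTO GPSData (Lat, Lon, TDate)
VALUES (45.12345, -122.54321, '2004-06-01 12:00:00');
```

Is that roughly the approach people would recommend over SELECT-then-INSERT?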

It currently takes 15 or 20 minutes to run through a 10K to 20K point GPS track log. This seems too long to me. I took out the INSERTs just to make sure it wasn't my PHP scripts, and they run in a few seconds without the MySQL calls.

Ron



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
