Re: optimizing inserts

2004-05-16 Thread Jigal van Hemert
> data points.  I don't want duplicate entries, mostly due to sections of
> the log accidentally being uploaded twice.  I am currently doing a

Ok, so it is EXACTLY the same data that might be inserted twice?

- Make a UNIQUE index for the relevant column(s) that uniquely identify a
record.
- Use INSERT IGNORE 

This way you will have no duplicates and no inserts will fail (MySQL will
simply ignore duplicate entries).
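
For example, with Ron's table, assuming TDate alone identifies a point
(the coordinates and timestamp below are made up):

ALTER TABLE GPSData ADD UNIQUE KEY TDate (TDate);

INSERT IGNORE INTO GPSData (Lat, Lon, TDate)
VALUES (47.60621, -122.33207, '2004-05-16 01:15:00');

Rows whose TDate is already in the table are skipped silently instead of
failing with a duplicate-key error.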

Hope this helps a bit.

Regards, Jigal.





Re: optimizing inserts

2004-05-16 Thread beacker
Ron Gilbert [EMAIL PROTECTED] writes:
> It currently takes 15 or 20 minutes to run through a 10K to 20K point GPS
> track log.  This seems too long to me.  I took out the INSERTs just to
> make sure it wasn't my PHP scripts, and they run in a few seconds
> without the MySQL calls.

Doing a lot of inserts in this manner introduces considerable delays for
each one as the information is sent across the connection, processed, and
the result returned.  Have you considered using LOAD DATA INFILE...?
You can write the entire data set to a temporary file and then load that
file with the LOAD DATA command.  To make sure you haven't already loaded
the data set, you could do a single SELECT on the first element of the
data set, loading only if no rows are returned.
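
A sketch of that approach (the file name and timestamp are illustrative;
LOAD DATA expects tab-separated columns by default):

-- Has the first point of this track already been loaded?
SELECT ID FROM GPSData WHERE TDate = '2004-05-16 01:15:00' LIMIT 1;

-- If that returns no rows, bulk-load the whole file:
LOAD DATA INFILE '/tmp/gps-track.txt'
INTO TABLE GPSData (Lat, Lon, TDate);
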
 Brad Eacker ([EMAIL PROTECTED])






Re: optimizing inserts

2004-05-16 Thread Bill Allaire
On May 16, 2004, at 1:15 AM, Ron Gilbert wrote:
> I have a table that is:
> CREATE TABLE GPSData (
>   ID int(10) unsigned NOT NULL auto_increment,
>   Lat decimal(9,5) default '0.0',
>   Lon decimal(9,5) default '0.0',
>   TDate datetime default NULL,
>   PRIMARY KEY (ID),
>   UNIQUE KEY ID (ID),
>   KEY ID_2 (ID)
> ) TYPE=MyISAM;

Why do you have a unique key on 'id' when you also have a primary key 
on 'id'? Primary keys are unique.
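
If the extra indexes are unintentional, dropping them should also speed up
inserts, since every secondary index must be updated on each INSERT.  A
sketch, using the index names from the CREATE TABLE above:

ALTER TABLE GPSData DROP INDEX ID, DROP INDEX ID_2;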



optimizing inserts

2004-05-15 Thread Ron Gilbert
I have a table that is:
CREATE TABLE GPSData (
  ID int(10) unsigned NOT NULL auto_increment,
  Lat decimal(9,5) default '0.0',
  Lon decimal(9,5) default '0.0',
  TDate datetime default NULL,
  PRIMARY KEY (ID),
  UNIQUE KEY ID (ID),
  KEY ID_2 (ID)
) TYPE=MyISAM;
When I insert a GPS log into the table, it is around 10,000 to 20,000 
data points.  I don't want duplicate entries, mostly due to sections of 
the log accidentally being uploaded twice.  I am currently doing a 
SELECT on the Lat, Lon and TDate to see if it already exists, and if 
not, I do an INSERT.  Is this the fastest way?  I realize that I
probably only need to check the TDate, not the Lat and Lon.  Is it better
to make TDate UNIQUE and let the INSERT fail?  Should TDate be an
INDEX?  Would it be faster if TDate were stored as a UNIX timestamp
rather than MySQL time?

It currently takes 15 or 20 minutes to run through a 10K to 20K point GPS
track log.  This seems too long to me.  I took out the INSERTs just to
make sure it wasn't my PHP scripts, and they run in a few seconds
without the MySQL calls.

Ron



optimizing inserts

2002-09-23 Thread Jamie Beu


Where might I find information about optimizing inserts to MySQL tables?

I've purchased 2 books, and so far, the only optimizations I see concern
reads, not writes.

The only advice I've received so far on optimizing writes is to have my
application dump the records to a text file first and do periodic LOAD
DATA INFILE commands, so as to bulk-insert records.

Are there any websites/webpages or books that detail optimizing writes to
MySQL databases (aside from adding more memory, a faster CPU, or a faster
hard drive)?

Thanx,

Jamie Beu
Lockheed-Martin Information Systems
Software Engineer
CTIA
(407) 306-2484

"The true soldier fights not because he hates what is in
 front of him, but because he loves what is behind him."
 - G. K. Chesterton, Illustrated London News, 1/14/11







RE: optimizing inserts

2002-09-23 Thread Derek Scruggs

> Where might I find information about optimizing inserts to MySQL tables?

In Paul DuBois's excellent "MySQL" from New Riders, there is a section about
loading data efficiently in which he talks a little about inserts. In a
nutshell: LOAD DATA is faster than INSERT; the fewer the indexes, the
faster; the shorter the statements, the faster; let MySQL handle default
values... and a few other things.

If you're using INSERT, the syntax INSERT INTO table VALUES(...),(...),...
is preferred because it allows you to batch multiple inserts.
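
For comparison, the two forms side by side (the table and columns here are
made up):

-- one row per statement, the SET form:
INSERT INTO points SET x = 1, y = 2;

-- several rows batched into one statement, the VALUES form:
INSERT INTO points (x, y) VALUES (1, 2), (3, 4), (5, 6);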

Which leads me to a follow-on question for Paul if he's reading. If batching
is not an option, is this syntax still faster than INSERT INTO table SET
col=value,... ?

I'm working on an OO app and would like to use the objects for batch imports
& exports. I know this will be slower, of course, but the table
relationships are rather complex, the objects are stable and debugged, and
I'd rather not introduce a new uncertainty, especially since import/export
will be used rarely. That said, the objects' insert methods use the SET
col=value syntax and I'm wondering if I should rewrite them to use the
VALUES(...) syntax. I'd rather not do that if there's no performance
benefit.

-Derek

PS This book is my bible for MySQL and I highly recommend it.






RE: optimizing inserts

2002-09-23 Thread Paul DuBois

At 17:25 -0600 9/23/02, Derek Scruggs wrote:
> > Where might I find information about optimizing inserts to MySQL tables?
>
> In Paul DuBois's excellent "MySQL" from New Riders, there is a section
> about loading data efficiently in which he talks a little about inserts.
> In a nutshell: LOAD DATA is faster than INSERT; the fewer the indexes,
> the faster; the shorter the statements, the faster; let MySQL handle
> default values... and a few other things.
>
> If you're using INSERT, the syntax INSERT INTO table VALUES(...),(...),...
> is preferred because it allows you to batch multiple inserts.
>
> Which leads me to a follow-on question for Paul if he's reading. If
> batching is not an option, is this syntax still faster than INSERT INTO
> table SET col=value,... ?
>
> I'm working on an OO app and would like to use the objects for batch
> imports & exports. I know this will be slower, of course, but the table
> relationships are rather complex, the objects are stable and debugged,
> and I'd rather not introduce a new uncertainty, especially since
> import/export will be used rarely. That said, the objects' insert methods
> use the SET col=value syntax and I'm wondering if I should rewrite them
> to use the VALUES(...) syntax. I'd rather not do that if there's no
> performance benefit.

I am not certain, but I suspect the only difference really lies in
time to parse the different forms of the statement.  I say this because,
parsing time aside, the actions associated internally with each form
of the statement are extremely similar.

Upshot: negligible difference, if any.


> -Derek
>
> PS This book is my bible for MySQL and I highly recommend it.

