Another issue that is easy to overlook is indexing. If you have many
indexes and the database keeps growing, expect inserts into a
well-populated database to be significantly slower than inserts into an
empty one, since every index has to be updated on each insert.
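
If the tables are MyISAM, one way to take some of the sting out of a
bulk load is to disable the non-unique indexes first and rebuild them in
a single pass afterwards. A minimal sketch (the table name "mytable" is
just a placeholder):

    ALTER TABLE mytable DISABLE KEYS;
    -- ... run the bulk inserts here ...
    ALTER TABLE mytable ENABLE KEYS;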

Since you were testing on a local machine first and over the network
later, this could well be what you are seeing.

niclas

On Saturday 26 October 2002 13:11, [EMAIL PROTECTED] wrote:
> > It would be helpful to know how much data you are trying to pump across.
> > If you are having trouble finishing in under 30 seconds over a 100Mbit
> > connection, it must be a lot of data.
> > The first thing to check is to make sure you have your connections set
> > to full duplex. Even if there are only two machines talking, you could
> > be getting a lot of collisions, especially if you are transferring data
> > in small amounts.
>
> Oops, my bad.  The inserts are batched up into chunks of 1000 rows (it
> was 100, but I increased the number to see if it cured the problem -
> it doesn't, though the insertions are faster, as I would have
> expected), and there are up to 100,000 of them.
>
> Agreed about the full/half-duplex issue. I believe they're both set to
> full, but I will check.
>
> > Which brings me to the next suggestion. If you are doing many individual
> > sql inserts you may not be using the network efficiently. You want to be
> > able to fill multiple network packets during your transfer, taking
> > advantage of what some refer to as "burst" mode. You should be using
> > this style of insert:
> > INSERT INTO tbl (field1,field2,...) VALUES
> > (val1,val2,...),(val1,val2,...),(val1,val2,...),(val1,val2,...),...
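> >
> > For example, assuming a hypothetical table "log (id INT, msg
> > VARCHAR(32))", a single statement can carry many rows in one round
> > trip:
> >
> > INSERT INTO log (id,msg) VALUES (1,'a'),(2,'b'),(3,'c');
> >
> > The whole batch then travels in as few packets as max_allowed_packet
> > permits, instead of one small packet per row.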
> >
> > If you are still having trouble, you may want to rethink how you are
> > going about transferring the data. Perhaps create an import file
> > locally and transfer it over to the database machine. You then have
> > a program on the db machine that processes the files as they arrive.
> > In this scenario you don't have any timing issues since you are
> > essentially creating a queue that is being processed on the db machine.
> > Once a file is processed it's deleted and then the program checks for
> > any other files to process. This also allows you to take the database
> > down for maintenance if you have to. Lots of benefits to this setup.
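> >
> > A minimal sketch of the processing side, assuming the batches are
> > written out as tab-separated text files (the table and file names
> > here are placeholders):
> >
> > LOAD DATA INFILE '/var/spool/import/batch-0001.txt'
> > INTO TABLE mytable
> > FIELDS TERMINATED BY '\t'
> > LINES TERMINATED BY '\n';
> >
> > LOAD DATA INFILE also tends to be much faster than the equivalent
> > INSERT statements, since the server can batch the index updates.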
>
> That's true; I hadn't thought that option through completely. Using
> import files would help fix another problem with the current design
> that I've managed to produce in testing with lots of data (150,000
> inserts per 30 seconds). If the child process doesn't complete within
> 30 seconds, children back up on the server and eventually the HEAP
> table fills up. The system takes care of that automatically, but
> previously created children keep trying to use the full table and
> don't realise that another one is available. The design is a little
> flaky at that point.
>
>
> Paul Wilson
> Chime Communications


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
