>I agree that whatever the app is, having the ability to unplug the
>database (or for it to go down) and have there be a queue on the other
>machine is ideal.  This will mean that even if the db machine reboots
>for whatever reason in the middle of the night, nobody will ever know
>the difference.  This is good application design.

That might be a good idea for a database doing only inserts, but of
course it does not work when you also run SELECT statements against it.

> 30 seconds over a 100Mb line, that really is a lot of data for a
> single query. I can pull 200K rows with 5 columns (int, double,
> double, datetime, varchar(2)) in about 20 seconds.

The problem could also be the application you use to fetch the data.
Watch your processor and see if that is the bottleneck.

Next, look at your network configuration: try to FTP a file (over
100MB) from the DB server to the client, or the other way around, and
see what the transfer rates are.
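If you want to script that check rather than time it by hand, something
like the Python sketch below would do. The host, credentials and
filename are placeholders, not anything from the poster's setup; it just
times a raw FTP download and reports throughput:

    # Rough sketch: time a raw FTP transfer to estimate link throughput.
    import time
    from ftplib import FTP

    received = 0

    def count(chunk):
        global received
        received += len(chunk)   # discard the data, just count bytes

    ftp = FTP('db-server.example.com')   # hypothetical DB server host
    ftp.login('user', 'password')        # placeholder credentials
    start = time.time()
    ftp.retrbinary('RETR bigfile.dat', count)  # any file well over 100MB
    elapsed = time.time() - start
    ftp.quit()

    print('%.1f MB in %.1f s = %.2f MB/s'
          % (received / 1e6, elapsed, received / 1e6 / elapsed))

If the raw transfer rate is also poor, the problem is the network, not
the database.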

Jeroen

-----Original Message-----
From: Brent Baisley [mailto:brent@landover.com] 
Sent: Friday, October 25, 2002 9:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Performance over a network


It would be helpful to know how much data you are trying to pump across.
If you are having trouble finishing in under 30 seconds over a 100Mb
connection, it must be a lot of data.
The first thing to check is to make sure you have your connections set
to full duplex. Even if there are only two machines talking, you could
be getting a lot of collisions, especially if you are transferring data
in small amounts.

Which brings me to the next suggestion. If you are doing many individual
SQL inserts, you may not be using the network efficiently. You want to
be able to fill multiple network packets during your transfer, taking
advantage of what some refer to as "burst" mode. You should be using
this style of insert:

INSERT INTO tablename (field1,field2,...) VALUES
(val1,val2,...),(val1,val2,...),(val1,val2,...),...
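As a sketch of how you might batch rows up on the client side, here is
one way to build that statement in Python. The table name, column names
and sample rows are made up for illustration, and the quoting via repr()
assumes already-clean values; a real driver's parameter binding is safer:

    # Sketch: fold many rows into one multi-row INSERT statement.
    rows = [
        (1, 0.5, 1.5, '2002-10-25 09:20:00', 'ab'),
        (2, 0.7, 2.1, '2002-10-25 09:20:01', 'cd'),
        # ... hundreds more rows per batch ...
    ]

    def make_insert(table, columns, rows):
        # Assumes values are already escaped/validated.
        values = ",".join(
            "(" + ",".join(repr(v) for v in row) + ")" for row in rows
        )
        return "INSERT INTO %s (%s) VALUES %s" % (
            table, ",".join(columns), values)

    sql = make_insert("measurements",
                      ["id", "x", "y", "taken_at", "code"], rows)
    print(sql[:120], "...")

One statement carrying hundreds of rows fills the wire far better than
hundreds of one-row statements.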

If you are still having trouble, you may want to rethink how you are
going about transferring the data. Perhaps create an import file locally
and transfer the file over to the database machine. You then have a
program on the db machine that processes files as they arrive. In this
scenario you don't have any timing issues, since you are essentially
creating a queue that is processed on the db machine. Once a file is
processed it is deleted, and the program then checks for any other files
to process. This also allows you to take the database down for
maintenance if you have to. Lots of benefits to this setup.
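A minimal sketch of that queue processor might look like the following.
The directory name is arbitrary and load_file() is a placeholder for
however you feed the file to the database (LOAD DATA INFILE, or a batch
of the multi-row INSERTs above):

    # Sketch: poll a spool directory, load each file, then delete it.
    import glob
    import os
    import time

    INCOMING = "/var/spool/db-queue"   # where the other machine drops files

    def load_file(path):
        # Placeholder: feed the file to the database here.
        print("loading", path)

    while True:
        for path in sorted(glob.glob(os.path.join(INCOMING, "*.dat"))):
            load_file(path)
            os.remove(path)            # done; remove it from the queue
        time.sleep(5)                  # poll again shortly

Because files simply accumulate while the loader is stopped, nothing is
lost when the database goes down.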


On Thursday, October 24, 2002, at 08:45 PM, [EMAIL PROTECTED] 
wrote:

> *     Is there any explicit tuning which can be done to speed up
>       access over the network (short of adding gig-ethernet cards
>       which isn't likely)?
>
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


