David Allen wrote:
>
> Hi,
>
> Because of various political issues during the development of our web
> site, it has been suggested that the web server and php scripts should
> be connected to the database server via a shared 2Meg WAN connection. As
> I am rather green when it comes to setting up internet sites which use
> databases I would welcome the comments of some "power" users who have
> experience of setting up large web database sites. For example:
>
> * I presume it is essential to have as fast links as possible
> (100meg) between the two servers
> * Is it normal design philosophy to have the php scripts on the web
> server talk to the database server, or is there some other way in
> which it could be done? What do Amazon do?
>
> Sorry if these are rather basic questions, but I would like the view of
> as many users as possible, and if possible the names of the sites which
> work on that principle.
Having worked on a site that processes several million transactions
a day, I have developed a small set of rules to keep things running
quickly.
1) First make it run, then make it fast.
If you are developing a new site, the first problem is to get it
working correctly in the first place. You can spend a lot of time
optimizing portions of the application that make no difference, and
it is highly likely that the site will be redesigned as it becomes
popular.
2) Find the things you do most often, then optimize them.
What you really want to do is find where you are spending the most
time and speed that up. The hard part can be figuring out where the
time is spent.
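As a rough illustration of measuring where the time goes, here is a
minimal sketch (the step names and the sleep-based stand-ins for real
work are invented for the example, not taken from any real site):

```python
import time
from collections import defaultdict

# Accumulated wall-clock time per step; step names are illustrative only.
timings = defaultdict(float)

def timed(name, fn):
    """Run fn and add its elapsed time to the bucket for name."""
    start = time.perf_counter()
    result = fn()
    timings[name] += time.perf_counter() - start
    return result

def handle_request():
    # Stand-ins for real work: a database call and template rendering.
    timed("db_query", lambda: time.sleep(0.002))
    timed("render", lambda: time.sleep(0.0005))

for _ in range(50):
    handle_request()

# The biggest bucket is where optimization pays off.
for name, total in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total * 1000:.1f} ms")
```

Even coarse buckets like these usually point straight at the one or two
operations worth tuning.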
3) Don't do now what is better put off till later.
There is often a burning desire to enter information into the
database as soon as possible. At one point we were keeping website
statistics in the database and updating them on every page access.
Although the marketing people loved the ability to get "instant"
reports, they only ever actually ran reports at regular intervals.
Moving to postprocessing the Apache logs gave us the ability to
support 4-5 times the users on the same hardware.
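That log-postprocessing approach can be sketched roughly like this
(the log lines here are invented for the example; a real job would
read Apache's access_log and finish with one bulk INSERT per run):

```python
import re
from collections import Counter

# A few invented Apache common-log lines; a real job would read access_log.
LOG_LINES = [
    '10.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.php HTTP/1.0" 200 2326',
    '10.0.0.2 - - [10/Oct/2000:13:55:38 -0700] "GET /products.php HTTP/1.0" 200 512',
    '10.0.0.1 - - [10/Oct/2000:13:55:40 -0700] "GET /index.php HTTP/1.0" 200 2326',
]

# Pull the request path out of the quoted request line.
REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+)')

def page_counts(lines):
    """Tally hits per page from raw log lines."""
    counts = Counter()
    for line in lines:
        m = REQUEST_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# One periodic bulk write replaces thousands of per-request UPDATEs.
rows = sorted(page_counts(LOG_LINES).items())
print(rows)  # [('/index.php', 2), ('/products.php', 1)]
```

The point is that every page view costs only an appended log line, and
the database sees one batched write per reporting interval.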
4) Realtime is nice. Distribution is better.
You will often take a real performance hit running a database that has
lots of updates/deletes/inserts/selects going on at the same time. If
processing of updates can be put off until a slow time, it is better
to do that.
A design in which all users must access a single database is limited
by the performance of the hardware that database is running on.
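One way to put that deferral into practice is to queue updates in the
web tier and flush them in a single batch during a quiet period. This
is only a sketch: the table name is invented and a lambda stands in
for a real database cursor's executemany():

```python
import queue

# Requests enqueue updates cheaply; a periodic job flushes them in one batch.
pending = queue.Queue()

def record_hit(page):
    """Called on every request: O(1), no database round trip."""
    pending.put(page)

def flush(execute_many):
    """Called at a quiet time: drain the queue and write once."""
    batch = []
    while not pending.empty():
        batch.append((pending.get(),))
    if batch:
        # execute_many stands in for a DB cursor's executemany().
        execute_many("INSERT INTO hits (page) VALUES (%s)", batch)
    return len(batch)

for p in ["/a", "/b", "/a"]:
    record_hit(p)

written = flush(lambda sql, rows: None)
print(written)  # 3
```

The requests never block on the database, and the write load lands
when the hardware has cycles to spare.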
These are broad rules of thumb, and in a given circumstance they may
not apply, but I have found they work for me.
So for you: if you have to keep the database on the other side of the
WAN, try to run only actions where the transmission time is much less
than the time to perform the transaction. You may also want to think
about keeping a local database for more transient or time-critical
information.
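To make that rule of thumb concrete, here is a back-of-envelope check.
The 20 ms round trip and the 10% overhead threshold are assumptions
for illustration only; measure your own link:

```python
WAN_ROUND_TRIP_MS = 20.0  # assumed WAN latency; measure your own link

def worth_sending(query_ms, overhead_ratio=0.1):
    """Send over the WAN only if the round trip is a small fraction of the work."""
    return WAN_ROUND_TRIP_MS <= overhead_ratio * query_ms

print(worth_sending(1.0))    # a 1 ms key lookup is dominated by latency -> False
print(worth_sending(500.0))  # a 500 ms report query barely notices it -> True
```

By that measure, chatty per-page lookups belong on a local database,
while heavyweight batch queries can reasonably cross the WAN.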
--
Alvin Starr || voice: (416)585-9971
Interlink Connectivity || fax: (416)585-9974
[EMAIL PROTECTED] ||