Jeremy Zawodny writes:

Can you, or anyone, comment on the practicality of doing so? I estimate 10,000 to 30,000 records per web server, per day, using 3 remote web servers. The number of web servers is not likely to grow beyond 12.

That shouldn't be a problem at all.  I know of much larger instances
(millions of records) doing the same on similar (or less) hardware.

Jeremy - good to hear.


Now that I know this is technically feasible, which of the following solutions would be the cleanest and most efficient from a management perspective:

1) Use MySQL replication to maintain mirror DBs on the DEPOT server.
   A job would regularly run on DEPOT to consolidate all the data
   into one DB that an external system can query/report on.
2) Do not use MySQL replication; instead, have a job on DEPOT
   regularly pull from each web server and consolidate all the data
   into one DB that an external system can query/report on.
3) Same as #2, except the web servers would *push* to DEPOT
   instead of being *pulled* from.


As another reader commented, #1 could be difficult to manage because of the number of DBs involved (N*2). Plus, DEPOT is already a master to all the web servers for read-only data.

#2 and #3 seem more appropriate, as long as the jobs are FAST and easy to manage. Would Perl be a good candidate for this? Since the web servers are remote, the performance of DEPOT updates matters, which is something replication was good at.
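To make #2 concrete, here is a rough sketch of what such a Perl/DBI pull-and-consolidate job might look like. The host names, the database names (a `weblog` DB on each web server, a `consolidated` DB on DEPOT), the `records` table and its columns, and the per-host `source_id` bookkeeping are all placeholder assumptions for illustration, not a real schema:

#!/usr/bin/perl
# Hypothetical pull job for option #2: copy new rows from each
# remote web server into the consolidated DB on DEPOT.
# All database/table/column names below are assumed placeholders.

use strict;
use warnings;
use DBI;

my @web_servers = ('web1.example.com', 'web2.example.com', 'web3.example.com');
my ($user, $pass) = ('collector', 'secret');

# Connect to the consolidated database locally on DEPOT.
my $depot = DBI->connect('DBI:mysql:database=consolidated;host=localhost',
                         $user, $pass, { RaiseError => 1 });

for my $host (@web_servers) {
    my $remote = DBI->connect("DBI:mysql:database=weblog;host=$host",
                              $user, $pass, { RaiseError => 1 });

    # Only pull rows we have not already copied, tracked per host
    # via the highest source_id seen so far.
    my ($last_id) = $depot->selectrow_array(
        'SELECT MAX(source_id) FROM records WHERE source_host = ?',
        undef, $host);
    $last_id ||= 0;

    my $pull = $remote->prepare(
        'SELECT id, logged_at, data FROM records WHERE id > ?');
    $pull->execute($last_id);

    my $push = $depot->prepare(
        'INSERT INTO records (source_host, source_id, logged_at, data)
         VALUES (?, ?, ?, ?)');

    # Copy the new rows, tagging each with its originating server.
    while (my ($id, $logged_at, $data) = $pull->fetchrow_array) {
        $push->execute($host, $id, $logged_at, $data);
    }

    $remote->disconnect;
}

$depot->disconnect;

#3 would be essentially the same script run from each web server with the two connections reversed; the main operational difference is where the credentials and the cron entry live.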

It's nice to have different solutions to this puzzle. Choosing the most elegant solution is tricky!

--
../mk




