Rahul wrote:
> Ave,
>
> It's definitely not live data, so that is not a problem at all. But I'm
> not sure I understand your method very well.
>
> I do understand getting data from both the existing DBF and the multiple
> mySQL tables into a temporary mySQL table. But if I do go ahead and do
> that, I guess I could write a 'delete-duplicates' kind of code that
> deletes all rows in that temporary table which are duplicates, and then
> add the leftover into the DBF.
>
> Not sure how this sounds, or how close this is to what you were saying.
> And not even sure how to implement this.
Do you need to update this more than once a day? Is there a date field in all the tables? If daily is enough, query for the previous day's records and run the job once a night at midnight via cron. If it has to happen more often than that, there are other options.

One idea is to store the name of the source table along with that row's primary key. Combined, those two fields give you a unique identifier: keep them in your DBF and check for existence before you insert (there is a rough sketch of this further down). The drawback is that you are still querying the entire set of data every run, which is very inefficient.

A better solution would be to add a column to each MySQL table, maybe call it "processed", with a default value of 0, and set it to 1 as each row is copied over. Then you only ever query records where processed = 0 (also sketched below). Of course, this will not work if you cannot modify the MySQL tables.

Best of luck,
Brad
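For what it's worth, here is one way the temporary-table de-duplication Rahul describes in the quoted message could be put together. It is only a sketch under invented names (orders_a, orders_b, data.dbf, a key column ID assumed unique across both source tables), using PDO for MySQL and the PECL dbase extension for the DBF:

<?php
// Sketch only: all table, file and column names below are invented.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$dbf = dbase_open('/path/to/data.dbf', 2);              // 2 = read/write

// 1. Collect the candidate rows from the MySQL tables into a temporary table.
$pdo->exec("CREATE TEMPORARY TABLE tmp_merge (id INT, name VARCHAR(64), amount DECIMAL(10,2))");
$pdo->exec("INSERT INTO tmp_merge SELECT id, name, amount FROM orders_a");
$pdo->exec("INSERT INTO tmp_merge SELECT id, name, amount FROM orders_b");

// 2. Load the keys that are already in the DBF into a second temporary table.
$pdo->exec("CREATE TEMPORARY TABLE tmp_existing (id INT)");
$ins   = $pdo->prepare("INSERT INTO tmp_existing VALUES (?)");
$total = dbase_numrecords($dbf);
for ($i = 1; $i <= $total; $i++) {
    $rec = dbase_get_record_with_names($dbf, $i);
    $ins->execute(array((int) $rec['ID']));
}

// 3. Delete the duplicates: rows whose key already exists in the DBF.
$pdo->exec("DELETE m FROM tmp_merge m JOIN tmp_existing e ON m.id = e.id");

// 4. Append the leftover rows to the DBF.
foreach ($pdo->query("SELECT id, name, amount FROM tmp_merge") as $row) {
    dbase_add_record($dbf, array($row['id'], $row['name'], $row['amount']));
}
dbase_close($dbf);

If the same ID value can occur in more than one source table, this falls apart; the composite identifier in the next sketch avoids that collision.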
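Here is a rough sketch of the first suggestion, storing the source table name plus that table's primary key in the DBF and checking for existence before each insert. The table, file and field names (orders_a, orders_b, data.dbf, SRC_TABLE, SRC_ID) are again made up. Building the lookup in memory means the DBF is scanned once rather than once per candidate row, but as noted above the full MySQL result set still gets pulled on every run:

<?php
// Sketch only: assumes the DBF's first two fields are SRC_TABLE and SRC_ID.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$dbf = dbase_open('/path/to/data.dbf', 2);              // 2 = read/write

// Build an in-memory lookup of the identifiers already stored in the DBF.
$seen  = array();
$total = dbase_numrecords($dbf);
for ($i = 1; $i <= $total; $i++) {
    $rec = dbase_get_record_with_names($dbf, $i);
    $seen[trim($rec['SRC_TABLE']) . ':' . trim($rec['SRC_ID'])] = true;
}

// Pull every row from each source table and insert only those whose
// composite identifier (table name + primary key) is not in the DBF yet.
foreach (array('orders_a', 'orders_b') as $table) {
    foreach ($pdo->query("SELECT id, name, amount FROM $table") as $row) {
        $key = $table . ':' . $row['id'];
        if (isset($seen[$key])) {
            continue;                       // already copied across
        }
        dbase_add_record($dbf, array($table, $row['id'], $row['name'], $row['amount']));
        $seen[$key] = true;
    }
}
dbase_close($dbf);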
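And a sketch of the processed-flag approach, which only touches rows that have not been copied yet. The same invented names apply, and the ALTER TABLE is a one-time change per source table (shown as a comment):

<?php
// Sketch only: assumes orders_a has gained a "processed" flag column, e.g.
//   ALTER TABLE orders_a ADD COLUMN processed TINYINT NOT NULL DEFAULT 0;
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$dbf = dbase_open('/path/to/data.dbf', 2);              // 2 = read/write

// Only unprocessed rows are fetched, so the query stays small no matter
// how large the table grows.
$rows = $pdo->query("SELECT id, name, amount FROM orders_a WHERE processed = 0");
$mark = $pdo->prepare("UPDATE orders_a SET processed = 1 WHERE id = ?");

foreach ($rows as $row) {
    // Append the row to the DBF, then flag it so it is never picked up again.
    dbase_add_record($dbf, array('orders_a', $row['id'], $row['name'], $row['amount']));
    $mark->execute(array($row['id']));
}
dbase_close($dbf);

Dropped into a nightly cron job, this would also cover the once-a-day case mentioned at the top of the reply.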