The method I use is to:
1. Perform a periodic backup of the entire Web site, including SQL
dumps of any databases driving it.
2. Download the backup files to a PC.
3. Unpack them (the site files into a subdirectory, and the SQL dumps
into a fresh DB instance, respectively).
4. Run 'diff' between the unpacked files and those of the previous
backup.

For regular files, use 'diff'.  For comparing two MySQL DBs, I use a
Python script which I wrote.
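
That script isn't attached here, but the general approach is easy to
sketch.  The following is a minimal illustration, not the actual
script: it assumes the fresh dump and the previous backup have been
imported as two separate databases on the same local MySQL server,
that mysqldump, mysql and diff are on the PATH, and that the database
names, user and password given on the command line are placeholders.

#!/usr/bin/env python
# Sketch of a per-table comparison of two local MySQL databases.
# This is NOT the script mentioned above; OLD_DB, NEW_DB, USER and
# PASSWORD are placeholders supplied on the command line.
import os
import subprocess
import sys
import tempfile

def list_tables(db, user, password):
    # Ask the mysql client for the table names; -N drops the header row.
    out = subprocess.check_output(
        ['mysql', '-N', '-u', user, '-p' + password,
         '-e', 'SHOW TABLES', db])
    return out.decode().split()

def dump_table(db, table, user, password):
    # One INSERT per row (--skip-extended-insert) so that 'diff' points
    # at individual rows instead of one huge line per table.
    out = tempfile.NamedTemporaryFile(suffix='.sql', delete=False)
    subprocess.check_call(
        ['mysqldump', '--skip-extended-insert', '--skip-comments',
         '-u', user, '-p' + password, db, table],
        stdout=out)
    out.close()
    return out.name

def main():
    if len(sys.argv) != 5:
        sys.exit('usage: dbdiff.py OLD_DB NEW_DB USER PASSWORD')
    old_db, new_db, user, password = sys.argv[1:5]
    old_tables = set(list_tables(old_db, user, password))
    new_tables = set(list_tables(new_db, user, password))
    for table in sorted(old_tables ^ new_tables):
        print('Table %s exists in only one of the databases' % table)
    for table in sorted(old_tables & new_tables):
        old_dump = dump_table(old_db, table, user, password)
        new_dump = dump_table(new_db, table, user, password)
        # diff exits with status 1 when the files differ.
        if subprocess.call(['diff', '-u', old_dump, new_dump]) != 0:
            print('Table %s differs' % table)
        os.remove(old_dump)
        os.remove(new_dump)

if __name__ == '__main__':
    main()

A dump-and-diff like this is crude, but it is usually enough to spot
injected rows; a smarter script can skip tables that legitimately
change on every visit (session tables, hit counters and the like).
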
                                           --- Omer

On Mon, 2008-01-28 at 09:03 +0200, Geoffrey S. Mendelson wrote:
> Yesterday my wife went to a perfectly normal web page, and after
> a few seconds a porn page replaced it.
> 
> I looked at the HTML page source and found that at the bottom of the
> page were hundreds of links, which did not belong there. I called
> the publisher of the page, and he determined that his server had been
> "hacked" and the links added. 
> 
> He is not technically inclined at all, and does not have the ability
> to check his pages without going to each one in a browser and looking
> at the page source. He has thousands of pages and runs the site as
> a Jewish news site, with no income.
> 
> I was thinking that I could write a program that scans each of his
> web pages, using wget or lynx to download them, but I don't want to
> start writing code if it has already been done.
> 
> Any suggestions? 

-- 
MS-Windows is the Pal-Kal of the PC world.
My own blog is at http://www.zak.co.il/tddpirate/

My opinions, as expressed in this E-mail message, are mine alone.
They do not represent the official policy of any organization with which
I may be affiliated in any way.
WARNING TO SPAMMERS:  at http://www.zak.co.il/spamwarning.html

