Yesterday my wife visited a perfectly normal web page, and after
a few seconds it was replaced by a porn page.

I looked at the page's HTML source and found hundreds of links at the
bottom of the page that did not belong there. I called the publisher
of the page, and he determined that his server had been "hacked" and
the links added.

He is not technically inclined at all, and has no way to check his
pages other than opening each one in a browser and viewing the page
source. The site has thousands of pages; he runs it as a Jewish news
site, with no income.

I was thinking I could write a program that downloads each of his web
pages with wget or lynx and scans them for injected links, but I don't
want to start writing code if it has already been done.
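
In case nothing turns up, here is the sort of thing I have in mind: a
short Python script that fetches a list of page URLs, counts the <a
href> links on each page, and flags pages with an unusually high link
count or with links pointing off-site. The URL-list file, the
allow-list of hosts, and the 200-link threshold are placeholders of
mine, not anything from his actual setup:

#!/usr/bin/env python3
# Sketch: fetch each URL from a list, collect its <a href> links, and
# flag pages with too many links or links to hosts outside the site.
# The threshold and the allow-list below are guesses to be tuned.
import sys
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def check(url, allowed_hosts, max_links=200):
    with urllib.request.urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    # Relative links have an empty netloc and count as on-site.
    foreign = [h for h in parser.hrefs
               if urlparse(h).netloc not in ("",) + allowed_hosts]
    if len(parser.hrefs) > max_links or foreign:
        print(f"SUSPECT {url}: {len(parser.hrefs)} links, "
              f"{len(foreign)} off-site")
        for h in foreign[:10]:
            print("   ", h)

if __name__ == "__main__":
    # usage: checklinks.py urls.txt example.com
    urls_file, site_host = sys.argv[1], sys.argv[2]
    for line in open(urls_file):
        url = line.strip()
        if url:
            check(url, (site_host, "www." + site_host))

The list of URLs could presumably come from his sitemap, or from a
recursive wget crawl of the site; run nightly from cron, it would at
least tell him which pages to look at.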

Any suggestions? 

Thanks in advance, Geoff.

-- 
Geoffrey S. Mendelson, Jerusalem, Israel [EMAIL PROTECTED]  N3OWJ/4X1GM
IL Voice: (07)-7424-1667 U.S. Voice: 1-215-821-1838 
Visit my 'blog at http://geoffstechno.livejournal.com/
