I am trying to download a page from a specific site.  
The page URL is: http://stockcharts.com/charts/adjusthist.html

When you open the site in a browser, say Firefox, the top part of the
page (above the dotted line) appears first. Within a few seconds the
bottom part appears, and that is what contains the data I need.
I could adjust the controls at the top to bring up a different set of
data, but for simplicity the default page as it loads - with the data
- is all I need.

The problem is that the site uses some JavaScript (AJAX or something)
that fetches the data after the initial page load. If you download
the page with wget, you get only the 'top' part with no history. Yet
if you wait a moment in Firefox, the full page appears - save it then
and voila, the data is in the HTML.
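
For reference, the plain fetch I am doing is roughly:

  wget -O adjusthist.html http://stockcharts.com/charts/adjusthist.html

and the saved adjusthist.html contains only the form at the top, with
none of the history table.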

Up to now I have been using Firefox with a scripting tool that pauses
until the page has loaded and then sends the keystrokes to save it.
This works in testing, but in the real world the window loses focus -
the keystrokes fire while the PC is being used for something else.

It would be nice to have wget call the page, pause, and then save it.
I see retries, timeouts, and other time-related options, but nothing
that would just pause. Another thought would be issuing a POST request
of some sort and then sleeping before I issue the next wget call.
Unfortunately, though I use wget a lot, I haven't ventured into the
POST side of the downloads.
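
Something like this is what I have in mind - purely a sketch, since I
have not inspected the form and the field name below ('symbol') is a
guess:

  # fetch the page once, keeping any cookies the server sets
  wget --save-cookies cookies.txt -O form.html \
      http://stockcharts.com/charts/adjusthist.html

  # give the backend a moment, as the browser seems to need
  sleep 10

  # hypothetical: post the form fields back and save the result
  wget --load-cookies cookies.txt --post-data='symbol=SPY' \
      -O adjusthist-data.html \
      http://stockcharts.com/charts/adjusthist.html

Whether the server would actually return the history this way I don't
know - that is part of what I am asking.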

Any suggestions or comments would be appreciated!
cheers
-- 
  Jeff Holicky
  [EMAIL PROTECTED]
