Hi Jason,

This doesn't solve/answer all your questions, but it should get you on the
way. The following code will parse through the page that lists all the
courses and pull out the links to the subdirectory pages. The second part of
the script then reads those pages and saves them into a %courses/ directory
on your system, in the form %courses/stlucia.htm, %courses/californiacreek.htm
etc. The code also shows simple error handling using 'try, which is needed
because the Bowen link is dead.

This should give you a taster of how powerful the parse command is. If you
read through the parse docs at http://www.rebol.com/users/parintro.html it
should help you solve the rest of your requirements.
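As a quick taster (this snippet uses a made-up HTML string, not the actual
teetime.com.au markup), the same thru / copy / to idiom lifts the course name
out of a <TITLE> tag:

REBOL []
; made-up snippet, purely to illustrate the thru ... copy ... to ... idiom
html: {<HTML><HEAD><TITLE>St Lucia Golf Links</TITLE></HEAD></HTML>}
parse html [thru {<TITLE>} copy title to {</TITLE>} to end]
print title    ; prints: St Lucia Golf Links

copy grabs whatever sits between the two markers into the word title, exactly
as the script below does with the HREFs.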


REBOL []
site: http://www.teetime.com.au/club_pics/qld/
links: copy []
; find the links on the page
link-pattern: [
    some [skip thru {<A HREF="} copy link to {">} (append links link)]
]
parse read site link-pattern

; save the files in the form of %courses/stlucia.htm
if not exists? %courses/ [make-dir %courses/]

foreach link links [
    prin ["Grabbing" site/:link "..."]
    either error? try [
        ; strip the trailing "/" from the link and save as %courses/<name>.htm
        write rejoin [%courses/ copy/part link ((length? link) - 1) ".htm"]
            read site/:link
    ][
        print "Error"
    ][
        print "OK"
    ]
]

Cheers

Allen K

