On Sun, 23 Apr 2000, Vidiot wrote regarding Re: downloading web sites:
> >On the contrary, at this time most sites are HTML based and not XML. The ones I
> >am interested in are not likely to be the kind of dynamically driven sites you
> >are referring to, and anyway if that is the case I have another method of
> >extracting that info (which is unfortunately not available on my Linux box, so
> >I am using a friend who uses Windows) -- I just want this because it is
> >preferable to that other method.
> >Alan
> 
> I wasn't talking about XML.  I'm talking about CGI programs that build the HTML
> on the fly.
> 
> I don't know what site you are after, so I do not know how complicated the
> HTML code is for the site.
> 
> If you were to try traversing my site and downloading it, it would take
> approximately 5 GB of space.  I wonder what other sites are like that are
> corporate sites.
> 
> Good luck.  You are going to need it.
> 
 Umm, thanks. Well, yes, I admit these are an issue. But I am not going to do
all my browsing this way -- it is just to gather available material that can be
read off-line. Other material can be printed as PDFs using Acrobat, but I think
it is better to have the original files.
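For the static-HTML sites described above, one piece of the job -- pulling the
links out of pages you have already saved, so a downloader can follow them --
can be done with Python's standard library alone. This is only a rough sketch
under that assumption (the `extract_links` name and the sample markup are mine,
not from the thread), not a full site mirrorer:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href from the <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_text):
    # Feed saved page source through the parser and return the hrefs found.
    parser = LinkCollector()
    parser.feed(html_text)
    return parser.links

page = '<html><body><a href="index.html">Home</a>' \
       '<a href="docs/manual.html">Manual</a></body></html>'
print(extract_links(page))
```

A loop over the returned links, fetching each one and repeating, is the core of
any off-line mirroring script; tools like GNU wget do essentially this (plus
rewriting the links so they work locally).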

Alan

 -- 
AlphaByte: PO Box 1941, Auckland, New Zealand
Specialising in: Graphic Design, Education and Training,
Technical Documentation, Consulting.
http://www.alphabyte.co.nz


-- 
To unsubscribe: mail [EMAIL PROTECTED] with "unsubscribe"
as the Subject.
