I used wget on http://www.usatoday.com/wireless/palm_os/, which saved the
WAP file as index.html, and then plucked that file from my hard drive.  The
file plucked OK at the first level, but none of the links would open to the
second level.  How do I convert the links to HTML?
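
Is a one-liner like this the right idea for making the relative links
absolute before plucking?  Untested, and I'm only guessing that the story
links are relative to the palm_os directory:

  perl -pi -e \
    's{href="(?!https?://)([^"]+)"}{href="http://www.usatoday.com/wireless/palm_os/$1"}g' \
    index.html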

Marty

On Fri, 17 Feb 2006 07:59:44 -0500 (EST)
"David A. Desrosiers" <[EMAIL PROTECTED]> wrote:

> 
> > Also, if you rename the downloaded XML or WML file to an HTML file,
> > then Firefox will open it as a readable web page.  The links to the
> > news stories won't work, though (even if you put the usatoday.com web
> > address before the link).
> 
>       You'll need to convert it to something Plucker can handle. In
> my case, I use some Perl XML modules to do the magic. I've never done
> WML, but it's just XML, so that should be easy. I have Plucker doing
> RSS, RDF, OPML, and XML feeds now, through a web-based application,
> and I don't see WML being any more difficult.
> 
>       As for the page changing names, you have to follow the 301
> redirect to the new page. In this case, it points to:
> 
>       http://www.usatoday.com/wireless/palm_os/
> 
>       You can find the details of the status code here:
>       http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.2
> 
>       Good luck!
> 
> 
> 
> David A. Desrosiers
> [EMAIL PROTECTED]
> http://gnu-designs.com
> 
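
If I understand David's approach right, the conversion might look
something like this rough, untested sketch.  LWP::UserAgent and XML::Twig
are just my guesses at the modules, and the WML-to-HTML tag mapping is
made up, so treat it as an illustration rather than his actual code:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use LWP::UserAgent;
  use XML::Twig;

  my $base = 'http://www.usatoday.com/wireless/palm_os/';

  # LWP::UserAgent follows the 301 redirect to the final URL for us.
  my $ua  = LWP::UserAgent->new;
  my $res = $ua->get($base);
  die $res->status_line unless $res->is_success;

  my $twig = XML::Twig->new(
      twig_handlers => {
          # Retag the WML container elements as plain HTML divs.
          wml  => sub { $_->set_tag('div') },
          card => sub { $_->set_tag('div') },
          # Make relative hrefs absolute so they resolve off-site too.
          a    => sub {
              my $href = $_->att('href');
              $_->set_att(href => $base . $href)
                  if defined $href && $href !~ m{^https?://};
          },
      },
  );
  $twig->parse($res->decoded_content);

  # Wrap the converted markup in a minimal HTML page for Plucker.
  print "<html><body>\n", $twig->root->sprint, "\n</body></html>\n";

Save the output as an HTML file and point the Plucker distiller at it;
the links should then at least resolve back to the live site.
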
_______________________________________________
plucker-list mailing list
plucker-list@rubberchicken.org
http://lists.rubberchicken.org/mailman/listinfo/plucker-list
