On Wed, Jul 1, 2009 at 7:42 PM, Mark Smith <li...@futilism.com> wrote:
> Sarah, to get the size of what will be returned by a "get url", you need to
> issue an HTTP HEAD request, which will return the http headers that would be
> returned from a GET request, but without the actual content. Something like
> this in a button script:
> snip
>
> Sometimes this seems to take quite a few seconds, and I don't know why (I
> think libUrl doesn't like non-GET/POST requests), but if you have curl
> available, you can do this:
>
>   get shell("curl -s -I " && quote & tUrl & quote) -- that '-I' is an
>   uppercase 'I' for India
>
> which will give you the same thing, without any delay.
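For reference, the curl tip quoted above can be narrowed to just the size by picking the Content-Length header out of the HEAD response. A minimal sketch, assuming curl is on the path; the `get_content_length` helper and the sample headers are illustrative, not from the original post:

```shell
# get_content_length: parse the Content-Length value out of HTTP
# response headers, as produced by "curl -s -I <url>".
get_content_length() {
  awk 'tolower($1) == "content-length:" { gsub(/\r/, ""); print $2 }'
}

# With curl available, usage would be:
#   curl -s -I "$tUrl" | get_content_length
# Demo on canned headers so it runs without touching the network:
sample_headers='HTTP/1.1 200 OK
Content-Type: image/png
Content-Length: 48212'
printf '%s\n' "$sample_headers" | get_content_length   # prints 48212
```

Note that some servers omit Content-Length (e.g. with chunked transfer encoding), in which case the helper prints nothing.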
Thanks very much for this Mark. I tried the UrlHead method first, but got
only "Bad request". It worked fine for your html page, but trying it on a
php page or on an image file gave the bad request, although I can't tell
whether this is server-dependent or related to the file types. I have
tested Dave Cragg's suggested modifications and they made no difference.

The curl method works beautifully on my Mac, but is curl available on
Windows computers?

However, I have now realised that Wikipedia includes the image file size in
its web page, so I can search for it there before downloading - it just
isn't as slick, as it relies on their formatting not changing.

If anyone has any further suggestions, I would love to try them.

Cheers,
Sarah

_______________________________________________
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your
subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution