Re: URL download and Cache problems

2004-12-16 Thread Dave Cragg
On 16 Dec 2004, at 08:56, Chipp Walters wrote:
Dave Cragg wrote:
Richard's thought may stem from a similar experience to mine. 
Previously, "load" was often preferred because it was the only way to 
show progress of the download, and not because there was a need to 
do other processing. In my own apps, I almost always need to pause 
other things until a download completes. (e.g. a learner chooses a 
lesson to open, and can't work on it until it has downloaded) "load" 
wasn't ideal for this. But "load" was often recommended over "get" 
because of this ability to show progress. With the 
libUrlStatusCallback option, I now rarely need to use load. It's much 
simpler to use get.
I love libUrlStatusCallback! It also works great with POST and ftp 
uploads/downloads.

Caveat:  when using "get", there's no obvious way to abort a download 
before it completes. This should probably go on the to-do list.
In my apps, I issue 'resetAll' which stops the download. Dave, you 
once mentioned a command something like libURLResetAll? I think it 
does the same thing.
Then it's detention for you, Chipp. :-)
All "resetAll" does is call libUrlResetAll. liburlResetAll is the 
"preferred and official" command. (Because there's a high chance 
someone has a library or stack script with a resetAll handler which 
does something completely unrelated, but more understandable such as 
resetting a set of preferences to their defaults.)

Then I need to reinitialize my libUrlStatusCallback. This can present 
problems when calling from the IDE as it kills all socket activity for 
the engine everywhere, so best be careful.
Be very careful! libUrlResetAll is a particularly brutal way to stop a 
download. I don't recommend it to anyone. For me, it's just a 
development tool, and I never include it in a distributed stack. But if 
anyone feels the need to include it in an app, be sure that 
libUrlResetAll is the last libUrl call in the handler where it appears.
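To illustrate the "last call" rule, a minimal sketch (handler and field 
names here are invented for illustration):

  on abortAllTransfers
    put "Transfers aborted" into field "status"
    -- do any other cleanup first; libUrlResetAll must come last,
    -- because it wipes out libUrl's state, including callbacks
    libUrlResetAll
  end abortAllTransfers

Re-register libUrlSetStatusCallback afterwards, from whichever handler 
next starts a download, not in the same handler.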

But until I add an official way to cancel a "get" download, here's a 
little tidbit. You can cancel a download by using this command:

  ulCancelRequest 
This is an internal handler used by libUrl during the "unload url" 
procedure. This handler alone should happily stop a "get" request too.

WARNING:  If you use this in a stack, be prepared to change it when a 
future libUrl release adds an official interface.

Cheers
Dave
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: URL download and Cache problems

2004-12-16 Thread Alex Tweedly
At 08:31 16/12/2004 +, Dave Cragg wrote:
On 15 Dec 2004, at 23:33, Alex Tweedly wrote:
3 if you count libURLDownloadToFile  (though it's for ftp only)
This works with http too. (Perhaps you were thinking of libUrlFtpUploadFile.)
No, I was thinking of libURLDownloadToFile, and believing what I read in 
the docs :-)

The first line of the libURLDownloadToFile entry says
Downloads a file from an Internet server asynchronously via FTP.
Funny thing is, looking back at a couple of my stacks, I've used it to 
download via HTTP - but forgot that while writing that email.

BZ entry coming soon.
-- Alex.


Re: URL download and Cache problems

2004-12-16 Thread Chipp Walters
Dave Cragg wrote:
Richard's thought may stem from a similar experience to mine. 
Previously, "load" was often preferred because it was the only way to 
show progress of the download, and not because there was a need to do 
other processing. In my own apps, I almost always need to pause other 
things until a download completes. (e.g. a learner chooses a lesson to 
open, and can't work on it until it has downloaded) "load" wasn't ideal 
for this. But "load" was often recommended over "get" because of this 
ability to show progress. With the libUrlStatusCallback option, I now 
rarely need to use load. It's much simpler to use get.

I love libUrlStatusCallback! It also works great with POST and ftp 
uploads/downloads.

Caveat:  when using "get", there's no obvious way to abort a download 
before it completes. This should probably go on the to-do list.
In my apps, I issue 'resetAll' which stops the download. Dave, you once 
mentioned a command something like libURLResetAll? I think it does the 
same thing.

Then I need to reinitialize my libUrlStatusCallback. This can present 
problems when calling from the IDE as it kills all socket activity for 
the engine everywhere, so best be careful.

-Chipp


Re: URL download and Cache problems

2004-12-16 Thread Dave Cragg
On 15 Dec 2004, at 23:33, Alex Tweedly wrote:
At 12:59 15/12/2004 -0800, Richard Gaskin wrote:
This conversation raises a question -
There are currently two ways to download files, the "load" command 
and "get URL".
3 if you count libURLDownloadToFile  (though it's for ftp only)
This works with http too. (Perhaps you were thinking of 
libUrlFtpUploadFile.)



It used to be the case that "load" was a better option for longer 
downloads and/or if you need to update a progress bar, since it was 
the only one of the two that was non-blocking and you could query 
the urlStatus for those downloads.

But now that we have the libUrlStatusCallback option, which provides 
periodic messages for "get URL", is there any benefit to using 
"load"?
Yes, lots of them (I think).
But they can all probably be summarized under one heading, i.e. "load" 
is good when you need to get on with something else while the stuff is 
downloading.

Richard's thought may stem from a similar experience to mine. 
Previously, "load" was often preferred because it was the only way to 
show progress of the download, and not because there was a need to do 
other processing. In my own apps, I almost always need to pause other 
things until a download completes. (e.g. a learner chooses a lesson to 
open, and can't work on it until it has downloaded) "load" wasn't ideal 
for this. But "load" was often recommended over "get" because of this 
ability to show progress. With the libUrlStatusCallback option, I now 
rarely need to use load. It's much simpler to use get.
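For illustration, the get-plus-callback approach might be sketched like 
this (handler and field names are invented):

  on downloadLesson pUrl
    libUrlSetStatusCallback "showStatus", the long id of me
    get URL pUrl  -- blocks here, but showStatus keeps firing
    if the result is empty then
      put it into field "lessonText"
    end if
  end downloadLesson

  on showStatus pUrl, pStatus
    -- during a download pStatus looks like "loading,bytesReceived,bytesTotal"
    put pStatus into field "progress"
  end showStatus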

Caveat:  when using "get", there's no obvious way to abort a download 
before it completes. This should probably go on the to-do list.

Cheers
Dave 



Re: URL download and Cache problems

2004-12-15 Thread Alex Tweedly
At 12:59 15/12/2004 -0800, Richard Gaskin wrote:
This conversation raises a question -
There are currently two ways to download files, the "load" command and 
"get URL".
3 if you count libURLDownloadToFile  (though it's for ftp only)
It used to be the case that "load" was a better option for longer 
downloads and/or if you need to update a progress bar, since it was the 
only one of the two that was non-blocking and you could query the 
urlStatus for those downloads.

But now that we have the libUrlStatusCallback option, which provides 
periodic messages for "get URL", is there any benefit to using "load"?
Yes, lots of them (I think).
libURLSetStatusCallback provides you with periodic callbacks during 
(amongst other things) a
   get URL 
command. However, it would be a bit convoluted to carry on and do other 
processing from within such a callback; updating a progress bar/field would 
be fine, but going on to do more than that feels a bit complicated.

Using load allows you to carry on with other general purpose processing of 
any kind, while still providing both periodic callbacks AND "when complete" 
callback.

Also (if I read the docs correctly - haven't tried it), you cannot initiate 
a second blocking operation while the first one is still in progress (docs 
say this results in an error "Error Previous request not completed").  If 
you want to retrieve a number of URLs in parallel, the load command is 
still the only way.
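For example, kicking off several downloads in parallel might look like 
this (the field name and callback message are illustrative):

  repeat for each line tUrl in field "urlList"
    load URL tUrl with message "downloadComplete"
  end repeat
  -- all the loads are now queued or in progress;
  -- the handler returns immediately without waiting for any of them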

-- Alex.


Re: URL download and Cache problems

2004-12-15 Thread Richard Gaskin
This conversation raises a question -
There are currently two ways to download files, the "load" command and 
"get URL".

It used to be the case that "load" was a better option for longer 
downloads and/or if you need to update a progress bar, since it was the 
only one of the two that was non-blocking and you could query the 
urlStatus for those downloads.

But now that we have the libUrlStatusCallback option, which provides 
periodic messages for "get URL", is there any benefit to using "load"?

--
 Richard Gaskin
 Fourth World Media Corporation
 __
 Rev tools and more: http://www.fourthworld.com/rev


Re: URL download and Cache problems

2004-12-15 Thread Rick Harrison
On Dec 15, 2004, at 2:03 PM, Alex Tweedly wrote:
  put URL "http://www..." into myVar
Alex,
That solved my problem.
Thanks!
Rick


Re: URL download and Cache problems

2004-12-15 Thread Alex Tweedly
At 13:41 15/12/2004 -0500, Rick Harrison wrote:
On Dec 15, 2004, at 1:18 PM, Alex Tweedly wrote:
Yes - simply use
  put URL "http://www..." into myVar
(or into a field, or wherever)
Alex,
Ok, but how does the program know when it is done downloading
if I use:

  put URL "http://www..." into myVar
This one does wait until it's done (or has failed).
The docs say:
All actions that refer to a URL container are blocking: that is, the 
handler pauses until Revolution is finished accessing the URL. Since 
fetching a web page may take some time due to network lag, accessing URLs 
may take long enough to be noticeable to the user. To avoid this delay, 
use the load command (which is non-blocking) to cache web pages before you 
need them.
-- Alex.


Re: URL download and Cache problems

2004-12-15 Thread Rick Harrison
On Dec 15, 2004, at 1:18 PM, Alex Tweedly wrote:
Yes - simply use
  put URL "http://www..." into myVar
(or into a field, or wherever)

Alex,
Ok, but how does the program know when it is done downloading
if I use:

  put URL "http://www..." into myVar
Rick


Re: URL download and Cache problems

2004-12-15 Thread Alex Tweedly
At 13:05 15/12/2004 -0500, Rick Harrison wrote:
Alex,
I have some code that changes the URL correctly every time, so that is not 
the issue.

I didn't want to list it because it takes up a lot of pages and is 
unnecessary for our
discussion purposes.
OK - understand.
So the URL load doesn't wait to finish downloading the first file before 
downloading the second.
Great..., that's just terrible for my purposes.

What I really want is for one download to complete before it starts doing 
another.
Is there anyway to do this?
Yes - simply use
  put URL "http://www..." into myVar
(or into a field, or wherever)
-- Alex.


Re: URL download and Cache problems

2004-12-15 Thread Rick Harrison
On Dec 15, 2004, at 12:58 PM, Alex Tweedly wrote:
At 12:28 15/12/2004 -0500, Rick Harrison wrote:
Hi there,
I'm getting some Revolution weirdness when trying to download
files from the internet.

Alex,
I have some code that changes the URL correctly every time, so that is 
not the issue.

I didn't want to list it because it takes up a lot of pages and is 
unnecessary for our
discussion purposes.

So the URL load doesn't wait to finish downloading the first file 
before downloading the second.
Great..., that's just terrible for my purposes.

What I really want is for one download to complete before it starts 
doing another.

Is there anyway to do this?
I haven't had time to digest your code yet.
I'll get back to you a little later.
Thanks for the quick response!
Rick Harrison



Re: URL download and Cache problems

2004-12-15 Thread Alex Tweedly
At 12:28 15/12/2004 -0500, Rick Harrison wrote:
Hi there,
I'm getting some Revolution weirdness when trying to download
files from the internet.
I don't think there's a real problem there, but I'm not sure, because it's 
hard to follow the description / script to be sure I'm correctly 
interpreting what you want.

The basic issue is that the "load" command is non-blocking. So when you do 
the first "load URL" it does not (and should not) wait. It will accept any 
number of load commands, and queues them up (it probably fetches multiple 
of them in parallel, with some limit on how many it has under way at a time).

I'm using:

  put "" into field "CachedURLSListField1" of card 1
  put "" into field "ClearedCacheList1" of card 1
  put 1 into N2
  repeat while N2 < 4
    load URL field "JPEGURL" of card 1 with message "downloadComplete"
    -- (which sends this message to the stack, which I handle by
    -- updating a field with "Status - Download completed")
    export image "JpegImage" of card 2 to file field "ImageFileName1" of card 1 as JPEG -- gets entire image
    put field "CachedURLSListField1" of card 1 & the cachedURLs into field "CachedURLSListField1" of card 1
    -- to list what files are in my cache
    -- then I supposedly delete the cache with
    unload URL field "JPEGURL" of card 1
    -- and to check if the file was deleted out of the cache
    put field "ClearedCacheList1" of card 1 & the cachedURLs into field "ClearedCacheList1" of card 1
    add 1 to N2
  end repeat

The problems I'm running into are the following:
1.  The program doesn't wait until the first download is complete before 
moving on to the second download.
That's good :-)
2. The cache isn't getting cleared
Perhaps because you're clearing it (with unload) before that URL has 
finished being loaded; perhaps because there are multiple in-progress 
fetches, and you can never clear one without the next one appearing.

3. It's like Revolution is executing just too fast.
The purpose of the load command is to act fast, and to allow your stack to 
continue with other work while the URLs are being pre-fetched. If you go 
ahead and use the URL before it has completely loaded, you'll still get the 
benefit.

If I simply put

  answer "N2 = " & N2

before the end repeat, everything downloads - but unfortunately I have 
to be around to press the stupid button every time to let the program 
move on, and I don't want to have to do that.

Any ideas as to what is going on, and as to what I need to do to fix things?
It's not clear just what you want to do, overall. It looks to me as though 
the loop is loading the same URL each time round - or am I missing something?

You might want to do something like

  local lNumberDone

  on getTheURLs
    put 0 into lNumberDone
    repeat with i = 1 to 3
      load ...
    end repeat
    send "checkIfDone" to me in 50 milliseconds
  end getTheURLs

  on checkIfDone
    if lNumberDone = 3 then
      -- finished ...
    else
      send "checkIfDone" to me in 50 milliseconds
    end if
  end checkIfDone

  on downloadComplete pURL, pURLStatus
    -- the urlStatus of a finished load is "cached"
    if pURLStatus = "cached" then
      put URL pURL into ... -- wherever you want it
    else
      put "problem with" && pURL && ":" && pURLStatus & cr after field "myStatusField"
    end if
    add 1 to lNumberDone
  end downloadComplete

In summary - load is non-blocking.
If this hasn't helped - send more of the script, or a fuller description of 
what you want.

-- Alex.


URL download and Cache problems

2004-12-15 Thread Rick Harrison
Hi there,
I'm getting some Revolution weirdness when trying to download
files from the internet.
I'm using:

  put "" into field "CachedURLSListField1" of card 1
  put "" into field "ClearedCacheList1" of card 1
  put 1 into N2
  repeat while N2 < 4
    load URL field "JPEGURL" of card 1 with message "downloadComplete"
    -- (which sends this message to the stack, which I handle by
    -- updating a field with "Status - Download completed")
    export image "JpegImage" of card 2 to file field "ImageFileName1" of card 1 as JPEG -- gets entire image
    put field "CachedURLSListField1" of card 1 & the cachedURLs into field "CachedURLSListField1" of card 1
    -- to list what files are in my cache
    -- then I supposedly delete the cache with
    unload URL field "JPEGURL" of card 1
    -- and to check if the file was deleted out of the cache
    put field "ClearedCacheList1" of card 1 & the cachedURLs into field "ClearedCacheList1" of card 1
    add 1 to N2
  end repeat

The problems I'm running into are the following:
1.  The program doesn't wait until the first download is complete 
before moving on to the second download.
2. The cache isn't getting cleared
3. It's like Revolution is executing just too fast.

If I simply put

  answer "N2 = " & N2

before the end repeat, everything downloads - but unfortunately I have 
to be around to press the stupid button every time to let the program 
move on, and I don't want to have to do that.

Any ideas as to what is going on, and as to what I need to do to fix 
things?

Thanks in advance.
Rick Harrison