Re: [teaser] for third party developers

2009-08-05 Thread Bernard Devlin
Sean, I think this is a great idea.  Not only would it centralize
documentation for different libraries, but it could also serve as a
kind of advert/reminder of those libraries.

Many users of this list have written a wide variety of helpful
libraries.  If these libraries could have their documentation listed
with Rev in this 'Third Party' category, then once someone had Rev
installed, merely looking through this category would show them how
many great solutions have been written using Rev.

I'm thinking that this would need some approval from Runrev, e.g. a
format for the docs (you wouldn't want this broken if the docs
changed).  And it would require the people who write the libraries to
make their documentation conform to that format, and either submit
their documentation to Runrev for inclusion or make it available at a
network-reachable URL so that the Dictionary could go out to the
internet and harvest the documentation.

Just imagine how many features this would add to a Rev user's toolkit:
date manipulation libraries, S3, mail, growl, curl, enhanced
quicktime, error reporting, JSON, ip address resolution, kiosk mode,
etc.  I'm sure there are many more that I don't even remember.

Obviously the new data grid documentation could also be included
(although I imagine that's already on the cards for inclusion in the
main dictionary).

Alternatively, maybe we can set up a single website where developers
could submit their formatted documentation, and then there would only
be one place for the Dictionary to check to see if there is new
documentation to add.

Maybe some developers would prefer to just keep their stuff
exclusively on their own site, as a way of drawing attention to the
other things they do.

I can imagine that if Runrev were to make any official way to add to
the docs, they would want to distance themselves from the third party
libraries.

Anyway, I think it would be a way of making the whole Rev ecosphere
stronger.  I've seen it mentioned many times that only a small
fraction of users will ever post a question to a mailing list or
forum.  They might just download Rev, think Rev can't do what they
want, and forget about Rev.

Bernard

On Wed, Aug 5, 2009 at 5:46 AM, Shao Sean <shaos...@wehostmacs.com> wrote:
 I have been working on this for quite some time now and have the majority of
 the basics working so am going to publicly mention it (a few knew about it
 privately)..

 As the subject states, this is for the group of Rev developers who develop
 third party add-ons but it is also for anyone who uses any of those
 add-ons..

 What is it? It is a small hack to the IDE (no changes are made to any of
 their files) that allows third party developers to place their documentation
 into the Revolution dictionary into a new category called Third Party..

 Check out the latest screen capture  http://shaosean.tk/images/teaser2.png
 and anyone interested in testing it out, please feel free to email me 
 info AT shaosean DOT tk 

 -Sean
 ___
 use-revolution mailing list
 use-revolution@lists.runrev.com
 Please visit this url to subscribe, unsubscribe and manage your subscription
 preferences:
 http://lists.runrev.com/mailman/listinfo/use-revolution



[script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Malte Brill

Hey all,

I'm trying to implement a live search on a data grid. I was doing this
with the dgText property, but that turns out to be too slow on older
machines when there are many records (30k+). So now, instead of setting
the dgText, I'm trying to work with the dgData. This could speed the
whole process up quite a lot, as the data would not need to be turned
back into an array by the data grid. The problem: arrays cannot be
filtered. So I would like to find the quickest script that simulates
array filtering in n dimensions, if at all possible. My clumsy first
try looks like this. It only filters the second level so far, so
turning it into a function that works n levels deep would be ideal. :)


on mouseUp
   local testarray,tprocess,test
   repeat with i=1 to 3
      put any item of "meier,müller,john,doe" into testarray[i]["name"]
   end repeat
   answer the number of lines of the keys of testarray
   put the millisecs into test
   put the keys of testarray into tprocess
   repeat for each line theLine in tprocess
      if testarray[theLine]["name"] = "john" then
         delete variable testarray[theLine]
      end if
   end repeat
   answer the number of lines of the keys of testarray & cr & the millisecs - test
end mouseUp

This runs in 31 ms on my machine (first-generation Intel MacBook, 2.16 GHz).
I would like to have this quicker if possible. I'd also like to see the
runtime on your machines, especially pre-Intel Macs.
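For readers following along outside Rev, the n-level generalisation being asked for can be roughly sketched in Python, treating Rev's multi-dimensional arrays as nested dicts. This is an illustrative sketch, not Rev code; the helper name and structure are assumptions:

```python
def filter_nested(data, key, value, level=2):
    """Remove entries whose field `key` at the deepest `level` equals `value`.

    `data` is a dict of dicts (loosely, a Rev multi-dimensional array);
    level=2 matches the two-level case from the handler above.
    """
    if level == 2:
        # Base case: drop top-level entries whose sub-field matches.
        return {k: v for k, v in data.items() if v.get(key) != value}
    # Recurse one level down, keeping parents whose subtree is non-empty.
    out = {}
    for k, v in data.items():
        kept = filter_nested(v, key, value, level - 1)
        if kept:
            out[k] = kept
    return out

records = {1: {"name": "john"}, 2: {"name": "meier"}, 3: {"name": "doe"}}
print(filter_nested(records, "name", "john"))
```

The recursion keeps a parent key only while its subtree still has survivors, which is the pruning behaviour you would want at deeper levels.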


Any thoughts highly appreciated.

All the best,

Malte


Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Jim Sims


On Aug 5, 2009, at 3:16 PM, Malte Brill wrote:

 Also I'd like the runtime on your machines, especially pre-Intel Macs.


17" PowerBook, PPC G4, 2 GB RAM, Mac OS X 10.5.7

84 ms

sims



Re: [teaser] for third party developers

2009-08-05 Thread dam-pro.gir...@laposte.net
Sean, this is a really great thing!

I'll contact you off-list about how to integrate your extension into NativeDoc.

Damien Girard
Dam-pro, France.
http://www.dam-pro.com - Check out the new website!



Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Andre Garzia
Malte,
I don't know if there would be any improvement in what I am telling you but
what if you combined the array, filtered the lines and split up again?

Andre
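In language-neutral terms, Andre's suggestion amounts to serialising the array to delimited text, filtering the lines with the fast native line filter, and parsing the survivors back. A minimal Python sketch of the round trip, under the assumption that the tab and newline delimiters never occur inside field values (the function name and single "name" field are illustrative):

```python
def filter_by_combining(records, pattern):
    """Serialise a dict-of-dicts to lines, filter the lines, split back.

    Mirrors a Rev combine / filter / split round trip; assumes the
    delimiters never appear inside the data itself.
    """
    lines = ["\t".join([str(k), v["name"]]) for k, v in records.items()]
    kept = [ln for ln in lines if pattern not in ln]   # the "filter" step
    return {int(k): {"name": name}
            for k, name in (ln.split("\t") for ln in kept)}

records = {1: {"name": "john"}, 2: {"name": "meier"}}
print(filter_by_combining(records, "john"))  # {2: {'name': 'meier'}}
```

The win, where there is one, comes from doing the matching in one native pass over flat text instead of one scripted comparison per key.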





-- 
http://www.andregarzia.com All We Do Is Code.


Installer Standalone Settings

2009-08-05 Thread Marty Knapp
I'm getting ready to distribute a Mac only standalone that has some 
stacks that I'll need to put into a folder within the Application 
support folder. Would it be a good choice to make my own installer 
program with Rev and suck up the App and all needed stacks into custom 
properties, then upon installation place things where I need?


I'll be using the splash screen style standalone along with these 
various stacks, and there will be stacks that the user will create with 
the program - do I keep the .rev extension on these stacks or use 
something else? Are there rules or suggestions related to this? I assume 
that the "Document Type", "Document Extension" and "Signature" fields in 
the Standalone Settings (OS X pane) are for this purpose, though I don't 
see anything about these in the docs, nor could I find anything on this 
elsewhere.


My current testing shows that stacks saved from my Rev-built standalone 
and using the .rev extension show as "Metacard Stacks" when I get info 
on them, and the "Open with" menu shows a program that I downloaded some 
time back that was apparently created as a Metacard standalone . . . I 
do have other Rev-created standalones on my drive but the Finder doesn't 
think they're the owner of my test stacks. Kinda strange . . .


At any rate, thanks for any input and suggestions.

Marty Knapp


Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Andre Garzia
Malte,
On my machine, your original code took between 28 and 31 milliseconds;
this code here:

on mouseUp
   local testarray,tprocess,test
   repeat with i=1 to 3
      put any item of "meier,müller,john,doe" into testarray[i]["name"]
   end repeat
   answer the number of lines of the keys of testarray
   put the millisecs into test
   repeat for each key x in testarray
      if testarray[x]["name"] = "john" then
         delete variable testarray[x]
      end if
   end repeat
   answer the number of lines of the keys of testarray & cr & the millisecs - test
end mouseUp

takes about 19 milliseconds, so it's a good improvement.


--
http://www.andregarzia.com All We Do Is Code.


Re: Digging Huge Files

2009-08-05 Thread Mark Wieder
Sivakatirswami-

Well, your tCompleted also remains constant - it's just the ratio of
download locations that changes. Which makes sense, since what you're
displaying is

line 1: tRevHits
line 2: tCompleted - tRevHits

Also, are you really saying put ("1" & quote & "200 ") into tCompleteCode?
Shouldn't that be put ("1" & quote & " 200") into tCompleteCode?
From your output it looks like you're trying to trap the (1.1 200)
string, which has a space *before* the 200. Same thing for the 206.

You don't say where the mouseUp handler is located, but since you're
not initializing your variables to zero before running, could there be
some leftover garbage in there from the previous run?

-- 
-Mark Wieder
 mwie...@ahsoftware.net



Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Bernard Devlin
Malte, using your original handler I was getting an approx time of
22ms.  Creating a new array instead of deleting the variable (see
below) gives me an approx time of 7ms.

Will that work for you?  It's very significantly faster.

PowerBook G4 PPC, 1.67 GHz

Bernard


on mouseUp
   local testarray,tprocess,test
   repeat with i=1 to 3
      put any item of "meier,müller,john,doe" into testarray[i]["name"]
   end repeat
   answer the number of lines of the keys of testarray
   put the millisecs into test
   put the keys of testarray into tprocess
   repeat for each line theLine in tprocess
      if testarray[theLine]["name"] = "john" then
         put "john" into tNewArray[theLine]
      end if
   end repeat
   answer the number of lines of the keys of tNewArray & cr & the millisecs - test
end mouseUp
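Bernard's observation generalises well beyond Rev: building a fresh container from the survivors is often cheaper than deleting keys out of the one you are scanning. A small Python sketch of the two approaches with synthetic data (in Python you must also snapshot the keys before deleting, since mutating a dict while iterating it directly raises an error):

```python
import timeit

# 30,000 synthetic records; every fourth one matches the filter.
records = {i: {"name": "john" if i % 4 == 0 else "meier"} for i in range(30000)}

def delete_in_place():
    data = dict(records)               # work on a copy
    for k in list(data):               # snapshot keys before deleting
        if data[k]["name"] == "john":
            del data[k]
    return data

def build_new():
    # Keep the survivors instead of deleting the matches.
    return {k: v for k, v in records.items() if v["name"] != "john"}

assert delete_in_place() == build_new()
print(timeit.timeit(delete_in_place, number=10))
print(timeit.timeit(build_new, number=10))
```

Which variant wins by how much depends on the hit ratio and the interpreter, so the timings are worth measuring rather than assuming.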



cookbook for putting Rev app on the web?

2009-08-05 Thread Sadhu Nadesan

Greetings,

Is there a simple cookbook list somewhere of how to convert a
standalone application to a web application?  Something like


1) recompile using these settings
2) put this code on your web page
3) put your executable here on the server
4) you are done

or similar?

Mahalo
Sadhu



Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Mark Wieder
Andre-

Wednesday, August 5, 2009, 8:34:36 AM, you wrote:

 I don't know if there would be any improvement in what I am telling you but
 what if you combined the array, filtered the lines and split up again?

I thought so too, but the combine operator seems to be pretty slow. In
addition, I couldn't figure out how to get combine to work with a
multidimensional array...

-- 
-Mark Wieder
 mwie...@ahsoftware.net



Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Mark Wieder
Malte-

Here's my attempt. Putting the results into a variable instead of an
array element seems to shave about 25% off the total run time. I'm
assuming you want the results here rather than just a count - doing a
total count is quite a bit faster than accumulating the actual hits.

on mouseUp
   local testarray,tprocess,test
   local newVar

   put empty into newVar
   repeat with i=1 to 3
      put any item of "meier,müller,john,doe" into testarray[i]["name"]
   end repeat
   put the millisecs into test
   put the keys of testarray into tprocess
   repeat for each line x in tprocess
      if "john" is in testarray[x]["name"] then
         put x & cr after newVar
      end if
   end repeat
   answer the number of lines of newVar & cr & the millisecs - test
end mouseUp

-- 
-Mark Wieder
 mwie...@ahsoftware.net



Re: QTVR no longer works on new Windows computers

2009-08-05 Thread Trevor DeVore

On Aug 5, 2009, at 1:01 PM, stgoldb...@aol.com wrote:

Does anyone know of a workaround for this problem, or whether Apple
plans to update its QuickTime Player to support QTVR on new Windows
computers?


I filed a bug report on this and have been exchanging emails with  
someone at Apple over the last few days regarding this issue. I don't  
know when/if Apple will fix it but they are being proactive and have  
been collecting data on the make/model/video card driver of computers  
that fail to display QTVR.


Regards,

--
Trevor DeVore
Blue Mango Learning Systems
www.bluemangolearning.com - www.screensteps.com


Re: [script optimization challenge] Multi dimensional array filtering

2009-08-05 Thread Brian Yennie

Malte,

Beyond the ideas already presented, the only thing I can think of -  
and this would be a bit of work - is that if there are particular  
fields you know you will want to filter on, you could maintain a  
*sorted* copy of dgdata. For example, if you had a copy of dgdata  
sorted by name, you could filter on name very quickly using a binary  
search.
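Brian's sorted-copy idea can be sketched with Python's stdlib `bisect` module (the data and field names are illustrative): keep a copy of the records sorted by the field you filter on, plus a parallel list of that field's values, and locate the matching span in O(log n) instead of scanning every record:

```python
import bisect

# A copy of the data kept sorted by the "name" field.
records = sorted(
    [{"name": n} for n in ["doe", "john", "john", "meier", "müller"]],
    key=lambda r: r["name"],
)
names = [r["name"] for r in records]   # parallel sorted key list

def rows_named(value):
    """Return all records whose name equals `value`, via binary search."""
    lo = bisect.bisect_left(names, value)
    hi = bisect.bisect_right(names, value)
    return records[lo:hi]

print(rows_named("john"))  # [{'name': 'john'}, {'name': 'john'}]
```

The cost is keeping the sorted copy in sync with the main data whenever records change, which is the "bit of work" the suggestion acknowledges.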





revlet and path

2009-08-05 Thread Yves COPPE

Hi list

Here is a very simple script :

on mouseUp
   put empty into fld "resultat"
   set the itemDel to "/"
   put item 1 to -2 of the effective fileName of this stack into tPath
   set the itemDel to comma
   put tPath into fld "resultat"
end mouseUp

In the IDE, it gives the right path.
When I build a revlet, put it on my server, and open it with Safari, it
gives... nothing! The field "resultat" stays empty.


Any idea ?

Thanks.

Greetings.

Yves COPPE
yvesco...@skynet.be



Chunks vs Arrays - surprising benchmarking results

2009-08-05 Thread Richard Gaskin
The Multi dimensional array filtering thread reminded me of a 
benchmarking test I've been wanting to do for some time, and since I 
have some tasks coming up on a client project which needs this sort of 
stuff it was a good time to dive in.


The goal is simple enough:  one of the most common tasks I need to 
perform with my data is querying specific fields for criteria, and if 
there's a match then assembling the data from a given set of fields for 
display in a list to the user.


I've been using simple tab-delimited lists for this data because it was 
about as compact as it could be and performs reasonably well.  But with 
multi-dimensional arrays, the question is whether Rev's fast hash index 
into array data might help me gather the data from specific fields in 
each record faster than using chunk expressions.


So I took a minute to put together a simple test stack:
http://fourthworldlabs.com/rev/speed%20test%20chunks%20vs%20arrays.rev.zip

It has a field containing a list of contact info, another field for 
displaying the test results, and a third for displaying the gathered 
data from the query so I can verify that it's doing what I want it to.


If you download the stack you'll see that in addition to the Test 
button there's another one there which converts the list data into an 
array and stores that in a custom property of the field, needed for 
testing the array method.


The code for the Test button is below, and I would appreciate anyone 
here who has the time to look it over and see what I may have missed, 
because the results I'm getting are not what I expected.


The test script is typical of many of the tasks I need to perform on 
this data:  it checks one field to see if it contains a value, checks 
another to see if it contains a different value, and if both are true it 
collects data from three fields into a tab- and return-delimited list so 
I can drop it into a list field to display the output.


I had assumed that using chunk expressions to access items in each line 
would be slower than using array notation to get them through the hash 
in the array.  But instead here's the result I'm getting (times are in 
milliseconds for 100 iterations):


GetFromList: 72
GetFromSubArray: 752
GetFromMainArray: 407
All results the same?: true

As noted in the code below, the GetFromList handler uses simple chunk 
expressions to parse the data; GetFromSubArray uses repeat for each 
element to parse out the second-tier array within each record; 
GetFromMainArray walks through the keys to get the data from the main 
array by addressing both dimensions; the last line simply lets me know 
that all three are returning the same result.


I can understand why GetFromSubArray is the slowest, since it has to 
instantiate an array for the second-tier array each time through the 
loop (using repeat for each element...).


But I had hoped that accessing the main array by specifying the elements 
in both dimensions would get to the data more quickly than would be 
needed when asking the engine to count items, but apparently not.


Of course there is a scaling issue with chunk expressions.  In my sample 
data there are only eight items in each record, but if there were 
several hundred I would imagine it wouldn't perform as well as the array 
methods.  But in my case most of the data I work with has fewer than 30 
fields, and since chunk expressions are measuring about five times faster, 
I would expect I'd need many more than that before chunk expressions 
drop below arrays in relative performance.


The same could be said of the size of the data within each item, since 
that will adversely affect the time the engine needs to walk through it 
looking for item delimiters.  But again, it's not often that I have 
field data that's very long (the contact list is a good example, in 
which the longest field data is under 200 chars), and the engine's 
seeking of delimiters seems reasonably efficient.


Another minor drawback to arrays for this sort of thing is what it does 
to the size of the data, esp. if you use meaningful names for your 
fields rather than just numbers:  while simple tab-delimited data needs 
only one set of field names and a function to translate those into item 
numbers before dropping into any loop that uses them, arrays replicate 
the field names for each record.  The longer the field names, the more 
impact this will have on data size.


I'd be happy to accept this as a trade-off if the speed justified it, 
but in my test the speed benefit just isn't there for this type of 
querying task.


Any thoughts on how I might optimize the array functions?  Did I just 
overlook something obvious which would make them run faster?


--
 Richard Gaskin
 Fourth World
 Revolution training and consulting: http://www.fourthworld.com
 Webzine for Rev developers: http://www.revjournal.com


-- code for Test button -

on mouseUp
  put empty into fld Results
  put empty into fld Output
  wait 

Re: revlet and path

2009-08-05 Thread Richard Gaskin

Yves COPPE wrote:

Here is a very simple script :

on mouseUp
  put empty into fld "resultat"
  set itemDel to "/"
  put item 1 to -2 of the effective fileName of this stack into tPath
  set itemDel to comma
  put tPath into fld "resultat"
end mouseUp

In the IDE, it gives the right path.
When I build a revlet, put it on my server, and open it with Safari, it 
gives... nothing! fld "resultat" stays empty.


Any idea ?


The stack file on your server isn't the one that's running.  Like web 
pages, what the browser gets is a downloaded copy of the stack file. 
This copy lives in RAM, so its fileName will be empty.
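One defensive pattern (a sketch, untested in the revlet plugin itself) 
is to check for an empty fileName before parsing it:

   on mouseUp
      set the itemDelimiter to "/"
      put the effective fileName of this stack into tFile
      if tFile is empty then
         -- a downloaded copy running from RAM, e.g. a revlet in a browser
         put "no local path available" into fld "resultat"
      else
         put item 1 to -2 of tFile into fld "resultat"
      end if
   end mouseUp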


--
 Richard Gaskin
 Fourth World
 Revolution training and consulting: http://www.fourthworld.com
 Webzine for Rev developers: http://www.revjournal.com
___
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Digging Huge Files

2009-08-05 Thread Sivakatirswami

Mark Wieder wrote:

Sivakatirswami-

Well, your tCompleted also remains constant - it's just the ratio of
download locations that changes. Which makes sense, since what you're
displaying is

line 1: tRevHits
line 2: tCompleted - tRevHits

Also, are you really saying put ("1.1" & quote & "200 ") into tCompleteCode?
Shouldn't that be put ("1.1" & quote & " 200 ") into tCompleteCode?
From your output it looks like you're trying to trap the ("1.1" 200)
string, which has a space *before* the 200. Same thing for the 206.

You don't say where the mouseUp handler is located, but since you're
not initializing your variables to zero before running, could there be
some leftover garbage in there from the previous run?

  


Mark, thanks for the pointers. It works now, but I have to wonder, as 
there is some serious gremlin hiding here:


1) I thought local variables are always initialized to empty after each 
run of a script.

Is there something I'm missing there?

global tStart
local tPartials,tRevHits,tCompleted # these were acting as globals?

2) OK we know that

if x contains "Hinduism-Today_Jul-Aug-Sep_2009.pdf" then
   put x & cr after tFoundLines
end if

works consistently through each run, because it returns 4405 GET requests for the 
PDF on each run of the script.

*BUT*

if the string

"1.1" & quote & "200 "

does not even exist in those 4405 found lines, how is any number returned at all?

  
if x contains "Hinduism-Today_Jul-Aug-Sep_2009.pdf" then
   put x into z
   if z contains "Revolution" then add 1 to tRevHits
   put ("1.1" & quote & "200 ") into tCompleteCode
   put ("1.1" & quote & "206 ") into tPartialCode
   if z contains tCompleteCode then add 1 to tCompleted # should not increment
   if z contains tPartialCode then add 1 to tPartials # should not increment
   put empty into z
   put x & cr after tFoundLines
end if

but we get:

Summary: 
Downloaded with Revolution HT Navigator: 834

Complete Downloads via HT site: 1488 # should be 0
Partial Downloads 206's (mostly failures, some successes): 6801 # should be 0

3) I took your suggestions, initialized the vars, and fixed the string.



on mouseUp
  put 0 into tPartials
  put 0 into tRevHits
  put 0 into tCompleted

  if x contains "Hinduism-Today_Jul-Aug-Sep_2009.pdf" then
     put x into z
     if z contains "Revolution" then add 1 to tRevHits
     put ("1.1" & quote & " 200 ") into tCompleteCode
     put ("1.1" & quote & " 206 ") into tPartialCode
     if z contains tCompleteCode then add 1 to tCompleted
     if z contains tPartialCode then add 1 to tPartials
     put empty into z
     put x & cr after tFoundLines
  end if

and now I get consistent, on each run:

Summary: 
Downloaded with Revo  HT Navigator: 205

Complete Downloads via HT site: 840
Partial Downloads 206's (mostly failures, some successes): 2877

numbers verified in BBEdit with Search and Replace on those strings... 


But that still leaves Questions 1 and 2 unanswered: why are local vars carrying 
values across runs, and how is a string that doesn't exist in the text being 
found at all?

I'm not sure I'll be able to answer either of those questions, but it's a 
lesson in good practice on initializing variables, even locals.

Thanks for the help... Now we can dig the  logs.

Sivakatirswami







Re: Chunks vs Arrays - surprising benchmarking results

2009-08-05 Thread Trevor DeVore

On Aug 5, 2009, at 3:05 PM, Richard Gaskin wrote:

Any thoughts on how I might optimize the array functions?  Did I  
just overlook something obvious which would make them run faster?


Richard,

The main slowdown in your test for GetFromMainArray seems to be in  
transferring the data from the custom property to tDataA. This is  
consistent with my findings when implementing persistent data storage  
in the data grid.


I changed the code a bit so that the field contents and uData custom  
prop array were put into script locals outside of the timers. Here is  
what I got:


GetFromList: 30
GetFromSubArray: 244
GetFromMainArray: 27
All results the same?: true

So if your data is already in a script local then arrays seem to be  
faster.
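One way to apply this (a sketch; the names sDataA, "Contacts", and uData 
here are placeholders for your own) is to load the custom property array 
into a script local once, then query only the local inside the timed loop:

   local sDataA  -- script-local cache

   command LoadData
      -- this copy is the expensive step, so do it once up front
      put the uData of field "Contacts" into sDataA
   end LoadData

   function GetField pRecord, pField
      -- later lookups hit the script local, not the property
      return sDataA[pRecord][pField]
   end GetField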


Regards,

--
Trevor DeVore
Blue Mango Learning Systems
www.bluemangolearning.com - www.screensteps.com


Re: Digging Huge Files

2009-08-05 Thread Jim Ault

Tip:
There are globals, script locals, and handler locals.

Globals belong to Revolution, not the stack or script that created  
them in memory.

These values persist even if the stack is removed from memory.

Script locals are kept in memory as long as the stack is open.
In a way they are like globals, but they are only available to handlers and 
functions in the *same* script.
Declaring a script local variable "shoebox" in different script 
containers creates separate shoeboxes.  Each shoebox will die when its 
script is removed from memory.

---- the script of card 1, "Husband's closet" ----
   local shoebox
   put "brown loafers" into shoebox

---- the script of card 2, "Wife's closet" ----
   local shoebox
   put "sequin pumps" into shoebox

... and you now have two shoeboxes in two places.

Handler locals die at the end of the handler process:

on doThisQuickly
   local shoebox
   -- and now we are done
end doThisQuickly

... and now this shoebox is gone.

global shoebox
... it is not a good idea to name a global and a local the same.
The usual convention is to use gShoebox for a global name.
My convention is to use
zShoebox, so that all the globals are shown at the bottom of the 
Variable Watcher, and
xShoebox, so that all the script locals are shown just above the 
globals in the Variable Watcher.


Hope this helps someone out there.

Jim Ault
Las Vegas


On Aug 5, 2009, at 12:21 PM, Sivakatirswami wrote:

1) I thought local variables are always initialized to empty after  
each run of  script.

Is there something I'm missing there?

global tStart
local tPartials,tRevHits,tCompleted # these were acting as globals?


Jim Ault
jimaultw...@yahoo.com





Re: Chunks vs Arrays - surprising benchmarking results

2009-08-05 Thread Richard Gaskin

Trevor DeVore wrote:


On Aug 5, 2009, at 3:05 PM, Richard Gaskin wrote:

Any thoughts on how I might optimize the array functions?  Did I  
just overlook something obvious which would make them run faster?


Richard,

The main slowdown in your test for GetFromMainArray seems to be in  
transferring the data from the custom property to tDataA. This is  
consistent with my findings when implementing persistent data storage  
in the data grid.


I changed the code a bit so that the field contents and uData custom  
prop array were put into script locals outside of the timers. Here is  
what I got:


GetFromList: 30
GetFromSubArray: 244
GetFromMainArray: 27
All results the same?: true

So if your data is already in a script local then arrays seem to be  
faster.


Excellent sleuthing, Trevor.   Confirmed:  with that change I'm getting 
the same results.  Who would have thought there could be so much 
overhead moving a custom property array into a variable array?


Unfortunately for my case, this data is only one of several tables 
stored in user documents.  I could move all the data out of the stack 
file I'm using for the document into a global when the document is 
opened, and could even use a master array within the global to keep the 
data from different open documents separate, but that adds another layer 
of management and an increase in data size for a speed gain of 0.03 ms 
per iteration.


I had hoped that I might be able to get around the need to copy the data 
out of the properties by using array notation directly on those 
properties, but alas it doesn't seem the property array syntax is yet 
parallel with variable array syntax as it used to be before we got 
multi-dimensional arrays.


I'll think this over, but so far the two methods are so close in 
performance that I'm inclined to stick with what's in place with chunk 
expressions for now.


Thanks again for finding the bottleneck in the data loading.  That's 
valuable info.


--
 Richard Gaskin
 Fourth World
 Revolution training and consulting: http://www.fourthworld.com
 Webzine for Rev developers: http://www.revjournal.com


Re: Chunks vs Arrays - surprising benchmarking results

2009-08-05 Thread Trevor DeVore
I had hoped that I might be able to get around the need to copy the  
data out of the properties by using array notation directly on those  
properties, but alas it doesn't seem the property array syntax is  
yet parallel with variable array syntax as it used to be before we  
got multi-dimensional arrays.


Yes, this is unfortunate. Being able to access multi-dimensional  
arrays as stored in an object would be very useful.


Regards,

--
Trevor DeVore
Blue Mango Learning Systems
www.bluemangolearning.com - www.screensteps.com


Re: Chunks vs Arrays - surprising benchmarking results

2009-08-05 Thread Paul Looney

Richard,
I have nothing to add directly to the chunk vs array discussion  
(Trevor's reply was very good) but I have often found it helpful to  
increase the speed of compound selections by breaking them into  
individual ones.


For instance if you have a large database of names and sexes and you  
want to select every female named Jan (Jan could be male or female).
Select all of the Jans first (this will run much faster than the  
compound selection).
Then select all of the females from the result of the first selection  
(this will run faster because it is searching only Jans - a very  
small list).

This double selection will run faster than a single compound selection.

Obviously this requires a known data-set where one filter will  
eliminate a lot of records (selecting "female" first, then "Jan", 
would be much slower in our example because, presumably, half of the  
list is female while only a small portion is named Jan).
On many lists this can create a much bigger speed difference than the  
chunk vs array variance you noted.
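In Rev terms the two-pass approach might look like this (a sketch; it 
assumes one tab-delimited record per line with the name in item 1 and 
the sex in item 2, neither of which is from Richard's actual data):

   set the itemDelimiter to tab
   put fld "People" into tData

   -- pass 1: the rare criterion first, shrinking the list
   filter tData with "Jan" & tab & "*"

   -- pass 2: scan only the survivors for the second criterion
   repeat for each line tLine in tData
      if item 2 of tLine is "female" then put tLine & cr after tResult
   end repeat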

Paul Looney

On Aug 5, 2009, at 12:05 PM, Richard Gaskin wrote:

The Multi dimensional array filtering thread reminded me of a  
benchmarking test I've been wanting to do for some time, and since  
I have some tasks coming up on a client project which needs this  
sort of stuff it was a good time to dive in.


The goal is simple enough:  one of the most common tasks I need to  
perform with my data is querying specific fields for criteria, and  
if there's a match then assembling the data from a given set of  
fields for display in a list to the user.


I've been using simple tab-delimited lists for this data because it  
was about as compact as it could be and performs reasonably well.   
But with multi-dimensional arrays, the question is whether Rev's  
fast hash index into array data might help me gather the data from  
specific fields in each record faster than using chunk expressions.


So I took a minute to put together a simple test stack:
http://fourthworldlabs.com/rev/speed%20test%20chunks%20vs%20arrays.rev.zip


It has a field containing a list of contact info, another field for  
displaying the test results, and a third for displaying the  
gathered data from the query so I can verify that it's doing what I  
want it to.


If you download the stack you'll see that in addition to the Test  
button there's another one there which converts the list data into  
an array and stores that in a custom property of the field, needed  
for testing the array method.


The code for the Test button is below, and I would appreciate  
anyone here who has the time to look it over and see what I may  
have missed, because the results I'm getting are not what I expected.


The test script is typical of many of the tasks I need to perform  
on this data:  it checks one field to see if it contains a value,  
checks another to see if it contains a different value, and if both  
are true it collects data from three fields into a tab- and return- 
delimited list so I can drop it into a list field to display the  
output.


I had assumed that using chunk expressions to access items in each  
line would be slower than using array notation to get them through  
the hash in the array.  But instead here's the result I'm getting  
(times are in milliseconds for 100 iterations):


GetFromList: 72
GetFromSubArray: 752
GetFromMainArray: 407
All results the same?: true

As noted in the code below, the GetFromList handler uses simple  
chunk expressions to parse the data; GetFromSubArray uses repeat  
for each element to parse out the second-tier array within each  
record; GetFromMainArray walks through the keys to get the data  
from the main array by addressing both dimensions; the last line  
simply lets me know that all three are returning the same result.


I can understand why GetFromSubArray is the slowest, since it has  
to instantiate an array for the second-tier array each time through  
the loop (using repeat for each element...).


But I had hoped that accessing the main array by specifying the  
elements in both dimensions would get to the data more quickly than  
would be needed when asking the engine to count items, but  
apparently not.


Of course there is a scaling issue with chunk expressions.  In my  
sample data there are only eight items in each record, but if there  
were several hundred I would imagine it wouldn't perform as well as  
the array methods.  But in my case most of the data I work with has  
fewer than 30 fields and since chunk expressions are measuring  
about five times faster   I would expect I'd need many more than  
that before chunk expressions drop below arrays in relative  
performance.


The same could be said of the size of the data within each item,  
since that will adversely affect the time the engine needs to walk  
through it looking for item delimiters.  But again, it's not often  
that I have field data that's very long (the contact list is a good  
example, 

MacSpeech?

2009-08-05 Thread Peter Brigham MD
Anyone know if the new Dragon speech recognition/dictation app for the  
Mac, MacSpeech Dictate -- which is getting fairly good reviews --  
will pour text into a field in a rev stack? I'd like to confirm that  
this will work before I shell out $200 for this thing, since my main  
use of it will be to transcribe notes into a field in one of my stack  
systems. The MacSpeech website doesn't list Rev as one of the  
supported apps, and I waited 20 on hold on their customer service  
line before having to dash off (and I was somewhat pessimistic about  
getting an answer anyway, assuming that when I asked about runrev they  
were going to say, what's that?).


Any feedback appreciated.

-- Peter

Peter M. Brigham
pmb...@gmail.com
http://home.comcast.net/~pmbrig



Re: cookbook for putting Rev app on the web?

2009-08-05 Thread Sarah Reichelt
 Is there a simple cookbook list somewhere of how to convert a stand alone
 application to a web application?  Something like

 1) recompile using these settings
 2) put this code on your web page
 3) put your executable here on the server
 4) you are done

1. Go to Standalone settings - Web and check "Build for Web" and
accept the default settings.
2. Build the standalone. This creates a folder containing a revlet and
an html file. It also opens the html file in your default browser for
testing.
3. Edit the html file to suit, or copy & paste the relevant parts into
your own html. There are 2 vital sections: one detects the plugin and
the other displays your revlet. These are marked by comments in the
test.html page, so you can easily see what sections you need.
4. Upload both the html file and the revlet file to the same folder on
your web site.
5. All done :-)

Cheers,
Sarah


Re: MacSpeech?

2009-08-05 Thread Rick Harrison

Hi Peter,

Yes, MacSpeech Dictate version 1.3 works great with Revolution.  I just  
finished talking into a scrolling field in Pre-beta 4.0 with no problems 
at all.

Rick

On Aug 5, 2009, at 7:16 PM, Peter Brigham MD wrote:

Anyone know if the new Dragon speech recognition/dictation app for  
the Mac, MacSpeech Dictate -- which is getting fairly good reviews  
-- will pour text into a field in a rev stack? I'd like to confirm  
that this will work before I shell out $200 for this thing, since my  
main use of it will be to transcribe notes into a field in one of my  
stack systems. The MacSpeech website doesn't list Rev as one of the  
supported apps, and I waited 20 on hold on their customer service  
line before having to dash off (and I was somewhat pessimistic about  
getting an answer anyway, assuming that when I asked about runrev  
they were going to say, what's that?).


Any feedback appreciated.

-- Peter

Peter M. Brigham
pmb...@gmail.com
http://home.comcast.net/~pmbrig



__
Rick Harrison

You can buy my $10 music album Funny Time Machine digital CD on the  
iTunes Store Now!


To visit the iTunes Store now to listen to samples of my CD please  
click on the
following link.  (Please note you must have iTunes installed on your  
computer for this link to work.)


http://phobos.apple.com/WebObjects/MZStore.woa/wa/viewAlbum?playListId=213668290




Re: Digging Huge Files

2009-08-05 Thread Sivakatirswami

Jim Ault wrote:

Tip:
There are globals, script locals, and handler locals.

Globals belong to Revolution, not the stack or script that created 
them in memory.

These values persist even if the stack is removed from memory.

Script locals are kept in memory as long as the stack is open.
In a way. like globals, but these are only available to handlers and 
functions in the *same* script.
Declaring a script local variable shoebox in different script 
containers creates separate shoeboxes.  Each shoebox will die when the 
script is removed from memory.


Oh! Aha! That's it... I don't think I ever used a script local, ever, 
until building this stack.
I normally always use handler locals, but I thought I might improve on 
best practices and declare them first.

But I did not know that the persistence of the local would change.

Learn something new every day...

Thanks for clearing that up.






Re: [teaser] for third party developers

2009-08-05 Thread Shao Sean

Not only would it centralize documentation for different libraries,
Which is the main point of it, even for my own libraries I forget  
what they do ;)




Many users of this list have written a wide variety of helpful
libraries.

Yes and I use many of them in my personal toolbox thank you everyone :D



(you wouldn't want this broken if the docs changed).
Actually I bypass most of the encoding Rev has done with their  
documentation and just read pure HTML files and display them




And it would require the people who write the libraries to
e.g. make their documentation conform to that format and either submit
Everyone has their own style of writing documentation and I will not 
impose any limits other than the limits of Rev HTML. I do have a 
quick and dirty page creator that will ship, but any old pre-formatted 
HTML files should work just fine.




network reachable URL so that the Dictionary could go out to the
internet and harvest the documentation.
Documentation is provided with the libraries/objects that you have  
downloaded and have running in Rev already so no need to have an  
active internet connection..



onrev client as stack?

2009-08-05 Thread Nicolas Cueto
Is the onrev client available as a stack?

I'd like to modify it to suit my own way of working.

--
Nicolas Cueto