Re: Any suggestions on how to "onion skinning"?

2007-12-03 Thread Mark Smith
I must rather shamefacedly admit that my seemingly clever 'binaryDecode' method is actually just extracting the red channel (with the unused alpha byte at zero, tPix div 65536 is simply the red byte) -- so not very clever :(


Best,

Mark



Re: Any suggestions on how to "onion skinning"?

2007-12-03 Thread BNig

The timing on a MacBook Pro 2.33 GHz for the scripts is

on average 190 milliseconds for the whole thing, i.e. passing the original file to ColorSyncScripting, creating the grayscale file on disk, reading the file into Revolution and displaying the image.

If you change the AppleScript so that you keep ColorSyncScripting open instead of closing it, as the current script does, then the whole thing takes about 90 milliseconds. This of course only matters if you intend to do multiple conversions. Right now ColorSyncScripting quits after five seconds; if you set the filename of the image and start the next conversion within that time, you get the 90 milliseconds as it is.

It also helps to launch ColorSyncScripting from a startup script: somehow that "initialises" AppleScript and Revolution, so ColorSyncScripting is ready when you do the first conversion.

A script like this would do:

on openStack
   put "tell application " & quote & "ColorSyncScripting" & quote & " to launch" into forASVar
   do forASVar as applescript
   if the result is not empty then answer the result
end openStack
---

The timing for the two variants strictly within Revolution that do the same as the AppleScript variant:

Ron Woods: 1080 milliseconds
Mark Smith: 660 milliseconds (the binaryDecode variant)

Having set the paintCompression to "RLE" on startup, these values change to:

Ron Woods: 800 milliseconds
Mark Smith: 400 milliseconds (the binaryDecode variant)

Just setting the imageData, with the "RLE" startUp setting:
22 milliseconds !! brilliant

All measurements were made with the same picture, 640 by 480 pixels.


Thank you for pointing me to the RLE and PNG problem; I was not aware of it. I use set the imageData when correcting movies for shifts, which involves 900 to 1200 images per movie at 768 by 576 pixels, and the 400 milliseconds it takes for each image on a 2 GHz iMac definitely add up. The actual taking apart of the image and putting it back together again in Revolution is quite fast (about 60 milliseconds). So I will try setting the paintCompression on openStack.

BTW, I very much like your stacks on image manipulation in Revolution; it is amazing what you do with them.

Thank you.

Bernd








Re: Any suggestions on how to "onion skinning"?

2007-12-03 Thread Wilhelm Sanke


On Mon Dec 3, 2007 BNig niggemann at uni-wh.de wrote:

> Ken,
>
> if you are on MacOSX system >= 10.4 and you just want the image to be
> grayscale then you can try an applescript for colorsyncscripting.
> it is a lot faster than Revolution. It takes Revolution from "set the
> imagedata of image x to y" for a 640x480 on a MacBook Pro 2.3 about 280
> milliseconds to display the image. So whatever you do you have this
> overhead.



You did not tell us in your post how fast using "applescript for colorsyncscripting" actually is.

Concerning the "overhead" you mention when you display the changed imagedata from a variable in Revolution, as in "set the imagedata of img "x" to changeddata", you have to take into account which paintCompression is set. PNG can be up to ten times slower than RLE.


See bug #5113, "Slower speed of imagedata processing with engines >2.6.1 and PNG compression", with the attached test stack. Curiously, this bug is still left as "unconfirmed" although we had a discussion about it on the improve list half a year ago.


On my 2 GHz machine, displaying a 640x480 image from a changed imagedata variable takes 50 milliseconds with the paintCompression set to RLE and 580 milliseconds when set to PNG.
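
If you want to check this on your own machine, a quick test along these lines should reproduce the difference (a minimal, untested sketch; it assumes an image named "i1" on the current card):

on mouseUp
   -- time "set the imageData" once with RLE and once with PNG compression
   put the imageData of image "i1" into tData
   put empty into tReport
   repeat for each item tComp in "rle,png"
      set the paintCompression to tComp
      put the milliseconds into tStart
      set the imageData of image "i1" to tData
      put tComp && (the milliseconds - tStart) && "ms" & cr after tReport
   end repeat
   answer tReport
end mouseUp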


The Revolution engine defaults to RLE, but the Rev IDE changes that to 
PNG on startup.


For fastest imagedata processing use engine 2.6.1 and the Metacard IDE - or set the paintCompression to RLE on openStack.
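
For the latter, a stack script handler as small as this should do (a sketch):

on openStack
   -- override the IDE's PNG default so imagedata updates stay fast
   set the paintCompression to "rle"
   pass openStack
end openStack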


Regards,

Wilhelm Sanke




Re: Any suggestions on how to "onion skinning"?

2007-12-03 Thread BNig

Ken,

if you are on a MacOSX system >= 10.4 and you just want the image to be grayscale, then you can try an AppleScript for ColorSyncScripting. It is a lot faster than Revolution. It takes Revolution about 280 milliseconds from "set the imagedata of image x to y" to displaying a 640x480 image on a MacBook Pro 2.3, so whatever you do you have this overhead. Doing a charToNum 900,000 times doesn't help either. On the other hand, Revolution is quite fast at loading an image file and displaying it.

Here is a recipe:

Make a new stack with an image; call it "i1". Make a field; call it "f1". Make a button.

Put the following AppleScript into the field "f1":

-- thisFile is provided by user in revolution
-- thisFileNewName is provided by user in revolution

set thisFile to thisFile as alias
set sourceProf to POSIX file "/System/Library/ColorSync/Profiles/Generic RGB Profile.icc"
set destProf to POSIX file "/System/Library/ColorSync/Profiles/Generic Gray Profile.icc"

tell application "ColorSyncScripting"
   launch
   try
      -- you can specify where to save the image in revolution by
      -- setting the variable for thisFileNewName
      match thisFile from source (sourceProf) to destination (destProf) saving into file thisFileNewName with replacing
   on error errmsg
      activate
      display dialog errmsg
   end try
   set quit delay to 5
end tell

-- end of the applescript

Set the script of the button to:

on mouseUp
   -- make sure we are on MacOS X and the system version is >= 10.4,
   -- because of ColorSyncScripting
   if the platform is not "MacOS" then
      answer "works only on Macs because of AppleScript"
      exit mouseUp
   end if
   set the itemDelimiter to "."
   get systemVersion()
   if item 1 of it < 10 or (item 1 of it = 10 and item 2 of it < 4) then
      answer "MacOS X version >= 10.4 required"
      exit mouseUp
   end if

   put the millisec into theStart

   put the filename of image 1 into theFilename
   if theFilename is "" then exit mouseUp

   -- in MacOS X 10.5.1 and Revolution 2.8.1 the filename of the image,
   -- if you chose the image in the inspector, is something like
   -- "./../../../Desktop/nameOfTheFile.jpg"; "./../../../" confuses revMacFromUnixPath,
   -- so we try to make a viable filename using specialFolderPath
   if theFilename contains ".." then
      set the itemDelimiter to "/"
      repeat with i = the number of items of theFilename down to 1
         if item i of theFilename is "." or item i of theFilename is ".." then delete item i of theFilename
      end repeat
      put specialFolderPath("cusr") into pathToUser
      put pathToUser & "/" & theFilename into theFilename
   end if

   -- now make a name for a new file, in the same place as the original,
   -- which will contain the grayscale picture; just append " 01" to the file name
   set the itemDelimiter to "."
   put theFilename into thisFileNewName
   put " 01" after item -2 of thisFileNewName

   -- convert rev-style path to macintosh path
   put revMacFromUnixPath(theFilename) into theMacFileName
   put revMacFromUnixPath(thisFileNewName) into theMacFileNewName

   -- now build the applescript command
   put "set thisFile to " & quote & theMacFileName & quote & return into tVarForApplescript
   put "set thisFileNewName to " & quote & theMacFileNewName & quote & return after tVarForApplescript
   put field "f1" after tVarForApplescript
   do tVarForApplescript as Applescript

   if the result <> "" then answer the result

   set the filename of image 1 to thisFileNewName
   put the millisec - theStart into msg
end mouseUp


--

In the inspector choose a color JPEG file for the image "i1", then click the button.

If all works as expected, the AppleScript generates a grayscale file from the original JPEG file in the folder of the original and opens it in the image "i1".

If you do the conversion on a file that is already grayscale, the resulting file will be broken.

To avoid all the confusion with setting the filename of the image "i1" from the inspector in Leopard, you may want to make a button that uses "answer file" to set the filename of image "i1" - see the sketch below.
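
Something like this in a button would do (an untested sketch):

on mouseUp
   answer file "Choose an image file:"
   if it is empty then exit mouseUp -- user cancelled
   set the filename of image "i1" to it
end mouseUp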

I filed a bug report for the filename problem in Leopard.

ColorSyncScripting.app is on every Mac from 10.4.0 on, as far as I know.






Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Mark Smith
Well, one alternative that also seems to work is to map the 24-bit value of each pixel to an 8-bit value:


function makeGS @inData
   repeat with n = 1 to length(inData) - 3 step 4
      get binarydecode("M", char n to n+3 of inData, tPix)
      put numtochar(tPix div 65536) into tv
      put null & tv & tv & tv after outData
   end repeat
   return outData
end makeGS

It's maybe 25% quicker than dealing with each color component  
individually.


Best,

Mark



Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Ian Wood

Oh dear. Wikipedia strikes again. :-(

Those percentages are for very specific purposes and are NOT what is  
generally used in conversion to greyscale. The article isn't really  
*wrong*, but it doesn't bear much resemblance to most real-life usage.


Ian



Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Luis

From Wikipedia: http://en.wikipedia.org/wiki/Greyscale

'Converting color to grayscale

To convert any color to its most approximate level of gray, first one  
must obtain the values of its red, green and blue (RGB) primaries.


Then, add 30% of the red value, 59% of the green value, and 11% of  
the blue value, together. Regardless of the scale employed (0.0 to  
1.0, 0 to 255, 0% to 100%, etc.), the resultant number is the desired  
gray value, such that a new RGB color would have red, green, and blue  
values equal to the new number. These percentages are chosen due to  
the different relative sensitivity of the normal human eye to each of  
the primary colors (less sensitive to green, more to blue).'
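
Applied to Rev imageData, that weighting might look something like this (just a sketch along the lines of the makeGS/makeMask functions elsewhere in this thread, untested):

function makeWeightedGS @inData -- the imageData of the source image
   repeat with n = 1 to length(inData) - 3 step 4
      put chartonum(char n+1 of inData) into tR
      put chartonum(char n+2 of inData) into tG
      put chartonum(char n+3 of inData) into tB
      put numtochar(round(tR * .3 + tG * .59 + tB * .11)) into tV
      put null & tV & tV & tV after outData
   end repeat
   return outData
end makeWeightedGS

You would use it the same way as the other variants, e.g. "set the imageData of img 1 to makeWeightedGS(the imageData of img 1)".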


Cheers,

Luis.




Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Luis

Hiya,

Some time ago we used to add the RGB values together and then divide by 3. This was way back, and the results were ok then.


I spotted this on the net:

R*.3+G*.59+B*.11 to get the grey value, haven't tried it.

Cheers,

Luis.




Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Wilhelm Sanke

Mark Smith mark at maseurope.net wrote:


This is sort of interesting:

if you simply take one of the color bytes of each pixel, and copy it  
to the other two color bytes, you get a gray-scale result. The  
brightness/contrast varies with which color you choose. For the few  
images I've tried, it seems to be red =brighter/less contrast  to  
blue= darker/more contrast. This may be no surprise to the pro image  
wranglers among us, but seemed intriguing to me.



And Chipp Walters chipp at chipp.com wrote:


Mark,

Unless you average the 3, your gray-scale result may not work
properly. Try it on an image with 3 circles: 100%R, 100%G, 100%B and
you'll see what I mean.




My experience is that with most photos you get a very nice grayscale 
image using the red pixel and copying the value to the other two pixels 
like Mark suggested.


The last public version of my "Imagedata Toolkit Preview 3" (update of April 17) contains both grayscale routines using "average" and those copying one color pixel to the other two - implemented for all three colors.


Speed for "average gray" and a 640X480 image (on a 2 GHz machine) is about 1.1 seconds, and for "gray from red" about 600 milliseconds.


The next update of the Imagedata Toolkit, which will be the last with a 
restriction to an enforced image size of  640X480, will probably be 
released before Xmas and contain a number of major enhancements (among 
them: scripted Rev emulation of cubic enlargement, integration and 
expanding of some new Gluas filters from Gimp - translated into 
Revolution - "stretch contrast", "compress contrast", enhancement of 
"jitter" filters with various multi-pixel jitters, another despeckle 
filter based on minimum differences between surrounding pixel pairs 
[this is another Gimp/Gluas development that is identical in 
effectiveness to the "median" approach, but somewhat slower], exchanging 
color values within a defined range by clicking on image and/or color 
scale, copying - and enlarging or shrinking - and pasting oval or 
rectangular portions of an image into the same or another image with 
variable fringe and/or overall blending into the basic image).


Best regards,

Wilhelm Sanke





Re: Any suggestions on how to "onion skinning"?

2007-11-29 Thread Ian Wood

You're extracting one of the three RGB channels.

Ian



Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Mark Smith
That makes sense. Duh!  I only tried it on some photos...ah well, no  
free lunch again :)


Best,

Mark



Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Chipp Walters
Mark,

Unless you average the 3, your gray-scale result may not work
properly. Try it on an image with 3 circles: 100%R, 100%G, 100%B and
you'll see what I mean.
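
If you want to knock up that test image quickly, something like this should do it (a rough sketch; the graphic names and sizes are arbitrary):

on mouseUp
   -- three filled ovals: pure red, pure green, pure blue
   put "255,0,0 0,255,0 0,0,255" into tColors
   lock screen
   repeat with i = 1 to 3
      create graphic ("testCircle" & i)
      put it into tGrc -- "it" holds the new graphic's ID
      set the style of tGrc to "oval"
      set the filled of tGrc to true
      set the width of tGrc to 100
      set the height of tGrc to 100
      set the backgroundColor of tGrc to word i of tColors
      set the loc of tGrc to (i * 120) & "," & 150
   end repeat
   unlock screen
end mouseUp

Run a red-channel grayscale over that and the red circle comes out white while the green and blue circles go black.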



Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Mark Smith

This is sort of interesting:

if you simply take one of the color bytes of each pixel and copy it to the other two color bytes, you get a gray-scale result. The brightness/contrast varies with which color you choose. For the few images I've tried, it seems to run from red = brighter/less contrast to blue = darker/more contrast. This may be no surprise to the pro image wranglers among us, but seemed intriguing to me.


function MakeGS @inData -- pass in the imageData of the source image
   repeat with n = 1 to length(inData) - 3 step 4
      get char n+3 of inData -- blue byte; use n+1 for red, n+2 for green
      put null & it & it & it after outData
   end repeat
   return outData
end MakeGS

and it runs perhaps twice as fast as taking an average.

Best,

Mark





Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ken Ray
On Wed, 28 Nov 2007 20:50:11 -0200, Andre Garzia wrote:

> Ken,
> 
> Have you tried snapshoting the card, then somewhere off screen you put
> a gray image on top of the snapshot with some blend and take another
> shot. Depending on ink combinations you might have a nice result.

Good idea - I'll compare that speed-wise with other suggestions people 
have made.
 
> Another way, which I don't know how fast it is, is to read each pixel
> in the snapshot and convert it using some proportional gray value.

Yeah, that was Chipp's suggestion - which I might use too - took about 
500ms on a 400x400 image on my MacBookPro.


Ken Ray
Sons of Thunder Software, Inc.
Email: [EMAIL PROTECTED]
Web Site: http://www.sonsothunder.com/


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ken Ray
> to turn the specified image into greyscale. Takes about a second for 
> a 640x480px image on a MBP 2GHz Core Duo, so not too speedy.

Thanks, Ian! I'll give that a try...


Ken Ray
Sons of Thunder Software, Inc.
Email: [EMAIL PROTECTED]
Web Site: http://www.sonsothunder.com/


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ian Wood


On 28 Nov 2007, at 21:24, Chipp Walters wrote:


> Or, you could probably do it really fast with an optimized imagedata
> script where you average the values of each pixel and reapply. I would
> think that would zip right along.


I managed to find a function from a discussion last March about making alphaData from images. It was originally written by Wilhelm Sanke, with a few tweaks by me to make it universal for any image size.


Pass it the long ID of an image and it will return a one-channel  
image suitable for a mask.


On 13 Mar 2006, at 20:51, Ian Wood wrote:

function makeMask tMaskImg
   set the cursor to watch
   put the width of tMaskImg into tW
   put the height of tMaskImg into tH
   put the milliseconds into Start
   put the imageData of tMaskImg into iData
   put empty into tMaskData
   put tW * 4 into re
   repeat with i = 0 to (tH - 1)
      repeat with j = 0 to (tW - 1)
         put chartonum(char (i*re + (j*4+2)) of idata) into tC1
         put chartonum(char (i*re + (j*4+3)) of idata) into tC2
         put chartonum(char (i*re + (j*4+4)) of idata) into tC3
         put the round of ((tc1 + tc2 + tc3)/3) into tM
         put numToChar(tM) after tMaskData
      end repeat
   end repeat
   return tMaskData
end makeMask


Add another tweak to put it back into RGB:

function makeMask tMaskImg
   set the cursor to watch
   put the width of tMaskImg into tW
   put the height of tMaskImg into tH
   put the milliseconds into Start
   put the imageData of tMaskImg into iData
   put empty into tMaskData
   put tW * 4 into re
   repeat with i = 0 to (tH - 1)
      repeat with j = 0 to (tW - 1)
         put chartonum(char (i*re + (j*4+2)) of idata) into tC1
         put chartonum(char (i*re + (j*4+3)) of idata) into tC2
         put chartonum(char (i*re + (j*4+4)) of idata) into tC3
         put the round of ((tc1 + tc2 + tc3)/3) into tM
         put numToChar(tM) into tPix
         put tPix & tPix & tPix & tPix after tMaskData
      end repeat
   end repeat
   return tMaskData
end makeMask

And you can do something like:

put makeMask(long id of img 1) into tData
set the imagedata of img 1 to tData

to turn the specified image into greyscale. Takes about a second for  
a 640x480px image on a MBP 2GHz Core Duo, so not too speedy.


Ian


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Andre Garzia
Ken,

Have you tried snapshotting the card, then somewhere off screen putting a gray image on top of the snapshot with some blend and taking another shot? Depending on ink combinations you might have a nice result.

Another way, which I don't know how fast it is, is to read each pixel in the snapshot and convert it using some proportional gray value.
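
Roughly the mechanics of the first idea, as an untested sketch (the object names and numbers here are made up, and the ink/blend is the part to experiment with):

on mouseUp
   lock screen
   -- 1. snapshot the previous card into a new image on this card
   --    (check the exact "import snapshot" syntax for your engine version)
   import snapshot from card (the number of this card - 1) -- assumes there is a previous card
   set the name of the last image to "onionSnap"
   -- 2. lay a flat gray, semi-transparent graphic over the snapshot
   create graphic "grayWash"
   set the style of graphic "grayWash" to "rectangle"
   set the filled of graphic "grayWash" to true
   set the backgroundColor of graphic "grayWash" to "128,128,128"
   set the rect of graphic "grayWash" to the rect of image "onionSnap"
   set the blendLevel of graphic "grayWash" to 50 -- try different inks/blendLevels here
   unlock screen
   -- 3. you could then snapshot that area again to bake it into one grayed image
end mouseUp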

Andre



-- 
http://www.andregarzia.com All We Do Is Code.


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Jim Ault
change the blending of the snapshot to see if that gets you the contrast or
color reversal or transparency...

Jim Ault
Las Vegas




Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Chipp Walters
Hi Ken,

You could use Wilhelm Sanke's image tools to remove the color from the
screencaptured image. I think that would actually be pretty fast.

Or, you could probably do it really fast with an optimized imagedata
script where you average the values of each pixel and reapply. I would
think that would zip right along.

best,

Chipp


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ken Ray
On Wed, 28 Nov 2007 20:19:27, Ian Wood wrote:

>>> I'm working on a program with my son that does simple card-based
>>> animation, but one of the things he asked how to do in Rev stumped me,
>>> and that is doing an "onion skin"
>> 
>> I would try overlaying a translucent screen capture of the prev or next
>> card.
> 
> Agreed. Or do it by taking a snapshot directly from the card - this 
> should work even if it's not the frontmost card.

The problem with those is that if the card has color in it, the 
translucency is also in color. I was hoping to keep it in 
grayscale/gray.

Ken Ray
Sons of Thunder Software, Inc.
Email: [EMAIL PROTECTED]
Web Site: http://www.sonsothunder.com/


Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ian Wood


On 28 Nov 2007, at 19:44, Scott Rossi wrote:

> Recently, Ken Ray wrote:
>
>> I'm working on a program with my son that does simple card-based
>> animation, but one of the things he asked how to do in Rev stumped me,
>> and that is doing an "onion skin"
>
> I would try overlaying a translucent screen capture of the prev or next
> card.

Agreed. Or do it by taking a snapshot directly from the card - this should work even if it's not the frontmost card.


On 28 Nov 2007, at 18:17, Ken Ray wrote:

> At best, what I'd *like* to do is to basically convert images to
> grayscale, and at worst I'd settle for turning all non-white pixels a
> single color of gray. I know I can walk through the imageData and
> convert things pixel by pixel, but I fear that would be really slow on
> larger (say 800x600) images (although I haven't tried it yet).

It might be worth having a look through the ink modes, I *think* there was some trick for getting near-greyscale by layering black and white graphics above and below the image, but I can't remember any details. :-(


Ian




Re: Any suggestions on how to "onion skinning"?

2007-11-28 Thread Scott Rossi
Recently, Ken Ray wrote:

> I'm working on a program with my son that does simple card-based
> animation, but one of the things he asked how to do in Rev stumped me,
> and that is doing an "onion skin"

I would try overlaying a translucent screen capture of the prev or next
card.

Regards,

Scott Rossi
Creative Director
Tactile Media, Multimedia & Design




Any suggestions on how to "onion skinning"?

2007-11-28 Thread Ken Ray
I'm working on a program with my son that does simple card-based 
animation, but one of the things he asked how to do in Rev stumped me, 
and that is doing an "onion skin" - showing a "grayed out" version of 
the previous card's objects on the current card. 

I know how to do *some* of it (I can take actual Rev objects from the 
previous card, copy them, change their lines/fills/blends/etc. to work 
and then group them and put them behind the current set of objects on 
the current card), but what stumps me is how to handle imported/created 
images. 

At best, what I'd *like* to do is to basically convert images to 
grayscale, and at worst I'd settle for turning all non-white pixels a 
single color of gray. I know I can walk through the imageData and 
convert things pixel by pixel, but I fear that would be really slow on 
larger (say 800x600) images (although I haven't tried it yet).

Any suggestions?

Ken Ray
Sons of Thunder Software, Inc.
Email: [EMAIL PROTECTED]
Web Site: http://www.sonsothunder.com/