Re: OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)

2010-05-03 Thread Matthias Rebbe
Dear all,

I think it has all been said. Please stop this annoying discussion.

This list is called use-revolution, so maybe we can get back to Revolution again.

Thank you!

Matthias
On 03.05.2010 at 07:47, Randall Lee Reetz wrote:

 Why don't you ask the guys at Adobe if their content is really aware.
 

RE: OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)

2010-05-03 Thread Randall Lee Reetz
Why don't you ask the guys at Adobe if their content is really aware.


RE: OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)

2010-05-03 Thread Randall Lee Reetz
I can see how the word 'revolution', in the context of this list, has acquired so 
anemic and castrated a meaning. I am sorry. Next time I will use a word that 
means 'all the way around', or 'when a king is replaced by a democracy'.


Re: OT?: AI, learning networks and pattern recognition

2010-05-03 Thread Ian Wood


On 3 May 2010, at 06:47, Randall Lee Reetz wrote:


Why don't you ask the guys at Adobe if their content is really aware.


So your only response to someone taking the time to go through your  
email in a serious manner and discuss the topics included is to take a  
pot-shot and not respond to any of the points?


Yep, put me down as another person who's putting your email address  
into the spam filter as a troll.


Ian




Re: OT?: AI, learning networks and pattern recognition

2010-05-03 Thread Andre Garzia
What is happening on my list? :-(

I stay away for a couple of days and all hell breaks loose... tsk tsk
tsk... Now, I've just devised the perfect solution for this!

Now, Revolution-powered list monitor software will scan every email and
assign a Revolution Content Rate (RCR) factor to it. If it has a high RCR
number, it will simply go through; if its RCR is too low, then you will be
directed to the Quality Center and the system will request that you solve an
engine bug. If/when you solve it, your mail will go through.

The bugs will be assigned using a simple algorithm where the severity or age
of the bug is inversely proportional to the RCR value of the email, so that
if you rate quite low on RCR you will be given the oldest, most powerful
engine bugs to solve.
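
(For the spec-minded, a toy Python sketch of that assignment rule -- bug
list, IDs and RCR scale all invented, naturally:)

    # Entirely fictional RCR bug-assignment rule: the lower your Revolution
    # Content Rate, the older and nastier the engine bug you are handed.
    bugs = [                    # (bug id, age in days, severity 1-10), made up
        (8599, 900, 9),
        (7421, 400, 6),
        (9102,  30, 2),
    ]

    def assign_bug(rcr, bugs):
        # Rank bugs by "pain" (age * severity), worst first; a low RCR
        # lands you at the front of the queue.
        ranked = sorted(bugs, key=lambda b: b[1] * b[2], reverse=True)
        return ranked[min(int(rcr), len(ranked) - 1)]

    print(assign_bug(rcr=0, bugs=bugs))  # RCR 0 draws the 900-day severity-9 bug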

I hope you all understand that this is for the good of the community and
that we'll all benefit from it. If the low RCR rates continue like what I've
been seeing here, I grok that we'll solve all the engine bugs plus port the
engine to Haiku, Solaris (again), FreeBSD (again), and Android (Android is
the new black) in about a week.

If some user reaches ZERO KRCR -- which stands for 0 Kelvin Revolution
Content Rate, which is really absolute zero RCR -- he will be given flight
tickets to Switzerland, a big dossier on the LHC, and the task of preventing
it from destroying the world. If he ever solves all of CERN's bugs, we'll
ship our hero to SETI, and after that small task he'll go to Redmond to
solve Windows and throw chairs at Ballmer.

PS: This message has an RCR of 2, so I've been given a bug to solve, but
since the QA center is down, I don't yet know which one.

On Mon, May 3, 2010 at 7:17 AM, Ian Wood revl...@azurevision.co.uk wrote:





-- 
http://www.andregarzia.com All We Do Is Code.


Re: OT?: AI, learning networks and pattern recognition

2010-05-03 Thread J. Landman Gay

Andre Garzia wrote:


PS: This message has an RCR of 2, so I've been given a bug to solve, but
since the QA center is down, I don't yet know which one.


It's back up again, so get to work. :)

--
Jacqueline Landman Gay | jac...@hyperactivesw.com
HyperActive Software   | http://www.hyperactivesw.com


Re: OT?: AI, learning networks and pattern recognition

2010-05-03 Thread Pierre Sahores
LOL :-)

On 3 May 2010 at 15:13, Andre Garzia wrote:


--
Pierre Sahores
mobile : (33) 6 03 95 77 70

www.wrds.com
www.sahores-conseil.com








OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)

2010-05-02 Thread Ian Wood
Now we're getting somewhere that actually has some vague relevance to  
the list.



On 2 May 2010, at 22:39, Randall Reetz wrote:


I had assumed your questions were rhetorical.


If I ask the same questions multiple times you can be sure that  
they're not rhetorical.


When I say that software hasn't changed I mean to say that it hasn't
jumped qualitative categories. We are still living in a world where
computing exists as pre-written and compiled software that is blindly
executed by machines, stacked on foundational code that has no idea
what it is processing and can only process linearly; all semantics have
been stripped, and it doesn't learn from experience or react to context
unless this too has been pre-codified and frozen in binary or byte
code, etc., etc. Hardware has been souped up, so our little rote tricks
can be made more elaborate within the substantial confines mentioned.
These same in-paradigm restrictions apply both to the software users
slog through and to the software we use to write software.


As a result, these very plastic machines with mercurial potential are
reduced to simple players that react to user interrupts. They are
sequencing systems, not unlike the lead typesetting racks of
Gutenberg-era printing presses. Sure, we have taught them some
interesting-seeming tricks with anything you can represent as digital
media -- be it sound, video, multi-dimensional graph space, markup --
but our sequencer doesn't know enough to care.


So for you, for something to be 'revolutionary' it has to involve a  
full paradigm shift? That's a more extreme definition than most people  
use.


Current processors are capable of 6.5 million instructions per second,
but standard users running standard software use less than a billionth
of the available cycles.


From a pedantic, technical point of view, these days if the processor  
is being used that little then it will ramp down the clock speed,  
which has some environmental and practical benefits in itself. ;-)


As regards photo editing software, anyone aware of the history of
image processing will recognize that most of the stuff seen in
Photoshop and other programs was proposed and executed on systems
long before some guys in France democratized these algorithms for
consumer use and had their code acquired by Adobe. It used to be
called array arithmetic and applied smoothly to images divided up
into a grid of pixels. None of these systems see an image for its
content, except as an array of numbers that can be crunched
sequentially like a spreadsheet.
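
(He's right that it's 'just' array arithmetic, and it's easy to show: a
brightness or contrast tweak really is whole-array maths over the pixel
grid. A minimal NumPy sketch, with a made-up 4x4 'image' -- toy values,
nobody's product code:)

    import numpy as np

    # An image is just a grid of numbers: a tiny 4x4 grayscale "photo".
    image = np.array([[10, 40,  80, 120],
                      [20, 60, 100, 140],
                      [30, 70, 110, 150],
                      [40, 80, 120, 160]], dtype=float)

    # "Array arithmetic": brightness is addition, contrast is multiplication.
    brighter = np.clip(image + 40, 0, 255)
    contrasty = np.clip((image - 128) * 1.5 + 128, 0, 255)

    # A 3x3 box blur is the same idea one step up: each interior output
    # pixel is the mean of its neighbourhood. Still arithmetic -- at no
    # point does anything here "see" content.
    blurred = image.copy()
    for y in range(1, 3):
        for x in range(1, 3):
            blurred[y, x] = image[y-1:y+2, x-1:x+2].mean()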


It was only when object recognition concepts were applied to photos  
that any kind of compositional grammar could be extracted from an  
image and compared as parts to other images similarly decomposed.   
This is a form of semantic processing and has its parallels in other  
media like text parsers and sound analysis software.


You haven't looked up what content-aware fill *is*, have you? It's  
based on the same basic concepts of pattern-matching/feature detection  
that facial recognition software is based on but with a different  
emphasis.
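
(Not Adobe's actual algorithm -- content-aware fill is reported to build
on randomized PatchMatch-style search -- but here's a brute-force sketch
of the patch-matching idea underneath it, on invented data:)

    import numpy as np

    def best_patch(donor, target):
        # Brute-force patch matching: scan every 3x3 window in `donor`
        # and return the one closest to `target` by squared difference.
        # Real content-aware fill does this kind of search, vastly faster.
        best, best_score = None, np.inf
        h, w = donor.shape
        for y in range(h - 2):
            for x in range(w - 2):
                candidate = donor[y:y+3, x:x+3]
                score = np.sum((candidate - target) ** 2)
                if score < best_score:
                    best, best_score = candidate, score
        return best

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32)).astype(float)
    ring = img[10:13, 10:13]              # texture around a damaged spot
    print(best_patch(img[16:, :], ring))  # best donor patch from elsewhere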


To paraphrase, it's not facial recognition that you think is the only
revolutionary feature in photography in twenty years, it's
pattern-matching/detection/eigenvectors. A lot of time and frustration
would have been saved if you'd said that in the first place.
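
(For anyone who wants the eigenvector bit made concrete: the classic
'eigenfaces' recipe is principal component analysis over images treated
as flat vectors. A toy sketch, with random numbers standing in for real
faces:)

    import numpy as np

    # Toy "eigenfaces": treat each (tiny) face image as a flat vector,
    # take the principal eigenvectors of the set, and compare faces by
    # their coordinates in that reduced space.
    rng = np.random.default_rng(1)
    faces = rng.normal(size=(20, 64))        # 20 "images", 8x8 flattened

    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    cov = centered.T @ centered / len(faces)

    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: symmetric matrices
    top = eigvecs[:, -8:]                    # 8 strongest "eigenfaces"

    weights = centered @ top                 # each face as just 8 numbers
    dists = np.linalg.norm(weights - weights[0], axis=1)
    print(dists.argsort()[:3])               # nearest matches to face 0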


Semantics opens the door to the building of systems that  
understand the content they process.  That is the promised second  
revolution in computation that really hasn't seen any practical  
light of day as of yet.


You're jumping too many steps here - object recognition concepts are  
in *widespread* use in consumer software and devices, whether it's the  
aforementioned 'focus-on-a-face' digital cameras, healing brushes in  
many different pieces of software, feature recognition in panoramic  
stitching software or even live stitching in some of the new Sony  
cameras.
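
(It's commodity stuff by now. The open-source OpenCV library does
Viola-Jones-style face detection in a handful of lines -- this sketch
assumes its Python bindings, one of its bundled Haar cascade files, and
a hypothetical photo.jpg:)

    import cv2  # OpenCV's Python bindings

    # Viola-Jones-style detection with a cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")          # hypothetical input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:               # one rectangle per detected face
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces_marked.jpg", image)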


Semantic processing of content doesn't magically enable a computer to  
initiate action.


Data mining really isn't semantically mindful; it simply uses
statistical reduction mechanisms to guess at the existence and location
of pattern (a good first step, but missing the grammatical hierarchy
necessary to work towards a self-optimized and domain-independent
ability to detect and represent salience in the stacked grammar that
makes up any complex system).


Combining pattern-matching with adaptive systems, whether they be
neural networks or something else, is another matter - but it's been a
long hard slog to find out that this is what you're talking about.
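
(His data-mining point, at least, is easy to picture: a statistical scan
can flag *where* a signal departs from background while knowing nothing
about *what* it found. A toy sketch with synthetic data:)

    import numpy as np

    # Locate-without-understanding: flag where a signal departs from its
    # background statistics. Synthetic data, with a bump buried in it.
    rng = np.random.default_rng(2)
    signal = rng.normal(0, 1, size=200)
    signal[120:130] += 4                      # the buried "pattern"

    window = 10
    means = np.convolve(signal, np.ones(window) / window, mode="valid")
    z = (means - means.mean()) / means.std()  # z-score of each windowed mean

    print(np.where(np.abs(z) > 3)[0])         # window positions over the bump:
                                              # it found *where*, not *what*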


Adaptive systems themselves are also quite widespread by now, from
TiVos learning what programmes you watch to predictive text on an
iPhone, from iTunes 'Genius' playlists and recommendations through to
Siri (just bought by Apple, as it happens).
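
(The adaptive part is simple to sketch, too: a toy predictive-text model
that updates bigram counts from whatever it sees, so its suggestions
shift with experience. Invented training lines:)

    from collections import Counter, defaultdict

    # A bigram predictive-text model that keeps learning from what it
    # sees, so its suggestions shift with the user's own habits.
    class PredictiveText:
        def __init__(self):
            self.following = defaultdict(Counter)

        def learn(self, text):
            words = text.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.following[prev][nxt] += 1

        def suggest(self, word, n=3):
            return [w for w, _ in self.following[word.lower()].most_common(n)]

    model = PredictiveText()
    model.learn("the list is called use-revolution")
    model.learn("the list is not a soapbox")
    print(model.suggest("is"))   # ['called', 'not'] -- and it keeps adapting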


Such systems will need to work all of