Randall, Take it up with this guy:
http://en.wikipedia.org/wiki/Douglas_Lenat

He's spent thirty-five years thinking about the same issues.

--- On Sun, 5/2/10, Randall Reetz <rand...@randallreetz.com> wrote:

> From: Randall Reetz <rand...@randallreetz.com>
> Subject: Re: Apples actual response to the Flash issue
> To: "How to use Revolution" <use-revolution@lists.runrev.com>
> Date: Sunday, May 2, 2010, 4:39 PM
>
> OK, Ian, I promised I would respond, and here goes. Sorry I didn't before; I had assumed your questions were rhetorical.
>
> When I say that software hasn't changed, I mean that it hasn't jumped qualitative categories. We are still living in a world where computing exists as pre-written, compiled software that is blindly executed by machines, stacked on foundational code that has no idea what it is processing, can only process linearly, has had all semantics stripped out, and doesn't learn from experience or react to context unless that too has been pre-codified and frozen into binary or byte code. Hardware has been souped up, so our little rote tricks can be made more elaborate within the substantial confines just mentioned. These same in-paradigm restrictions apply both to the software users slog through and to the software we use to write software.
>
> As a result, these very plastic machines with mercurial potential are reduced to simple players that react to user interrupts. They are sequencing systems, not unlike the lead typesetting racks of Gutenberg-era printing presses. Sure, we have taught them some interesting-seeming tricks – if you can represent something as digital media, be it sound, video, multi-dimensional graph space, or markup – but our sequencer doesn't know enough to care.
>
> Current processors are capable of 6.5 million instructions per second, yet standard users running standard software use less than a billionth of the available cycles. The current paradigm absolutely abhors processor access not initiated by user input. But even if a machine had the inclination to get some work done on its own… what would it do? It doesn't know anything about anything, so deciding what to do as the day progresses is impossible.
>
> As regards photo editing software, anyone aware of the history of image processing will recognize that most of the stuff seen in Photoshop and other programs was proposed and executed on earlier systems long before some guys in France democratized these algorithms for consumer use and had their code acquired by Adobe. It used to be called array arithmetic, applied smoothly to images divided up into a grid of pixels. None of these systems "see" an image for its content, except as an array of numbers that can be crunched sequentially like a spreadsheet.
>
> It was only when object recognition concepts were applied to photos that any kind of compositional grammar could be extracted from an image and compared, part by part, with other images similarly decomposed. This is a form of semantic processing, and it has parallels in other media, such as text parsers and sound analysis software.
>
> Semantics opens the door to building systems that "understand" the content they process. That is the promised second revolution in computation, and it really hasn't seen any practical light of day as of yet.
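(For anyone who hasn't run into the term, here is a minimal sketch of the "array arithmetic" described above – an image treated as nothing more than a grid of numbers, crunched cell by cell like a spreadsheet. It's in Python rather than Revolution purely for brevity; the image data and function names are illustrative, not taken from any actual product.)

# A tiny grayscale "image": one brightness value (0-255) per pixel.
image = [
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
]

def brighten(pixels, amount):
    # Array arithmetic: add a constant to every cell, clamped to 0-255.
    return [[min(255, p + amount) for p in row] for row in pixels]

def box_blur(pixels):
    # Average each pixel with its neighbours -- a simple smoothing filter.
    height, width = len(pixels), len(pixels[0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < height and 0 <= nx < width:
                        total += pixels[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

print(brighten(image, 40)[0])   # first row, 40 units brighter
print(box_blur(image)[0])       # first row, smoothed

Neither function knows or cares whether the numbers describe a face, a sunset, or random noise – which is exactly the limitation being described.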
> Data mining really isn't semantically mindful; it simply uses statistical reduction mechanisms to guess at the existence and location of patterns (a good first step, but one that misses the grammatical hierarchy necessary to work towards a self-optimized and domain-independent ability to detect and represent salience in the stacked grammar that makes up any complex system).
>
> Such systems will need to work all of the time. ALL OF THE TIME! Only pausing momentarily to pay attention to our interactions as needed. Once they are running, these systems will subsume all of the manual activity we have been made to perform to this day. Think "fly by wire" for processing. Gone is the need to discretely encode every single bit in exactly the only possible sequence. We simply won't be able to know what bits are being processed, or who or what made them, and more importantly, we won't have to care.
>
> What it means is the difference between writing a letter ourselves and having our computer intercede by understanding the meta-intent behind the rote and inefficient processes we engage in today – what are letters for? What resources is this user or entity after, and why? Who has those resources? Which of those who have the desired resources need something that we might have in exchange? How are the vectors of intent among all entities entangled and grouped, and how can our systems work towards the optimization of this global intent matrix?
>
> So, when I use the word "revisionist" I am calling attention to the old sheep dressed up in new clothing but still being sheep. Software feature creep is not really evolving software. As the good programmers at REV know, most of the work to maintain a product is incurred just keeping current with changes in the OS substrate on which it runs. This rarely results in qualitative paradigm jumps.
>
> That the jump is so long in coming is understandable. It is easy to send a punch card through a machine and have it react accordingly every time. The jump from rote execution of static code to self-aware, semantically self-optimized pattern engines is a big, big, big jump. But it isn't as big as it might at first seem. It is happening. It will happen. And computing will finally result in the kind of substantial increase in productivity that its expense requires.
>
> Randall Reetz
>
>
> On May 2, 2010, at 12:32 PM, Ian Wood wrote:
>
> >
> > On 2 May 2010, at 20:13, Randall Lee Reetz wrote:
> >
> >> So, how about some content? A substantive rebuttal? Putting your ideas out there for all to see?
> >
> > How about replying to direct questions asked of you, for instance why facial recognition is revolutionary but content-aware fill isn't? Or why the examples of things facial recognition is being used for *now* in consumer products are 'Almost nothing'?
> >
> > It would also be useful if you could explain what you mean by revisionist applications. I *assume* you are talking about apps that are evolutionary rather than revolutionary in how they change what people do with them, but it's not clear, and 'revisionist' has some very specific connotations.
> >
> > Ian

_______________________________________________
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution