Mike: take a whole set of diverse patterns – Koch curve, Mandelbrot, herringbone, cellular automaton, etc., etc. – and explain how the brain is able to abstract from *all of them together* and recognize them collectively as “patterns”... Where’s the pattern in a set of diverse patterns, B & B? And where’s the complexity, Jim?
What do you mean by that question?
Jim

On Wed, Aug 22, 2012 at 7:40 PM, Mike Tintner <[email protected]> wrote:

> Yeah, I can’t see why Fuster is a big deal. He summarises what we *know* –
> and sure, we know that the brain progressively abstracts – but we don’t
> know, or have any consensus on, *how*.
>
> Abstracting from patterns is relatively simple. But the real-world scenes
> and objects that confront the human brain aren’t patterned or easy to
> abstract – which is why B & B & other AGI-ers ignore them & stick to their
> artificial worlds.
>
> If you want to put that mathematically, take a whole set of diverse
> patterns – Koch curve, Mandelbrot, herringbone, cellular automaton, etc.,
> etc. – and explain how the brain is able to abstract from *all of them
> together* and recognize them collectively as “patterns” (and not just as
> Koch curves/herringbones, etc.).
>
> Where’s the pattern in a set of diverse patterns, B & B? And where’s the
> complexity, Jim?
>
> http://www.alexander-hamilton.net/assets/images/geometric_samples.jpg
>
> Loud silence.
>
> *From:* Jim Bromer <[email protected]>
> *Sent:* Thursday, August 23, 2012 12:06 AM
> *To:* AGI <[email protected]>
> *Subject:* Re: [agi] Boris Explains His Theory
>
> I found a short lecture by Fuster:
> Joaquin Fuster: Distributed Memory and the Perception-Action Cycle (2007)
> http://archive.org/details/Brain_Network_Dynamics_2007-13-Joaquin_Fuster
>
> On Wed, Aug 22, 2012 at 5:19 PM, Boris Kazachenko <[email protected]> wrote:
>
>> > However, I probably won't be able to read it for a few weeks
>>
>> It will take you much longer to actually read through it :).
>> See esp. chapter 3: Functional Architecture of the Cognit (buzzword
>> alarm).
>>
>> *From:* Jim Bromer <[email protected]>
>> If you want a mainstream source, read "Cortex & Mind" by Joaquin Fuster,
>> he is a paramount authority on the subject.
>>
>> If it was convenient I would get it tonight.
>> However, I probably won't be able to read it for a few weeks.
>> Jim
>>
>> On Wed, Aug 22, 2012 at 10:52 AM, Boris Kazachenko <[email protected]> wrote:
>>
>>> Jim,
>>>
>>> >> "You keep confusing source with destination, because you insist on
>>> >> operating within your declarative memory, which is a rather
>>> >> superficial subset of your cognitive model :)."
>>> >
>>> > Are you replying using your theory as a model of the mind (indeed, as
>>> > a model of my mind!)
>>>
>>> It's not *my* theory; a mainstream position in neuroscience is that the
>>> neocortex is a hierarchy of generalization, from primary sensory & motor
>>> areas to incrementally higher association areas. It's also well known
>>> that declarative memory is restricted to the latter. Besides, these
>>> things are tautologically self-evident to me.
>>>
>>> > with a smiley face to represent some humor about doing that?
>>>
>>> That mostly represents my self-satisfaction with putting things well :).
>>>
>>> > And, are you saying that declarative memory is a destination in your
>>> > model rather than a source? Is declarative memory derived? That is
>>> > what you are saying, right?
>>>
>>> Yes, see the above. If you want a mainstream source, read "Cortex &
>>> Mind" by Joaquin Fuster; he is a paramount authority on the subject.
>>>
>>> > Is your theory a theory of how the brain works, a theory for
>>> > artificial general intelligence using computers, or both?
>>>
>>> Both, but the artificial version is a whole lot cleaner; the brain is
>>> loaded with evolutionary artifacts. For example, I don't have this
>>> artificial distinction between implicit & declarative memory, between
>>> sensory & motor hierarchies, & a bunch of other things.
>>>
>>> > Do you regularly see the kinds of thinking that people do in the terms
>>> > of your model?
>>>
>>> Yes, except that "my" part of it is well below the surface (low-level
>>> processing); the mainstream part is usually sufficient to qualitatively
>>> explain declarative thinking.
>>>
>>> http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html
>>>
>>> --------------------------------------------------
>>> From: "Jim Bromer" <[email protected]>
>>> Sent: Wednesday, August 22, 2012 9:42 AM
>>> To: "AGI" <[email protected]>
>>> Subject: [agi] Boris Explains His Theory
>>>
>>> > Boris,
>>> > I am just not getting this. So let me try starting with some simple
>>> > questions.
>>> > I had said, "Forcing semantic values into 3-dimensional orthogonal
>>> > space seems amazingly confused to me."
>>> > You replied,
>>> > "You keep confusing source with destination, because you insist on
>>> > operating within your declarative memory, which is a rather
>>> > superficial subset of your cognitive model :)."
>>> >
>>> > Are you replying using your theory as a model of the mind (indeed, as
>>> > a model of my mind!) with a smiley face to represent some humor about
>>> > doing that? Did you think that my statement about forcing semantic
>>> > values was made in reference to something in your theory? Because
>>> > that is not what I meant. I was just saying that I have read papers
>>> > about using semantic vectors, and my thoughts on that are that trying
>>> > to force semantic vectors into 3-dimensional space seems confused.
>>> >
>>> > And, are you saying that declarative memory is a destination in your
>>> > model rather than a source? Is declarative memory derived? That is
>>> > what you are saying, right?
>>> >
>>> > Is your theory a theory of how the brain works, a theory for
>>> > artificial general intelligence using computers, or both?
>>> >
>>> > Do you regularly see the kinds of thinking that people do in the terms
>>> > of your model?
>>> > Jim Bromer
>>> >
>>> > --------------- Previous Messages ---------------
>>> >
>>> > Jim,
>>> >
>>> >> I don't understand your comments about detecting patterns. You said:
>>> >
>>> > This is interactive pattern projection, but you have to discover those
>>> > patterns first. Technically, you simply multiply all the vectors in a
>>> > pattern by a relative distance to a target coordinate. And then you
>>> > compare multiple patterns projected to the same coordinate, & multiply
>>> > the difference by relative strength of each pattern. That gives you a
>>> > combined prediction, or probability distribution if the patterns are
>>> > mutually exclusive.
>>> >
>>> > That comment was about projecting patterns, not detecting them.
>>> >
>>> >> What kind of patterns are you talking about? How do the elemental
>>> >> observations (from the sensory device) get turned into vectors?
>>> >
>>> > Comparisons generate derivatives. A vector is d(input) over
>>> > d(coordinate). Conventionally, it's over multiple coordinates
>>> > (dimensions), & the input can be a lower coordinate, but that's not
>>> > essential.
>>> >
>>> >> Are you saying that the "higher level of search and generalization"
>>> >> is where/how the pattern vectors are created?
>>> >
>>> > No, all levels.
>>> >
>>> >> Why or how would you pick out a particular target coordinate to use
>>> >> to combine a prediction?
>>> >
>>> > Well, coordinate resolution is variable, so I am talking about a
>>> > min->max span. Basically, vector projection is part of input selection
>>> > for a higher-level search. The target coordinate span is a feedback
>>> > from that higher level, or, if there isn't any, current_search_span *
>>> > selection_rate: preset lossiness / sparseness of representation on the
>>> > higher level.
>>> >
>>> >> Are you saying that all predictions have individual coordinates?
>>> >
>>> > Individual coordinate span. It's what + where; you've got to have both.
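Boris's "a vector is d(input) over d(coordinate)" admits a minimal numeric sketch. This is only an illustration of discrete derivatives along a single coordinate, not his actual algorithm; the function name and the one-dimensional setup are assumptions.

```python
# Illustration only (not Boris's code): comparing consecutive inputs along
# one coordinate yields discrete derivatives, d(input) / d(coordinate).
def derivatives(inputs, coords):
    """Return the per-step derivative between each pair of neighbors."""
    return [
        (inputs[i + 1] - inputs[i]) / (coords[i + 1] - coords[i])
        for i in range(len(inputs) - 1)
    ]
```

For example, `derivatives([1.0, 3.0, 6.0], [0.0, 1.0, 3.0])` gives `[2.0, 1.5]`.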
>>> >
>>> >> That alone means that they would have to exist in a dynamic virtual
>>> >> space of many dimensions. Forcing semantic values into 3-dimensional
>>> >> orthogonal space seems amazingly confused to me.
>>> >
>>> > You keep confusing source with destination, because you insist on
>>> > operating within your declarative memory, which is a rather
>>> > superficial subset of your cognitive model :).
>>> >
>>> > We *derive* all our "semantic" values from 4D-continuous observation;
>>> > no need to "force" them into it.
>>> >
>>> >> What kind of space would your vectors exist in, how do they get
>>> >> there, and why do you choose a particular coordinate for a
>>> >> combination of predictions?
>>> >
>>> > As I said, hierarchical search generates incremental syntax, &
>>> > variables within it are individually evaluated for search on
>>> > successive levels. The strongest variable, whether it's an original
>>> > coordinate | modality or a derivative thereof, becomes a coordinate
>>> > for a higher level. The strength here must be averaged over the
>>> > higher level's span.
>>> >
>>> > It's hard to explain this on the "semantic" level, which is profoundly
>>> > confused in humans anyway. But a good intermediate example is the
>>> > Periodic Table. You take atomic mass (which is a derived, not an
>>> > original, variable) as the top coordinate, compare pH value along that
>>> > coordinate, & notice recurrent periodicity in its variation. Since pH
>>> > is a main chemical property, you then use it as a primary dimension
>>> > that defines a period, & atomic mass becomes a secondary dimension
>>> > that defines a sequence of periods. Both dimensions are derived; they
>>> > may seem kind of halfway between original & "semantic", but the same
>>> > derivation process will get you to the latter.
>>> >
>>> > http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html
>>> >
>>> > Boris,
>>> >
>>> > I don't understand your comments about detecting patterns.
>>> > You said:
>>> >
>>> > This is interactive pattern projection, but you have to discover those
>>> > patterns first. Technically, you simply multiply all the vectors in a
>>> > pattern by a relative distance to a target coordinate. And then you
>>> > compare multiple patterns projected to the same coordinate, & multiply
>>> > the difference by relative strength of each pattern. That gives you a
>>> > combined prediction, or probability distribution if the patterns are
>>> > mutually exclusive :).
>>> >
>>> > What kind of patterns are you talking about? How do the elemental
>>> > observations (from the sensory device) get turned into vectors? Are
>>> > you saying that the "higher level of search and generalization" is
>>> > where/how the pattern vectors are created? Why or how would you pick
>>> > out a particular target coordinate to use to combine a prediction?
>>> > Are you saying that all predictions have individual coordinates?
>>> >
>>> > I have read papers on Semantic Vectors (I do not need to be told that
>>> > the sources of semantic vectors are different than the sources of the
>>> > products of your system), and I have always felt that they were
>>> > absurdly inappropriate for semantics (or concepts) because they forced
>>> > the semantic concepts into a system that they did not fit into. As is
>>> > so obvious to Two-Door, concepts are relativistic. That alone means
>>> > that they would have to exist in a dynamic virtual space of many
>>> > dimensions. Forcing semantic values into 3-dimensional orthogonal
>>> > space seems amazingly confused to me.
>>> >
>>> > What kind of space would your vectors exist in, how do they get there,
>>> > and why do you choose a particular coordinate for a combination of
>>> > predictions?
>>> >
>>> > (Incidentally, just to remind you, my ideas of concepts are not
>>> > necessarily expressed as vectors, although I am not closed-minded
>>> > about the idea.)
>>> >
>>> > Jim Bromer
>>> >
>>> > On Tue, Aug 21, 2012 at 2:22 PM, Boris Kazachenko <[email protected]> wrote:
>>> >
>>> >> On the other hand I am interested in conjectures about conceptual
>>> >> vectors and stuff like that
>>> >
>>> > You can't formalize "conceptual" vectors, except in terms of
>>> > "conceptual" coordinates.
>>> >
>>> > Jim Bromer
>>> >
>>> > Thanks for the smiley faces, Boris...
>>> > I disagree that you have to multiply all the vectors in a pattern by
>>> > a relative distance to a target coordinate in order to combine
>>> > imagined complex ideas and related observations. Our theories are
>>> > very different. (On the other hand, I am interested in conjectures
>>> > about conceptual vectors and stuff like that.)
>>> >
>>> > I am interested in a continuation of the explanation of your theories,
>>> > and I hope to get back to it soon.
>>> > Jim Bromer
>>> >
>>> > On Tue, Aug 21, 2012 at 7:57 AM, Boris Kazachenko <[email protected]> wrote:
>>> >
>>> > Jim,
>>> >
>>> >> Where Boris and I disagree is that I feel that because of relativity
>>> >> the input source of an idea may not be the most elemental source of
>>> >> the idea that needs to be considered.
>>> >
>>> > Right, but that's the simplest assumption; you must make it unless
>>> > you know otherwise. And you only know otherwise if you've discovered
>>> > a more "elemental" (stable) source on some higher level of search &
>>> > generalization. That would generate a focusing / motor feedback,
>>> > always derived from prior feedforward. As I keep saying, complexity
>>> > must be incremental :).
>>> >
>>> >> One simple example is that we can use our imagination and study of
>>> >> the subject of the concept in order to extend our ideas about the
>>> >> subject beyond those ideas which came directly from observations of
>>> >> it.
>>> >
>>> > This is interactive pattern projection, but you have to discover
>>> > those patterns first.
>>> > Technically, you simply multiply all the vectors in a pattern by a
>>> > relative distance to a target coordinate. And then you compare
>>> > multiple patterns projected to the same coordinate, & multiply the
>>> > difference by relative strength of each pattern. That gives you a
>>> > combined prediction, or probability distribution if the patterns are
>>> > mutually exclusive :).
>>> >
>>> > -------------------------------------------
>>> > AGI
>>> > Archives: https://www.listbox.com/member/archive/303/=now
>>> > RSS Feed: https://www.listbox.com/member/archive/rss/303/18407320-d9907b69
>>> > Modify Your Subscription: https://www.listbox.com/member/?&
>>> > Powered by Listbox: http://www.listbox.com
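The projection-and-combination recipe that Boris repeats above can be sketched numerically. This is a hedged guess at the mechanics, assuming a "pattern" is a list of one-dimensional derivative vectors plus a scalar strength; every name and signature here is hypothetical, not taken from his algorithm.

```python
# Hedged sketch of the quoted recipe: project each pattern's vectors by the
# relative distance to a target coordinate, then combine the projections
# weighted by each pattern's strength.
def project(vectors, current_coord, target_coord):
    """Scale each derivative vector by the relative distance to the target."""
    distance = target_coord - current_coord
    return [v * distance for v in vectors]

def combine(projections, strengths):
    """Strength-weighted average of each pattern's net projected change."""
    total = sum(strengths)
    nets = [sum(p) for p in projections]  # net predicted change per pattern
    return sum(n * s for n, s in zip(nets, strengths)) / total

def distribution(strengths):
    """If patterns are mutually exclusive, normalize strengths to probabilities."""
    total = sum(strengths)
    return [s / total for s in strengths]
```

For instance, `project([1.0, 2.0], 0.0, 2.0)` gives `[2.0, 4.0]`; combining that with a second pattern's projection `[1.0]` at strengths `[3.0, 1.0]` gives `combine(...) == 4.75`, while `distribution([3.0, 1.0])` gives `[0.75, 0.25]`.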
