Hi Aaron, I agree with your point!
My term related to what you're talking about is a "skeleton hierarchy" (you also talk about the highest levels of the cognitive hierarchy). OpenCog also seems to me (AFAIK, I'm not an expert in it) similar to a "skeleton hierarchy" - it should be, if it's to be possible to connect it with DeSTIN. A bottom-up sensorimotor generalizing engine also builds that hierarchy - it generalizes and specializes, goes up and down through the hierarchy while learning and while acting, and the highest levels get very abstract and simple, having the fewest combinations, up to the point where there's not enough variety left to generalize. On the bottom side, the human mind goes into higher resolution of perception/sensory inputs and higher resolution of control over the lowest-level input - we went from substances to molecules, atoms, electrons, quarks... nanotechnology.

I believe that bottom-up generalization can do the generalization quickly enough on its own (that remains to be tested), but I also believe it's possible to build thinking machines using carefully devised "skeleton hierarchies", because the "fluid" ones eventually converge to levels which, in similar environments, would be functionally and structurally similar for any AGI or human. Also, for a general generalizing algorithm that constructs higher levels in a systematic way, the "mechanics" of the levels above should be predictable/constructable, or the same. I thought of something like that regarding language many years ago - a more physical/grounded way of defining language than what NLP is doing - but I haven't elaborated it. It also might be harder to devise those levels manually than to let them grow on their own... We are yet to see... The growing ones are supposed to be more flexible, too, but a skeleton hierarchy also has to have interfaces for the generalization-specialization flow, so maybe it "has to be flexible" as well. Another thing to see... Classical logic is an example of a top-level generalization - "bones" of the skeleton at a very high level.

As for OOP, I think it can be produced the following way:

- *Linear machine code* - separate instructions, *the first programs ever*, *single cases, specifics*; the simplest program counter.
- *Jumps, conditional branches and subroutines* - *repeating blocks of code*, a way to save time and space.
- *Libraries* - *repeating blocks of subroutines*.
- *Generalized subroutines and types* (prototyping/generics) - *repeating operations* over different data types.
- *Classes* - *repeating blocks of subroutines and data*.
- *Design patterns* - *repeating blocks of data flows, transformations, class usage*, ...
- *Types of applications* (word processors, video editors...) - *repeating overall functions*, regarding the way the user uses them and the type of the input and output data and the interface, such as sound/image/animation/text...

It's all about repetition of elements and compression of the time and space that represent those elements. In developmental terms: the time of development, of typing/input. In terms of space, it also includes mind space - there's a limited complexity per level. The elements get too big to keep in mind/in a generalizing level (or in a register, L1, L2 cache, RAM...), so either their representations within that level must be shortened, or the level must stop adding complexity and "grow" another level above. (See the note about a paper, below.) ...
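To make the repetition-and-compression idea concrete, here's a minimal toy sketch in Python - my own illustration, not from any particular system, and all the names are made up. Each level gives a name to a repetition from the level below, so the expression at the top is shorter than the sum of the single cases it covers:

# Level 0: "machine code" style - single cases, everything spelled out.
total_a = 2 + 3
total_b = 10 + 20
total_c = 7 + 8

# Level 1: a subroutine - a repeating block of code gets a name.
def add(x, y):
    return x + y

# Level 2: a generalized subroutine - the same repeating operation
# over different data types (ints, floats, strings, lists all support +).
def combine(items, start):
    result = start
    for item in items:
        result = add(result, item)
    return result

# Level 3: a class - repeating blocks of subroutines *and* data,
# compressed into one unit behind an interface.
class Accumulator:
    def __init__(self, start=0):
        self.total = start               # the data

    def feed(self, item):                # the subroutine bound to the data
        self.total = add(self.total, item)
        return self.total

# One short expression at a high level replaces the single cases of
# Level 0 - compression of the time/space needed to express them.
print(combine([2, 3, 10, 20, 7, 8], 0))     # 50
print(combine(["sub", "rou", "tine"], ""))  # subroutine

Each step up keeps less detail and covers more cases - the same direction of travel as the generalization-specialization flow in the cognitive hierarchy.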
I wrote the discussion below for the other thread, but then omitted it, and I think it matches this thread even better:

*A mind doesn't have to care about the endless diversity* - a baby may know only 1 or 2 or 3 people, and live in one house and one yard, and still be capable of learning to speak, to see and to manipulate physical objects. Nor does she need to meet 1,000,000 people in order to learn how to recognize a face (unlike some superficial, dumb neural networks). *Why is that possible? Precisely because the general features are supposed to be available in all samples of the class* - after all, that's exactly why they are "general". *In a way they should be available even if you're given a single example*, and that's why a mind doesn't have to care about all the details and all possible cases when there are too many combinations.

One of the reasons is that the human mind (in particular) is bad at remembering details. It's perhaps mutual - the human mind generalizes, and that's why it doesn't remember all the details; and vice versa - the human brain has failed to remember the details, or has found that it can get them from the sensory inputs (retina, movements of the body, touch, sound... all together etc.) when needed - that's why it has gone into abstracting only the essential. The details are both intractable for mental simulation and accessible from a lower-level virtual universe (my terminology) - the reality. That incapability was one of the reasons to build machines and computers - and the thinking computers will have the very big advantage of being capable of both precise simulation and quick, approximate, generalized thought.

Indeed, *that weakness, or strength, of the human mind is one reason why many people see variety/novelty as something special or magical - they cannot remember or think of it well, and they cannot manage it well - the same as with randomness, luck, chance*. *A similar phenomenon goes with creativity, and also "free will"* - *both are related to a lower resolution of predictability of the "special" things than the observer/evaluator expects there should be, in order to accept the evaluated "thing" as creative/unusual/based on "free will"/"special"/"original", rather than usual, expected, predictable, not creative, trivial, deterministic, not original etc.* That correlation is covered at least in my own old works on mind and creativity. Ben Goertzel has discussed a similar case in his recent article about creativity and computers in H+, *Can Computers Be Creative? – A Dialogue on Creativity, Radical Novelty, AGI, Physics and the Brain*: http://hplusmagazine.com/2012/10/20/can-computers-be-creative/ Find the story with the bubbles in the boiling water. The moral is that something that may appear as new and unpredictable "radical novelty" to one observer might be fully predictable and already encapsulated in the model of another, even with concise formulae - the informed observer has learnt about temperature, fluid thermodynamics etc. and can detect what the fluid is, what the temperature is etc. Thus, the bubbles would appear trivial, simple and predictable to the better informed observer - *"novelty" or "radical novelty" is relative.*

In the case of creativity, in arts or engineering/invention, the cognitive aspect of the factors that make a particular art piece or invention appear extraordinary, creative, original, ingenious etc. is the lesser amount of intelligence/expertise/skills/talents/experience/insight etc.
of the "masses" (competition) who evaluate a piece, compared to the capabilities of the artist/author/creator/inventor. The pieces are ingenious for the observers who *are not ingenious*. *The author/creator should be capable to predict the outcomes better and to systematically produce "unpredictable", original, creative pieces - from the point of view of the observers, - otherwise she first wouldn't be capable to produce the piece at all, and second that's exactly what makes her cognitive capabilities superior and impressive *- she can robustly predict (produce systematic results in the domain) things that the others can't (for example correctly render the perspective, shading, lightning etc. on a piece of art). *"Predict" can be substituted with "simulate", and the outcomes for the producer are somewhat continuous, within an analogy from Calculus*.* For the others it's not a prediction/simulation, it's a discontinuous unpredictable function, a glitch, magic, radical novelty etc... * Those unpredictability concepts are related to "*randomness*" and "* indeterminacy*", which is related to* lack of control*, which is related to the concepts of *supreme supernatural powers*, *transcendental forces and God - The Creator*... That all is related to why many uninformed people believe thinking machines can't be creative. *Creativity -->* * --> Unpredictable --> Randomness, Indeterminacy --> Lack of Control --> Supernatural Phenomenons, Supernatural Powers --> God, Transcendental Forces* [that matches God as the Creator] * --> "WOW, Oh my God, he's so creative!" * * * There are other non-cognitive** aspects of appreciation of art and creativity, too but that's enough already for now... :) ... *** Non-cognitive - in this context I mean not contained within the data describing the piece of art of invention and within the cognitive ("data crunching") capabilities of the creator and the evaluator. For example social factors, physical reward subsystem factors, mood, personal experience/attitude and others...* ...... **Note: I've got an old paper related to this thread exactly in in the domain of Computer science, I also noticed the OOP and the lower levels adding the specifics, it was in accordance with my overall view on intelligence and hierarchical generalization in mind and Universe- "Theory of Mind and Universe". The paper is called "Abstract Theory of the Exceptions of the Rules in Computers", but it was written in Bulgarian, I have to translate it.* .... *Todor "Tosh" Arnaudov ....* * -- Twenkid Research:* http://research.twenkid.com -- *Self-Improving General Intelligence Conference*: http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html *-- Todor Arnaudov's Researches Blog**: *http://artificial-mind.blogspot.com > From: Aaron Hosford <hosfor...@gmail.com> > To: a...@listbox.com > Subject: Re: [agi] Re: Simulation for Perception, Symbols for Understanding > Date: Sun, 4 Nov 2012 22:58:20 -0600 > I'm of the opinion that if we want to deal with complexity effectively, we > should look at existing technologies used to handle it. The Object Oriented > paradigm is, I think, an excellent example. It is specifically designed to > limit complexity through encapsulation, clumping related information > together and putting it behind a firewall, of sorts. 
> The bonus is, we already think in terms of objects and classes, so not only does maintenance of an Object Oriented program become easier due to the limits on the interconnectedness of classes introduced by encapsulation, but reasoning about it becomes easier due to our natural way of understanding things in Object Oriented terms.
>
> So why does the brain clump things into objects and classes? I think the reason the Object Oriented approach works for software development carries over perfectly to thought and reasoning. It is simpler to categorize things and ignore their detailed internal workings in favor of high-level summaries of expectations. Saying dogs can bite is saying there is a "bite" method for class "dog". Who cares about how a dog does its biting when we're trying to decide whether to go near one or not?
>
> Once you've shifted to an Object Oriented perspective, it's also fairly easy to describe a situation in those terms, and it comes out looking remarkably like natural language. (In many Object Oriented languages, method calls directly parallel English grammar: if dog.bite(me, time = past) then me.avoid(dog).) This is more evidence, to me, that Object Oriented is a useful metaphor for how our minds are organized.
>
> The simulation techniques these guys are using are a way to recognize the current behaviors of people and objects in the visual field, which can then be used to generate Object Oriented descriptions of the scene. (I don't have a reference on hand, but it has been shown, I believe, that typically once a person looks away from a scene, they only remember a general description, not all the details. It's true of me personally, at the least.) Once an effective description has been put together in this high-level representational scheme, it is much easier to identify a small set of relevant possibilities and reason about them to put together a plan of action. Combinatorics are still present, but they are on the scale of thousands of cases instead of billions. After a plan of action has been generated at the abstract level, the process of generalization can then be reversed to move back down the generalization/specialization hierarchy towards a detailed simulation, at which point flaws in the plan can be identified and it can be iteratively revised through repeated generalization/specialization cycles until an effective one is produced.

On Sun, Nov 4, 2012 at 6:27 AM, Jim Bromer <jimbro...@gmail.com> wrote:

> On Tue, Oct 30, 2012 at 2:11 PM, hosfor...@gmail.com <hosfor...@gmail.com> wrote:
>
>> They need certainty or confidence values, and a list of possibilities, not just a single outcome. Then reasoning can choose which interpretation(s) make the most sense in context. But for their purposes -- automated video logging & alerts -- this works fine. Once the work is done, attaching confidence values and multiple possibilities should be relatively minor.
>
> On Sat, Nov 3, 2012 at 7:53 PM, Todor Arnaudov <twen...@gmail.com> wrote:
>
>> You don't need millions of dumb samples of "all possible cases of ..." like the brute force (dumb) machine learning; the problem must be approached right, with finding the appropriate correlations - then there is no combinatorial explosion.
>
> ---------------------------------------------------
>
> I essentially agree with this, although I would not do it in just the way you guys are indicating. The mind is the solution to the combinatorial explosion problem.
> However, it is not all that simple. While simulation and other kinds of imagination are important to producing good insight - being able to understand what is going on and react to it in an appropriate way - it does not show how you can react to many different situations quickly. The problem of responding to many different situations at once throws the crackpot into the assumption of simplicity about this, because when you deal with different kinds of situations at once there could be good reasons to see them as combined. This introduces a potential for more complexity into the problem. Should the different situations be treated as separate, or should they be considered to be interrelated? If you are going to rely on the imagination or previous correlation, then you are effectively introducing additional categories of possibilities (of situations to be recognized). Is this methodology really a simplification of complexity, or is it just a rerouting of complexity? The potential for combinatorial complexity occurs because of the introduction or definition of separate components, not because someone has an intrinsic desire to make thorough inspections of the possibilities.
>
> Jim Bromer
>
> On Sat, Nov 3, 2012 at 7:53 PM, Todor Arnaudov <twen...@gmail.com> wrote:
>
>> Nice link, Aaron,
>>
>> That's what I was talking about with 3D reconstruction - it also involves reconstruction of the light sources, as in GI (global illumination). You don't need millions of dumb samples of "all possible cases of ..." like the brute force (dumb) machine learning; the problem must be approached right, with finding the appropriate correlations - then there is no combinatorial explosion.
>>
>> Different human activities are encoded in the possible motions of the human body and their possible interactions with other bodies (in the classical physics sense), i.e. initially those bodies must be identified, and that must be evaluated in as low a resolution as the activity is general.
>>
>> The symbols are essentially low-resolution representations; they are always fewer than the raw data.
>>
>> The activities are obviously encoded in the trajectories of those bodies (something physics has been dealing with for centuries), and when generalized in low resolution, the trajectories would converge to the cases of carrying, throwing, kicking etc., depending on some rough/global specifics: is it accelerating, is it by a hand, an arm, the head, are the arms moving in parallel, etc.
>>
>> --
>> Todor "Tosh" Arnaudov
>> -- Twenkid Research: http://research.twenkid.com
>> -- Self-Improving General Intelligence Conference: http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html
>> -- Todor Arnaudov's Researches Blog: http://artificial-mind.blogspot.com
>>
>>> From: Aaron Hosford <hosfor...@gmail.com>
>>> To: a...@listbox.com
>>> Subject: Simulation for Perception, Symbols for Understanding
>>> Date: Tue, 30 Oct 2012 09:30:55 -0500
>>>
>>> http://www.rec.ri.cmu.edu/about/news/11_01_minds.php
>>>
>>> Recognizing and predicting human activity in video footage is a difficult problem. People do not all perform the same action in the same way. Different actions may look very similar on video. And videos of the same action can vary wildly in appearance due to lighting, perspective, background, the individuals involved, and more.
>>> To minimize the effects of these variations, Carnegie Mellon's Mind's Eye software will generate 3D models of the human activities and match these models to the person's motion in the video. It will compare the video motion to actions it's already been trained to recognize (such as walk, jump, and stand) and identify patterns of actions (such as pick up and carry). The software examines these patterns to infer what the person in the video is doing. It also makes predictions about what is likely to happen next and can guess at activities that might be obscured or occur off-camera.
>>>
>>> This project's approach is to use 3D simulation to detect and classify behavior, and then generate symbolic information about the events that were observed. I'm encouraged to see someone doing work on this stage of cognition, as I see perception as the "missing link" that's stopping AGI from developing.
>>>
>>> I wonder, will a certain naysayer feel vindicated that someone else sees simulation as vital to intelligence (and is using it to solve precisely the problems he says it's needed to solve), or will he be annoyed that the ultimate form the information takes is symbolic, which is compatible with semantic nets or any number of other existing AGI approaches?
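P.S. To illustrate Aaron's point about method calls paralleling English grammar, here is a tiny Python sketch of my own - the classes, method names and the "tense" parameter are just hypothetical, following his dog.bite example:

# Toy sketch of "OOP as grammar": subject.verb(object, tense).
class Agent:
    def __init__(self, name):
        self.name = name

    def avoid(self, other):
        print(f"{self.name} avoids {other.name}.")

class Dog(Agent):
    def bite(self, target, time="present"):
        # How the dog does its biting is encapsulated here; callers
        # only see the high-level summary of expectations.
        return time == "past"    # True if the bite already happened

me = Agent("me")
dog = Dog("the dog")

# Reads almost like the English sentence it encodes:
if dog.bite(me, time="past"):
    me.avoid(dog)                # prints: me avoids the dog.

The caller never inspects the biting mechanics - exactly the compression of detail into a high-level summary that we were discussing above.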