My approach is to use knowledge graphs with graded edge weights to represent degrees of truth. This allows me to combine structure with statistics in a fine-grained way that ordinary symbolic representations cannot achieve. With properly designed structural conventions, such a knowledge graph can represent anything either of the two proposed systems can represent.
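To make this concrete, here is a rough Python sketch of the kind of structure I have in mind (the class name, the add_edge/neighbors helpers, and the 0-to-1 weight scale are illustrative choices of mine, not a finished design):

from collections import defaultdict

class WeightedKnowledgeGraph:
    # Toy sketch: every edge carries a graded degree of truth in [0, 1].

    def __init__(self):
        # adjacency map: source vertex -> {(relation, target): weight}
        self.edges = defaultdict(dict)

    def add_edge(self, source, relation, target, weight=1.0):
        # Assert a relation between two vertices with a graded truth value.
        self.edges[source][(relation, target)] = weight

    def neighbors(self, source):
        # Everything known about a vertex, one hop out, with its weight.
        return [(rel, tgt, w) for (rel, tgt), w in self.edges[source].items()]

g = WeightedKnowledgeGraph()
g.add_edge("crocodile", "is-a", "reptile", weight=1.0)
g.add_edge("crocodile", "can", "jump", weight=0.6)  # graded rather than binary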
As an example of the fine-grained representational capabilities such an approach offers, consider the misunderstanding I initially had about the meaning of "Could a crocodile run a steeplechase?" Not initially knowing what a steeplechase was, I took "run" in the sense of "operate" or "manage" rather than "participate in". After I read the footnote and educated myself on steeplechases, I realized my mistake. However, having recognized both interpretations, I now have representations of both in my head, with one weighted more heavily as the correct interpretation. These can be represented in a weighted knowledge graph by linking the representation of the sentence itself to the two representations of its meaning via weighted edges. As new information and evidence are brought to bear on the two meanings, the weights of the edges connecting the sentence to them can be adjusted accordingly.

The same technique can in fact be used *within* the representations of meaning themselves. For example, suppose I were to make an ambiguous anaphoric reference, one that could resolve to either of two completely different entities the system is familiar with. I can link the vertex representing the uttered noun phrase to those two entities via appropriately weighted edges. Representing this in propositional form (which is the interpretation I give to the phrase "symbolic representation" -- correct me if I am wrong) would require two separate propositions, each with a graded truth value, but with no clear connection to each other. To record the relationship between them -- that they are both candidate bindings for the anaphoric noun phrase -- a third proposition would have to be created, containing the first two as sub-propositions explicitly related to the noun phrase along with their respective weights. But with the weights contained inside the proposition, each change to those weights would produce a new proposition, and the old one would have to be thrown away as out of date. (The other option is to make propositions mutable, which would make search a nightmare.) Additionally, the number of ways to represent the same information would grow combinatorially with each additional anaphoric binding option, since the relationships are commutative. Trying to connect this information with other anaphoric ambiguities in the same sentence would add another layer of combinatorics on top of the one we already have. The proposition representing the full, ambiguous meaning of a single sentence would be monstrous.

Instead, with a graph, I can represent each option with a single weighted edge, and there is precisely one maximally compact representation no matter how many options we have for any number of anaphoric noun phrases. Making a change requires only modifying a single weight or adding a single new edge, in place, without significantly affecting other nearby information structures. Another advantage of graph form is that we can exploit the many algorithms of graph theory, along with the explicit locality of reference for related information, which greatly speeds up and simplifies searches for relevant information. I can have a vertex representing alligators, with everything I know about alligators connected directly to it, so the system only needs to search vertices connected to the alligator vertex for relevant information rather than all the information in the entire database.
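To give a flavor of the bookkeeping, here is a rough Python sketch (only a sketch -- the vertex names, the additive update rule, and the one-hop lookup are illustrative assumptions of mine): it records two candidate bindings for an ambiguous noun phrase as weighted edges, adjusts them in place as evidence arrives, and then does the kind of local, neighborhood-limited search I appeal to below when I talk about spreading activation.

from collections import defaultdict

# adjacency map: vertex -> {(relation, neighbor): weight in [0, 1]}
graph = defaultdict(dict)

def link(a, relation, b, weight):
    graph[a][(relation, b)] = weight

def reweigh(a, relation, b, delta):
    # Adjust a single edge in place; nothing else in the graph is touched.
    w = graph[a].get((relation, b), 0.0)
    graph[a][(relation, b)] = min(1.0, max(0.0, w + delta))

def neighborhood(vertex):
    # Search only the vertices directly connected to this one.
    return {b: w for (rel, b), w in graph[vertex].items()}

# Two candidate resolutions for an ambiguous phrase, initially equally weighted.
link("np:it", "refers-to", "alligator", 0.5)
link("np:it", "refers-to", "steeplechase", 0.5)

# New evidence favors the alligator reading: bump one weight, damp the other.
reweigh("np:it", "refers-to", "alligator", +0.3)
reweigh("np:it", "refers-to", "steeplechase", -0.3)

# Locality of reference: facts about alligators hang directly off that vertex.
link("alligator", "can", "swim", 0.95)
link("alligator", "can", "jump", 0.6)

print(neighborhood("np:it"))      # {'alligator': 0.8, 'steeplechase': 0.2}
print(neighborhood("alligator"))  # {'swim': 0.95, 'jump': 0.6}

Proper spreading activation would just iterate that one-hop step outward with decaying activation from two starting vertices at once, but the point stands: an update touches exactly one edge, and a search stays local to the vertices involved.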
I can use spreading activation to quickly find all the vertices that relate both to alligators and to steeplechases, making it easy to determine whether alligators can jump or run and thereby participate in such a race. (A physical or other special-purpose model of alligator behavior could be referenced from a particular vertex, making it accessible just as quickly as any other data about them.) And if I learn something new about alligators, I can add it without interfering with the information already present, and immediately know which existing information needs to be collated with the new data.

Finally, it should be clear that since graphs can represent computer programs (flowcharts) and data structures (pointer indirection networks), graphs are representationally complete -- they can represent any thing or process that can be represented on a computer in any way. So if our understanding can be modeled computationally, as I believe it can, then graphs are up to the job.

I've gone on far longer than I originally intended singing the praises of weighted knowledge graphs, and for that I apologize, but I feel they are the missing link standing between us and AGI. The key is to take advantage of locality-of-reference features and to structure the graphs properly. Most uses of graphs I have seen to date take a fairly naive approach, blandly dealing with inheritance features like "is-a" class relationships or path identification in maps. With a sufficiently sophisticated approach to universal representation, graphs can reach the level of power and expressiveness that Derek has indicated is necessary for AGI, and can further act as the "glue" by which other, more special-purpose (non-symbolic) tools can be combined with potentially ambiguous symbolic information.

On Mon, Dec 16, 2013 at 3:33 PM, Piaget Modeler <piagetmode...@hotmail.com> wrote:

> I think we won't know until we try these approaches, individually and in combination, and see what works and what doesn't, and to what extent. As you say, most likely some combination will yield the best results, rather than any single approach du jour (such as Deep Learning).
>
> ~PM
>
> ------------------------------
> From: derekz...@msn.com
> To: a...@listbox.com
> Subject: RE: [agi] On our best behavior
> Date: Mon, 16 Dec 2013 12:14:13 -0700
>
> Hmm... I fear that such terse descriptions gloss over the interesting issues... :)
>
> If by "symbolic or statistical representations" you mean formal systems consisting of databases of amodal symbol relations associated with binary or graded (i.e. "statistical") truth values, along with a priori truth-preserving transformations, I don't think the suggestion works. In theory (a la computability theory), maybe... but in actual practice such things hardly seem rich enough to represent things that people, for example, work with very easily. From an engineering perspective, tasked with designing a system that can answer questions like "Could a crocodile run a steeplechase?" (from the paper), I'd be an idiot not to build spatial dynamic physical modeling into a representational scheme --- it's just such a more efficient way of representing many of the issues at hand than trying to represent such "naive physics" with predicate calculus or something similar (which surely seems doomed to everybody by now...).
>
> You can just call that "symbolic" if you like, but then the word isn't doing very much work.
> It seems like a system capable of being intelligent in our universe would need some nifty ways of operating with models that are logical, statistical, spatial, causal, physical, temporal, etc., and moving between those modeling modalities as needed. I wouldn't call such a thing a hybrid of symbolic and statistical; I'd call it something considerably more powerful and expressive than that.
>
> But that's just me...
>
> Also note that knowledge acquisition is an intimate part of this... I could say that "C++!" is a good thing to build cognitive models with, but it raises serious issues about where the code comes from...
>
> Whatever we're doing in our heads, it isn't computing statistical conclusions against a database of statistical facts about crocodile leg lengths and hedge heights and (most troublingly) jumping abilities....
>
> Which isn't to say it is impossible to do so, but why would anybody willingly start so far from the target?
>
> derek
>
> ------------------------------
> From: piagetmode...@hotmail.com
> To: a...@listbox.com
> Subject: RE: [agi] On our best behavior
> Date: Mon, 16 Dec 2013 10:07:17 -0800
>
> DZ: "What can we build such models from?"
>
> One answer is symbolic or statistical representations, or some hybrid thereof.
>
> ~PM