Jim,
Ok, so let's say that the prior conversation had been about a train-shaped
clock that was bought on eBay and shipped by UPS. In that case
the clock interpretation, along with taking a look at UPS Quantum View(tm)
(their online tracking system), would be the more valid one.
Of course, many jokes are based on this type of ambiguity.
How is this different from the old idea of schemas?
(e.g. http://sites.wiki.ubc.ca/etec510/Schema_Theory )
Thanks,
Dimitry
On 10/6/2012 9:18 PM, Jim Bromer wrote:
I don't have any details on how it would actually operate because it
is a fairly wild model. I would have to direct it using a somewhat
precise special language so that I could test the basic ideas
out without having to build a full-fledged AGI program.
Let's say that the program was trying to interpret what a sentence meant.
"What time is the train arriving?"
Suppose that it had recognized the words but now was trying to make
sense of them. (I am not going to write a program that has a
vocabulary at the start by the way.) It would know that trains depart
and arrive at train stations if those concepts were already associated
with the concept of a train (through previous learning). If it knew
that departures and arrivals were made according to a schedule which
was based on time and station, then it should be able to interpret
that the sentence was concerned with the arrival time of a train at
some station. It might not be absolutely certain of
this interpretation. But it would be able to make that interpretation
if those kinds of relations had been associated with the concept of a
train. Other possible interpretations, such as an odd one inferring
that a train is a kind of timepiece, would not be confirmed by the
knowledge that it had about trains. Suppose, however, that it had
knowledge of a clock that was shaped like a model train.
Then there might be some confusion about what the sentence meant.
However, even in this special case the program could learn that
arrival times were a more common issue when talking about trains than
the much rarer case of a clock that was made to look like a train. So
even though the program might be exposed to a lot of odd cases, it
could also have a way to designate more common conceptual relations in
its conceptual network.
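A rough Python sketch of this kind of disambiguation, where the interpretation is chosen by the strength of previously learned associations. Every concept name, relation, and weight below is invented purely for illustration:

```python
# Hypothetical sketch: picking among candidate interpretations of
# "What time is the train arriving?" by how strongly each candidate
# relation is supported by previously learned associations.
# All names and weights here are invented for illustration.

# Learned associations for the concept "train": relation -> strength
# (strength ~ how often the relation was observed in past experience).
train_associations = {
    ("arrives-at", "station"): 0.9,
    ("departs-from", "station"): 0.9,
    ("scheduled-by", "timetable"): 0.8,
    ("is-a", "timepiece"): 0.05,  # rare: a clock shaped like a train
}

def score_interpretation(relations_needed):
    """Score an interpretation by the weakest association it relies on."""
    return min(train_associations.get(r, 0.0) for r in relations_needed)

# Two candidate readings of the sentence:
candidates = {
    "arrival-time-of-train-at-station": [
        ("arrives-at", "station"),
        ("scheduled-by", "timetable"),
    ],
    "train-as-clock-telling-time": [
        ("is-a", "timepiece"),
    ],
}

best = max(candidates, key=lambda c: score_interpretation(candidates[c]))
print(best)  # -> arrival-time-of-train-at-station
```

The schedule reading wins because every relation it relies on is strongly supported; the clock reading is not ruled out, just outscored, which matches the idea of designating more common conceptual relations without discarding the odd cases.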
But this idea goes beyond associating facts with a particular
concept. Conceptual relations can also be used to shape how ideas
work. In fact, even this simple case demonstrates one way this can occur.
Jim Bromer
On Sat, Oct 6, 2012 at 9:45 PM, Dimitry Volfson <[email protected]> wrote:
Jim,
I'm trying to understand. Could you show how your conceptual
network would "see how the parts are being used and how much
sense that makes for the central concept," and what the result
would be depending on how much sense was made? A hypothetical
example is what I'd like to see.
Thanks,
Dimitry
On 10/6/2012 7:12 AM, Jim Bromer wrote:
I am presenting a rough idea of a conceptual network as a
potential advancement from earlier ideas like semantic networks.
Looking on Wikipedia I found some examples of semantic networks.
In a semantic network the nodes are the "concepts" and the edges
are "relations between concepts". A semantic network was usually
defined with a conveniently finite number of definitions of the
edges (as types of relations between concepts) and a lot of nodes
(which were the concepts). One difference, then, is that the
conceptual network I envision will not be limited in the
number of relations between concepts. That initial contrast,
however, is a little misleading because, as can easily be deduced
from an inspection of a semantic network, the edges, which are
called "relations between concepts," are concepts themselves. So
in the conceptual network, a relation could become a concept in
its own right. And the conceptual network that I am thinking of
does not have a single systematic method of being 'activated' in
some way (although searches would be made through it).
Furthermore, the network does not have to be envisioned as a
single network; since different kinds of concepts may be
associated arbitrarily, the potential for interrelations would
tend to be extensive.
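A minimal Python sketch of that difference, where a relation is an ordinary node and can therefore carry associations of its own. All of the names here are illustrative, not a proposed design:

```python
# Minimal sketch of a conceptual network in which relations are
# themselves concepts (nodes), unlike a classic semantic network
# with a fixed, finite edge vocabulary. All names are illustrative.

class Concept:
    def __init__(self, name):
        self.name = name
        self.links = []  # (relation_concept, target_concept) pairs

    def relate(self, relation, target):
        self.links.append((relation, target))

train = Concept("train")
station = Concept("station")
arrives_at = Concept("arrives-at")  # the relation is an ordinary concept

train.relate(arrives_at, station)

# Because "arrives-at" is itself a concept, it can have relations of
# its own, e.g. an association with the concept of a schedule:
schedule = Concept("schedule")
governed_by = Concept("governed-by")
arrives_at.relate(governed_by, schedule)

for rel, tgt in train.links:
    print(train.name, rel.name, tgt.name)  # -> train arrives-at station
```

In a classic semantic network, "arrives-at" would be a fixed edge label; here it is a node that can be searched, extended, and related like any other concept.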
Since this network is not as simple as a semantic network, the
utilization of the parts of the conceptual network would probably
be defined as they are used. So the different parts would not all
work just the same way. (However, the underlying methodology of
how the different parts are used might be drawn from a standard
system). Finally, since the network is not used in one simple
way, deduction (derived from conceptual knowledge) would also
rely on what I call structural relations. Different concepts
would have different structural relations when used with other
concepts. This way an expectation of structural relations
concerning a central concept can help to derive meaning from a
sentence or an observation. So if the central concepts of a
sentence (for example) were recognized then other parts of the
sentence that were directly related to the central concepts could
be found by fitting them to some of the potential structural
relationships that had been previously defined for those central
concepts.
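A hypothetical sketch of that fitting step, assuming the central concept "train" has already been recognized in the sentence. The slot names and part types below are invented for illustration:

```python
# Hypothetical sketch: once a central concept is recognized in a
# sentence, fit the remaining parts to the structural relations
# previously learned for that concept. Slot names and the typed
# reading of the sentence parts are invented, not parsed.

# Structural relations learned for the central concept "train":
# slot name -> set of concept types that can fill it.
# (Python dicts preserve insertion order, so slots are tried in order.)
train_slots = {
    "arrival-time": {"time"},
    "location": {"station", "platform"},
    "passenger": {"person"},
}

# Crude typed reading of the other parts of the sentence
# "What time is the train arriving?"
sentence_parts = [("what time", "time"), ("arriving", "event")]

def fit_parts(slots, parts):
    """Assign each sentence part to the first structural slot it fits."""
    assignment = {}
    for text, ctype in parts:
        for slot, allowed in slots.items():
            if ctype in allowed and slot not in assignment:
                assignment[slot] = text
                break
    return assignment

print(fit_parts(train_slots, sentence_parts))
# "what time" fills the arrival-time slot; "arriving" finds no slot
```

The point is only the shape of the process: the expected structural relations of the central concept guide where the other parts of the sentence belong.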
Different people have different kinds of knowledge about things,
so the structural relations that I am talking about are not
(usually) normative. For instance, a causal relation is a
structural relation, but different people will believe different
kinds of things so there would be no pre-defined underlying
normative system of causality for the AGI program. However, the
program would be interested in trying to understand what other
people are describing and if this model of structural relations
could be used as a successful basis for an AGI program then it
would learn something about how people structure their own
conceptual relations. Many other kinds of relations between
concepts could be considered as structural; I mentioned causality
only because it is such a familiar concept.
The structural concept idea that I am thinking about is
distinctly different from (what I call) the funneling AGI models.
Conclusions are not derived through a funneling of deductions or
weight-based reasoning. Yes, I would use deduction and
weight-based reasoning, and yes, the reaching of a conclusion would
have a terminal point, but the structural concept method means
that you don't just try to smush a measurement of the validity of
all ideas that are related to some central concept into a common
hopper, even when the conclusion would not be homogeneous for that
combination of things. Instead the program would look to see how
the parts are being used and whether or not that makes sense for
the kind of central concepts that are being considered at that
moment. (I am using the term "structural" to denote the fact that
interrelated concepts should not all be funneled through one
single circuit of reasoning.)
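A toy Python contrast between the two approaches, with invented concepts and weights; the point is only the shape of the computation, not the numbers:

```python
# Toy contrast: a "funneling" combination that smushes every weight
# into one hopper, versus a structural check that first asks whether
# each part is being used in a way that makes sense for the central
# concept. Everything here is invented for illustration.

central = "train"
related = [
    ("station", "location", 0.9),
    ("timetable", "schedule", 0.8),
    ("wristwatch", "timepiece", 0.7),  # high weight, wrong usage
]

# Usage patterns that make sense for the central concept "train":
sensible_usage = {"train": {"location", "schedule", "cargo"}}

def funneled_score(parts):
    """Funneling: average every weight regardless of usage."""
    return sum(w for _, _, w in parts) / len(parts)

def structural_score(concept, parts):
    """Structural: only parts whose usage fits the concept count."""
    fitting = [w for _, use, w in parts if use in sensible_usage[concept]]
    return sum(fitting) / len(fitting) if fitting else 0.0

print(round(funneled_score(related), 2))             # -> 0.8
print(round(structural_score(central, related), 2))  # -> 0.85
```

The funneled score lets the irrelevant but heavily weighted "wristwatch" idea drag the combination around; the structural version excludes it because its usage does not fit the central concept being considered.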
While many people have come to the conclusion that my ideas about
conceptual structure only represent a high-level form of GOFAI,
or that they are the same as the desired high-level products of
machine learning, my theory is that the structural relations
between (individuated and instanced) concepts have to be seen as
part of the basis of reasoning, not just the result of it. So
while the individuated structural relations between concepts in a
particular instance would (usually) be learned, the underlying
programming has to take their usage into account. I believe that
the use of conceptual structure concerning some central idea that
is to be considered has to be a part of the foundational process
of artificial intelligence. And this idea can be used as an
explanation of how we can derive meaning from combinations of
ideas that are somewhat novel.
This is not an easy model but I believe it could be developed and
at least tested with some simple cases.
Jim Bromer
On Fri, Oct 5, 2012 at 2:16 PM, Piaget Modeler
<[email protected]> wrote:
Sure.
~PM
------------------------------------------------------------------------
I am curious about something. Is anyone interested in
discussing my ideas about conceptual structure?
Jim Bromer
*AGI* | Archives
<https://www.listbox.com/member/archive/303/=now>
<https://www.listbox.com/member/archive/rss/303/10215994-5ed4e9d1> |
Modify <https://www.listbox.com/member/?&> Your Subscription
[Powered by Listbox] <http://www.listbox.com>