Re: [agi] creativity

2008-10-12 Thread Mike Tintner
Ben,

I'm glad that you have decided to respond to - or at least acknowledge - my 
criticisms/points re creativity, because they are extremely important and 
central to AGI - and, as I said, it isn't just you but everyone who is avoiding 
them, when it is in all your interests to confront them *now*, urgently. I 
think in fact my criticisms do hold - but obviously I will have to look at your 
book first. (I may have looked at it already - I've read quite a bit of you - 
but you've written a lot.) If you could link me, or send me a copy, I will 
reply in a more considered way.
  ... some loose ends in reply to a message from a few days back ...

  Mike Tintner wrote:

  ***
  Be honest - when and where have you ever addressed creative problems? 
(Just count how many problems I have raised.)
  ***

  In my 1997 book FROM COMPLEXITY TO CREATIVITY

   

  *** 
  Just as it is obvious that I know next to nothing about programming, it 
is also obvious that you have very little experience of discussing creative 
problem-solving - at, I stress, a *metacognitive* level. (And nor, AFAIK, do any 
AGI-ers - only partly excepting Minsky.)

  ***


  The 1997 book I referenced above in fact contains a significant amount of 
metacognition about creativity.  You seem to have the idea that it's supposed 
to be possible to explain an AGI's creative process in detail, in specific 
instances ... and I don't know why you think that, since it's not even the case 
for humans.
   

  *** 
  All this stands in total, stark contrast to any discussion of logical or 
mathematical problems, where you are always delighted to engage in detail, and 
very helpful and constructive - and do not make excuses to cover up your 
inexperience.
  ***

  Aspects of the mind that are closer to the deliberative, intensely conscious 
level are easier to discuss explicitly and in detail.

  Aspects of the mind that are mainly unconscious, and that have to do with 
the coordinated activity of a large number of different processes, are harder 
to describe in detail in specific instances.  One can describe the underlying 
processes, but that quickly becomes technical and lengthy!

  -- Ben


  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome "  - Dr Samuel Johnson







---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


[agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
A brief and non-technical description of the two types of semantics
mentioned in the previous discussions:

(1) Model-Theoretic Semantics (MTS)

(1.1) There is a world existing independently outside the intelligent
system (human or machine).

(1.2) In principle, there is an objective description of the world, in
terms of objects, their properties, and relations among them.

(1.3) Within the intelligent system, its knowledge is an approximation
of the objective description of the world.

(1.4) The meaning of a symbol within the system is the object it
refers to in the world.

(1.5) The truth-value of a statement within the system measures how
close it approximates the fact in the world.

(2) Experience-Grounded Semantics (EGS)

(2.1) There is a world existing independently outside the intelligent
system (human or machine). [same as (1.1), but the agreement stops
here]

(2.2) Even in principle, there is no objective description of the
world. What the system has is its experience, the history of its
interaction with the world.

(2.3) Within the intelligent system, its knowledge is a summary of its
experience.

(2.4) The meaning of a symbol within the system is determined by its
role in the experience.

(2.5) The truth-value of a statement within the system measures how
close it summarizes the relevant part of the experience.

To further simplify the description, in the context of learning and
reasoning: MTS takes the "objective truth" of statements and the "real
meaning" of terms as the aim of approximation, while EGS rejects them and
takes experience (input data) as the only thing to depend on.
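
To make the contrast concrete in code, here is a toy sketch (hypothetical,
not taken from NARS or any actual system): an EGS-style truth-value is
computed entirely from the system's recorded experience, while an MTS-style
truth-value presumes access to an objective description of the world.

```python
# Toy contrast between the two semantics (hypothetical illustration).

def egs_truth(experience, statement):
    """EGS: the truth-value summarizes experience -- here, simply the
    fraction of recorded observations that supported the statement."""
    relevant = [outcome for stmt, outcome in experience if stmt == statement]
    if not relevant:
        return None  # no evidence yet, hence no truth-value
    return sum(relevant) / len(relevant)

def mts_truth(world, statement):
    """MTS: the truth-value is correspondence with the world -- which
    presumes the system can consult an objective description of it."""
    return world[statement]

experience = [("ravens-are-black", True),
              ("ravens-are-black", True),
              ("ravens-are-black", False)]
print(egs_truth(experience, "ravens-are-black"))  # 2/3, from experience alone
```

The point of the sketch is only that `egs_truth` needs nothing beyond the
system's own history, whereas `mts_truth` needs the `world` dictionary,
i.e., the objective description that EGS denies is available.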

As usual, each theory has its strengths and limitations. The issue is
which one is more proper for AGI. MTS has been dominant in math,
logic, and computer science, and is therefore accepted by the majority
of people. Even so, it has been attacked by others (not only
EGS believers) for many reasons.

A while ago I made a figure to illustrate this difference, which is at
http://nars.wang.googlepages.com/wang.semantics-figure.pdf . A
manifesto of EGS is at
http://nars.wang.googlepages.com/wang.semantics.pdf

Since the debate on the nature of "truth" and "meaning" has existed
for thousands of years, I don't think we can settle it here by
some email exchanges. I just want to let interested people know
the theoretical background of the related discussions.

Pei


On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
>
> Hi,
>
>>
>> > What this highlights for me is the idea that NARS truth values attempt
>> > to reflect the evidence so far, while probabilities attempt to reflect
>> > the world
>
> I agree that probabilities attempt to reflect the world
>
>>
>> .
>>
>> Well said. This is exactly the difference between an
>> experience-grounded semantics and a model-theoretic semantics.
>
> I don't agree with this distinction ... unless you are construing "model
> theoretic semantics" in a very restrictive way, which then does not apply to
> PLN.
>
> If by model-theoretic semantics you mean something like what Wikipedia says
> at http://en.wikipedia.org/wiki/Formal_semantics,
>
> ***
> Model-theoretic semantics is the archetype of Alfred Tarski's semantic
> theory of truth, based on his T-schema, and is one of the founding concepts
> of model theory. This is the most widespread approach, and is based on the
> idea that the meaning of the various parts of the propositions are given by
> the possible ways we can give a recursively specified group of
> interpretation functions from them to some predefined mathematical domains:
> an interpretation of first-order predicate logic is given by a mapping from
> terms to a universe of individuals, and a mapping from propositions to the
> truth values "true" and "false".
> ***
>
> then yes, PLN's semantics is based on a mapping from terms to a universe of
> individuals, and a mapping from propositions to truth values.  On the other
> hand, these "individuals" may be for instance **elementary sensations or
> actions**, rather than higher-level individuals like, say, a specific cat,
> or the concept "cat".  So there is nothing non-experience-based about
> mapping terms into "individuals" that are the system's direct experience
> ... and then building up more abstract terms by grouping these
> directly-experience-based terms.
>
> IMO, the dichotomy between experience-based and model-based semantics is a
> misleading one.  Model-based semantics has often been used in a
> non-experience-based way, but that is not because it fundamentally **has**
> to be used in that way.
>
> To say that PLN tries to model the world, is then just to say that it tries
> to make probabilistic predictions about sensations and actions that have not
> yet been experienced ... which is certainly the case.
>
>>
>> Once
>> again, the difference in truth-value functions is reduced to the
>> difference in semantics, what is, what the "truth-value" attempts to
>> measure.
>
> Agreed...
>
> Be

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Ben Goertzel
Thanks Pei,

I would add (for others - obviously you know this stuff) that there are many
different theoretical justifications of probability theory, so the use of
probability theory does not imply model-theoretic semantics, nor any other
particular approach to semantics.

My own philosophy is even further from your summary of model-theoretic
semantics than it is from (my reading of) Tarski's original version of model
theoretic semantics.  I am not an objectivist whatsoever  (I read too
many
Oriental philosophy books in my early youth, when my mom was studying
for her PhD in Chinese history, and my brain was even more pliant  ;-).
I deal extensively with objectivity/subjectivity/intersubjectivity issues in
"The Hidden Pattern."

As an example, if one justifies probability theory according to Cox's axioms,
no model theory is necessary.  In this approach, probability theory is
justified as a set of a priori constraints that the system chooses to impose
on its own reasoning.

In a de Finetti approach, it is justified because the system wants to
be able to "win bets" with other agents.  The intersection between this
notion and the hypothesis of an "objective world" is unclear, but it's not
obvious why these hypothetical agents need to have objective existence.
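
A toy numeric sketch of the de Finetti point (the numbers and code are
illustrative assumptions of mine, not from any PLN or NARS text): an agent
whose credences violate the probability axioms can be booked into a
guaranteed loss, whatever the world turns out to be.

```python
# Toy Dutch-book illustration (hypothetical numbers).
# The agent prices bets by its credences: it pays $p for a ticket that
# pays $1 if the event occurs. Suppose its credences are incoherent:
p_rain, p_no_rain = 0.6, 0.6       # violates p(A) + p(not-A) = 1

# A bookie sells the agent both tickets; exactly one pays off.
cost = p_rain + p_no_rain          # agent pays $1.20 in total
for it_rains in (True, False):
    payoff = 1.0                   # one of the two tickets pays $1 either way
    print(f"rain={it_rains}: net = {payoff - cost:+.2f}")  # -0.20 both times
```

Notice that nothing here requires an objective world, only other agents
willing to take the bets, which is exactly the point at issue.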

As you say, this is a deep philosophical rat's nest... my point is just that
it's not correct to imply that probability theory = traditional
model-theoretic semantics.

-- Ben G

On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang <[EMAIL PROTECTED]> wrote:

> [...]

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
Ben,

Of course, "probability theory", in its mathematical form, is not
bound to any semantics at all, though it implicitly excludes some
possibilities. A semantic theory is associated with it when probability
theory is applied to a practical situation.

There are several major schools in the interpretation of probability
(see http://plato.stanford.edu/entries/probability-interpret/), and
their relations with NARS are explained in Section 8.5.1 of my book.

As for the interpretation of probability in PLN, I'd rather wait for
your book than comment based on your brief explanation.

Pei


On Sun, Oct 12, 2008 at 9:13 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> [...]

Re: [agi] open or closed source for AGI project?

2008-10-12 Thread Ben Goertzel
On Sun, Oct 12, 2008 at 1:32 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Sun, Oct 12, 2008 at 12:56 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > OpenCog has VariableNodes in the AtomTable, which are used to represent
> > variables in the sense of FOL ...
>
> I'm still unclear as to how OC performs inference with variables,
> unification, etc.  Maybe you can explain that during a tutorial
> session?


Are your questions about the math level, or the code level?

Only Ari Heljakka understands exactly how the current PLN code does that
stuff ... but Joel is figuring it out, and I'm sure he will be happy to explain
it once he does ;-)

OTOH, I can explain it on the mathematical and conceptual level...


>
> The sentential approach is more classical and helps me think more
> clearly about optimization issues (ie inference control), which is a
> big unsolved problem.



I think the biggest issues w/ inference control are more at the "cognitive
science" or algorithmics level than at the implementation level, and hence
independent of the choice of a sentential vs. hypergraph knowledge
representation...

The next couple Wed. night OpenCog tutorial/discussion sessions will focus
on inference, and one will focus on inference control specifically...

ben g





Re: [agi] creativity

2008-10-12 Thread Ben Goertzel
Mike,

A very messily formatted rough draft of From Complexity to Creativity is
here

http://www.goertzel.org/books/complex/contents.html

Alas, I long ago lost the WordPerfect 5.1 file that was used to generate the
final proofs way back when...

The chapter that gives an overall theory of the psychology of creativity is
here

http://www.goertzel.org/books/complex/ch14.html

However, that chapter is very high-level; to make it concrete you'd need
to trace the foundations of the ideas there back into the prior chapters...

Here is the intro text of that chapter ... some of it sounds like it could
have come out of your own mouth ;-)

*

Creativity is the great mystery at the center of Western culture. We
preach order, science, logic and reason. But none of the great
accomplishments of science, logic and reason was actually achieved in a
scientific, logical, reasonable manner. Every single one must, instead, be
attributed to the strange, obscure and definitively irrational process of
creative inspiration. Logic and reason are indispensible in the working out
ideas, once they have arisen -- but the actual * conception* of bold,
original ideas is something else entirely.

No creative person completely understands what they do when they create.
And no two individuals' incomplete accounts of the creative process would be
the same. But nevertheless, there are some common patterns spanning different
people's creativity; and there is thus some basis for theory.

In previous chapters, the phenomenon of creativity has lurked around the
edges of the discussion. Here I will confront it head-on. Drawing on the
ideas of most of the previous chapters, I will frame a comprehensive
complexity-theoretic answer to the question: How do those most exquisitely
complex systems, minds, go about creating forms?

I will begin on the whole-mind, personality level, with the idea that
certain individuals possess creatively-inspired, largely medium-dependent
"creative subselves." In conjunction with the Fundamental Principle of
Personality Dynamics, this idea in itself gives new insight into the
much-discussed relationship between inspired creativity and madness. A
healthy creative person, it is argued, maintains I-You relationships between
their creative subselves and their everyday subselves. In the mind of a
"mad" creative person, on the other hand, the relationship is strained and
competitive, in the I-It mold.

    The question of the *internal workings* of the creative subself is then
addressed. Different complex systems models are viewed as capturing
different *aspects* of the creative process.

First, the analogy between creative thought and the genetic algorithm is
pursued. It is argued that the creative process involves two main aspects:
combination and mutation of ideas, in the spirit of the genetic algorithm;
and analogical spreading of ideas, following the lines of the dynamically
self-organizing associative memory network. The dual network model explains
the interconnection of these two processes. While these processes are
present throughout the mind, creative subselves provide an environment in
which they are allowed to act with unusual liberty and flexibility.

This flexibility is related to the action of the perceptual-cognitive
loop, which, when "coherentizing" thought-systems within the creative
subself, seems to have a particularly gentle hand, creating systems that can
relatively easily be dissected and put back together in new ways. Other
subselves create their own realities having to do with physical
sense-perceptions and actions; creative subselves, on the other hand, create
their own realities having to do with abstract forms and structures. Because
the creative subself deals with a more flexible "environment," with a more
amenable fitness landscape, it can afford to be more flexible internally.

In dynamical systems terms, the process of creative thought may be
viewed as the simultaneous creation and exploration of autopoietic
attractors. Ideas are explored, and allowed to lead to other ideas, in
trajectories that evolve in parallel. Eventually this dynamic process leads
to a kind of rough "convergence" on a strange attractor -- a basic sense for
what kind of idea, what kind of product one is going to have. The various
parts of this attractor are then explored in a basically chaotic way, until
a particular *part* of the attractor is converged to. In formal language
terms, we may express this by saying that the act of creative
inspiration *creates its own languages*, which it then narrows down into
simpler and simpler languages, until it arrives at languages that the rest
of the mind can understand.

The hierarchical structure of the dual network plays a role here, in
that attractors formed on higher levels progressively give rise to
attractors dealing with lower levels. One thus has a kind of iterative
substitution, similar to the L-system model of sentence production. Instea

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-12 Thread Eric Burton
I think it's normal for tempers to flare during a depression. This
kind of technology really pays for itself. The only thing that matters
is the code

Eric B

On 10/12/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> No idea, Mentifex ... I haven't filtered out any of your messages (or
> anyone's) ... but sometimes messages get held up at listbox.com by their
> automated spam filters (or for other random reasons) and I take too long to
> log in there and approve them...
>
> ben
>
>
>> >
>>
>> Well, how come my posts aren't getting through? (Going out
>> to the list) What do you call that?
>>
>> ATM/Mentifex
>> --
>> http://code.google.com/p/mindforth/
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome "  - Dr Samuel Johnson
>
>
>




Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Abram Demski
Pei,

In this context, how do you justify the use of 'k'? It seems like, by
introducing 'k', you add a reliance on the truth of the future "after
k observations" into the semantics. Since the induction/abduction
formula is dependent on 'k', the truth values that result no longer
only summarize experience; they are calculated with prediction in
mind.

--Abram

On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
> [...]

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
Abram: The parameter 'k' does not really depend on the future, because
it makes no assumption about what will happen in that period of time.
It is just a "ruler" or "weight" (used with a scale) to measure the
amount of evidence, a reference amount.

For other people: the definition of confidence c = w/(w+k) states that
confidence is the proportion that the current evidence makes up of the
total evidence after the arrival of new evidence of amount k.
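
A small numeric sketch of that definition (the formula c = w/(w+k) is from
the message; the code and sample values are illustrative): with k fixed,
confidence depends only on the evidence already collected, rising toward 1
as evidence accumulates but never assuming what future evidence will say.

```python
# Sketch of the confidence measure (illustrative code, formula as above).
def confidence(w, k=1.0):
    """c = w / (w + k): the proportion that the current evidence (amount w)
    will still make up after k further units of evidence arrive."""
    return w / (w + k)

for w in (0, 1, 2, 10, 100):
    print(f"w={w:3d}  c={confidence(w):.3f}")
# Confidence grows toward (but never reaches) 1 as evidence accumulates,
# with no assumption about the *content* of the future evidence.
```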

Pei

On Sun, Oct 12, 2008 at 1:48 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Pei,
>
> In this context, how do you justify the use of 'k'? It seems like, by
> introducing 'k', you add a reliance on the truth of the future "after
> k observations" into the semantics. Since the induction/abduction
> formula is dependent on 'k', the truth values that result no longer
> only summarize experience; they are calculated with prediction in
> mind.
>
> --Abram
>
> On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
>> A brief and non-technical description of the two types of semantics
>> mentioned in the previous discussions:
>>
>> (1) Model-Theoretic Semantics (MTS)
>>
>> (1.1) There is a world existing independently outside the intelligent
>> system (human or machine).
>>
>> (1.2) In principle, there is an objective description of the world, in
>> terms of objects, their properties, and relations among them.
>>
>> (1.3) Within the intelligent system, its knowledge is an approximation
>> of the objective description of the world.
>>
>> (1.4) The meaning of a symbol within the system is the object it
>> refers to in the world.
>>
>> (1.5) The truth-value of a statement within the system measures how
>> close it approximates the fact in the world.
>>
>> (2) Experience-Grounded Semantics (EGS)
>>
>> (2.1) There is a world existing independently outside the intelligent
>> system (human or machine). [same as (1.1), but the agreement stops
>> here]
>>
>> (2.2) Even in principle, there is no objective description of the
>> world. What the system has is its experience, the history of its
>> interaction with the world.
>>
>> (2.3) Within the intelligent system, its knowledge is a summary of its
>> experience.
>>
>> (2.4) The meaning of a symbol within the system is determined by its
>> role in the experience.
>>
>> (2.5) The truth-value of a statement within the system measures how
>> close it summarizes the relevant part of the experience.
>>
>> To further simplify the description, in the context of learning and
>> reasoning: MTS takes the "objective truth" of statements and the "real
>> meaning" of terms as the aim of approximation, while EGS rejects them
>> and takes experience (input data) as the only thing to depend on.
>>
>> As usual, each theory has its strengths and limitations. The issue is
>> which one is more appropriate for AGI. MTS has been dominant in math,
>> logic, and computer science, and is therefore accepted by the majority
>> of people. Even so, it has been attacked by other people (not only
>> EGS believers) for many reasons.
>>
>> A while ago I made a figure to illustrate this difference, which is at
>> http://nars.wang.googlepages.com/wang.semantics-figure.pdf . A
>> manifesto of EGS is at
>> http://nars.wang.googlepages.com/wang.semantics.pdf
>>
>> Since the debate on the nature of "truth" and "meaning" has existed
>> for thousands of years, I don't think we can settle it here by
>> some email exchanges. I just want to let interested people know
>> the theoretical background of the related discussions.
>>
>> Pei
>>
>>
>> On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>>
>>> Hi,
>>>

 > What this highlights for me is the idea that NARS truth values attempt
 > to reflect the evidence so far, while probabilities attempt to reflect
 > the world
>>>
>>> I agree that probabilities attempt to reflect the world
>>>

 .

 Well said. This is exactly the difference between an
 experience-grounded semantics and a model-theoretic semantics.
>>>
>>> I don't agree with this distinction ... unless you are construing "model
>>> theoretic semantics" in a very restrictive way, which then does not apply to
>>> PLN.
>>>
>>> If by model-theoretic semantics you mean something like what Wikipedia says
>>> at http://en.wikipedia.org/wiki/Formal_semantics,
>>>
>>> ***
>>> Model-theoretic semantics is the archetype of Alfred Tarski's semantic
>>> theory of truth, based on his T-schema, and is one of the founding concepts
>>> of model theory. This is the most widespread approach, and is based on the
>>> idea that the meaning of the various parts of the propositions are given by
>>> the possible ways we can give a recursively specified group of
>>> interpretation functions from them to some predefined mathematical domains:
>>> an interpretation of first-order predicate logic is given by a mapping from
>>> terms to a universe of individuals, and a mapping from propositions to the
>>> truth values "true" and "false".
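The model-theoretic picture in Pei's points (1.1)-(1.5), and in the Wikipedia passage Ben quotes, can be illustrated with a toy interpretation; everything here (the universe, the predicates, the function name) is invented for the illustration:

```python
# Model-theoretic semantics in miniature: meaning is a mapping from
# terms to objects in a presumed world, and truth is correspondence
# with that world -- independent of anyone's experience of it.

universe = {"tom", "rex"}
extension = {                    # interpretation of predicate symbols
    "Cat": {"tom"},
    "Animal": {"tom", "rex"},
}
denotation = {"tom": "tom", "rex": "rex"}   # terms -> individuals

def true_in_model(pred: str, term: str) -> bool:
    """An atomic statement Pred(term) is true iff the denoted
    individual lies in the predicate's extension."""
    return denotation[term] in extension[pred]

true_in_model("Cat", "tom")      # True
true_in_model("Cat", "rex")      # False
```

An EGS system, by contrast, would have no `extension` table to consult: the value it assigns to "Cat(rex)" could only summarize the evidence it has so far encountered.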

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Ben Goertzel
On the other hand, in PLN's indefinite probabilities there is a parameter k
which plays a similar mathematical role, yet **is** explicitly interpreted
as being about a "number of hypothetical future observations" ...

Clearly the interplay between algebra and interpretation is one of the
things that makes this area of research (uncertain logic) "interesting" ...

ben g

On Sun, Oct 12, 2008 at 2:07 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> Abram: The parameter 'k' does not really depend on the future, because
> it makes no assumption about what will happen in that period of time.
> It is just a "ruler" or "weight" (used with a scale) to measure the
> amount of evidence, as a "reference amount".
>
> For other people: The definition of confidence c = w/(w+k) states that
> confidence is the proportion of current evidence within the total
> evidence after new evidence of amount k arrives.
>
> Pei

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
True. Similar parameters can be found in the work of Carnap and
Walley, with different interpretations.

Pei
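For readers unfamiliar with the parallel: assuming the standard statement of Carnap's λ-continuum of inductive methods, where the estimate for outcome i after n observations (n_i of them being outcome i, out of m possible outcomes) is (n_i + λ/m)/(n + λ), the structural kinship with c = w/(w+k) is a few lines of code (this sketch is mine, not from either system):

```python
def carnap_estimate(n_i: int, n: int, m: int, lam: float) -> float:
    """Carnap's lambda-continuum: a free parameter lam plays the same
    algebraic role as NARS's k -- a constant added to the evidence count."""
    return (n_i + lam / m) / (n + lam)

def nars_confidence(w: float, k: float) -> float:
    return w / (w + k)

# lam = m = 2 reduces Carnap's rule to Laplace's rule of succession:
carnap_estimate(3, 4, 2, 2.0)    # (3 + 1) / (4 + 2) = 2/3
```

In both formulas the added constant tempers the influence of limited evidence; the interpretive disagreement in this thread is about what that constant *means*, not about the algebra.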

On Sun, Oct 12, 2008 at 2:11 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> On the other hand, in PLN's indefinite probabilities there is a parameter k
> which
> plays a similar mathematical role,  yet **is** explicitly interpreted as
> being about
> a "number of hypothetical future observations" ...
>
> Clearly the interplay btw algebra and interpretation is one of the things
> that makes
> this area of research (uncertain logic) "interesting" ...
>
> ben g

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Abram Demski
Pei,

You are right, it doesn't make any such assumptions while Bayesian
practice does. But, the parameter 'k' still fixes the length of time
into the future that we are interested in predicting, right? So it
seems to me that the truth value must be predictive, if its
calculation depends on what we want to predict.

That is why 'k' is hard to incorporate into the probabilistic NARSian
scheme I want to formulate...

--Abram

On Sun, Oct 12, 2008 at 2:07 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> Abram: The parameter 'k' does not really depend on the future, because
> it makes no assumption about what will happen in that period of time.
> It is just a "ruler" or "weight" (used with a scale) to measure the
> amount of evidence, as a "reference amount".
>
> For other people: The definition of confidence c = w/(w+k) states that
> confidence is the proportion of current evidence within the total
> evidence after new evidence of amount k arrives.
>
> Pei

Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
On Sun, Oct 12, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Pei,
>
> You are right, it doesn't make any such assumptions while Bayesian
> practice does. But, the parameter 'k' still fixes the length of time
> into the future that we are interested in predicting, right? So it
> seems to me that the truth value must be predictive, if its
> calculation depends on what we want to predict.

The truth-value is defined/measured according to past experience, but
is used to predict future experience. In particular, this is what the
"expectation" function is about. But still, a high expectation only
means that the system will behave under the assumption that the
statement may be confirmed again, which by no means guarantees the
actual confirmation of the statement in the future.

> That is why 'k' is hard to incorporate into the probabilistic NARSian
> scheme I want to formulate...

For this purpose, the interval version of the truth value may be easier.

Pei
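A sketch of the "expectation" function Pei refers to, as I understand the published NARS definition (the formula is my reading of it, so treat it as an assumption): past-derived frequency and confidence are combined into a single value used for prediction, regressing toward 1/2 as confidence drops.

```python
def expectation(frequency: float, confidence: float) -> float:
    """Combine (f, c) into a single predictive value: with no
    confidence the result is the maximally noncommittal 0.5; with
    full confidence it approaches the observed frequency."""
    return confidence * (frequency - 0.5) + 0.5

expectation(0.9, 0.0)   # 0.5  -- no evidence: maximal caution
expectation(0.9, 0.9)   # 0.86 -- strong evidence: close to the frequency
```

This makes Pei's point concrete: the value is computed entirely from past evidence, yet it is the quantity the system acts on when anticipating the future.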


[agi] Hugh Loebner talks AI

2008-10-12 Thread Eric Burton
Hugh Loebner talks AI

http://developers.slashdot.org/article.pl?sid=08/10/11/2137200

I may have written my signature twice on the OpenCog list earlier
today. I'm going to try not to do that. Otherwise I have nothing to
report.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com