Re: Richard Loosemore's post, copied below

LOOSEMORE>> Overall, I believe that possible-worlds semantics serves no
purpose in
AI except to justify the idea that statements like "It is the case that
all cups are drinking vessels that possess a handle" can have something
like a "truth value" that is usable in a logical calculus.

EWP>> I am perhaps uneducated in such things, but I thought one of the
purposes of possible-worlds semantics was to get an idea of the size of the
space of possible outcomes relative to a particular class of outcomes whose
probability you want to calculate.  In many small cases, where you can make
assumptions about the probability distribution over each variable, and
independence assumptions about the interactions between such variables, it
can be quite useful.
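
A minimal sketch, in Python, of the kind of calculation described above;
the feature names, probabilities, and independence assumptions are all
made up purely for illustration:

    from itertools import product

    # Three illustrative binary features of a hypothetical "cup-world"; the
    # names and probabilities are made-up assumptions, not real data.
    features = {
        "has_handle": 0.8,      # P(feature is true)
        "holds_liquid": 0.9,
        "is_ceramic": 0.6,
    }

    def world_probability(world):
        """Probability of one possible world, assuming independent features."""
        p = 1.0
        for name, prob_true in features.items():
            p *= prob_true if world[name] else (1.0 - prob_true)
        return p

    # Enumerate the small, finite space of possible worlds: every
    # combination of true/false values for the features.
    names = list(features)
    worlds = [dict(zip(names, values))
              for values in product([True, False], repeat=len(names))]

    # Probability of a particular class of outcomes, e.g. "worlds in which
    # the thing has a handle and holds liquid".
    target = [w for w in worlds if w["has_handle"] and w["holds_liquid"]]
    p_class = sum(world_probability(w) for w in target)

    print(len(worlds), "possible worlds, P(class) =", round(p_class, 3))
    # -> 8 possible worlds, P(class) = 0.72

With the independence assumption the class probability factors neatly;
without it the same enumeration still works, but each world's probability
has to come from a joint distribution, which is where the space blows up.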

EWP>> Please correct me if I am wrong, which is quite possible, because I
am almost entirely self-taught.

Ed Porter (aka "Ewp")

P.S., DUE TO MULTIPLE COMPLAINTS ABOUT MY USE OF ALL CAPS, I THINK THE
EASIEST THING, ALTHOUGH NOT THE PRETTIEST, WILL BE TO LABEL PARAGRAPHS
BY THE NAME OF THEIR ORIGINATOR.  AT LEAST I WILL TRY IT FOR A WHILE AND
SEE HOW IT SEEMS TO WORK OUT.



-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 12:53 PM
To: agi@v2.listbox.com
Subject: Re: Semantics [WAS Re: [agi] "symbol grounding" Q&A]


Linas Vepstas wrote:
> On Wed, Oct 17, 2007 at 10:25:18AM -0400, Richard Loosemore wrote:
>> One way this group have tried to pursue their agenda is through an
>> idea
>> due to Montague and others, in which meanings of terms are related to
>> something called "possible worlds".  They imagine infinite numbers of
>> possible worlds, in which all the possible variations of every
>> conceivable parameter are allowed to vary, and then they define the
>> meanings of actual things in our world in terms of functions across
>> those possible worlds.  Such an idea is, of course, not usable in any
>> computer program, since it requires unthinkably large infinities [sic!],
>
> I don't believe the last sentence follows from the previous.  There
> are plenty of ways of integrating over infinities to get finite
> answers. There are an infinite number of points inside a circle, yet
> we can still define the area of a circle. There are an extremely large
> number of ways of drawing black and white balls out of urns, and yet
> we can still define averages and expectations (and these are even
> analytic, differentiable, smooth functions!)
>
> If instead of talking about black and white balls, we talked about "is
> and is not a chair", and then considered some infinite number of
> universes where some things were chairs and others were not, and we
> drew items out of each universe, each labeled as "chair" or "not a
> chair", one can still, in principle, obtain some usable average idea
> of chair-ness that a real-world computer program could approximate to
> some degree, just as real-world programs approximate pi=3.14159...
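
(A minimal sketch, in Python, of the finite-approximation idea in the
quoted paragraph above: sampling approximates both pi and an average of a
made-up "chair-ness" score over a space far too large to enumerate.  The
scoring rule is purely an illustrative stand-in.)

    import random

    random.seed(0)
    N = 100_000

    # Approximate pi by sampling points in the unit square: the square holds
    # infinitely many points, but a finite sample gives a usable estimate.
    inside = sum(1 for _ in range(N)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    print("pi ~", round(4.0 * inside / N, 4))

    # Same idea for an average "chair-ness": sample hypothetical items,
    # score each with a made-up scoring rule, and average the scores.
    def chairness(has_legs, has_seat, has_back):
        return 0.4 * has_legs + 0.4 * has_seat + 0.2 * has_back

    scores = [chairness(random.random() < 0.5,
                        random.random() < 0.5,
                        random.random() < 0.5)
              for _ in range(N)]
    print("average chair-ness ~", round(sum(scores) / N, 3))
    # expectation is 0.5 under these made-up assumptions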

You're right:  I should not have slipped so easily into an "of course it
is not usable" statement without saying more.

As you point out, infinities don't stop other ideas from being useful --
in general.

The problem here is that the formalism does not come with any way to
apply it to real implementations, in such a way that is verifiably
correct rather than just a guess.  Possible worlds semantics does not,
to the best of my knowledge, make any objective, counterintuitive,
falsifiable predictions about anything in the universe of things.  Nor
does it supply even a hint of how it could be applied to produce a real
computation that would yield such predictions.  It is post-hoc and
descriptive:  it comes along after the fact and tries to impose order on
some observable characteristics of human minds.

That statement I just made is a minefield of controversy, of course.  I
am sure someone will immediately start thinking of ways in which
predictions could perhaps be extracted from it, but then we would get
dragged into a minute examination of exactly where those predictions
come from, and whether they are really due to the formalism itself or
are just intuitively obvious, or built into it from the beginning.

Overall, I believe that possible-worlds semantics serves no purpose in
AI except to justify the idea that statements like "It is the case that
all cups are drinking vessels that possess a handle" can have something
like a "truth value" that is usable in a logical calculus.

In other words, it is a crutch.



Richard Loosemore.



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55180428-34ed6a
