Mike,
>The plane flew over the hill
>The play is over
Using a formal language can help to avoid many of these issues.
>But then the program must be able to tell what is "in" what or outside, what
>is behind/over etc.
The communication module in my experimental AGI design includes
several speci
----- Original Message -----
From: "John G. Rose" <[EMAIL PROTECTED]>
To:
Sent: Thursday, September 11, 2008 8:28 AM
Subject: RE: [agi] Artificial humor
>> From: John LaMuth [mailto:[EMAIL PROTECTED]
>>
>> As I have previously written, this issue boils down as "one is serious
>> or
>> one is n
Matt,
"To understand/realise" is, I would argue, to be distinguished from "to
comprehend" statements. The one is to be able to point to the real objects
referred to. The other is merely to be able to offer or find an alternative
or dictionary definition of the statements. A translation. Like the
Mike Tintner <[EMAIL PROTECTED]> wrote:
To "understand" is to "REALISE" what [on earth, or
in the [real] world] is being talked about.
Matt: Nice dodge. How do you distinguish between when a computer realizes
something and when it just reacts as if it realizes it?
Yeah, I know. Turing do
Mike Tintner <[EMAIL PROTECTED]> wrote:
>To "understand" is to "REALISE" what [on earth, or
>in the [real] world] is being talked about.
Nice dodge. How do you distinguish between when a computer realizes something
and when it just reacts as if it realizes it?
Yeah, I know. Turing dodged the qu
Matt,
Jeez, massive question :).
Let me 1st partly dodge it, by giving you an example of the difficulty of
understanding, say, "over", both in NLP terms and ultimately (because it
will be the same more or less) in practical object recognition/movement
terms - because I suspect none of you h
Mike, your argument would be on firmer ground if you could distinguish between
when a computer "understands" something and when it just reacts as if it
understands. What is the test? Otherwise, you could always claim that a machine
doesn't understand anything because only humans can do that.
Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?
They are allowed to work by GPS but there are parts of the course where they
are required to work without it.
Shouldn't you already have basic knowledge like this before proclaiming
things like "neither do any others"?
Jiri,
Clearly a limited 3d functionality is possible for a program such as you
describe - as for SHRDLU. But what we're surely concerned with here is
generality. So fine, start with a restricted world of, say, different kinds
of kids' blocks and similar. But then the program must be able to tell
I suppose in order to justify my cost estimate I need to define more precisely
what I mean by AGI. I mean the cost of building an automated economy in which
people don't have to work. This is not the same as automating what people
currently do. Fifty years ago we might have imagined a future wit
Mike,
Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change positions of the spheres and it can
answer question "Is the smaller sphere inside the bigger sphere?"
[Yes|Partly|No]. I can write such program in no time. Sure, it's
extremely simple, but it deals
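Jiri's sphere question reduces to comparing the distance between the two centers against the radii. A minimal sketch of such a program (the function name and interface are my own illustration, not Jiri's actual code):

```python
import math

def smaller_inside_bigger(c_small, r_small, c_big, r_big):
    """Answer "Is the smaller sphere inside the bigger sphere?"
    with Yes / Partly / No, treating each sphere as a solid ball."""
    d = math.dist(c_small, c_big)   # distance between the two centers
    if d + r_small <= r_big:
        return "Yes"                # small ball lies entirely within the big one
    if d >= r_big + r_small:
        return "No"                 # the two balls do not touch at all
    return "Partly"                 # boundaries overlap: partially inside

# Sliding the small sphere out along the x-axis walks the answer
# from "Yes" through "Partly" to "No".
print(smaller_inside_bigger((0, 0, 0), 1.0, (0, 0, 0), 2.0))   # Yes
print(smaller_inside_bigger((2, 0, 0), 1.0, (0, 0, 0), 2.0))   # Partly
print(smaller_inside_bigger((10, 0, 0), 1.0, (0, 0, 0), 2.0))  # No
```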
Quick answer because in rush. Notice your "if" ... Which programs actually
do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.
What about the programs that control Stanley and the other DARPA Grand
Challenge vehicles?
> From: John LaMuth [mailto:[EMAIL PROTECTED]
>
> As I have previously written, this issue boils down as "one is serious
> or
> one is not to be taken this way" a meta-order perspective)... the key
> feature in humor and comedy -- the meta-message being "don't take me
> seriously"
>
> That is why
Jiri,
Quick answer because in rush. Notice your "if" ... Which programs actually
do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.
The very word "orientation" indicates the reality that every picture has a
point of view, and refe
On Thu, Sep 11, 2008 at 2:28 PM, Jiri Jelinek wrote:
> If you talk to a program about changing 3D scene and the program then
> correctly answers questions about [basic] spatial relationships
> between the objects then I would say it understands 3D. Of course the
> program needs to work with a queri
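Jiri's criterion (answering queries about spatial relationships such as "in", "outside", or "over" from L5-6's list) can be sketched as a toy queryable scene. The scene contents, object names, and the two relations below are illustrative assumptions of mine, not anything from the thread:

```python
import math

# Toy 3D scene: name -> (center (x, y, z), radius), spheres only for simplicity.
scene = {
    "ball": ((0.0, 0.0, 3.0), 1.0),
    "box":  ((0.0, 0.0, 0.0), 1.0),
}

def is_over(a, b):
    """Is object a over object b? Here: higher on the z-axis and
    roughly aligned in the x/y plane."""
    (ax, ay, az), _ = scene[a]
    (bx, by, bz), _ = scene[b]
    return az > bz and math.hypot(ax - bx, ay - by) < 1.0

def is_inside(a, b):
    """Is sphere a entirely inside sphere b?"""
    ca, ra = scene[a]
    cb, rb = scene[b]
    return math.dist(ca, cb) + ra <= rb

print(is_over("ball", "box"))    # True
print(is_inside("ball", "box"))  # False
```

Of course this hard-codes one narrow sense of "over", which is exactly Mike's objection: a general program would need to handle the many other senses the thread opened with.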
Samantha, & Mike,
>> Would you also say that without a body, you couldn't understand
>> 3D space ?
>
> It depends on what is meant by, and the value of, "understand 3D space".
> If the intelligence needs to navigate or work with 3D space or even
> understand intelligence whose very concepts are fi
Samantha,
This is a really great posting. Just one comment:
On 9/11/08, Samantha Atkins <[EMAIL PROTECTED]> wrote:
>
>
> On Sep 9, 2008, at 7:54 AM, Matt Mahoney wrote:
>
>> A human brain has about 10^9 bits of knowledge, of which probably 10^7 to
>> 10^8 bits are unique to each individual.
>>
>
I think it's actually the surprise that makes you laugh, not physical
pain in other people. I find myself laughing at my own mistakes often
- not because they hurt (in fact, if they did hurt, they wouldn't be
funny) but because I get surprised by them.
Valentina
On 9/10/08, Jiri Jelinek <[EMAIL PRO
On Sep 10, 2008, at 12:29 PM, Jiri Jelinek wrote:
On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
Without a body, you couldn't understand the joke.
False. Would you also say that without a body, you couldn't understand
3D space ?
It depends on what is meant by, an
On Sep 9, 2008, at 7:54 AM, Matt Mahoney wrote:
--- On Mon, 9/8/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
On 9/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
The fact is that thousands of very intelligent people have been trying
to solve AI for the last 50 years, and most of them shared