Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Vladimir,

On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> See http://www.overcomingbias.com/2008/01/newcombs-proble.html


This is a PERFECT illustration of the central point that I have been
trying to make. Belief in the Omega discussed early in that article is
essentially a religious belief in a greater power. Most Christians see
examples of the power of God at around a monthly rate. Whenever chance works
for apparent good or against perceived evil, there is clear evidence of God
doing his job.

Story: A Baptist minister neighbor had his alternator come loose just as he
was leaving for an important meeting, so I temporarily secured it with an
industrial zip tie, and told him to remove the zip tie and properly bolt the
alternator back into place when he got back home. Three weeks later, his
alternator came loose again. He explained that he had done NOTHING wrong
this week, and so he just couldn't see why God took this occasion to smite
his alternator. I suggested that we examine it for clues. Sure enough, there
were the remnants of my zip tie which he had never replaced. He explained
that God seemed to be holding things together OK, so why bother fixing it.
Explaining the limitations of industrial zip ties seemed to be hopeless, so
I translated my engineering paradigm to his religious paradigm:

I explained that he had been testing God by seeing how long God would
continue to hold his alternator in place, and apparently God had grown tired
of playing this game. "Oh, I see what you mean," he said quite contritely,
and he immediately proceeded to properly bolt his alternator back down.
Clearly, God had yet again shown his presence to him.

Christianity (and other theologies) is no less logical than the one-boxer
in the page you cited. Indeed, the underlying thought process is essentially
identical.


> "It is precisely the notion that Nature does not care about our
> algorithm, which frees us up to pursue the winning Way - without
> attachment to any particular ritual of cognition, apart from our
> belief that it wins.  Every rule is up for grabs, except the rule of
> winning."


Now, consider that ~50% of our population believes that people who do not
believe in God are fundamentally untrustworthy. This tends to work greatly
to the disadvantage of atheists, thereby showing that God does indeed favor
his believers. After many postings on this subject, I still assert that ANY
rational AGI would be religious. Atheism is a radical concept and atheists
generally do not do well in our society. What sort of "rational" belief
(like atheism) would work AGAINST winning? In short, your Omega example has
apparently made my point - that religious belief IS arguably just as logical
as atheism (if not more so). Do you agree?

Thank you.

Steve Richfield

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-07 Thread Stefan Pernar
On Thu, May 8, 2008 at 3:54 AM, Richard Loosemore <[EMAIL PROTECTED]>
wrote:

> On Wed, May 7, 2008 at 12:27 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> >Stefan Pernar wrote:
> >
> >On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> >
> >Ben: I admire your patience.
> >Richard: congrats - you just made my ignore list - and that's a
> >first
> >
> >
> >Another person who cannot discuss the issues.
> >
> >
> > Richard - after having spent time looking through your stuff here is my
> > conclusion:
> >
> > You postulate that "Achieving AGI requires solving a complex problem"
> > and that you do not see this being properly incorporated in current AGI
> > research.
> >
> > As pointed out by others this position puts you in the "scruffies" camp
> > of AI research (http://en.wikipedia.org/wiki/Neats_vs._scruffies)
> >
> > What follows are wild speculations and grand pie-in-the-sky plans
> > without substance with a letter to investors attached. Oh, come on!
> >
> > PS: obviously my ignore list sucks ;-)
> >
>
> Now, if I understand correctly, you got mad at me the other day for being
> hypercritical of the AGI-06 conference (and frankly, I would agree with
> anyone who said that I should have been less negative)  but can you not
> see that when you make vague, sweeping allegations of the above sort, you
> are hardly rising above the kind of behavior that you just criticised?
>

Richard, there is no substance behind your speculations - zero. Zip. And all
the fantasy and imagination you so clearly demonstrated here on the board
won't make up for that. You make stuff up as you go along and as you need it,
and you clearly have enough time on your hands to do so.


> All of the points you just made could be met, if you articulated them.
> Scruffies?  Some people only use that as a derogatory term:  what did you
> mean by it?  I am not necessarily even a 'scruffy' by any accepted
> definition of that term, and certainly not by the definition from Russell
> and Norvig that I quoted in my paper.  As far as I am aware, *nobody* has
> accused me of being a scruffy ... it was actually me who first mentioned the
> scruffy-neat divide!
>

Let's not use shady rhetoric here - shall we? You know perfectly well that scruffy
refers to a technical distinction. How do you expect to be taken seriously
if you try to manipulate like this? Not going to happen with me.

"Wild speculations"?  Which, exactly?  "Grand pie-in-the-sky plans without
> substance"?  Again, what are you referring to?  Don't these all sound like
> Stefan's personal opinion?
>

Besides Kaj - can we see a show of hands of who disagrees with me? Happy to step
back and be quiet then. It is too often that people stay quiet and let stuff
like this slide.

> On all of these points, we could have had meaningful discussion (if you
> chose), but if you keep them to yourself and simply decide that I am an
> idiot, what chance do I have to meet your objections?  I am always open to
> criticism, but to be fair it has to be detailed, specific and not personal.
>

The lack of consistency and quality in your writings makes it not worthwhile
for me to point out particular points of criticism that would even be worth
debating with you. It is not that there are two or three points that I do not
understand. No - your whole concept is an uninteresting house of cards to
me. Your rhetoric is shady and dogmatic - you are unresponsive to
substantial criticisms. No matter what people say, you will continue to make
up stuff and throw it right back at them - spiked with subtle personal
attacks.

In short, you are not worth my time, and the only reason I am spending
time on this is that I hope the list will wake up to it.

> Also, I am a little confused by the first sentence of the above.  It implies
> that you only just started looking through my 'stuff' ... have you read the
> published papers?  The blog posts?  The technical discussions on this list
> with Mark Waser, Kaj Sotala, Derek Zahn and others?
>

It did not take more than about an hour to look through all your stuff on
your website, so yeah - if there is anything else I missed, please send me a
link. And although I think it is too much to ask to go through the many emails
you wrote, I actually did that too, and what I found only confirmed my
opinion. For example:

Kaj:
I'd be curious to hear your opinion of Omohundro's "The Basic AI Drives"
paper

Richard:
Omohundros's analysis is all predicated on the Goal Stack approach, so my
response is that nothing he says has any relevance to the type of AGI that I
talk about (which, as I say, is probably going to be the only type ever
created).

Stefan:
Utter nonsense and not worthy of learned debate.

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
1000

[agi] Accidental Genius

2008-05-07 Thread Brad Paulsen
I happened to catch a program on National Geographic Channel today entitled
"Accidental Genius."  It was quite interesting from an AGI standpoint.

One of the researchers profiled has invented a device that, by sending
electromagnetic pulses through a person's skull to the appropriate spot in
the left hemisphere of that person's brain, can induce behavior similar to
that of an idiot savant in a non-brain-damaged person (in the session shown,
this was a volunteer college student).

Before being "zapped" by the device, the student is taken through a series
of exercises.  One is to draw a horse from memory.  Another is to read
aloud a very familiar "saying" with a slight grammatical mistake in it (the
word "the" is duplicated, i.e., "the the," in the saying -- sorry I can't
recall the saying used). Then the student is shown a computer screen full of
"dots" for about 1 second and asked to record his best guess at how many
dots there were.  This exercise is repeated several times (with different
numbers of dots each time).

The student is then zapped by the electromagnetic pulse device for 15
minutes.  It's kind of scary to watch the guy's face flinch uncontrollably
as each pulse is delivered. But, while he reported feeling something, he
claimed there was no pain or disorientation. His language facilities were
unimpaired (they zap a very particular spot in the left hemisphere based on
brain scans taken of idiot savants).

After being zapped, the exercises are repeated.  The results were
impressive.  The horse drawn after the zapping contained much more detail
and was much better rendered than the horse drawn before the zapping.
Before the zapping, the subject read the familiar saying correctly (despite
the duplicate "the").  After zapping, the duplicate "the" stopped him dead
in his tracks.  He definitely noticed it.  The dots were really impressive
though.  Before being zapped, he got the count right in only two cases.
After being zapped, he got it right in four cases.

The effects of the electromagnetic zapping on the left hemisphere fade
within a few hours.  Don't know about you, but I'd want that in writing.

You can watch the episode on-line here:
http://channel.nationalgeographic.com/tv-schedule.  It's not scheduled for
repeat showing anytime soon.

That's not a direct link (I couldn't find one).  When you get to that Web
page, navigate to Wed, May 7 at 3PM and click the "More" button under the
picture.  Unfortunately, the "full-motion" video is the size of a large
postage stamp.  The "full screen" view uses "stop motion" (at least it did on
my laptop using a DSL-based WiFi hotspot). The audio is the same in both
versions.

Cheers,

Brad

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>   What follows are wild speculations and grand pie-in-the-sky plans without 
> substance with a letter to investors attached. Oh, come on!

Um, people, is this list really the place for fielding personal insults?

For what it's worth, my two cents: I don't always see, off the bat,
why Richard says something or holds a particular opinion, and as I
don't see the inferential steps that he's taken to reach his
conclusion, his sayings might occasionally seem like "wild
speculation". However, each time that I've asked him for extra
details, he has without exception delivered a prompt and often rather
long explanation of what his premises are and how he's arrived at a
particular conclusion. If that hasn't been enough to clarify things,
I've pressed for more details, and I've always received a clear and
logical response until I've finally figured out where he's coming
from.

I do admit that my qualifications to discuss any AGI-related subject
are insignificant compared to most of this list's active posters (heck,
I don't even have my undergraduate degree completed yet), and as such
I might have unwittingly ignored some crucial details of what's been
going on. From what I've been able to judge, though, I've seen
absolutely no reasons to dismiss Richard as "dogmatic", "irrational" or a
"wild speculator". (At least not any more than anyone else on this
list...)


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to ..P.S.

2008-05-07 Thread Mike Tintner

Ah mon dieu - c'est "Blessent mon COEUR.."

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread Mike Tintner

YKY : Logic can deal with almost everything, depending on how much effort
you put in it =)

"LES sanglots longs. des violons. de l'automne.
Blessent mon cour d'une langueur monotone."

You don't just read those words, (and most words), you hear them. How's 
logic going to hear them?


"YOY YKY?"

You understood that. How's logic going to?




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread Stephen Reed
YKY,

Ahh yes, I recall having studied this work a few years ago.  That project
parsed the WordNet glosses and applied its disambiguation methods to them, and
I have a copy of that output somewhere.  I thought to use their work, especially
the higher-quality entries, to prime my own project's understanding of the
WordNet glosses.

There is a straightforward conversion from the formulas you gave to the RDF 
semantic representation that I am using for Texai:

john(e1)
mary(e3)
love(e2, e1, e3)

is equivalent to the RDF:

?e1 is the same identity as John
?e3 is the same identity as Mary
?e2 is a love situation
in ?e2 (the love situation), ?e1 (John) is the lover
in ?e2, ?e3 (Mary) is the thing loved

I invented terms for loving, lover and thing loved, which Cyc lacks.  Cyc does 
have a relationship loves, but that only directly relates the agent with the 
thing loved.  After thinking about the needs of natural language, I have come 
to believe that relationships should always be represented with respect to a 
containing situation, event, or action.  Natural language verbs map nicely to 
situations, events and actions.  OpenCyc has a lot of vocabulary for these but 
they are not uniformly applied throughout its knowledge base.  I hope to
remedy that with my Texai approach.
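
To make that concrete, here is a rough sketch - not actual Texai code, and every
URI and property name in it is invented for the example - of how the five
statements above might be written with the rdflib library in Python:

  # Sketch only: the namespace and property names below are made up for illustration.
  from rdflib import Graph, Namespace, RDF

  EX = Namespace("http://example.org/texai-sketch#")
  g = Graph()

  g.add((EX.e1, EX.sameIdentityAs, EX.John))    # ?e1 is the same identity as John
  g.add((EX.e3, EX.sameIdentityAs, EX.Mary))    # ?e3 is the same identity as Mary
  g.add((EX.e2, RDF.type, EX.LoveSituation))    # ?e2 is a love situation
  g.add((EX.e2, EX.lover, EX.e1))               # in ?e2, ?e1 (John) is the lover
  g.add((EX.e2, EX.thingLoved, EX.e3))          # in ?e2, ?e3 (Mary) is the thing loved

  print(g.serialize(format="turtle"))

Even in this toy you can see the point of the situation-centred style: the lover
and thing-loved roles hang off the love situation ?e2 rather than off a single
binary loves relation.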

Cheers.
-Steve


 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, May 7, 2008 2:30:11 PM
Subject: Re: [agi] standard way to represent NL in logic?

On 5/7/08, Stephen Reed <[EMAIL PROTECTED]> wrote:

> I have not heard about Rus form.  Could you provide a link or reference?


This is one of the papers:
http://citeseer.ist.psu.edu/cache/papers/cs/22812/http:zSzzSzwww.seas.smu.eduzSz~vasilezSzictai2001.pdf/rus01high.pdf
you can find some examples in the figures.

The main thing is that (nearly) every word is "reified".

For example, for "John loves Mary", we say that there is an entity e1
which is a John, and entity e2 which is an act of loving, and an
entity e3 which is a Mary.

So we have these formulae:
john(e1)
mary(e3)
love(e2, e1, e3)

Anyway, something like that

Rus form is popularly used in text entailment programs.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-07 Thread Richard Loosemore


On Wed, May 7, 2008 at 12:27 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Stefan Pernar wrote:

On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Ben: I admire your patience.
Richard: congrats - you just made my ignore list - and that's a
first


Another person who cannot discuss the issues.


Richard - after having spent time looking through your stuff here is my 
conclusion:


You postulate that "Achieving AGI requires solving a complex problem" 
and that you do not see this being properly incorporated in current AGI 
research.


As pointed out by others this position puts you in the "scruffies" camp 
of AI research (http://en.wikipedia.org/wiki/Neats_vs._scruffies)


What follows are wild speculations and grand pie-in-the-sky plans 
without substance with a letter to investors attached. Oh, come on!


PS: obviously my ignore list sucks ;-)



Now, if I understand correctly, you got mad at me the other day for 
being hypercritical of the AGI-06 conference (and frankly, I would agree 
with anyone who said that I should have been less negative)  but can 
you not see that when you make vague, sweeping allegations of the above 
sort, you are hardly rising above the kind of behavior that you just 
criticised?


All of the points you just made could be met, if you articulated them. 
Scruffies?  Some people only use that as a derogatory term:  what did 
you mean by it?  I am not necessarily even a 'scruffy' by any accepted 
definition of that term, and certainly not by the definition from 
Russell and Norvig that I quoted in my paper.  As far as I am aware, 
*nobody* has accused me of being a scruffy ... it was actually me who 
first mentioned the scruffy-neat divide!


"Wild speculations"?  Which, exactly?  "Grand pie-in-the-sky plans 
without substance"?  Again, what are you referring to?  Don't these all 
sound like Stefan's personal opinion?


On all of these points, we could have had meaningful discussion (if you 
chose), but if you keep them to yourself and simply decide that I am an 
idiot, what chance do I have to meet your objections?  I am always open 
to criticism, but to be fair it has to be detailed, specific and not 
personal.


Also, I am a little confused by the first sentence of the above.  It 
implies that you only just started looking through my 'stuff' ... have 
you read the published papers?  The blog posts?  The technical 
discussions on this list with Mark Waser, Kaj Sotala, Derek Zahn and others?




Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Stephen Reed <[EMAIL PROTECTED]> wrote:

> I have not heard about Rus form.  Could you provide a link or reference?


This is one of the papers:
http://citeseer.ist.psu.edu/cache/papers/cs/22812/http:zSzzSzwww.seas.smu.eduzSz~vasilezSzictai2001.pdf/rus01high.pdf
you can find some examples in the figures.

The main thing is that (nearly) every word is "reified".

For example, for "John loves Mary", we say that there is an entity e1
which is a John, and entity e2 which is an act of loving, and an
entity e3 which is a Mary.

So we have these formulae:
john(e1)
mary(e3)
love(e2, e1, e3)

Anyway, something like that
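
If it helps, here is a toy sketch (mine, not from the Rus papers) of the same
formulae as plain data in Python; all the names are only for illustration:

  # Each reified formula becomes a tuple; e2 is the loving event itself.
  facts = [
      ("john", "e1"),
      ("mary", "e3"),
      ("love", "e2", "e1", "e3"),   # event e2, lover e1, loved e3
  ]

  # Because the event e2 is an entity of its own, extra facts can attach to it
  # later (e.g. a hypothetical tense marker) without changing the love fact:
  facts.append(("past", "e2"))

  # Query: which entity loves which?
  print([(f[2], f[3]) for f in facts if f[0] == "love"])   # [('e1', 'e3')]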

Rus form is popularly used in text entailment programs.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread Stephen Reed
YKY,


The "Rus form" is also a popular logical form, have you heard of it?
I think it is complete in the sense that all English (or NL) sentences
can be represented in it, but the drawback is that it's somewhat
indirect.

I have not heard about Rus form.  Could you provide a link or reference?
Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Stephen Reed <[EMAIL PROTECTED]> wrote:

> To my knowledge there is a standard style but there is of course no standard 
> ontology.  Roughly the standard style is First Order Predicate Calculus 
> (FOPC) and within the linguistics community this is called logical form.  For 
> reference see James Allen's Natural Language Understanding, 2nd Edition, 
> Chapter 8 - Semantics and Logical Form.  Also see Terence Parsons' Events in 
> the Semantics of English, for a view that I have adopted with regard to the 
> semantics of verbs.
>
> As Texai is taught the principal English grammar constructions, I would be 
> glad to contribute the form <--> semantics pairings to the wiki-like place 
> you propose.


Thanks, I'll check out those books.

The "Rus form" is also a popular logical form, have you heard of it?
I think it is complete in the sense that all English (or NL) sentences
can be represented in it, but the drawback is that it's somewhat
indirect.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> No.  But it hasn't stopped people from trying.
>
> The meaning of sentences and even paragraphs depends on context that is
> not captured in logic.  Consider the following examples, where a different
> word is emphasized in each case:
>
> - I didn't steal that.
> - I DIDN'T steal that.
> - I didn't STEAL that.
> - I didn't steal THAT.
>
> And the following where you can guess the emphasis by context.
>
> - I didn't steal that.  He did.
> - I didn't steal that.  It is still there.
> - I didn't steal that.  I borrowed it.
> - I didn't steal that.  I stole this.


Contexts can be captured in logic.  For example, John McCarthy's
method is to use the special predicate "ist":
   ist(x, c)   means that x is true in the context of c

Your example of emphasis may be dealt with using multiple logical
formulae.  An additional formula may state that a certain word (or
concept) is being emphasized.
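
As a toy illustration (my own sketch, not McCarthy's formal system), ist can be
read as a lookup over context-indexed assertions; the contexts and formulae
below are invented, reusing your emphasis examples:

  # Each assertion holds only relative to a context: (context, proposition) pairs.
  assertions = {
      ("reading-1", "not(steal(i, that))"),
      ("reading-1", "emphasized(steal)"),   # "I didn't STEAL that" - I borrowed it
      ("reading-2", "not(steal(i, that))"),
      ("reading-2", "emphasized(that)"),    # "I didn't steal THAT" - I stole this
  }

  def ist(x, c):
      """ist(x, c): formula x is true in the context c."""
      return (c, x) in assertions

  print(ist("emphasized(steal)", "reading-1"))   # True
  print(ist("emphasized(steal)", "reading-2"))   # False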

I wish to have a standard for the *surface* translation of NL to
logic.  Which means that the resulting logical forms are still open to
interpretation within rich contexts.

Logic can deal with almost everything, depending on how much effort
you put in it =)

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Charles D Hixson

Steve Richfield wrote:

...
have played tournament chess. However, when faced with a REALLY GREAT 
chess player (e.g. national champion), as I have had the pleasure of 
on a couple of occasions, they at first appear to play as novices, 
making unusual and apparently stupid moves that I can't quite 
capitalize on, only to pull things together later on and soundly beat 
me. While retrospective analysis would show them to be brilliant, that 
would not be my evaluation early in these games.
 
Steve Richfield
But that's a quite reasonable action on their part.  Many players have 
memorized some number of standard openings.  But by taking the game away 
from the standard openings (or into the less commonly known ones) they 
enable the player with the stronger chess intuition to gain an 
edge...and they believe that it will be themselves.


E.g.:  The Orangutan opening is a trifle weak, but few know it well.  
But every master would know it, and know both its strengths and 
weaknesses.  If you don't know the opening, though, it just looks weak.  
Looks, however, are deceptive.  If you don't know it, you're quite 
likely to find it difficult to deal with against someone who does know 
it, even if they're a generally weaker player than you are.



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread Stephen Reed
Hi YKY,

To my knowledge there is a standard style but there is of course no standard 
ontology.  Roughly the standard style is First Order Predicate Calculus (FOPC) 
and within the linguistics community this is called logical form.  For 
reference see James Allen's Natural Language Understanding, 2nd Edition, 
Chapter 8 - Semantics and Logical Form.  Also see Terence Parsons' Events in 
the Semantics of English, for a view that I have adopted with regard to the 
semantics of verbs.
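
As a concrete illustration (my own sketch, not an example taken from Allen or
Parsons), an event-style logical form for a sentence like "John eats spaghetti
with a fork" comes out roughly as:

  exists e, x, y, z:
      eating(e) & john(x) & spaghetti(y) & fork(z)
      & agent(e, x) & theme(e, y) & instrument(e, z)

The verb introduces the event e, and each modifier simply adds another conjunct
about e.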

As Texai is taught the principal English grammar constructions, I would be glad 
to contribute the form <--> semantics pairings to the wiki-like place you 
propose.

-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, May 7, 2008 9:48:00 AM
Subject: [agi] standard way to represent NL in logic?

Is there any standard (even informal) way of representing NL sentences in logic?

Especially complex sentences like "John eats spaghetti with a fork" or
"The dog that chased the cat jumped over the fence." etc.

I have my own way of translating those sentences, but having a
standard would be much better.

Maybe we need to create such a standard, using a wiki-like place where
people can contribute their NL <--> logic translations.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Overlapping Interrelated Bounded Logical Models

2008-05-07 Thread Jim Bromer
I believe that logic could work with general AI if it
was partially bounded and related to other partially bounded logical
models.  That is, logical systems can be used to examine theoretical
(or theory-like relational) models of the IO data environment.  However, the
possibility of creating a logical theory of everything as an
implementation of general AI is not reasonable. 

So then, a logical analysis of a subject of interest has to be bounded
and protected.  But a particular logical analysis has to also be
integrated with other related logical analyses in some way.  These
relational connectors may also be logical as long as the entire system
does not have to be integrated into a single (traditional) logical
system.

But this will allow logical errors to survive.  Therefore this kind of
system also needs to use something like overlapping logical analyses in the
hope that it will be able to detect some flaws in the various partially
bounded logical references to the subject matter. A logical flaw that may not
show up in one boundary level may show up in another overlapping boundary level.
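
To give a deliberately crude sketch of the mechanism I have in mind (this is
only an illustration with made-up propositions, not a design):

  # Each bounded "model" is a set of propositions; a leading "-" marks a denial.
  def conflicts(model_a, model_b):
      """Propositions asserted in one overlapping model but denied in the other."""
      return ({p for p in model_a if "-" + p in model_b} |
              {p for p in model_b if "-" + p in model_a})

  analysis_1 = {"valve_open", "-pump_running"}   # one bounded analysis of a subject
  analysis_2 = {"valve_open", "pump_running"}    # an overlapping analysis of the same subject

  print(conflicts(analysis_1, analysis_2))       # {'pump_running'}

Neither analysis flags anything on its own; the flaw only becomes visible where
the two bounded models overlap.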

I believe that the way human beings think shows traces of these kinds of 
systems (the
way people deal with ideas - not the neural mechanics of the brain). 
Many people have lucid remarks to offer but they deal with the same
subject matter in different ways.  To some extent this can be seen as
exemplary of having different points of view, but this characteristic
even shows up in very focused discussions of some subject.  To give one
outstanding example, these overlapping partially bounded logical models
often show up in the casual discussion of highly specified and
formalized subjects, even when the discussants are very familiar with
each others views on the subject.

This overlapping models theory requires the explicit use of more
complex programming constructs than is typically discussed in these AI
discussion groups. But I believe that overlapping logical models will develop 
naturally in a program that is written around the theory.

Jim Bromer


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Graph mining

2008-05-07 Thread Jim Bromer
Bob Mottram wrote:

http://blogs.zdnet.com/emergingtech/?p=911
The ability to discover patterns, especially from partial information,
would seem to be a central concern of AGI.
-
That was interesting.  I may (actually) read the paper "Hierarchical structure 
and the prediction of missing
links in networks" at
http://www-personal.umich.edu/~mejn/papers/cmn08.pdf

I am not sure about what they are getting at, but an idealized model of the
hierarchical structure might be useful.  There is probably more than one
idealized structural model for a complicated network.  These models might be
used in AGI for reason-derived what-if kinds of conjectures.
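
For anyone who wants to play with the missing-link idea on a toy graph, here is
a small sketch (my own, and deliberately not the paper's hierarchical random
graph method - just a common-neighbours baseline using the networkx library):

  # Score each non-edge by how many neighbours the two endpoints share.
  import networkx as nx
  from itertools import combinations

  G = nx.Graph()
  G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"),
                    ("b", "d"), ("c", "d"), ("d", "e")])

  def score(u, v):
      """Number of common neighbours of u and v."""
      return len(list(nx.common_neighbors(G, u, v)))

  candidates = [(u, v) for u, v in combinations(G.nodes(), 2) if not G.has_edge(u, v)]
  for u, v in sorted(candidates, key=lambda pair: score(*pair), reverse=True):
      print(u, v, score(u, v))   # ('a', 'd') shares two neighbours, so it ranks first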
Jim Bromer



  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread Matt Mahoney

--- "YKY (Yan King Yin)" <[EMAIL PROTECTED]> wrote:

> Is there any standard (even informal) way of representing NL sentences
> in logic?

No.  But it hasn't stopped people from trying.

The meaning of sentences and even paragraphs depends on context that is
not captured in logic.  Consider the following examples, where a different
word is emphasized in each case:

- I didn't steal that.
- I DIDN'T steal that.
- I didn't STEAL that.
- I didn't steal THAT.

And the following where you can guess the emphasis by context.

- I didn't steal that.  He did.
- I didn't steal that.  It is still there.
- I didn't steal that.  I borrowed it.
- I didn't steal that.  I stole this.


-- Matt Mahoney, [EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
Is there any standard (even informal) way of representing NL sentences in logic?

Especially complex sentences like "John eats spaghetti with a fork" or
"The dog that chased the cat jumped over the fence." etc.

I have my own way of translating those sentences, but having a
standard would be much better.

Maybe we need to create such a standard, using a wiki-like place where
people can contribute their NL <--> logic translations.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Graph mining

2008-05-07 Thread Bob Mottram
This might be of interest.

  http://blogs.zdnet.com/emergingtech/?p=911

The ability to discover patterns, especially from partial information,
would seem to be a central concern of AGI.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Vladimir Nesov
On Wed, May 7, 2008 at 11:14 AM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
>
> On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > As your example illustrates, a higher intelligence will appear to be
> > irrational, but you cannot conclude from this that irrationality
> > implies intelligence.
>
> Neither does it imply a lack of intelligence.
>
> Note that had the master left the table and another good but less than
> masterful player taken his position, the master's moves would probably have
> left his replacement at a disadvantage.
>
> The test of intelligence is whether it is successful in achieving the
> desired goal. Irrationality may be a help or a hindrance, depending on how
> it is applied.
>

I think you are using the wrong concept of 'rationality'. It is not a
particular procedure, fixed and eternal. If your 'rationality' is bad
for achieving your goals, you are not being rational.

See http://www.overcomingbias.com/2008/01/newcombs-proble.html
"It is precisely the notion that Nature does not care about our
algorithm, which frees us up to pursue the winning Way - without
attachment to any particular ritual of cognition, apart from our
belief that it wins.  Every rule is up for grabs, except the rule of
winning."

-- 
Vladimir Nesov
[EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> Certainly a rational AGI may find it useful to appear irrational, but
>  that doesn't change the conclusion that it'll want to think rationally
>  at the bottom, does it?

Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html ,
especially parts 5 - 6.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Kaj,

On 5/6/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>
> Certainly a rational AGI may find it useful to appear irrational, but
> that doesn't change the conclusion that it'll want to think rationally
> at the bottom, does it?


The concept of rationality contains a large social component. For example,
the Eastern concept of "face" forces actions there that might seem to us to
be quite irrational. Polygamy works quite well under Islam, but fails here,
because of social perceptions and expectations. Sure, our future AGI must
calculate these things, but I suspect that machines will never understand
people as well as people do, and hence will never become a serious social
force.

Take for example the very intelligent people on this forum. We aren't any
more economically successful in the world than people with half our
average IQs - or else we would be too busy to make all of these postings. If
you are so smart, then why aren't you rich? Of course you know that you have
directed your efforts in other directions, but is that path really worth
more *to you* than the millions of dollars that you may have "left on the
table"?

The whole question of goals also contains a large social component. What is
a LOGICAL goal?!

Steve Richfield

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Matt,

On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Steve Richfield <[EMAIL PROTECTED]> wrote:
>
> > I have played tournament chess. However, when faced with a REALLY GREAT
> > chess player (e.g. national champion), as I have had the pleasure of on a
> > couple of occasions, they at first appear to play as novices, making unusual
> > and apparently stupid moves that I can't quite capitalize on, only to
> > pull things together later on and soundly beat me. While retrospective
> > analysis would show them to be brilliant, that would not be my
> > evaluation early in these games.
>
> As your example illustrates, a higher intelligence will appear to be
> irrational, but you cannot conclude from this that irrationality
> implies intelligence.


Neither does it imply a lack of intelligence.

Note that had the master left the table and another good but less than
masterful player taken his position, the master's moves would probably have
left his replacement at a disadvantage.

The test of intelligence is whether it is successful in achieving the
desired goal. Irrationality may be a help or a hindrance, depending on how
it is applied.

I once found myself in the process of being stiffed for $30K by a business
associate who clearly had the money, but with no obvious means for me
to force collection. Cutting a LONG story short, I collected by composing
and sending my associate a copy of a letter to government regulators
explaining exactly what the problem was - that would probably have sunk BOTH
of our careers - a sort of "doomsday machine" but still under my control as
I held the letter. This worked only because I successfully projected that I
really was crazy enough to actually send this letter and sink both of our
careers, rather than see $30K successfully stolen from me. Had I projected a
calm and calculating mindset, this wouldn't have worked at all. It was at
once irrational and brilliantly successful - but only because I projected
irrationality/insanity.

Steve Richfield

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com