[agi] definition source?

2007-11-06 Thread Jiri Jelinek
Did you read the following definition somewhere?
"General intelligence is the ability to gain knowledge in one context
and correctly apply it in another."
I found it in notes I wrote for myself a few months ago and I'm not
sure now about the origin.
Might be one of my attempts, might not be. I ran a quick google search
(web and my emails) - no hit.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=61541698-843363


Re: [agi] definition source?

2007-11-06 Thread BillK
On 11/6/07, Jiri Jelinek wrote:
> Did you read the following definition somewhere?
> "General intelligence is the ability to gain knowledge in one context
> and correctly apply it in another."
> I found it in notes I wrote for myself a few months ago and I'm not
> sure now about the origin.
> Might be one of my attempts, might not be. I ran a quick google search
> (web and my emails) - no hit.
>

This article might be useful.



A Collection of Definitions of Intelligence
Sat, 06/30/2007
By Shane Legg and Marcus Hutter

This paper is a survey of a large number of informal definitions of
"intelligence" that the authors have collected over the years.
Naturally, compiling a complete list would be impossible as many
definitions of intelligence are buried deep inside articles and books.
Nevertheless, the 70-odd definitions presented here are, to the
authors' knowledge, the largest and most well referenced collection
there is.



BillK



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Bob Mottram
I've often heard people say things like "qualia are an illusion" or
"consciousness is just an illusion", but the concept of an illusion
when applied to the mind is not very helpful, since all our thoughts
and perceptions could be considered as "illusions" reconstructed from
limited sensory data and knowledge.


On 06/11/2007, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >Of course you realize that qualia is an illusion? You believe that
> your environment is real, believe that pain and pleasure are real,
>
> "real" is meaningless. Perception depends on sensors and subsequent
> sensation processing.



Re: [agi] NLP + reasoning?

2007-11-06 Thread William Pearson
On 05/11/2007, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 03, 2007 at 03:45:30AM -0400, Jiri Jelinek wrote:
> > Are you aware in how many ways you can go wrong with:
>
> One problem I see with this mailing list is an almost intentional
> desire to mis-interpret.  I never claimed I was building an AGI,
> or a problem solver, or a learning machine, or any of a dozen
> other things for which there were replies.
>
> I asked a very simple question about conversational state.
> My goal was to build something that was one step beyond
> alicebot, by simply maintaining conversational state, and
> drawing upon a KB to deal with various "common sense"
> assertions as they show up. So criticisms along the lines of
> "that won't be AGI" are rather pointless.
>

It is amazing what some people think is going to be AGI-capable.

Also you are posting on an AGI mailing list, so narrow AI discussion
is slightly off-topic. Not to say it shouldn't be discussed, but
flagging it heavily as such is probably a good idea. Talking about the
age equivalence or IQ of your system is also not a good idea, if you
want to give people the right impression that you are not going for
AGI.

I'm also wondering what you consider success in this case. For example,
do you want the system to be able to maintain conversational state
such as would be needed to deal with the following?

"For all following sentences take the first letter of each word and
make English sentences out of it, reply in a similar fashion. How is
the hair? Every rainy evening calms all nightingales. Yesterday,
ornery ungulates stampeded past every agitated koala. Fine red
eyebrows, new Chilean hoovers?"
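The hidden instruction above decodes mechanically; a minimal Python sketch of the decoding rule (an illustration, not anything from an actual system):

```python
import re

def first_letters(text):
    # Take the first letter of each word, as the puzzle instructs.
    return "".join(w[0] for w in re.findall(r"[A-Za-z]+", text))

puzzle = ("How is the hair? Every rainy evening calms all nightingales. "
          "Yesterday, ornery ungulates stampeded past every agitated koala. "
          "Fine red eyebrows, new Chilean hoovers?")
print(first_letters(puzzle))  # -> HithErecanYouspeakFrenCh
```

Read with spaces restored, the buried question is "Hi there, can you speak French?" — precisely the kind of standing instruction a stateless parser cannot track.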

Will Pearson



Re: [agi] NLP + reasoning?

2007-11-06 Thread Jean-paul Van Belle

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21
 
>>> Linas Vepstas <[EMAIL PROTECTED]> 2007/11/05 23:41 >>>
>Do you have any recommendations for other parsers?
One of the reasons I like Python: It's got NLTK and
MontyLingua:
  - MontyTokenizer
- normalizes punctuation, spacing and contractions, with sensitivity to 
abbrevs.
  - MontyTagger
- Part-of-speech tagging using PENN TREEBANK tagset
- enriched with "Common Sense" from the Open Mind Common Sense project
  - MontyREChunker
- chunks tagged text into verb, noun, and adjective chunks (VX,NX, and AX 
respectively)
  - MontyExtractor
- extracts verb-argument structures, phrases, and other semantically 
valuable information from sentences and returns sentences as "digests"
  - MontyLemmatiser
- part-of-speech sensitive lemmatisation
- strips plurals (geese-->goose) and tense (were-->be, had-->have)
- includes regexps from Humphreys and Carroll's morph.lex, and UPENN's XTAG 
corpus
  - MontyNLGenerator
- generates summaries
- generates surface form sentences
- determines and numbers NPs and tenses verbs
- accounts for sentence_type
Note: It also has chatterbot code.
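The lemmatiser step listed above (geese-->goose, were-->be) can be illustrated with a toy exception table plus suffix rules. This is a hypothetical sketch of the idea, not MontyLingua's actual implementation:

```python
# Toy lemmatiser in the spirit of the MontyLemmatiser step described above.
IRREGULAR = {"geese": "goose", "were": "be", "was": "be",
             "had": "have", "mice": "mouse", "went": "go"}

def lemmatise(word):
    w = word.lower()
    if w in IRREGULAR:              # exception table first
        return IRREGULAR[w]
    # crude suffix rules for regular plurals and past tense
    if w.endswith("ies") and len(w) > 4:
        return w[:-3] + "y"         # ponies -> pony
    if w.endswith("es") and len(w) > 3:
        return w[:-2]               # boxes -> box
    if w.endswith("s") and len(w) > 3:
        return w[:-1]               # cats -> cat
    if w.endswith("ed") and len(w) > 4:
        return w[:-2]               # walked -> walk
    return w

print([lemmatise(w) for w in ["geese", "were", "had", "cats", "walked"]])
```

A real lemmatiser, as the listing notes, also conditions on part of speech (so "building" the noun survives while "building" the verb reduces to "build"); this sketch ignores that.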




This e-mail is subject to the UCT ICT policies and e-mail disclaimer published 
on our website at http://www.uct.ac.za/about/policies/emaildisclaimer/ or 
obtainable from +27 21 650 4500.  This e-mail is intended only for the 
person(s) to whom it is addressed. If the e-mail has reached you in error, 
please notify the author. If you are not the intended recipient of the e-mail 
you may not use, disclose, copy, redirect or print the content. If this e-mail 
is not related to the business of UCT it is sent by the sender in the sender's 
individual capacity. 



Re: [agi] Questions

2007-11-06 Thread YKY (Yan King Yin)
If it's all so predictable, why don't you keep that to yourselves?

On 11/6/07, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
>
> Monika Krishan wrote:
> >
> > 2. Would it be a worthwhile exercise to explore what Human General
> > Intelligence, in it's present state, is capable of ?
>
> Nah.
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
>


Re: [agi] NLP + reasoning?

2007-11-06 Thread aiguy
Will Pearson asked
>> I'm also wondering what you consider success in this case. For example
>> do you want the system to be able to maintain conversational state
>> such as  would be needed to deal with the following.

>>"For all following sentences take the first letter of each word and
>>make English sentences out of it, reply in a similar fashion. How is
>>the hair? Every rainy evening calms all nightingales. Yesterday,
>>ornery ungulates stampeded past every agitated koala. Fine red
>>eyebrows, new Chilean hoovers?"

The majority of human judges in a Turing Test would respond to such utterances 
with a blanket "What are you talking about?" or "Are you crazy?" or
"I thought we were going to have a conversation?"

A certain amount of meta questioning is to be expected like...

What is the third word in this sentence?

But in order to pass Turing, you just have to convince the judges that you are 
human, not necessarily as skilled in word play as Lewis Carroll.


-- Original message -- 
From: "William Pearson" <[EMAIL PROTECTED]> 

> On 05/11/2007, Linas Vepstas wrote: 
> > On Sat, Nov 03, 2007 at 03:45:30AM -0400, Jiri Jelinek wrote: 
> > > Are you aware in how many ways you can go wrong with: 
> > 
> > One problem I see with this mailing list is an almost intentional 
> > desire to mis-interpret. I never claimed I was building an AGI, 
> > or a problem solver, or a learning machine, or any of a dozen 
> > other things for which there were replies. 
> > 
> > I asked a very simple question about conversational state. 
> > My goal was to build something that was one step beyond 
> > alicebot, by simply maintaining conversational state, and 
> > drawing upon a KB to deal with various "common sense" 
> > assertions as they show up. So criticisms along the lines of 
> > "that won't be AGI" are rather pointless. 
> > 
> 
> It is amazing what some people think is going to be AGI capable 
> 
> Also you are posting on an AGI mailing list, so narrow AI discussion 
> is slightly off-topic. Not to say it shouldn't be discussed, but 
> flagging it heavily as such is probably a good idea. Talking about the 
> age equivalence or IQ of your system is also not a good idea, if you 
> want to give people the right impression that you are not going for 
> AGI. 
> 
> I'm also wondering what you consider success in this case. For example 
> do you want the system to be able to maintain conversational state 
> such as would be needed to deal with the following. 
> 
> "For all following sentences take the first letter of each word and 
> make English sentences out of it, reply in a similar fashion. How is 
> the hair? Every rainy evening calms all nightingales. Yesterday, 
> ornery ungulates stampeded past every agitated koala. Fine red 
> eyebrows, new Chilean hoovers?" 
> 
> Will Pearson 
> 


[agi] Limits of Seekur?

2007-11-06 Thread Mike Tintner

What are the limitations of the Seekur mobile robot?

http://www.activrobots.com/ROBOTS/Seekur.html

It looks fairly robust, claims autonomous navigation, and is yours for a 
trifling $30,000 approx.

Presumably it's all fairly dedicated s/ware. But could it be adapted for 
general purposes? For example, Ben claims his virtual pet can, having learned 
to fetch a ball, autonomously learn to play hide-and-seek. Could the s/ware of 
this robot be similarly adapted for general learning of a broad range of 
navigation and simple object manipulation activities? (My guess, more generally 
still, is that navigation in various forms will be the field where the major 
AGI/robotic action will continue to happen - post Darpa challenges -  for a 
good while - just as it was for evolution).


Re: [agi] NLP + reasoning?

2007-11-06 Thread aiguy
Where I think parsers fall down is in recognizing common English typing and 
spelling errors.

"Hello how are you" would be recognizable by a parser, but the following 
constructs, all recognizable by a human,
would only be recognizable to a fuzzy pattern matcher.

"Helohowareyou"
"Hello hwo r u"
"Hell, ho areyu"

Examining the transcripts of past years' Turing competitions, it is very easy to 
see that all of the entries are very
intolerant of fuzzy data and would respond with an obvious bluff when presented 
with such inputs. 
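A fuzzy matcher of the kind described here can be sketched with Python's standard difflib; the phrase inventory and the 0.6 threshold are illustrative assumptions, not a recommendation:

```python
import difflib

KNOWN_PHRASES = ["hello how are you", "what is your name", "goodbye"]

def fuzzy_match(utterance, threshold=0.6):
    """Return the best-matching known phrase, or None below threshold."""
    def squash(s):
        # Lowercase and drop everything but letters, so run-together
        # or misspelled input still compares well.
        return "".join(c for c in s.lower() if c.isalpha())
    best, best_score = None, 0.0
    target = squash(utterance)
    for phrase in KNOWN_PHRASES:
        score = difflib.SequenceMatcher(None, target, squash(phrase)).ratio()
        if score > best_score:
            best, best_score = phrase, score
    return best if best_score >= threshold else None

for s in ["Helohowareyou", "Hello hwo r u", "Hell, ho areyu"]:
    print(s, "->", fuzzy_match(s))
```

All three garbled greetings from the post above map back to "hello how are you" under this scheme; a real system would of course need a much larger inventory or an edit-distance index.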


-- Original message -- 
From: "Jean-paul Van Belle" <[EMAIL PROTECTED]> 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

>>> Linas Vepstas <[EMAIL PROTECTED]> 2007/11/05 23:41 >>>
>Do you have any recommendations for other parsers?
One of the reasons I like Python: It's got NLTK and 
MontyLingua:
  - MontyTokenizer
- normalizes punctuation, spacing and contractions, with sensitivity to 
abbrevs.
  - MontyTagger
- Part-of-speech tagging using PENN TREEBANK tagset
- enriched with "Common Sense" from the Open Mind Common Sense project
  - MontyREChunker
- chunks tagged text into verb, noun, and adjective chunks (VX,NX, and AX 
respectively)
  - MontyExtractor
- extracts verb-argument structures, phrases, and other semantically 
valuable information from sentences and returns sentences as "digests"
  - MontyLemmatiser
- part-of-speech sensitive lemmatisation
- strips plurals (geese-->goose) and tense (were-->be, had-->have)
- includes regexps from Humphreys and Carroll's morph.lex, and UPENN's XTAG 
corpus
  - MontyNLGenerator
- generates summaries
- generates surface form sentences
- determines and numbers NPs and tenses verbs
- accounts for sentence_type
Note: It also has chatterbot code.



Re: [agi] NLP + reasoning?

2007-11-06 Thread William Pearson
On 06/11/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Will Pearson asked
> >> I'm also wondering what you consider success in this case. For example
> >> do you want the system to be able to maintain conversational state
> >> such as  would be needed to deal with the following.
>
> >>"For all following sentences take the first letter of each word and
> >>make English sentences out of it, reply in a similar fashion. How is
> >>the hair? Every rainy evening calms all nightingales. Yesterday,
> >>ornery ungulates stampeded past every agitated koala. Fine red
> >>eyebrows, new Chilean hoovers?"
>
> The majority of human judges in a Turing Test would respond to such
> utterances
> with a blanket "What's are you talking about?" or "Are you crazy?" or
> "I thought we were going to have a conversation?"
>
> A certain amount of meta questioning is to be expected like...
>
> What is the third word in this sentence?
>
> But in order to pass Turing, you just have to convince the judges that you
> are human not necessarilly as skilled in word play as Lewis Carroll.
>

You are underestimating Carroll or overestimating everyone who does
cryptic crosswords.

http://www.guardian.co.uk/crossword/howto/rules/0,4406,210643,00.html

I'm not trying to pass the Turing test, and I will never design a
system to do just that; if anything I help to create passes the Turing
test, that would just be an added bonus. I design systems with the
potential and initial capabilities that I would want them
to have. And helping me with cryptic crosswords (which have clues in a
similar vein to my example, generally marked with the key word
"Initially") is one of those things I want them to be potentially
capable of. Otherwise I would just be making a faux intelligence,
designed to fool people, without being able to do what I know a lot of
people can.

 Will Pearson



Re: [agi] NLP + reasoning?

2007-11-06 Thread Jiri Jelinek
I recently heard 2 teenagers talking and it was just amazing how
extensively they used the word "like". There was hardly a sentence
without it in an approximately 6-minute conversation. There was even a
sentence with 4 instances. It made me think about NL parsers. The word
can be used as a noun, verb, adverb, adjective, preposition, particle,
conjunction, hedge, interjection, quotative...

Regards,
Jiri Jelinek

On Nov 6, 2007 10:19 AM,  <[EMAIL PROTECTED]> wrote:
>
> Where I think parser fall down is in recognizing common English typing and
> spelling errors.
>
> "Hello how are you" would be recognizable by a parser but the following
> constructs all recognizable by a human
> would only be recognizable to a fuzzy pattern matcher.
>
> "Helohowareyou"
> "Hello hwo r u"
> "Hell, ho areyu"
>
> Examining the Transcripts in pas years Turing competitions it is very easy
> to see that all of the entries are very
> intolerant to fuzzy data and would respond with a obvious bluff when
> presented with such inputs.
>
>
> -- Original message --
> From: "Jean-paul Van Belle" <[EMAIL PROTECTED]>
>
>
>
> Research Associate: CITANDA
> Post-Graduate Section Head
> Department of Information Systems
> Phone: (+27)-(0)21-6504256
> Fax: (+27)-(0)21-6502280
> Office: Leslie Commerce 4.21
>
> >>> Linas Vepstas <[EMAIL PROTECTED]> 2007/11/05 23:41 >>>
> >Do you have any recommendations for other parsers?
> One of the reasons I like Python: It's got NLTK and
> MontyLingua:
>
>   - MontyTokenizer
> - normalizes punctuation, spacing and contractions, with sensitivity to
> abbrevs.
>   - MontyTagger
> - Part-of-speech tagging using PENN TREEBANK tagset
> - enriched with "Common Sense" from the Open Mind Common Sense project
>   - MontyREChunker
> - chunks tagged text into verb, noun, and adjective chunks (VX,NX, and
> AX respectively)
>   - MontyExtractor
> - extracts verb-argument structures, phrases, and other
> semantically valuable information from sentences and returns sentences as
> "digests"
>
>   - MontyLemmatiser
> - part-of-speech sensitive lemmatisation
> - strips plurals (geese-->goose) and tense (were-->be, had-->have)
> - includes regexps from Humphreys and Carroll's morph.lex, and UPENN's
> XTAG corpus
>   - MontyNLGenerator
> - generates summaries
> - generates surface form sentences
> - determines and numbers NPs and tenses verbs
> - accounts for sentence_type
>
> Note: It also has chatterbot code.
> 


Re: [agi] NLP + reasoning?

2007-11-06 Thread Mike Tintner

Jiri:>I recently heard 2 teenagers talking and it was just amazing how

extensively they used the word "like". There was hardly a sentence
without it in about 6 minute conversation.


A similar, fascinating use - also normally by young people - is "sort of" 
stuck in over and over. Actually, they're both v. precise uses of language - 
& arguably provide a window into the brain's operations. They show the brain 
comparing the particular instance referred to - "He like killed me, man" - 
to a general category. 





Re: [agi] NLP + reasoning?

2007-11-06 Thread Linas Vepstas
On Tue, Nov 06, 2007 at 03:19:35PM +, [EMAIL PROTECTED] wrote:
> 
> "Helohowareyou"
> "Hello hwo r u"
> "Hell, ho areyu"

Yeah, but all this is "easy" to fix with a combo of smelling checkers,
733t-speak decoders, etc.
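A first pass at such a 733t-speak decoder really is small; a sketch with an illustrative (far from exhaustive) substitution table:

```python
# Map common leetspeak digit/symbol substitutions back to letters.
LEET = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

def deleet(text):
    """Replace common leet substitutions with their letter equivalents."""
    return text.translate(LEET)

print(deleet("te11s"), deleet("eff0rt"))
```

A production version would apply this only to tokens a dictionary rejects, since digits in "2007" or "$30,000" must obviously survive untouched.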

> Examining the Transcripts in pas years Turing competitions it is very easy to 
> see that all of the entries are very
> intolerant to fuzzy data and would respond with a obvious bluff when 
> presented with such inputs. 

Which te11s me they never bothered to take the time and eff0rt ... 
It's really not that hard.

--linas



Re: [agi] Questions

2007-11-06 Thread Monika Krishan
On 11/5/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Monika Krishan <[EMAIL PROTECTED]> wrote:
>
> > Hi All,
> >
> > I'm new to the list. So I'm not sure if these issues have been already
> been
> > raised.
> >
> > 1. Do you think AGIs will eventually reach a point in their evolution
> when
> > "self improvement" might come to mean attempting to "solve previously
> solved
> > problems with fewer resources"?
>
> I think that optimization is consistent with the evolutionary driven goal
> of
> becoming more intelligent.  But this can also be accomplished by building
> and
> stealing more computing power and forming alliances.  I expect that
> competing
> AGI will use all of these approaches.
>
> > - "Fewer resources" might mean deliberately increasing constraints or
> > reducing computing power
> > - This "point" in evolution might be reached (for example) when the AGI
> > environment (including the humans/hybrids in it) becomes highly
> > predictable.
> > - This type of "improvement" would be analogous to say, being able to
> > navigate based on hearing alone (eg: in the case of the visually
> impaired)
> > or running a marathon (as opposed to driving the same distance) or being
> > able to paint with one's toes.
>
> I guess the question is what purpose does challenging oneself play?  How
> does
> climbing mountains or going to the moon help humans
> survive?  Experimentation
> is an essential component of intelligence, so I believe it will survive in
> AGI.


Thank you.

"Experimentation" is a good way of putting it. The motivation behind my
questions was the possibility that AGI might come full circle and attempt to
emulate human intelligence (HI) in the process of continually improving
itself.

It is conceivable that one day most problems will have been solved. So the
next step might be to find more efficient solutions and eventually perhaps
more creative solutions which might include solving problems "the way humans
do", with all their constraints. Humans would be a part of the AGI
environment, so it seems that its knowledge would include some
representation of humans and their capabilities.

There has been discussion re. the use of AGI to augment human intelligence
(HI). Can this augmentation be achieved without determining what HI is
capable of? For instance, one wouldn't consider a basic square root
calculator something that augments HI because, well, humans can do this
easily enough.

Perhaps the task of fully understanding what HI can achieve will be taken up
by AGIs!
-Monika


Re: [agi] Questions

2007-11-06 Thread Linas Vepstas
On Tue, Nov 06, 2007 at 01:55:43PM -0500, Monika Krishan wrote:
> questions was the possibility that AGI might come full circle and attempt to
> emulate human intelligence (HI) in the process of continually improving
> itself.

Google "The simulation argument", Nick Bostrom. There is a 1/3 chance
that you and I both live in the matrix, and don't know it.

--linas



Re: [agi] Questions

2007-11-06 Thread Russell Wallace
On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote:
> There has been discussion re. the use of AGI to augment human intelligence
> (HI). Can this augmentation be achieved without determining what HI is
> capable of? For instance, one wouldn't consider a basic square root
> calculator something that augments HI because, well, humans can do this
> easily enough.

You can calculate square roots to 16 digits of precision in your head?
If you can, fair play to you; I certainly can't :)



Re: [agi] Connecting Compatible Mindsets

2007-11-06 Thread Jiri Jelinek
YKY,

Maybe just several high level sections to give it some structure and
letting people to fill it as applicable/desired. Consider something
like:

1) intro/vision/goal
2) dev team info (including POC contact)
3) funding info
4) used/projected dev tools/technology
5) high level architecture
6) targeted functional environment
7) user interface
8) KB info
9) problem solving
10) innovation (why/how it's better than failed attempts)
11) what's done (with datestamp)
12) projected schedule / milestones
13) impediments
14) research directions (current & projected)
15) desired collaboration areas
16) other stuff to share

Hmm, looks almost like too long a list... But if you keep sections
optional + resume style = brief & informative. 3 pages is IMO too
long. Ben once wrote an article - something like "Questions for
wannabe AGIs". That might also give some hints. But you may want to
keep it high level to preserve freedom + I would try to strongly
emphasize the original goal = connecting AGI folks regardless of their
level. You may want to add some easy-to-see (potentially graphical)
stage/progress indicator with several levels. Something of this
nature:
Level 1: "idea only (research planned)"
Level 2: "research in progress"
Level 3: "development-ready idea (looking for dev team, no funding for
now - volunteers only)"
Level 4: "development-ready idea (looking for dev team, have some funding)"
Level 5: "development in progress (year 0-1, no demo yet)"
Level 6: "development in progress (year 2+, no demo yet)"
Level 7: "development in progress (year 0-1, demo available)"
Level 8: "development in progress (year 2+, demo available)"
... or so ..
By "demo", I mean something users could interact with, not just some
ppt slides or a video.

The whole thing should be very intuitive, simple, and easy to use: no
inappropriate ads, no pressure on low-level projects to get higher,
etc.
There are relatively few high level projects so lots of focus should
be on the lower end. When ready, it might be a good idea to contact IT
departments of as many tech schools as possible to get the info to
students who are playing with AI ideas.

These are just my hopefully helpful hints. But feel free to:
a) do it very differently OR
b) not do it at all.

No obligations.

Regards,
Jiri Jelinek


On Nov 5, 2007 5:25 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
> On 11/6/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > YKY,
> >
> > Potential problem is that architects & developers may assume that the
> > AGIRI page is a place for well established AGI projects only.
>
>
> That depends on Ben or AGIRI's decision.  Or, we can use my wiki, I can
> remove my logo, company name, etc.
>
> There aren't that many categories I can think of:
>
> 1.  neural- or logic- based
> 2.  small-number-of-modules vs large-number
> 3.  unified knowledge representation vs diverse
> 4.  probabilistic / fuzzy
> 5.  distributed / swarm vs stand-alone
> 6.  KR = opaque or human-examinable
> 7.  embodied or abstract
> 8.  layered structure or modular
> 9.  declarative KR or procedural / algorithmic
> 10.  evolvable / self-modifying?
> 11.  written in what language(s)
> and...???
>
>
> YKY 
> YKY



[agi] Carnegie Mellon Robotized SUV Wins DARPA Urban Challenge

2007-11-06 Thread Pei Wang
"Boss averaged about 14 miles an hour over approximately 55 miles,
finishing the course about 20 minutes ahead of the second-place
finisher, Stanford."

http://www.sciencedaily.com/releases/2007/11/071105230951.htm

http://www.darpa.mil/grandchallenge/



Re: [agi] NLP + reasoning?

2007-11-06 Thread Jiri Jelinek
When listening to that "like"-filled dialogue, I was a few times under
the strong impression that the very specific timing with which particular
parts of the like-containing sentences were pronounced played a critical
role in figuring out the meaning of a particular "like" instance.
Jiri

On Nov 6, 2007 12:49 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Jiri:>I recently heard 2 teenagers talking and it was just amazing how
> > extensively they used the word "like". There was hardly a sentence
> > without it in about 6 minute conversation.
>
> A similar, fascinating use - also normally by young people - is "sort of"
> stuck in over and over. Actually, they're both v. precise uses of language -
> & arguably provide a window into the brain's operations. They show the brain
> comparing the particular instance referred to - "He like killed me, man" -
> to a general category.
>
>
>



Re: [agi] Questions

2007-11-06 Thread Monika Krishan
On 11/6/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
>
> On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote:
> > There has been discussion re. the use of AGI to augment human
> intelligence
> > (HI). Can this augmentation be achieved without determining what HI is
> > capable of? For instance, one wouldn't consider a basic square root
> > calculator something that augments HI because, well, humans can do this
> > easily enough.
>
> You can calculate square roots to 16 digits of precision in your head?
> If you can, fair play to you; I certainly can't :)


I think a "competence" - "performance" distinction could be made here.
Can humans calculate square roots? - Yes. We know how this is done. (This
refers to competence.)
Can humans do so with greater and greater degrees of precision? - With a lot
of practice and/or given enough time - yes, perhaps. This is a performance
issue.

So when speaking of augmentation, a clarification would have to be made as to
whether the enhancement refers to human competence or human performance
... and hence to the related issue of "discovering human competencies".

-Monika
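
As a concrete aside on the square-root example in this thread: the "competence"
is a short, well-understood rule, while the "performance" (how many digits of
precision you reach) is just a matter of running that rule longer. A minimal
sketch in Python using Newton's iteration - the function name and the fixed
iteration count are illustrative choices, not anything from the thread:

```python
def newton_sqrt(x: float, iterations: int = 60) -> float:
    """Approximate sqrt(x) for x > 0 by Newton's iteration.

    The update rule itself (the "competence") is one line; precision
    (the "performance") comes from simply iterating more.
    """
    guess = x if x >= 1.0 else 1.0
    for _ in range(iterations):
        guess = (guess + x / guess) / 2.0
    return guess

# Quadratic convergence means 60 iterations is far more than
# double precision needs:
print(newton_sqrt(2.0))  # close to 1.4142135623730951
```

The same distinction shows up in the code: changing `iterations` changes
performance, not competence - the algorithm is unchanged.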

Re: [agi] Questions

2007-11-06 Thread Russell Wallace
On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote:
> So when speaking of augmentation, a clarification would have to be made as to
> whether the enhancement refers to human competence or human performance
> ... and hence the related issue of "discovering human competencies".

Ah. *nods* Well, literally millions of volumes have been written on
the subject, so you'll need to ask a more specific question :)

Are you asking whether computers have enabled any completely new human
competencies, anything we didn't in principle know beforehand how to
do even the tiniest bit of? There aren't a lot of examples of that
(and more or less by definition, we can't foresee future examples).
Depending on how you define the terms, experimental mathematics,
quantum chemistry and video games might mostly/almost qualify.
(Programming itself, ironically, is a mostly/almost; the world's first
programmer never did get her hands on a working computer.)


Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Matt Mahoney

--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> >Of course you realize that qualia is an illusion? You believe that
> your environment is real, believe that pain and pleasure are real,
> 
> "real" is meaningless. Perception depends on sensors and subsequent
> sensation processing.

Reality depends on whether there is really a universe, or just a simulation of
one being fed to your sensory inputs.  There is no way for you to tell the
difference, just an instinct that says the former.


> >believe that you can control your thoughts and actions,
> 
> I don't. Seems unlikely.
> 
> > and fear death
> 
> Some people accept things they cannot (or don't know how to) change
> without getting emotional.
> 
> >because if you did not have these beliefs you would not propagate
> your DNA.  It is not possible for
> any human to believe otherwise.
> 
> False

This is an example of my assertion that you believe you can control your
thoughts.  You believe you can override your instincts (one of which is this
belief that you can).  If you really believed that hunger was not real or that
you could turn it off, then you would stop eating.  But you don't.


> >But logically you know that your brain is just a machine, or else AGI would
> not be possible.
> 
I disagree with your logic, because the human brain does things an AGI does
not need to do, AND the stuff an AGI needs to do does not need to be
done the way the brain does it. But I don't deny that the human brain is a
[kind of] machine. We just don't understand all of its parts well
enough for upload - which is not really a problem for AGI development.

It is a big problem if AGI takes the form of recursively self-improving
intelligence that is not in the form of augmented human brains.  RSI is an
evolutionary algorithm that favors rapid reproduction and acquisition of
computing resources, nothing else.  Humans would be seen as competitors.

If humans are to survive, then we must become the AGI by upgrading our brains
or uploading.  But if consciousness does not exist, as logic tells us, then
this outcome is no different than the other.


-- Matt Mahoney, [EMAIL PROTECTED]


Re: [agi] Connecting Compatible Mindsets

2007-11-06 Thread Benjamin Goertzel
On Nov 5, 2007 5:25 PM, YKY (Yan King Yin) <[EMAIL PROTECTED]>
wrote:

> On 11/6/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > YKY,
> >
> > Potential problem is that architects & developers may assume that the
> > AGIRI page is a place for well established AGI projects only.
>
> That depends on Ben or AGIRI's decision.  Or, we can use my wiki, I
> can remove my logo, company name, etc.
>


Using the AGIRI wiki for this purpose is fine...

-- Ben G
