Re: [agi] What's the diff. between a simulation and a copy?

2007-12-30 Thread Matt Mahoney
The difference is that a simulation is an approximation of a copy.  Simulation
code has to be complex because we want to preserve the features that are
important, and deciding which features those are is never a straightforward
problem.

An exact simulation of the universe down to the last particle would be a very
simple program, probably a few hundred bits to specify the laws of physics.
But it would require a computer larger than the universe to represent its
quantum state (as a string of 10^122 bits).  Running the simulation would be
even worse, because the best known classical algorithms for computing the wave
equations are exponential in the number of bits, i.e. on the order of
2^(10^122) steps.

So we approximate.
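
For the curious, here is a back-of-the-envelope sketch (in Python) of where
numbers like these come from: the 10^122 figure from the usual holographic
bound on the cosmological horizon, and the exponential running time from the
2^n complex amplitudes of an n-bit quantum state.  The constants are rough
approximations and the sketch is only an illustration, not a derivation.

import math

# Rough constants, for illustration only
l_p = 1.6e-35                 # Planck length in meters (approximate)
R   = 1.3e26                  # Hubble radius in meters (approximate)

# 1) Holographic bound: information ~ horizon area / (4 * Planck area)
area = 4 * math.pi * R**2
bits = area / (4 * l_p**2)
print("holographic bound: about 10^%d bits" % round(math.log10(bits)))  # ~122

# 2) Exact classical simulation of an n-bit quantum state needs 2^n complex
#    amplitudes, so time and space blow up as 2^n; with n = 10^122 bits that
#    gives the 2^(10^122) steps mentioned above.
for n in (10, 40, 100):
    print("n = %d bits -> 2^%d amplitudes" % (n, n))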


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=80322602-300c67


Re: [agi] NL interface

2007-12-30 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On Dec 21, 2007 11:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 What is the goal of your system?  What application?
 Sorry about the delay, and Merry Xmas =)
 
 The goal is to provide an easy input method for AGI, as a stopgap until full
 NL capability is achievable.
 
 I guess most AGIers have realized by now that a separate NL module (such as
 a chart parser, even with statistical learning) would not work for AGI.  The
 semantics of words, together with syntactic knowledge, should be integrated
 into one big KB, i.e. generic memory.  I plan ultimately to do that, but it
 is not happening immediately.
 
 That's why I want to build an interface that lets users provide grammatical
 information and the like.  The exact form of the GUI is still unknown --
 maybe a panel with a lot of templates to choose from, or something like an
 autocomplete feature.
 
 It will be useful for logic-based / symbolic / hybrid AGIs.
 
 YKY
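
(To make the idea above concrete, here is a minimal sketch, in Python, of what
a template-driven entry helper could look like.  The template patterns, slot
fillers and tuple output are hypothetical illustrations, not YKY's actual
design.)

# Hypothetical sketch: each template pairs a human-readable pattern with a
# structured assertion, so a GUI could offer the patterns in a pick list or
# as autocomplete targets and emit KB entries directly.

TEMPLATES = {
    "X is a kind of Y": lambda x, y: ("isa", x, y),
    "X is part of Y":   lambda x, y: ("part_of", x, y),
    "X can Y":          lambda x, y: ("capable_of", x, y),
}

def fill(template, *slots):
    # Turn a chosen template plus slot fillers into a KB assertion.
    return TEMPLATES[template](*slots)

kb = [
    fill("X is a kind of Y", "dog", "animal"),
    fill("X can Y", "dog", "bark"),
]
print(kb)   # [('isa', 'dog', 'animal'), ('capable_of', 'dog', 'bark')]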

What would you do with the knowledge base after you build it?  I know this
sounds like a dumb question, but Cyc has built a huge base of common-sense
knowledge in a structured format, and it isn't useful for anything.  Of course
that is not the result they anticipated.  How will you avoid the same kind of
(very expensive) failure?  What type of knowledge will it contain?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=80323735-4bde31


Re: [agi] NL interface

2007-12-30 Thread Benjamin Goertzel
Matt,

I agree w/ your question...

I actually think KBs can be useful in principle, but I think they need to be
developed in a pragmatic way, i.e. where each item of knowledge added can be
validated by how useful it is in helping a functional intelligent agent
achieve some interesting goals...
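
To make that concrete, here is a toy sketch assuming the simplest possible
bookkeeping: each assertion records how often it contributed to a successfully
achieved goal, and assertions that never help get pruned.  It is only an
illustration of the idea, not code from any existing system.

from collections import defaultdict

kb = set()                       # the knowledge base: a set of assertions
usefulness = defaultdict(int)    # assertion -> number of successful uses

def add(assertion):
    kb.add(assertion)

def record_goal_achieved(assertions_used):
    # Credit every assertion that took part in a successful plan.
    for a in assertions_used:
        usefulness[a] += 1

def prune(min_uses=1):
    # Drop assertions that never helped the agent achieve a goal.
    for a in list(kb):
        if usefulness[a] < min_uses:
            kb.discard(a)

add(("isa", "dog", "animal"))
add(("isa", "unicorn", "animal"))
record_goal_achieved([("isa", "dog", "animal")])
prune()
print(kb)   # {('isa', 'dog', 'animal')}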

ben g


 What would you do with the knowledge base after you build it?  I know this
 sounds like a dumb question, but Cyc has built a huge base of common-sense
 knowledge in a structured format, and it isn't useful for anything.  Of course
 that is not the result they anticipated.  How will you avoid the same kind of
 (very expensive) failure?  What type of knowledge will it contain?


 -- Matt Mahoney, [EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=80324636-31670d


Re: [agi] NL interface

2007-12-30 Thread Günther Greindl
The problem with all traditional knowledge bases (Cyc too) is that they are
still meaningless symbols to the computer program processing them, manipulated
according to relational rules in the KB, which are all entered by _human
beings_.


An AI would need to develop its own KB, like a child - we all have our own
little KBs in our brains, each developed individually through education and
experience.


The KB which the AI would learn would not make sense to us, of course; no
more than the neural connections in a human brain make sense to us at the
level of concepts.


Regards,
Günther

Benjamin Goertzel wrote:

Matt,

I agree w/ your question...

I actually think KBs can be useful in principle, but I think they need to be
developed in a pragmatic way, i.e. where each item of knowledge added can be
validated by how useful it is in helping a functional intelligent agent
achieve some interesting goals...

ben g


What would you do with the knowledge base after you build it?  I know this
sounds like a dumb question, but Cyc has built a huge base of common-sense
knowledge in a structured format, and it isn't useful for anything.  Of course
that is not the result they anticipated.  How will you avoid the same kind of
(very expensive) failure?  What type of knowledge will it contain?


-- Matt Mahoney, [EMAIL PROTECTED]




--
Günther Greindl
Department of Philosophy of Science
University of Vienna
[EMAIL PROTECTED]
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=80329343-0aa736