OK, maybe we mean different things by the name AGI.
I agree that traditional AI is just the beginning. And even if human
intelligence is no proof of existence for what I mean by AGI,
it is clear that human intelligence is far more powerful than any AI
so far. But perhaps only for subtle reasons, or for reasons we will
not know even in 100 years; who knows.

I am convinced that superhuman intelligence is possible,
but mainly because we will use faster and more hardware than our brain.
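
A rough back-of-the-envelope calculation makes the hardware point
concrete. This is only a minimal sketch; the figures are my own
order-of-magnitude guesses, not measurements:

# Rough orders of magnitude (guesses, not measurements): serial
# switching speed of silicon versus neurons.
neuron_rate_hz = 200.0      # a neuron fires at most a few hundred times/s
transistor_rate_hz = 3e9    # a CPU core switches ~3 billion times/s

print(f"raw serial speedup: ~{transistor_rate_hz / neuron_rate_hz:.0e}x")
# roughly 1e7: each elementary step runs about ten million times
# faster in silicon, before any algorithmic improvement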

Biology created intelligence by trial and error, using billions of
years and trillions of animals to do so.

The goal of doing better within 10 or 20 years from now, and from
scratch, seems to me far too ambitious.

I think studying the limitations of human intelligence, or rather the
predefined innate knowledge in our algorithms, is essential to create
what you call AGI, because only with this knowledge can you avoid the
problem of huge state spaces.
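
To make this concrete, here is a minimal sketch, with all numbers made
up for the example: a learner that assumes nothing about the domain
must experience every state separately, which grows exponentially with
the number of binary features, while a learner with an innate bias,
say "the value is a linear function of the features", only needs one
parameter per feature.

# Minimal illustration (hypothetical numbers): why innate structure
# is needed to tame huge state spaces. A state is a vector of n
# binary features.
n_features = 100

# A "truly general" tabular learner assumes nothing about the domain:
# every one of the 2^n states must be visited at least once.
tabular_states = 2 ** n_features          # ~1.3e30 states

# Even visiting a billion states per second is hopeless:
visits_per_second = 1e9
age_of_universe_s = 4.35e17               # ~13.8 billion years
sweep_time = tabular_states / visits_per_second
print(f"full sweep: ~{sweep_time / age_of_universe_s:.0f} universe ages")

# A learner with the innate bias "value is linear in the features"
# only has to estimate one weight per feature:
biased_parameters = n_features
print(f"with a linear bias: {biased_parameters} parameters")

The linear bias is only a stand-in here; the point is that any such
bias is knowledge about the state space built in beforehand, which is
exactly what I mean by predefined innate knowledge.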
 

 

-----Original Message-----
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 27, 2008 15:03
To: agi@v2.listbox.com
Subject: Re: [agi] How general can be and should be AGI?

If by "truly general" you mean "absolutely general", I agree it is not
possible, but it is not what we are after. Again, I hope you find out
what people are doing under the name "AGI", then make your argument
against it, rather than against the "AGI" in your imagination.

For example, I fully agree that "visiting every state-action pair" is
hopeless, but who in AGI is doing that or suggesting that? Just
because traditional AI is trapped by this methodology doesn't mean
there is no other possibility. Who said AI systems must do state-based
planning?

I'm not trying to convince you that AGI can be achieved --- that is
what people are exploring --- but that you should not assume that
traditional AI has tried all possibilities and that there cannot be
anything new.

Of course every intelligent system (human or computer) has its limit
(nobody denied that), but that limit is fundamentally different from
the limit of the current "AI systems".

Pei

On Sun, Apr 27, 2008 at 8:46 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> Performance is not an unimportant question. I assume that AGI necessarily
>  has costs which grow exponentially with the number of states and actions,
>  so that AGI will always be interesting only for toy domains.
>
>  My assumption is that human intelligence is not truly general intelligence
>  and therefore cannot serve as a proof of existence that AGI is possible.
>  Perhaps we see more intelligence than there really is. Perhaps human
>  intelligence is to some extent overestimated, and an illusion like free
>  will.
>
>  Why? In truly general domains, every experience of an agent can only be
>  used for the single state and action in which the experience was made.
>  Every time your algorithm generalizes from known state-action pairs to
>  unknown state-action pairs, this is in fact a use of knowledge about the
>  underlying state-action space, or it is just guessing and only a matter
>  of luck.
>
>  So truly general AGI algorithms must visit every state-action pair at
>  least once to learn what to do in which state.
>  Even in small real-world domains, the state spaces are so big that it
>  would take longer than the age of the universe to go through all states.
>
>  For this reason, true AGI is impossible and human intelligence must be
>  narrow to a certain degree.
>
>
>
>
>  -----Original Message-----
>  From: Pei Wang [mailto:[EMAIL PROTECTED]
>  Sent: Sunday, April 27, 2008 13:50
>
>  To: agi@v2.listbox.com
>  Subject: Re: [agi] How general can be and should be AGI?
>
>
>
> On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>  >
>  >  What I wanted to say is that any intelligence has to be narrow in a
>  >  sense if it wants to be powerful and useful. There must always be
>  >  strong assumptions about the world deep in any algorithm of useful
>  >  intelligence.
>
>  From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf Page 5:
>  ---
>  3.3. "General-purpose systems are not as good as special-purpose ones"
>
>  Compared to the previous one, a weaker objection to AGI is to insist
>  that even though general-purpose systems can be built, they will not
>  work as well as special-purpose systems, in terms of performance,
>  efficiency, etc.
>
>  We actually agree with this judgment to a certain degree, though we do
>  not take it as a valid argument against the need to develop AGI.
>
>  For any given problem, a solution especially developed for it almost
>  always works better than a general solution that covers multiple types
>  of problem. However, we are not promoting AGI as a technique that will
>  replace all existing domain-specific AI
>  techniques. Instead, AGI is needed in situations where ready-made
>  solutions are not available, due to the dynamic nature of the
>  environment or the insufficiency of knowledge about the problem. In
>  these situations, what we expect from an AGI system are not optimal
>  solutions (which cannot be guaranteed), but flexibility, creativity,
>  and robustness, which are directly related to the generality of the
>  design.
>
>  In this sense, AGI is not proposed as a competing tool to any AI tool
>  developed before, by providing better results, but as a tool that can
>  be used when no other tool can, because the problem is unknown in
>  advance.
>  ---
>
>  Pei
>
