I'm sympathetic to all this.
Hi,
It seems that what you are saying, though, is that a KR must involve
probabilities in some shape or form and the ability of a
representation to jump up a level and represent/manipulate other
representations, not just represent the world.
Yes, and these two aspects must work together so ...
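To make the "jump up a level" point concrete, here is a minimal Python sketch (all names are hypothetical illustrations, not drawn from Novamente, NARS, or any other system in this thread) of a representation whose arguments can themselves be representations:

# Hypothetical sketch: a representation whose arguments may
# themselves be representations, giving the "level jump" above.

class Rep:
    def __init__(self, predicate, *args):
        self.predicate = predicate
        self.args = args

    def __repr__(self):
        return f"{self.predicate}({', '.join(map(str, self.args))})"

# Object level: a fact about the world.
sky_is_blue = Rep("Blue", "sky")

# Meta level: a representation *about* another representation.
belief = Rep("BelievedBy", sky_is_blue, "agent_1")

print(belief)  # BelievedBy(Blue(sky), agent_1)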
This discussion has been skirting very close to what I said in my
AGIRI talk and book...
evolution invested massive computation in getting the KR right. Yes, the KR
is built for (3+1)-D and a lot more -- it's not just a list of facts,
or some database where you enter logical statements that are ...
I believe that to be adequate, the code language must incorporate
something loosely analogous to probabilistic logic (however
implemented) and something analogous to higher-order functions
(however implemented). I.e. it must be sensibly viewable as a
probabilistic logic based functional programming language.
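As a rough illustration of what a "probabilistic logic based functional programming language" might mean in practice, here is a minimal sketch in Python (the names and the independence assumption are purely illustrative, not Novamente's actual design): truth values are probabilities rather than booleans, and the connectives are first-class functions that higher-order combinators can manipulate.

# Illustrative sketch only: probabilistic truth values with
# first-class logical connectives.

def p_and(p, q):
    return p * q          # assumes independence, for illustration

def p_or(p, q):
    return p + q - p * q  # inclusion-exclusion, assuming independence

def p_not(p):
    return 1.0 - p

def fold_evidence(connective, probabilities):
    """Higher-order combinator: fold a connective over truth values."""
    result = probabilities[0]
    for p in probabilities[1:]:
        result = connective(result, p)
    return result

print(fold_evidence(p_and, [0.9, 0.8, 0.95]))  # ~0.684
print(fold_evidence(p_or, [0.2, 0.3]))         # 0.44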
Ben--
I'm not sure what you mean by ``higher order functions''
and ``probabilistic programming language'' -- can you spell out please?
I think it looks like really well written Python code.
Is there some difference with the above?
My AGIRI Proceedings paper discusses this in more detail.
Eric
> I'm not sure what you mean by ``higher order functions''
Functions that take functions as arguments -- I mean the term in the
sense of functional programming languages like Haskell ...
> and ``probabilistic programming language'' -- can you spell out please?
I mean a language (or code library ...
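For readers unfamiliar with the term, a tiny Python example of the "functions that take functions as arguments" idea (purely illustrative, not code from any system under discussion):

# A higher-order function in the Haskell sense: it takes a function
# as an argument and returns a new function as its result.
def twice(f):
    return lambda x: f(f(x))

increment = lambda x: x + 1
print(twice(increment)(5))  # 7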
In my case (http://nars.wang.googlepages.com/), that scenario won't
happen --- it is impossible for the project to fail. ;-)
Seriously, if it happens, most likely it is because the control
process is too complicated to be handled properly by the designer's
mind. Or, it is possible that the ...
Dear Ben,
On 9/25/06, Ben Goertzel [EMAIL PROTECTED] wrote:
1) The design is totally workable but just requires much more hardware
than is currently available. (Our current estimates of hardware
requirements for powerful Novamente AGI are back-of-the-envelope
rather than rigorous.)
Hi,
Just out of curiosity - would you mind sharing your hardware estimates
with the list? I would personally find that fascinating.
Many thanks,
Stefan
Well, here is one way to slice it... there are many, of course...
Currently the bottleneck for Novamente's cognitive processing is the ...
Looking at past and current (likely) failures: trying to solve the wrong
problem in the first place, or not having good enough theory/approaches
to solving the right problems, or poor implementation.
However, even though you specifically restricted your question ...
In my way of seeing things, when projects reach that "non-salvageable"
status, one is likely to find serious theoretical errors, possibly
made in the beginning of the journey. That's a problem we cannot
avoid, because none of us knows precisely what it is that we must
do to achieve general intelligence.
Peter Voss mentioned trying to solve the wrong problem in the first place
as a source of failure in an AGI project. This was actually the first thing
that I thought of, and it brought to my mind a problem that I think of when
considering general intelligence theories -- object permanence. Now, I ...
However, in the current day, I would say that we can list some principles
with which any successful project must comply. Anyone want to start the list?
Sergio Navega.
From: Ben Goertzel [EMAIL PROTECTED]
Sergio,
While this is an interesting pursuit, I find it much more difficult
than the already-hard problem of articulating some ...
Monday, September 25, 2006 3:05 PM
Subject: Re: [agi] Failure scenarios
99% of AI projects failed to become AGI by simply being diverted into
applications. The problem is that it is MUCH easier to write a program to
do X than it is to write a system that can learn to do X without your
having told it how.
On Monday 25 September 2006 16:48, Ben Goertzel wrote:
My own view is that symbol grounding is not a waste of time ... but,
**exclusive reliance** on symbol grounding is a waste of time.
It's certainly not a waste of time in the general sense, especially if you're
going to be building a robot!
Ben, I take it you're using the word hypergraph in the strict mathematical
sense. What do you gain from a hypergraph over an ordinary graph, in terms
of representability, say?

To return to the topic, didn't Minsky say that 'the trick is that there is
no trick'? I doubt there's any single point of ...
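To make the hypergraph question above concrete, a minimal Python sketch (hypothetical data, not Novamente's actual structures): in an ordinary graph every edge joins exactly two nodes, so an n-ary relation like gives(John, Mary, book) must be reified into an extra node plus several binary edges, whereas a hypergraph expresses it as a single hyperedge.

# Illustrative sketch only, not Novamente's actual data structures.

# Ordinary graph: every edge is a pair of nodes.
graph_edges = [("John", "Mary"), ("Mary", "book")]

# Hypergraph: an edge is an arbitrary tuple of nodes, so a ternary
# relation is one edge instead of several binary ones.
hyperedges = [("gives", ("John", "Mary", "book"))]

# Encoding the same relation in an ordinary graph typically forces
# reification: introduce an extra node for the event plus three
# binary edges pointing at its arguments.
reified_edges = [
    ("gives_1", "John"),   # agent
    ("gives_1", "Mary"),   # recipient
    ("gives_1", "book"),   # object
]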
On 9/26/06, Ben Goertzel [EMAIL PROTECTED] wrote:
But, what I would say in response to you is: If you presume a **bad** KR
format, you can't match it with a learning mechanism that reliably fills
one's knowledge repository with knowledge... If you presume a sufficiently
and appropriately flexible KR ...
Ben Goertzel wrote:
Hi,
The real grounding problem is the awkward and annoying fact that if
you presume a KR format, you can't reverse engineer a learning mechanism
that reliably fills that KR with knowledge.
Sure...
To go back to the source, in ...