This week's New Scientist has a fascinating article on a possible 'grand
theory' of the brain that suggests that virtually all brain functions can be
modelled with <insert fashionable technique here>
But seriously, I use Bayes' rule on an industrial scale in robotics
software. There is always a
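For context, a minimal sketch of the kind of Bayes-rule update used in robot state estimation (the prior and likelihood numbers are purely illustrative, not from any particular system):

```python
# Minimal Bayes-rule update over discrete states, as used in
# robot state estimation. All numbers are illustrative assumptions.

def bayes_update(prior, likelihood):
    """Posterior over states given a sensor likelihood P(obs | state)."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnormalized)  # normalizer: P(observation)
    return [u / z for u in unnormalized]

# Two states: robot is at the door (0) or not (1).
prior = [0.5, 0.5]
likelihood = [0.6, 0.3]  # P(sensor reading | state)
posterior = bayes_update(prior, likelihood)
# posterior[0] = 0.3 / 0.45 ≈ 0.667
```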
I think it's fine that you use the term atom in your own way. The
important thing is, whatever the objects that you attach probabilities
to, that class of objects should correspond to *propositions* in FOL.
From there it would be easier for me to understand your ideas.
Well, no, we attach
On 6/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
eats(x, mouse)
That's a perfectly legitimate proposition. So it is perfectly OK to write:
P( eats(x,mouse) )
Note here that I assume your 'mouse' refers to a particular instance
of a mouse, as in:
eats(X, mouse_1234)
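A toy illustration of the distinction being drawn: probabilities attach to *ground* propositions (no free variables), like eats(X, mouse_1234). This is not PLN's internal representation, just the idea under discussion.

```python
# Toy illustration: attaching probabilities to ground FOL propositions.
# NOT PLN's actual data structures -- just the distinction under discussion.

probs = {}

def assert_prob(predicate, args, p):
    """Attach a probability to a ground proposition like eats(cat_7, mouse_1234)."""
    if any(a.startswith("?") for a in args):
        raise ValueError("free variable: only ground propositions get probabilities")
    probs[(predicate, tuple(args))] = p

assert_prob("eats", ("cat_7", "mouse_1234"), 0.9)  # fine: fully ground
# assert_prob("eats", ("?x", "mouse_1234"), 0.9)   # would raise: ?x is free
```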
What's confusing is:
2008/6/2 Ben Goertzel [EMAIL PROTECTED]:
I think the PLN / indefinite probabilities approach is a complete and
coherent solution to the problem. It is complex, true, but these are
not simple issues...
I was wondering whether indefinite probabilities could be used to
represent a particle
Well, it's still difficult for me to get a handle on how your logic
works, I hope you will provide some info in your docs, re the
correspondence between FOL and PLN.
I think it's fine that you use the term atom in your own way. The
important thing is, whatever the objects that you attach
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
YKY, how are you going to solve the natural language interface problem? You
seem to be going down the same path as CYC. What is different about your
system?
One more point:
Yes, my system is similar to Cyc in that it's logic-based. But of
That's getting reasonably close, assuming you don't require the model to have
any specific degree of fidelity -- there's a difference between being
conscious of something and understanding it.
The key is that we judge the consciousness of an entity based on the ability
of its processes and
Ben,
I shouldn't say that FOL is the standard for KR, only that it's
more popular. I think researchers ought to be free to explore
whatever they want.
Can we simply treat PLN as a black box, so you don't have to explain
its internals, and just tell us what the input and output formats are?
I would imagine so, but I haven't thought about the details
I am traveling now but will think about this when I get home and can
refresh my memory by rereading the appropriate sections of
Probabilistic Robotics ...
ben
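For readers without the book to hand, the particle representation in question maintains a belief as a set of weighted samples. A minimal one-dimensional predict-and-weight step, with purely illustrative motion noise and sensor model (not any specific system's):

```python
import math
import random

# Minimal 1-D particle filter step (predict + weight), in the spirit of
# the robotics literature. The motion noise and Gaussian sensor model
# below are illustrative assumptions.

def pf_step(particles, control, measurement, sensor_sigma=1.0):
    # Predict: propagate each particle through the motion model, with noise.
    moved = [x + control + random.gauss(0.0, 0.1) for x in particles]
    # Update: weight each particle by the measurement likelihood.
    weights = [math.exp(-((x - measurement) ** 2) / (2 * sensor_sigma ** 2))
               for x in moved]
    z = sum(weights)
    return moved, [w / z for w in weights]

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
moved, weights = pf_step(particles, control=1.0, measurement=5.0)
```

The open question upthread is whether each particle's weight could itself carry an indefinite (interval-valued) probability rather than a point value.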
On 6/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
2008/6/2 Ben Goertzel [EMAIL
More likely, though, is that your algorithm is incomplete wrt FOL, i.e.,
there may be some things that FOL can infer but PLN can't. Either
that, or your algorithm may be actually slower than FOL.
FOL is not an algorithm, it's a representational formalism...
As compared to standard logical
J. Andrew Rogers wrote:
On Jun 1, 2008, at 5:02 PM, Richard Loosemore wrote:
But this statement is such a blatant contradiction of all the
known facts about neurons that I am surprised anyone would try
to defend it. Real neurons are complicated, and their actual
functional role
Ben said:
I think the PLN / indefinite probabilities approach is a complete and
coherent solution to the problem. It is complex, true, but these are
not simple issues...
-
I just started reading Ben's paper Indefinite Probabilities for General
Intelligence and while I
On Mon, Jun 2, 2008 at 6:27 AM, Mark Waser [EMAIL PROTECTED] wrote:
But, why SHOULD there be a *simple* model that produces the same
capabilities?
What if the brain truly is a conglomeration of many complex interacting
pieces?
Because unless I know otherwise, I use simplicity-preferring
Vladimir Nesov wrote:
On Mon, Jun 2, 2008 at 6:27 AM, Mark Waser [EMAIL PROTECTED] wrote:
But, why SHOULD there be a *simple* model that produces the same
capabilities?
What if the brain truly is a conglomeration of many complex interacting
pieces?
Because unless I know otherwise, I use
On 06/01/2008 09:29 PM, John G. Rose wrote:
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:
A rock is conscious.
Okay, I'll bite. How are rocks conscious under Josh's definition or any
other non-LSD-tripping-or-batshit-crazy definition?
The
Because unless I know otherwise, I use simplicity-preferring prior.
Yes, absolutely. Occam's razor, etc.
However, when you have a local effect that is mediated at a global level,
then you *NEED* to have that as part of your computational model -- and
neural networks simply don't take such
On Mon, Jun 2, 2008 at 8:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
This misses the point I think.
It all has to do with the mistake of *imposing* simplicity on something by
making a black-box model of it.
For example, the Ptolemy model of planetary motion imposed a 'simple' model
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 09:29 PM, John G. Rose wrote:
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:
A rock is conscious.
Okay, I'll bite. How are rocks conscious under Josh's definition or any other
On Mon, Jun 2, 2008 at 10:23 PM, Mark Waser [EMAIL PROTECTED] wrote:
At any rate, as Richard points out, y'all are so far from reality that
arguing with you is not a wise use of time. Do what you want to do. The
proof will be in how far you get.
I don't know what you mean. This particular
To believe that you need
something more complex, you need evidence.
Yes, and the evidence that you need something more complex is overwhelming
in this case (if you have anywhere near adequate knowledge of the field).
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To:
I don't know what you mean. This particular conversation is not placed
in a specific enough context to talk about doing something or
obtaining experimental results.
This particular conversation was discussing neurons. The neural network
model of neurons is *entirely* local. There are
One good way to think of the complexity of a single neuron is to think of it
as taking about 1 MIPS to do its work at that level of organization. (It has
to take an average of 10k inputs and process them at roughly 100 Hz.)
This is essentially the entire processing power of the DEC KA10, i.e.
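The arithmetic behind the 1 MIPS figure, using exactly the two numbers Josh gives:

```python
# Back-of-envelope check of the "1 MIPS per neuron" figure above.
# Both numbers come straight from the message: ~10k inputs, ~100 Hz.
inputs_per_neuron = 10_000
update_rate_hz = 100
ops_per_second = inputs_per_neuron * update_rate_hz
print(ops_per_second)  # 1000000, i.e. ~1 MIPS
```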
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
A rock is not conscious. I'll stake my scientific reputation on it.
(this excludes silicon rocks with micropatterned circuits :-)
Speaking of neurons and simplicity, I think it's interesting that some of the
"how much CPU power is needed to replicate brain function" arguments use the basic
ANN model, assuming a MULADD per synapse, updating at say 100 times per second
(giving a total computing power of about 10^16 OPS). But
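The ~10^16 OPS figure follows from the commonly quoted (and rough) synapse count of about 10^14, with one MULADD each per 100 Hz update; the synapse count is an assumption not stated in the message:

```python
# Back-of-envelope: total compute under the basic ANN model quoted above.
# The ~1e14 synapse count is the commonly cited rough figure (an assumption
# supplied here, not stated in the original message).
synapses = 1e14
update_rate_hz = 100   # one MULADD per synapse per update
total_ops = synapses * update_rate_hz
print(total_ops)  # 1e+16
```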
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
A rock is not conscious. I'll stake my scientific reputation on it.
(this
Good luck with your blank slate AI.
Maybe you should read some Steven Pinker about blank slate humans.
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 02, 2008 3:55 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Mon, Jun 2, 2008 at 11:06 PM, Mark Waser [EMAIL PROTECTED] wrote:
To believe that you need
something more complex, you need evidence.
Yes, and the evidence that you need something more complex is overwhelming
in this case (if you have anywhere near adequate knowledge of the field).
You
On Tue, Jun 3, 2008 at 12:04 AM, Mark Waser [EMAIL PROTECTED] wrote:
Good luck with your blank slate AI.
Remember, evolution itself started from a blank slate...
--
Vladimir Nesov
[EMAIL PROTECTED]
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
Yep. All 3.5 billion years with uncountable numbers of examples. Like I
said, Good luck!
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 02, 2008 4:18 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Tue,
On Tue, Jun 3, 2008 at 12:26 AM, Mark Waser [EMAIL PROTECTED] wrote:
Yep. All 3.5 billion years with uncountable numbers of examples. Like I
said, Good luck!
Evolution is incredibly slow and short-sighted, compared to intelligence.
--
Vladimir Nesov
[EMAIL PROTECTED]
-Original Message-
From: j.k. [mailto:[EMAIL PROTECTED]
Sent: Monday, June 02, 2008 2:11 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Did this message get completely lost?
On 06/01/2008 09:29 PM, John G. Rose wrote:
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John
Richard Loosemore wrote:
Anyone at the time who knew what Isaac Newton was trying to do could
have dismissed his efforts and said "Idiot! Planetary motion is simple.
Ptolemy explained it in a simple way. I use a simplicity-preferring
prior, so epicycles are good enough for me."
Which is why
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Just a quick thought, not fully formulated. My model is in fact helpful
here.
Consciousness is an iworld-movie - a self watching and directing a movie
of
the world. How do you know if an agent is conscious - if it directs its
movie - if it tracks
John wrote:
A rock is either conscious or not conscious.
Excluding the middle, are we?
I don't want to put words into Ben & company's mouths, but I think what
they are trying to do with PLN is to implement a system that expressly
*includes the middle*. In theory (but not necessarily
I suddenly realised that here are AGI-ers having all this very philosophical
and ethereal conversation about consciousness, when actually consciousness -
and my iworld-movie model of it, or you could call it a POV-movie model - is
instantiated in a very practical way in video games.
In case
Hey kids:
A COMPUTER THAT CAN 'READ' YOUR MIND
http://www.physorg.com/news131623779.html
Cheers,
Brad
This is what we've just been discussing and Richard was criticising as
highly fallible. Your article adds pictures of the predictions, which is
helpful.
But all this presumably raises the question of just how much can be told
from fMRI images generally. Does anyone have views about this - or
I had said:
I believe that these mysteries of conceptual complexity (or ideological
interactions) can be discovered through discussion and experiment so long
as that effort is not thwarted by the expression of immature negative
emotions and abusive anti-intellectual rants. While some of
--- On Mon, 6/2/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
YKY, how are you going to solve the natural language
interface problem? You seem to be going down the same path
as CYC. What is different about your system?
One more point:
Yes, my system is similar to Cyc in that it's
Josh,
On 6/2/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
One good way to think of the complexity of a single neuron is to think of it
as taking about 1 MIPS to do its work at that level of organization. (It has
to take an average of 10k inputs and process them at roughly 100 Hz.)
While
YKY,
Can you give an example of something expressed in PLN that is very
hard or impossible to express in FOL?
FYI, I recently ran into some issues with my [under-development]
formal language (which is being designed for my AGI-user
communication) when trying to express statements like:
John
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/02/2008 12:00 PM,, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
John
A rock is either conscious or not conscious (if consciousness is a boolean
I suppose the optimal approach to AGI has to involve some degree of
connectionism, but the goal would be to find structures isomorphic to
connectionist graphs that are more efficient. Many things in nature cannot
be evolved; for example, few if any animals have wheels. Evolved structures go so far until