On 03/03/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Don't you see that the way to go with neural nets is a hybrid with genetic algorithms
at massive scale?
I experimented with this combination in the early 1990s, and the
results were not very impressive. Such systems still suffered from
On Mon, Mar 3, 2008 at 6:33 AM, [EMAIL PROTECTED] wrote:
Thanks for that.
Don't you see that the way to go with neural nets is a hybrid with genetic
algorithms at massive scale?
No, I don't agree with your buzzword-laden statement :) I experimented with EA +
NNs, and it's still intractable when scaled up to
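For the archives, a minimal sketch of the EA + NN combination under discussion: a toy genetic algorithm evolving the weights of a fixed-topology feedforward net on XOR. The topology, fitness function, and all parameters here are illustrative assumptions, not anyone's actual system.

```python
import math
import random

def mlp_forward(w, x):
    # Tiny fixed-topology net: 2 inputs -> 2 hidden units -> 1 output, tanh throughout.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error on XOR; higher is better, 0 is perfect.
    return -sum((mlp_forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.3, seed=0):
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best 20%
        pop = elite + [
            [w + rnd.gauss(0, sigma) for w in rnd.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)
```

Even on this trivial task, note how much evaluation the GA burns per bit of learned structure; that cost is part of what makes the combination hard to scale.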
On Mon, Mar 3, 2008 at 4:29 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
There's a nice flash demonstration about digit generation/classification
http://www.cs.toronto.edu/~hinton/adi/index.htm
Did anyone on this list do experiments with this kind of generative model?
I can't find much
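For anyone wanting to try these generative models, here is a minimal sketch of the building block behind Hinton's demo: a binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). The toy 6-pixel "digits", sizes, and learning rate are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    # Minimal binary RBM trained with one-step contrastive divergence (CD-1).
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0):
        # Positive phase on the data, negative phase after one Gibbs step.
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b_v += self.lr * (v0 - pv1)
        self.b_h += self.lr * (ph0 - ph1)

# Toy data: two 6-pixel "digit" patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], float)
rbm = RBM(6, 4)
for _ in range(500):
    for v in data:
        rbm.cd1(v)
```

After training, reconstructing a pattern through the hidden layer should favor the pixels that were on; stacking such layers gives the deep belief nets shown in the demo.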
Kaj Sotala wrote:
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a newborn AGI hasn't yet built up
the concepts needed to represent the goal. Yet humans seem
Care to state the exact problem you were having?
My thought is that scalability has entirely to do with speed availability
- Original Message -
From: Bob Mottram [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: Re: [agi] interesting Google Tech Talk about Neural Nets
Date: Mon, 3 Mar 2008
That's a great idea, Vlad; there are other forms of statistical sampling
available.
The closer we get to running accelerated evolution toward human intelligence, the
better, I believe.
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: Re: [agi]
[EMAIL PROTECTED] wrote:
Care to state the exact problem you were having?
My thought is that scalability has entirely to do with speed availability
The problems with bolting together NN and GA are so numerous it is hard
to know where to begin. For one thing, you cannot represent structured
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:
The problems with bolting together NN and GA are so numerous it is hard
to know where to begin. For one thing, you cannot represent structured
information with NNs unless you go to some trouble to add extra
I'm increasingly convinced that the human brain is not a statistical
learner, but a logical learner. There are many examples of humans
learning concepts/rules from one or two examples, rather than thousands of
examples. So I think that at a high level, AGI should be logic-based.
But it would be
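The "one or two examples" claim can be made concrete with a toy sketch of logical one-shot learning, roughly in the spirit of Winston's near-miss arch learner: start from a single positive example and use one near-miss negative to isolate the discriminating condition. The domain and attribute names are made up for illustration.

```python
# Toy sketch of "logical" one-shot learning from one positive example
# plus one near-miss negative example (illustrative attributes only).

def learn_from_one(example):
    # The initial hypothesis is just the example's attribute set.
    return dict(example)

def contrast_with_near_miss(rule, negative):
    # Keep only the attributes on which the near-miss differs:
    # those are the ones that must explain the different label.
    return {k: v for k, v in rule.items() if negative.get(k) != v}

positive = {"shape": "square", "color": "red", "size": "big"}
negative = {"shape": "square", "color": "blue", "size": "big"}

rule = contrast_with_near_miss(learn_from_one(positive), negative)
print(rule)  # → {'color': 'red'}
```

Two examples suffice here because the hypothesis space is tiny and logical; a statistical learner would need many samples to reach the same conclusion with confidence.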
Kingma, D.P. wrote:
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
The problems with bolting together NN and GA are so numerous it is hard
to know where to begin. For one thing, you cannot represent structured
information with
I stumbled upon this project recently. It addresses the connectivity in a
neural network. Pretty interesting stuff. Could be it's a known thing, but I
just wanted to share this.
http://oege.ie.hva.nl/~bergd/
I'm sorta new to this AGI development, but as far as I understand, couldn't
this speed up
How intelligent would any human be if they couldn't be taught by other humans?
Could a human ever learn to speak by itself? The few times this has
happened in real life, the person was permanently disabled and not capable
of becoming a normal human being.
If humans can't become human without the
On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote:
I think Ben's text mining approach has one big flaw: it can
only reason about existing knowledge, but cannot generate new ideas using
words / concepts
There is a substantial amount of literature that claims that *humans*
can't generate new
Too easy ;)
One of the points in patch-space corresponds to X=center, Y=center,
Scale=huge, so this patch is a rescaled version (say 20x20) of the whole
image (say 1000x1000). In this 20x20 patch, the letter 'A' emerges naturally
and can be reconstructed by the NN, and therefore be recognized. It
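The whole-image patch described above is just an aggressive rescale; a minimal sketch of that step, assuming the scale factor divides the image side (average pooling a 1000x1000 image down to 20x20):

```python
import numpy as np

def rescale_patch(image, out_size=20):
    # Average-pool a square image down to out_size x out_size,
    # assuming the side length is divisible by out_size.
    side = image.shape[0]
    f = side // out_size
    return image[: f * out_size, : f * out_size].reshape(
        out_size, f, out_size, f
    ).mean(axis=(1, 3))

img = np.zeros((1000, 1000))
img[200:800, 450:550] = 1.0   # a crude vertical stroke
small = rescale_patch(img)
print(small.shape)  # → (20, 20)
```

At this scale the stroke survives as a coarse 20x20 shape the NN can reconstruct, which is why the letter "emerges" at the huge-scale point of patch-space.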
On Mon, Mar 3, 2008 at 9:50 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
I'm increasingly convinced that the human brain is not a statistical
learner, but a logical learner. There are many examples of humans learning
concepts/rules from one or two examples, rather than thousands of
Yes, an AGI will have to be able to do narrow AI.
What you are doing here - and everyone is doing over and over and over - is
saying: Yes, I know there's a hard part to AGI, but can I please
concentrate on the easy parts - the narrow AI parts - first?
If I give you a problem, I don't want
YKY: the way our language builds up new ideas seems to be very complex, and it
makes natural language a bad knowledge representation for AGI.
An even more complex example:
spread the jam with a knife
draw a circle with a knife
cut the cake with a knife
rape the girl with a knife
stop the
On Mon, Mar 3, 2008 at 11:30 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
Can you explain a bit more? Your terms are too vague. I think statistical
learning and logical learning are fundamentally quite different. I'd be
interested in some hybrid approach, if it exists.
Bayesian logic
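To make the hybrid idea concrete, here is a minimal sketch of one ingredient: attach a probability to a logical rule and update it with Bayes' theorem as evidence arrives. The rule, prior, and likelihoods are invented for illustration.

```python
# A logical rule with a degree of belief, updated statistically.
def bayes_update(prior, likelihood, false_alarm):
    # P(rule | evidence) from P(evidence | rule) and P(evidence | not rule).
    num = likelihood * prior
    return num / (num + false_alarm * (1 - prior))

p = 0.5                      # prior belief in the rule "birds fly"
for _ in range(3):           # three observations consistent with the rule
    p = bayes_update(p, 0.9, 0.2)
print(round(p, 3))  # → 0.989
```

The rule itself stays symbolic; only its confidence is statistical, which is the basic move behind Bayesian-logic hybrids.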
On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
Good example, but how about: language is open-ended, period, and capable of
infinite rather than myriad interpretations - and that open-endedness is
the whole point of it?
Simple example, much like yours: handle. You can attach words for
objects
Sure, AGI needs to handle NL in an open-ended way. But the question is
whether the internal knowledge representation of the AGI needs to allow
ambiguities, or should we use an ambiguity-free representation. It seems
that the latter choice is better. Otherwise, the knowledge stored in
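A minimal sketch of what an ambiguity-free internal representation might look like, using hypothetical predicate-style frames (the role names are illustrative, not from any specific AGI system): each natural-language reading maps to exactly one structure.

```python
# Hypothetical frames: the ambiguous surface word 'with' is resolved
# to exactly one semantic role per reading before storage.
cut_cake = {
    "predicate": "cut",
    "agent": "person_1",
    "patient": "cake_1",
    "instrument": "knife_1",   # 'with a knife' -> instrument role
}
draw_circle = {
    "predicate": "draw",
    "agent": "person_1",
    "theme": "circle_1",
    "instrument": "knife_1",   # same surface phrase, same resolved role
}
# NL stays open-ended at the interface; the stored knowledge does not.
print(cut_cake["predicate"], draw_circle["predicate"])  # → cut draw
```

The open-endedness then lives entirely in the NL-to-frame mapping, while stored knowledge stays unambiguous.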
Kingma, D.P. wrote:
Too easy ;)
One of the points in patch-space corresponds to X=center, Y=center,
Scale=huge, so this patch is a rescaled version (say 20x20) of the whole
image (say 1000x1000). In this 20x20 patch, the letter 'A' emerges
naturally and can be reconstructed by the NN, and
Will: Is generalising a skill logically the first thing that you need to
make an AGI? Nope, the means and a sufficient architecture to acquire
skills and competencies are more useful early on in AGI
development
Ah, you see, that's where I absolutely disagree, and a good part of why I'm