Hi Mike (Tintner),
You've often made bold claims about what all AGIers do or don't do. This
is despite the fact that you haven't met me in person and I haven't revealed
many of my own long term plans on this list (and I'm sure I'm not the only
one): you're making bold claims about *all* of us.
On Thu, Jan 8, 2009 at 10:41 AM, Ed Porter ewpor...@msn.com wrote:
Ed Porter
This is certainly not true of a Novamente-type system, at least as I
conceive of it being built on the type of massively parallel, highly
interconnected hardware that will be available to AI within 3-7 years.
Mike Tintner wrote:
Richard,
You missed Mike Tintner's explanation . . . .
Mark,
Right
So you think maybe what we've got here is a radical influx of globally
entangled free-association bosons?
Richard,
Q.E.D. Well done.
Now tell me how you connected my ridiculous [or
On Sat, Jan 10, 2009 at 3:47 PM, Jim Bromer jimbro...@gmail.com wrote:
For instance, when it is discovered that probabilistic reasoning isn't
quite good enough for advanced NLP, many hopefuls will rediscover the
creative 'solution' of using orthogonal multidimensional 'measures' of
semantic
Ronald: I didn't have to choose 'Display images' to see your attached
picture again. What are you doing? It's fun, but scary.
On 1/9/09, Ronald C. Blue ronb...@u2ai.us wrote:
But how can it dequark the tachyon antimatter containment field?
Richard Loosemore
A model that can answer all
chain-of-free-association starting say with MAHONEY and going on for
another 10 or so items - and trying to figure out how
- Original Message -
From: Richard Loosemore r...@lightlink.com
To: agi@v2.listbox.com
Sent: Thursday, January 08, 2009 8:05 PM
Subject: Re: [agi] The Smushaby
Ronald C. Blue wrote:
[snip] [snip] ... chaos stimulation because ... correlational
wavelet opponent processing machine ... globally entangled ...
Paul rf trap ... parallel modulating string pulses ... a relative
zero energy value or opponent process ... phase locked ...
parallel opponent
Traversing related words in M gives you something similar to
your free association chain like rain-wet-water-...
-- Matt Mahoney, matmaho...@yahoo.com
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:
From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: [agi] The Smushaby
From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: [agi] The Smushaby of Flatway.
To: agi@v2.listbox.com
Date: Friday, January 9, 2009, 10:08 AM
--- On Thu, 1/8/09, Vladimir Nesov robot...@gmail.com wrote:
I claim that K(P) > K(Q) because any description of P must include
a description of Q plus a description of what P does for at least one other
input.
Even if you somehow must represent P as concatenation of Q and
something
On Fri, Jan 9, 2009 at 6:34 PM, Matt Mahoney matmaho...@yahoo.com wrote:
Well, it is true that you can find |P| < |Q| for some cases of P nontrivially
simulating Q depending on the choice of language. However, it is not true on
average. It is also not possible for P to nontrivially simulate
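Kolmogorov complexity itself is uncomputable, but the intuition behind the K(P) > K(Q) claim can be sketched with a crude proxy. A minimal sketch, assuming compressed source length stands in for description length (the `desc_len` helper and the `q_src`/`p_src` programs are my own illustration, not anything from the thread):

```python
import zlib

# Crude stand-in for Kolmogorov complexity: compressed size of the source
# text. (K is uncomputable; this only illustrates the intuition.)
def desc_len(src):
    return len(zlib.compress(src.encode()))

# Q: the simulated program.
q_src = "def q(x):\n    return x * x\n"

# P: contains a description of Q plus extra behavior for one other input,
# so its description is strictly longer.
p_src = q_src + (
    "def p(x):\n"
    "    if x == 'brag':\n"
    "        return 'I also brag'\n"
    "    return q(x)\n"
)

print(desc_len(q_src), desc_len(p_src))
assert desc_len(p_src) > desc_len(q_src)
```

This also shows why the inequality can fail for a well-chosen language or a trivial simulation, which is exactly the caveat being argued over above.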
In Outlook Express, change the format to HTML and insert the picture. Generally
this is safer than an attachment.
-Original Message-
From: Eric Burton brila...@gmail.com
To: agi@v2.listbox.com
Sent: 1/9/09 8:03 AM
Subject: Re: [agi] The Smushaby of Flatway.
Ronald: I didn't have to choose 'Display
Mike,
What is the evidence, if any, that it would be difficult for a sophisticated
Novamente-like AGI to switch domains?
In fact, much valuable AGI thinking would involve patterns and mental
behaviors that extend across different domains. Human natural language
understanding is believed
--- On Wed, 1/7/09, Ben Goertzel b...@goertzel.org wrote:
if proving Fermat's Last theorem was just a matter of doing math, it would
have been done 150 years ago ;-p
obviously, all hard problems that can be solved have already been solved...
???
In theory, FLT could be solved by brute force
Matt: Logic has not solved AGI because logic is a poor model of the way
people think.
Neural networks have not solved AGI because you would need about 10^15
bits of memory and 10^16 OPS to simulate a human brain sized network.
Genetic algorithms have not solved AGI because the
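Matt's 10^15-bit / 10^16-OPS figures can be reproduced from standard order-of-magnitude assumptions. A back-of-envelope sketch (the neuron, synapse, and firing-rate numbers below are the usual textbook rough figures, not values stated in the thread):

```python
# Back-of-envelope check of the brain-sized-network estimate quoted above.
# Assumed rough figures: ~10^11 neurons, ~10^4 synapses each, ~10 Hz activity.
neurons = 10**11
synapses_per_neuron = 10**4
synapses = neurons * synapses_per_neuron   # ~10^15 connections, ~1 bit each
firing_rate_hz = 10
ops_per_sec = synapses * firing_rate_hz    # ~10^16 synaptic ops per second

print(f"memory ~ 10^{len(str(synapses)) - 1} bits, "
      f"speed ~ 10^{len(str(ops_per_sec)) - 1} OPS")
```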
PS I should have said the fundamental deficiencies of the PURELY
logicomathematical form of thinking. It's not deficient in itself - only if
you think like so many AGIers that it's the only form of thinking, or able
to accommodate the entirety of human thinking.
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:
What then do you see as the way people *do* think? You
surprise me, Matt, because both the details of your answer
here and your thinking generally strike me as *very*
logicomathematical - with lots of emphasis on numbers and
Matt,
Thanks. But how do you see these:
Pattern recognition in parallel, and hierarchical learning of increasingly
complex patterns by classical conditioning (association), clustering in
context space (feature creation), and reinforcement learning to meet evolved
goals.
as fundamentally
In response to Jim Bromer's post of Wed 1/7/2009 8:24 PM
==Jim Bromer==
All of the major AI paradigms, including those that are capable of learning,
are flat according to my definition. What makes them flat is that the
method of decision making is minimally-structured and they
From: Jim Bromer [mailto:jimbro...@gmail.com]
Sent: Wednesday, January 07, 2009 8:24 PM
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition. What makes them flat
is that the method of decision making is minimally-structured
On Jan 8, 2009, at 10:29 AM, Ronald C. Blue wrote:
...Noise is not noise...
Speaking of noise, was that ghastly HTML formatting really necessary?
It made the email nearly unreadable.
J. Andrew Rogers
Matt: Free association is the basic way of recalling memories. If you
experience A followed by B, then the next time you experience A you will
think of (or predict) B. Pavlov demonstrated this type of learning in
animals in 1927.
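That A-then-B recall can be sketched in a few lines. A toy sketch of conditioning as next-item prediction (the `Associator` class and its method names are my own, not anything proposed in the thread):

```python
from collections import defaultdict

class Associator:
    """Toy classical conditioning: seeing A followed by B makes A recall B."""
    def __init__(self):
        self.successors = defaultdict(list)  # item -> items seen right after it
        self.last = None

    def experience(self, item):
        if self.last is not None:
            self.successors[self.last].append(item)
        self.last = item

    def predict(self, item):
        seen = self.successors.get(item)
        # Recall the most frequently observed successor, if any.
        return max(set(seen), key=seen.count) if seen else None

m = Associator()
for word in ["rain", "wet", "water"]:  # the free-association chain from the thread
    m.experience(word)
print(m.predict("rain"))  # -> wet
```

Traversing `successors` repeatedly gives exactly the rain-wet-water style chain Matt describes.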
Matt,
You're not thinking your argument through. Look carefully
That email had really nice images, but I don't know why gmail viewed
them automatically!
A picture is like an instant 1000 words, and you will remember a picture for
almost 70 years but not 1000 words.
-Original Message-
From: J. Andrew Rogers and...@ceruleansystems.com
To: agi@v2.listbox.com
Sent: 1/8/09 1:59 PM
Subject: Re: [agi] The Smushaby of Flatway.
On Jan 8, 2009, at 10
On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote:
Mike,
Your own thought processes only seem mysterious because you can't predict
what you will think without actually thinking it. It's not just a property of
the human brain, but of all Turing machines. No program can
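Matt's point, that no program can forecast its own output short of actually running itself, is the usual diagonalization argument. A toy sketch under that framing (the `make_contrarian` and `naive_predict` names are mine, purely illustrative):

```python
# Toy diagonalization: any claimed output-predictor can be defeated by a
# program that asks the predictor about itself and then does the opposite.
def make_contrarian(predict):
    def contrarian():
        return not predict(contrarian)
    return contrarian

def naive_predict(prog):
    # A would-be oracle that claims every program outputs True.
    return True

d = make_contrarian(naive_predict)
print(d())  # the predictor said True, so d outputs False
```

Whatever the predictor answers, the contrarian program does the opposite, so no predictor is right about every program without running it.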
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney matmaho...@yahoo.com wrote:
Your earlier counterexample was a trivial simulation. It simulated itself but
did
nothing else. If P did something that Q didn't, then Q would not be
simulating P.
My counterexample also bragged, outside the input
If it was just a matter of writing the code, then it would have been done
50 years ago.