Hi Mike (Tintner),
You've often made bold claims about what "all AGIers" do or don't do, despite
the fact that you haven't met me in person and I haven't revealed many of my
own long-term plans on this list (and I'm sure I'm not the only one). You're
making bold claims about *all* of us.
On Sat, Jan 10, 2009 at 3:47 PM, Jim Bromer wrote:
> For instance, when it is discovered that probabilistic reasoning isn't
> quite good enough for advanced nlp, many hopefuls will rediscover the
> creative 'solution' of using orthogonal multidimensional 'measures' of
> semantic distance. Instead
Mike Tintner wrote:
Richard,
You missed Mike Tintner's explanation . . . .
Mark,
Right
So you think maybe what we've got here is a radical influx of globally
entangled free-association bosons?
Richard,
Q.E.D. Well done.
Now tell me how you connected my "ridiculous" [or howe
On Thu, Jan 8, 2009 at 10:41 AM, Ed Porter wrote:
> Ed Porter>
>
> This is certainly not true of a Novamente-type system, at least as I
> conceive of it being built on the type of massively parallel, highly
> interconnected hardware that will be available to AI within 3-7 years. Such
> a
Mike,
What is the evidence, if any, that it would be difficult for a sophisticated
Novamente-like AGI to switch domains?
In fact, much of valuable AGI thinking would involve patterns and mental
behaviors that extended across different domains. Human natural language
understanding is believed to
In Outlook Express, change the format to HTML and insert the picture.
Generally this is safer than an attachment.
-----Original Message-----
From: "Eric Burton"
To: agi@v2.listbox.com
Sent: 1/9/09 8:03 AM
Subject: Re: [agi] The Smushaby of Flatway.
Ronald: I didn't have to choose 'Di
On Fri, Jan 9, 2009 at 6:34 PM, Matt Mahoney wrote:
>
> Well, it is true that you can find |P| < |Q| for some cases of P nontrivially
> simulating Q depending on the choice of language. However, it is not true on
> average. It is also not possible for P to nontrivially simulate itself
> because i
--- On Thu, 1/8/09, Vladimir Nesov wrote:
> > I claim that K(P) > K(Q) because any description of P must include
> > a description of Q plus a description of what P does for at least one other
> > input.
> >
>
> Even if you somehow must represent P as concatenation of Q and
> something else (yo
Subject: Re: [agi] The Smushaby of Flatway.
To: agi@v2.listbox.com
Date: Friday, January 9, 2009, 10:08 AM
I
wet,water] have high values because the words often appear in the same
> paragraph. Traversing related words in M gives you something similar to
> your free association chain like rain-wet-water-...
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --- On Thu, 1/8/09, Mi
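Matt's matrix of co-occurrence "measures" and the traversal that yields a
rain-wet-water chain can be sketched in a few lines (a toy sketch: the corpus,
the greedy traversal rule, and all names are invented for illustration, not
taken from any real system):

```python
from collections import defaultdict
from itertools import combinations

# Toy "paragraphs"; M[a][b] counts how often words a and b share a paragraph.
paragraphs = ["rain wet", "rain wet water", "wet water", "water river"]

M = defaultdict(lambda: defaultdict(int))
for p in paragraphs:
    for a, b in combinations(sorted(set(p.split())), 2):
        M[a][b] += 1
        M[b][a] += 1

def association_chain(start, steps):
    """Greedily follow the strongest co-occurrence link, never revisiting a word."""
    chain, seen = [start], {start}
    word = start
    for _ in range(steps):
        candidates = {w: n for w, n in M[word].items() if w not in seen}
        if not candidates:
            break
        word = max(candidates, key=candidates.get)
        chain.append(word)
        seen.add(word)
    return chain

print(association_chain("rain", 3))  # ['rain', 'wet', 'water', 'river']
```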
Ronald C. Blue wrote:
[snip] [snip] ... chaos stimulation because ... correlational
wavelet opponent processing machine ... globally entangled ...
Paul rf trap ... parallel modulating string pulses ... a relative
zero energy value or opponent process ... phase locked ...
parallel opponent pr
secs. of time - producing your own
chain-of-free-association starting say with "MAHONEY" and going on for
another 10 or so items - and trying to figure out how
----- Original Message -----
From: "Richard Loosemore"
To:
Sent: Thursday, January 08, 2009 8:05 PM
Subjec
Ronald: I didn't have to choose 'Display images' to see your attached
picture again. What are you doing? It's fun, but scary.
On 1/9/09, Ronald C. Blue wrote:
>> But how can it dequark the tachyon antimatter containment field?
>> Richard Loosemore
>>
> A model that can answer all ques
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney wrote:
>
> Your earlier counterexample was a trivial simulation. It simulated itself but
> did
> nothing else. If P did something that Q didn't, then Q would not be
> simulating P.
My counterexample also bragged, outside the input format that
request
--- On Thu, 1/8/09, Vladimir Nesov wrote:
> On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney
> wrote:
> > Mike,
> >
> > Your own thought processes only seem mysterious
> because you can't predict what you will think without
> actually thinking it. It's not just a property of the
> human brain, but
On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney wrote:
> Mike,
>
> Your own thought processes only seem mysterious because you can't predict
> what you will think without actually thinking it. It's not just a property of
> the human brain, but of all Turing machines. No program can non-trivially
A picture is like an instant 1000 words, and you will remember a picture for
almost 70 years, but not 1000 words.
-----Original Message-----
From: "J. Andrew Rogers"
To: agi@v2.listbox.com
Sent: 1/8/09 1:59 PM
Subject: Re: [agi] The Smushaby of Flatway.
On Jan 8, 2009, at 10:29 AM, Rona
ain like rain-wet-water-...
-- Matt Mahoney, matmaho...@yahoo.com
--- On Thu, 1/8/09, Mike Tintner wrote:
> From: Mike Tintner
> Subject: Re: [agi] The Smushaby of Flatway.
> To: agi@v2.listbox.com
> Date: Thursday, January 8, 2009, 3:54 PM
> Matt: Free association is the bas
That email had really nice images, but I don't know why Gmail displayed
them automatically!
On 1/8/09, Mike Tintner wrote:
> Matt: Free association is the basic way of recalling memories. If you
> experience A followed by B, then the next time you experience A you will
> think of (or predict) B. Pavl
Matt: Free association is the basic way of recalling memories. If you
experience A followed by B, then the next time you experience A you will
think of (or predict) B. Pavlov demonstrated this type of learning in
animals in 1927.
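The A-then-B conditioning Matt describes reduces to a one-step lookup table
(a deliberately minimal sketch; real Pavlovian conditioning involves timing,
reinforcement, and extinction, all of which this ignores):

```python
# Minimal associative memory: experiencing A then B stores A -> B,
# so a later experience of A predicts B.
class AssociativeMemory:
    def __init__(self):
        self.links = {}
        self.last = None

    def experience(self, item):
        # Record "predecessor -> item" before moving on.
        if self.last is not None:
            self.links[self.last] = item
        self.last = item

    def predict(self, item):
        return self.links.get(item)

m = AssociativeMemory()
for event in ["bell", "food", "bell"]:
    m.experience(event)

print(m.predict("bell"))  # -> 'food': the bell now predicts food
```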
Matt,
You're not thinking your argument through. Look carefully
--- On Thu, 1/8/09, Mike Tintner wrote:
> Matt,
>
> Thanks. But how do you see these:
>
> "Pattern recognition in parallel, and hierarchical
> learning of increasingly complex patterns by classical
> conditioning (association), clustering in context space
> (feature creation), and reinforcement
On Jan 8, 2009, at 10:29 AM, Ronald C. Blue wrote:
...Noise is not noise...
Speaking of noise, was that ghastly HTML formatting really necessary?
It made the email nearly unreadable.
J. Andrew Rogers
-----Original Message-----
From: Jim Bromer [mailto:jimbro...@gmail.com]
Sent: Wednesday, January 07, 2009 8:24 PM
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition. What makes them flat
is that the method of decision making is minimally-structured a
-----Original Message-----
From: Jim Bromer [mailto:jimbro...@gmail.com]
Sent: Wednesday, January 07, 2009 8:24 PM
To: agi@v2.listbox.com
Subject: [agi] The Smushaby of Flatway.
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition. What ma
Matt,
Thanks. But how do you see these:
"Pattern recognition in parallel, and hierarchical learning of increasingly
complex patterns by classical conditioning (association), clustering in
context space (feature creation), and reinforcement learning to meet evolved
goals."
as fundamentally d
--- On Thu, 1/8/09, Mike Tintner wrote:
> What then do you see as the way people *do* think? You
> surprise me, Matt, because both the details of your answer
> here and your thinking generally strike me as *very*
> logicomathematical - with lots of emphasis on numbers and
> compression - yet you
PS I should have said "the fundamental deficiencies of the PURELY
logicomathematical form of thinking". It's not deficient in itself - only if
you think, like so many AGIers, that it's the only form of thinking, or that
it alone can accommodate the entirety of human thinking.
-
Matt:> Logic has not solved AGI because logic is a poor model of the way
people think.
Neural networks have not solved AGI because you would need about 10^15
bits of memory and 10^16 OPS to simulate a human brain sized network.
Genetic algorithms have not solved AGI because the computationa
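For scale, Matt's figures (taken at face value here; they are his estimates,
not measurements) work out to:

```python
bits = 10**15  # Matt's memory estimate for a human-brain-sized network
ops = 10**16   # his estimate of required operations per second

print(bits / 8 / 1e12, "TB of memory")           # 125.0 TB
print(ops / 1e12, "tera-operations per second")  # 10000.0
```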
--- On Wed, 1/7/09, Ben Goertzel wrote:
>if proving Fermat's Last theorem was just a matter of doing math, it would
>have been done 150 years ago ;-p
>
>obviously, all hard problems that can be solved have already been solved...
>
>???
In theory, FLT could be solved by brute force enumeration of
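The brute-force enumeration alluded to here can only ever find a
counterexample, never prove the theorem, since the search space is infinite.
A bounded sketch (bound and exponent chosen arbitrarily for illustration):

```python
def flt_counterexample(limit, n=3):
    """Search for a**n + b**n == c**n with 1 <= a <= b < c <= limit."""
    for c in range(2, limit + 1):
        for b in range(1, c):
            for a in range(1, b + 1):
                if a**n + b**n == c**n:
                    return (a, b, c)
    return None  # nothing below the bound; proves nothing about larger values

print(flt_counterexample(50))      # None, as Wiles' proof guarantees
print(flt_counterexample(5, n=2))  # (3, 4, 5): the n=2 case has solutions
```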
> If it was just a matter of writing the code, then it would have been done
> 50 years ago.
if proving Fermat's Last theorem was just a matter of doing math, it would
have been done 150 years ago ;-p
obviously, all hard problems that can be solved have already been solved...
???
--- On Wed, 1/7/09, Jim Bromer wrote:
> From: Jim Bromer
> Subject: [agi] The Smushaby of Flatway.
> To: agi@v2.listbox.com
> Date: Wednesday, January 7, 2009, 8:23 PM
> All of the major AI paradigms, including those that are
> capable of
> learning, are flat ac
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition. What makes them flat
is that the method of decision making is minimally-structured and they
funnel all reasoning through a single narrowly focused process that
smushes different inputs