On 14/02/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Does anyone know of a well-thought-out list of this sort? Of course I
could make one by surveying the cognitive psych literature, but why
reinvent the wheel?
None that I have come across. Biases that I have come across are
things like payin
On 13/04/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
To convey this subtlety as simply as I can, I would suggest that you ask
yourself how much intelligence is being assumed in the preprocessing
system that does the work of (a) picking out patterns to be considered
by the system, and (b) pic
On 26/04/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Consider that, folks, to be a challenge: to those who think there is
such a definition, I await your reply.
While I don't think it is the sum of all intelligence, I'm studying
something I think is a precondition of being intelligent. That
My current thinking is that it will take lots of effort by multiple
people to take a concept or prototype AGI and turn it into something
that is useful in the real world. And even if one or two people worked
on the correct concept for their whole lives it may not produce the
full thing; they may hit bo
On 11/05/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of the
On 18/05/07, John G. Rose <[EMAIL PROTECTED]> wrote:
Did you arrive at some sort of unit for intelligence? Typically
measurements are constructed of combinations of basic units for example 1
watt = 1 kg * m^2/s^3. Or is it not a unit but a set of units?
Interesting idea. Then I would go towar
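A quick sketch of the dimensional bookkeeping behind "1 watt = 1 kg * m^2/s^3": a derived unit can be represented as a vector of exponents over the SI base units, so composing units is just adding exponents. (The `Unit` type and the kg/m/s basis below are illustrative assumptions, not anything proposed in the thread.)

```python
from collections import namedtuple

# A derived unit as exponents over three SI base units.
Unit = namedtuple("Unit", ["kg", "m", "s"])

def mul(a, b):
    # Multiplying quantities adds unit exponents.
    return Unit(a.kg + b.kg, a.m + b.m, a.s + b.s)

def div(a, b):
    # Dividing quantities subtracts unit exponents.
    return Unit(a.kg - b.kg, a.m - b.m, a.s - b.s)

KG, M, S = Unit(1, 0, 0), Unit(0, 1, 0), Unit(0, 0, 1)

# 1 watt = 1 kg * m^2 / s^3
WATT = div(mul(KG, mul(M, M)), mul(S, mul(S, S)))
print(WATT)  # Unit(kg=1, m=2, s=-3)
```

A composite "unit of intelligence", if one existed, would presumably get an exponent vector of its own in the same way.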
On 01/06/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
Ray Kurzweil has arranged to put a couple of sample chapters up on his site:
Kinds of Minds
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0707.html
The Age of Virtuous Machines
http://www.kurzweilai.net/meme/frame.html?main
Is there space within the charity world for another one related to
intelligence but with a different focus to SIAI?
Rather than specifically funding an AGI effort, or creating one in
order to bring about a specific goal state for humanity, it would be
dedicated to funding a search for the a
On 04/06/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.
I suspect an AGI that executes one fixed unchangeable program is not
physically possible.
On 05/06/07, Ricardo Barreira <[EMAIL PROTECTED]> wrote:
On 6/5/07, William Pearson <[EMAIL PROTECTED]> wrote:
> On 04/06/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Suppose you build a human level AGI, and argue
> > that it is not autonomous no m
On 06/06/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
There're several reasons why AGI teams are fragmented and AGI designers
don't want to join a consortium:
A. believe that one's own AGI design is superior
B. want to ensure that the global outcome of AGI is "friendly"
C. want to get b
Attempt to define General Intelligence
A general intelligent system is an eco-system of memeplexes aimed at:
This is very close to my view of an intelligent system. Although mine
would be something like
My current best guess for what a general intelligent system is: An
eco-system of programs*
On 22/06/07, Pei Wang <[EMAIL PROTECTED]> wrote:
Hi,
I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm , including an "AGI
Overview" followed by "Representative AGI Projects".
It is basically a bunch of links and quotations organized according to
my opinion. H
On 23/06/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
- Will Pearson:> My theory is that the computer architecture has to be
more brain-like
> than a simple stored program architecture in order to allow resource
> constrained AI to be implemented efficiently. The way that I am
> investigating, i
On 24/06/07, Bo Morgan <[EMAIL PROTECTED]> wrote:
On Sun, 24 Jun 2007, William Pearson wrote:
) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves bett
Sorry, sent accidentally while half finished.
Bo wrote:
This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in coma.
I'm talking about control in memory access, and by memory access I am
referring to synaptic changes
On 27/09/2007, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
> This is why the word "impossible" has no place outside of math
> departments.
>
> Original Message
> Subject: [bafuture] An amazing blind (?!!) boy (and a super mom!)
> Date: Thu, 27 Sep 2007 11:49:42 -0700
> From:
On 29/09/2007, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Although it indeed seems off-topic for this list, calling it a
> religion is ungrounded and in this case insulting, unless you have
> specific arguments.
>
> Killing huge amounts of people is a pretty much possible venture for
> regular hum
On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The real danger is this: a program intelligent enough to understand software
> would be intelligent enough to modify itself.
Well it would always have the potential. But you are assuming it is
implemented on standard hardware.
There are man
On 01/10/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- William Pearson <[EMAIL PROTECTED]> wrote:
>
> > On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > The real danger is this: a program intelligent enough to understand
> > soft
On 02/10/2007, Mark Waser <[EMAIL PROTECTED]> wrote:
> > A quick question, do people agree with the scenario where, once a non
> > super strong RSI AI becomes mainstream it will replace the OS as the
> > lowest level of software?
>
> For the system that it is running itself on? Yes, eventually. F
On 05/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> We have good reason to believe, after studying systems like GoL, that
> even if there exists a compact theory that would let us predict the
> patterns from the rules (equivalent to predicting planetary dynamics
> given the inverse square
On 05/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> William Pearson wrote:
> > On 05/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> We have good reason to believe, after studying systems like GoL, that
> >> even if there exists a compa
On 07/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I have a question for you, Will.
>
> Without loss of generality, I can change my use of Game of Life to a new
> system called GoL(-T) which is all of the possible GoL instantiations
> EXCEPT the tiny subset that contain Turing Machine i
On 07/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> William Pearson wrote:
> > On 07/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> The TM implementation not only has no relevance to the behavior of
> GoL(-T) at all, it also has even less relevan
On 08/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> William Pearson wrote:
> > On 07/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> William Pearson wrote:
> >>> On 07/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote
On 08/10/2007, Mark Waser <[EMAIL PROTECTED]> wrote:
> From: "William Pearson" <[EMAIL PROTECTED]>
> > Laptops aren't TMs.
> > Please read the wiki entry to see that my laptop isn't a TM.
>
> But your laptop can certainly implement/simulate a
On 12/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> (2) WITH REGARD TO BOOKWORLD -- IF ALL THE WORLD'S BOOKS WERE IN
> ELECTRONIC
> FORM AND YOU HAD A MASSIVE AMOUNT OF AGI HARDWARE TO READ THEM ALL I
> THINK
> YOU WOULD BE ABLE TO GAIN A TREMENDOUS AMOUNT OF WORLD KNOWLEDGE
> FROM THEM
On 16/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>
>
> Josh, your Tue 10/16/2007 8:58 AM post was a very good one. I have just a
> few comments in all-caps.
>
> "The view I suggest instead is that it's not the symbols per se, but the
> machinery that manipulates them, that provides se
On 17/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> IN RESPONSE TO
>
> "Hmm, how then is a modern PC valuable? It has no representation of the
> type advocated by AI designers interested in that sort of things.
>
> BUT IT DOES HAVE REPRESENTATION IN THE FORM OF CODE AND DATA. DOES IT
> HA
On 18/10/2007, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> I'd be interested in everyone's take on the following:
>
> 1. What is the single biggest technical gap between current AI and AGI? (e.g.
> we need a way to do X or we just need more development of Y or we have the
> ideas, just need har
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote:
> I think that there really needs to be more very specifically defined
> quantitative measures of intelligence. If there were questions that could be
> asked of an AGI that would require x units of intelligence to solve
> otherwise they would b
On 20/10/2007, Robert Wensman <[EMAIL PROTECTED]> wrote:
> It seems your question is stated on the meta-discussion level, since you
> ask for a reason why there are two different beliefs.
>
> I can only answer for myself, but to me some form of evolutionary learning
> is essential to AGI. Actua
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote:
> > From: William Pearson [mailto:[EMAIL PROTECTED]
> > Subject: Re: [agi] An AGI Test/Prize
> >
> > I do not think such things are possible. Any problem that we know
> > about and can define, can be s
On 20/10/2007, Robert Wensman <[EMAIL PROTECTED]> wrote:
> I am not exactly sure how GA (genetic algorithm) and GP (genetic programming)
> are defined. It seems that the concepts of gene and evolution are very much
> interconnected, so how we define genetic algorithm and genetic programming
> depends
On 20/10/2007, Robert Wensman <[EMAIL PROTECTED]> wrote:
> First of all, I do not believe science can always respect past exact
> definitions of words in order to make progress. How about if Einstein
> refrained from publishing his relativity theory, because it would contradict
> the way people no
I have recently been trying to find better formalisms than TMs for
different classes of adaptive systems (including the human brain), and
have come across the Persistent Turing Machines[1], which seem to be a
good first step in that direction.
They have expressiveness claimed to be greater than TM
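As a toy illustration of the persistence idea (this is not Goldin and Wegner's formal definition; the class and its token-counting behaviour are invented purely for illustration), a machine whose worktape survives between interactions can give the same input different outputs depending on history, which no fixed input-to-output function can:

```python
class PersistentMachine:
    """Toy model of a Persistent Turing Machine: the worktape
    persists across macrosteps, so each output depends on the
    whole interaction history, not just the current input."""

    def __init__(self):
        self.worktape = []  # survives between interactions

    def step(self, token):
        # One macrostep: read a token, update the persistent
        # worktape, emit how many times the token has been seen.
        self.worktape.append(token)
        return self.worktape.count(token)

m = PersistentMachine()
print(m.step("a"))  # 1
print(m.step("b"))  # 1
print(m.step("a"))  # 2 -- same input, different output
```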
On 30/10/2007, Pei Wang <[EMAIL PROTECTED]> wrote:
> Thanks for the link. I agree that this work is moving in an
> interesting direction, though I'm afraid that for AGI (and adaptive
> systems in general), TM may be too low as a level of description ---
> the conclusions obtained in this kind of wo
On 02/11/2007, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > > On Oct 31, 2007 8:53 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > > Natural language is a fundamental part of the knowle
On 05/11/2007, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 03, 2007 at 03:45:30AM -0400, Jiri Jelinek wrote:
> > Are you aware in how many ways you can go wrong with:
>
> One problem I see with this mailing list is an almost intentional
> desire to mis-interpret. I never claimed I was b
On 06/11/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Will Pearson asked
> >> I'm also wondering what you consider success in this case. For example
> >> do you want the system to be able to maintain conversational state
> >> such as would be needed to deal with the following.
>
> >>"For
On 08/11/2007, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
> My impression is that most machine learning theories assume a search space
> of hypotheses as a given, so it is out of their scope to compare *between*
> learning structures (eg, between logic and neural networks).
>
> Algorithmic lea
On 08/11/2007, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
> Thanks for the input.
>
> There's one perplexing theorem, in the paper about the algorithmic
> complexity of programming, that "the language doesn't matter that much", ie,
> the algorithmic complexity of a program in different languag
On 08/11/2007, Jef Allbright <[EMAIL PROTECTED]> wrote:
> I'm sorry I'm not going to be able to provide much illumination for
> you at this time. Just the few sentences of yours quoted above, while
> of a level of comprehension equal or better than average on this list,
> demonstrate epistemologic
On 09/11/2007, Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 11/8/07, William Pearson <[EMAIL PROTECTED]> wrote:
> > On 08/11/2007, Jef Allbright <[EMAIL PROTECTED]> wrote:
>
> > > This discussion reminds me of hot rod enthusiasts arguing passionately
>
On 10/11/2007, William Pearson <[EMAIL PROTECTED]> wrote:
> On 09/11/2007, Jef Allbright <[EMAIL PROTECTED]> wrote:
> > On 11/8/07, William Pearson <[EMAIL PROTECTED]> wrote:
> > > 1) Doesn't treat computation as outputting to the environment, thus
>
On 21/11/2007, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> Benjamin,
>
> > That's massive amount of work, but most AGI research and development
> > can be shared with narrow AI research and development.
>
> > There is plenty overlap btw AGI and narrow AI but not as much as you
> > suggest...
>
> T
One thing that has been puzzling me for a while is why some people
expect an intelligence to be less flexible than a PC.
What do I mean by this? A PC can have any learning algorithm, bias or
representation of data we care to create. This raises another
question: how are we creating a representati
On 06/12/2007, Ed Porter <[EMAIL PROTECTED]> wrote:
> Matt,
> So if it is perceived as something that increases a machine's vulnerability,
> it seems to me that would be one more reason for people to avoid using it.
> Ed Porter
Why are you having this discussion on an AGI list?
Will Pearson
Some of you may remember me from other places, and once before on this
list. But I thought now is the correct time for some criticism of my
ideas as they are slightly refined.
Now first off I have recently renounced my status as an AI researcher,
as studying intelligence is not what I wish to do.
> Eugen Leitl Thu, 23 Jun 2005 02:18:14 -0700
> Do any of you here use MPI, and assume 10^3..10^5 node parallelism?
I assume 2^14 node parallelism with only a small fraction computing at
any time. But then my nodes are really smart memory rather than
full-blown processors and not async yet. At th
On 9/9/05, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
> Leitl wrote:
> > > >In the language of Gregory Bateson (see his book "Mind and Nature"),
> > > >you're suggesting to do away with "learning how to learn" --- which is
> > > >not at all a workable idea for AGI.
> >
> > Learning to evolve by
On 9/9/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> "learning to learn" which I interpret as applying the current knowledge
> rules to the knowledge base itself. Your idea is to build an AGI that can
> modify its own ways of learning. This is a very fanciful idea but is not the
>
> most direct
From the authors website-
Self-organising associative kernel memory for multi-domain pattern
classification.
http://www.bsp.brain.riken.jp/~hoya/papers/alcosp2004-2.pdf
Should give some hint what the book is about. It seems to be a cross
between radial basis function neural nets and bayesian netw
On 9/12/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> Will Pearson wrote:
>
> Define what you mean by an AGI. Learning to learn is vital if you wish to
> > try and ameliorate the No Free Lunch theorems of learning.
>
> I suspect that No Free Lunch is not very relevant in practice. Any learning
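The off-training-set form of No Free Lunch can be checked exhaustively in a toy setting (the domain, the two learners, and the train/test split below are all illustrative assumptions): averaged over all 16 boolean target functions on two binary inputs, any learner that predicts deterministically from the training data scores exactly 0.5 on the held-out point.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))   # four 2-bit inputs
train_x, test_x = inputs[:3], inputs[3]    # hold out one point

def majority_learner(train):
    # Predict the majority training label for the unseen point.
    labels = [y for _, y in train]
    return int(2 * sum(labels) >= len(labels))

def constant_zero_learner(train):
    # Ignore the data entirely.
    return 0

def avg_offtraining_accuracy(learner):
    # Average held-out accuracy over every possible target function.
    targets = list(product([0, 1], repeat=len(inputs)))  # 16 functions
    hits = 0
    for target in targets:
        f = dict(zip(inputs, target))
        train = [(x, f[x]) for x in train_x]
        hits += int(learner(train) == f[test_x])
    return hits / len(targets)

print(avg_offtraining_accuracy(majority_learner))       # 0.5
print(avg_offtraining_accuracy(constant_zero_learner))  # 0.5
```

Which is the sense in which learning to learn only helps relative to a restricted class of environments, not over all possible ones.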
On 9/20/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> William wrote:
>
> I suspect that it will be quite important in competition between agents. If
> one agent has a constant method of learning it will be more easily predicted
> by an agent that can figure out its constant method (if it i
Some people might find this mini essay interesting as it touches on
what I think of as the problem of general intelligence, if such a
thing can be well defined.
My interest in machine learning is concentrated on problems that are
not easy to define, or may be easily misdefined.
A rei
On 01/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
I had similar feelings about William Pearson's recent message about
systems that use reinforcement learning:
>
> A reinforcement scenario, from wikipedia is defined as
>
> "Formally, the basic reinforcement learning model consists of:
>
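The Wikipedia definition quoted above is cut off; the formalism it begins to state (a set of environment states S, a set of actions A, and scalar rewards driving an agent-environment loop) might be sketched like this, with the thermostat world an invented example rather than anything from the original message:

```python
# Minimal sketch of the basic reinforcement learning model:
# states S, actions A, scalar rewards, and the interaction loop.
S = ["cold", "warm", "hot"]
A = ["heat", "cool"]

def environment(state, action):
    # Environment dynamics: move one step along the temperature
    # scale, and pay a scalar reward of 1.0 for being "warm".
    idx = S.index(state)
    idx = min(idx + 1, len(S) - 1) if action == "heat" else max(idx - 1, 0)
    next_state = S[idx]
    reward = 1.0 if next_state == "warm" else 0.0
    return next_state, reward

def policy(state):
    # A fixed (non-learning) policy, just to drive the loop.
    return "heat" if state == "cold" else "cool"

state, total = "cold", 0.0
for _ in range(10):
    action = policy(state)
    state, reward = environment(state, action)
    total += reward
print(total)  # 5.0 -- the agent passes through "warm" every other step
```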
On 02/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Will,
Comments taken, but the direction of my critique may have gotten lost in
the details:
Suppose I proposed a solution to the problem of unifying quantum
mechanics and gravity, and suppose I came out with a solution that said
that th
I don't think this has been raised before; the only similar suggestion
is that we should start by understanding systems that might be weak
and then convert them to a strong system, rather than aiming for
weakness that is hard to convert to a strong system.
Caveats:
1) I don't believe strong self-im
On 08/06/06, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
William Pearson wrote:
>
> I tried posting this to SL4 and it got sucked into some vacuum.
As far as I can tell, it went normally through SL4. I got it.
It is harder to tell on gmail than other email systems what gets
On 08/06/06, William Pearson <[EMAIL PROTECTED]> wrote:
With regards to how careful I am being with the system: one of the
central design guidances for the system is to assume the programs in
the hardware are selfish and may do things I don't want. The failure
mode I envisag
On 09/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Likewise, an artificial general
intelligence is not "a set of environment states S, a set of actions A,
and a set of scalar "rewards" in the Reals".)
Watching history repeat itself is pretty damned annoying.
While I would agree with y
On 09/06/06, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
William,
> It is very simple and I wouldn't apply it to everything that
> behaviourists would (we don't get direct rewards for solving crossword
> puzzles).
How do you know that we don't get direct rewards on solving crossword
puzzles (or a
On Fri, 09 Jun 2006 19:13:19 -500, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
What about punishment?
Currently I see it as the programs in control of outputting (and hence
the ones that get reward) losing that control and the chance to get
reinforcement. However, experiment or better theory wou
Dennis
1) I agree that direct reward has to be in-built
(into brain / AI system).
Okay.
2) I don't see why direct reward cannot be used for rewarding mental
achievements.
They could be, dependent upon the type of system you are interested
in. Not easily in the one that I am interested in.
I
On 11/06/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
An article with an opposing point of view than the one I mentioned yesterday...
http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf
Why do you find whether there are Bayesian estimators in the brain an
interesting que
On 12/06/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Will,
Right now I would think that a negative reward would be usable for this
aspect.
I agree it is usable. But I am not sure it is necessary; you can just
normalise the reward value. Let's say for most states you normally give 0 for a satiat
On 10/06/06, sanjay padmane <[EMAIL PROTECTED]> wrote:
I feel you should discontinue the list. That will force people to post there.
I'm not using the forum only because no one else is using it (or very
few), and everyone is perhaps doing the same.
I also wouldn't be interested in using the for
On 13/06/06, sanjay padmane <[EMAIL PROTECTED]> wrote:
On the suggestion of creating a wiki, we already have it here
http://en.wikipedia.org/wiki/Artificial_general_intelligence , as you know, and its exposure is much wid
I wouldn't want to pollute the wiki proper with our unverified claims.
On 13/06/06, Yan King Yin <[EMAIL PROTECTED]> wrote:
Will,
I've been thinking of hosting a wiki for some time, but not sure if we have
reached critical mass here.
Possibly not. I may just collate my own list of questions and answers
until the time does come.
When we get down to the details,
On 13/06/06, Yan King Yin <[EMAIL PROTECTED]> wrote:
One question is whether there is some definite advantage to using NNs
instead of say, predicate logic. Can you give an example of a thought, or a
line of inference, etc, that the NN-type representation is particularly
suited?
The distribute
On 15/06/06, arnoud <[EMAIL PROTECTED]> wrote:
On Thursday 15 June 2006 21:35, Ben Goertzel wrote:
> > If this doesn't seem to be the case, this is because of that some
> > concepts are so abstract that they don't seem to be tied to perception
> > anymore. It is obvious that they are (directly) t
I wrote:
> Which is as useful as knowing information about soccer players. And
> yet I value the neutrino information more, because of the way I have
> been told it connects with all the other information that has been
> useful when I fiddled about with chemicals.
On 16/06/06, Anneke Siemons <[E
On 17/06/06, arnoud <[EMAIL PROTECTED]> wrote:
> As long as some of those things are learnt by watching humans doing
> them, in practice I agree with you. In theory though a sufficiently
> powerful giant look-up table could also seem to learn these things,
> so I am also going to be looking at the
On 06/07/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On Wed, 05 Jul 2006 15:58:28 -500, [EMAIL PROTECTED]
That just gets you a circular definition: if intelligence is the ability to
self-improve, what counts as improvement? Change in the direction of greater
intelligence? But then what's i
On 06/07/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 7/6/06, William Pearson <[EMAIL PROTECTED]> wrote:
> How would you define the sorts of tasks humans are designed to carry
> out? I can't see an easy way of categorising all the problems
> individual humans
On 06/07/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 7/6/06, William Pearson <[EMAIL PROTECTED]> wrote:
> A
> generic PC almost fulfils the description, programmable, generic and
> if given the right software to start with can solve problems. But I am
> guessing it
On 08/07/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 7/8/06, William Pearson <[EMAIL PROTECTED]> wrote:
> Agreed, but I think looking at it in terms of a single language is a
> mistake. Humans use body language and mimicry to acquire spoken
> language and spoken/bo
On 28/08/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 8/28/06, Stephen Reed <[EMAIL PROTECTED]> wrote:
Google wouldn't work at all well under the GPL. Why? Because if everyone
had their own little Google, it would be quite useless [1]. The system's
usefulness comes from the fact that there
On 28/08/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 8/28/06, William Pearson <[EMAIL PROTECTED]> wrote:
> If the macro AGI can't translate between differences in language or
> representation that the micro AGIs have acquired from being open
> source, then we
On 28/08/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 8/28/06, William Pearson <[EMAIL PROTECTED]> wrote:
> We may well not have enough computing resources available to do it on
> the cheap using local resources. But that is the approach I am
> inclined to take, I
On 28/08/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 8/28/06, William Pearson <[EMAIL PROTECTED]> wrote:
> Things like hooking it up to low quality sound video feeds and have it
> judge by posture/expression/time of day what the most useful piece of
> information i
I am interested in meta-learning voodoo, so I thought I would add my
view on KR in this type of system.
If you are interested in meta-learning the KR you have to ditch
thinking about knowledge as the lowest level of changeable
information in your system, and just think about changing state. Stat
On 27/09/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
William Pearson wrote:
> I am interested in meta-learning voodoo, so I thought I would add my
> view on KR in this type of system.
>
> If you are interested in meta-learning the KR you have to ditch
> thinking a
Richard Loosemore
> As for your suggestion about the problem being centered on the use of
> model-theoretic semantics, I have a couple of remarks.
>
> One is that YES! this is a crucial issue, and I am so glad to see you
> mention it. I am going to have to read your paper and discuss with you
On 21/11/06, Pei Wang <[EMAIL PROTECTED]> wrote:
That sounds better to me. In general, I'm against attempts to get
complete, consistent, certain, and absolute descriptions (of either
internal or external state), and prefer partial,
not-necessarily-consistent, uncertain, and relative ones --- not
On 24/11/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
The open questions are representation -- I'm leaning towards >CSG
Constructive solid geometry? You could probably go quite far towards a
real world navigator with this, but I'm not sure how you plan to get
it to represent the intern
On 02/12/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I think that our propensity for music is pretty damn simple: it's a
> side-effect of the general skill-learning machinery that makes us memetic
> substrates. Tunes are trajectories in n-space as are the series of motor
> signals involved in w
On 04/12/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Why must you argue with everything I say? Is this not a sensible
> statement?
I don't argue with everything you say. I only argue with things that I
believe are wrong. And no, the statements "You cannot turn off hunger or
pain. You cannot
On 07/01/2008, Robert Wensman <[EMAIL PROTECTED]> wrote:
> I think what you really want to use is the
> concept of adaptability, or maybe you could say you want an AGI system that
> is programmed in an indirect way (meaning that the program instructions are
> very far away from what the system actu
On 09/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> Let's assume one is working within the scope of an AI system that
> includes an NLP parser,
> a logical knowledge representation system, and needs some intelligent way to
> map
> the output of the latter into the former.
>
> Then, in th
On 10/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> Processing a dictionary in a useful way
> requires quite sophisticated language understanding ability, though.
>
> Once you can do that, the hard part of the problem is already
> solved ;-)
While this kind of system requires sophisticat
On 10/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > I'll be a lot more interested when people start creating NLP systems
> > that are syntactically and semantically processing statements *about*
> > words, sentences and other linguistic structures and adding syntactic
> > and semantic r
On 10/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> On Jan 10, 2008 10:26 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> > On 10/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > > > I'll be a lot more interested when people
Vladimir,
> What do you mean by difference in processing here?
I said the difference was after the initial processing. By processing
I meant syntactic and semantic processing. After processing the
syntax-related sentence, the realm of action is changing the system
itself, rather than knowledge of
My problem with both these definitions (and the one underpinning
AIXI) is that they either don't define the word "problem" well or
define it in a limited way.
For example, AIXI defines the solution of a problem as finding a
function that transforms an input to an output. No mention of having
t
On 12/01/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Every time a dispute erupts about what the real definition of
> "intelligence" is, all we really get is noise, because nobody is clear
> about the role that the definition is supposed to play.
>
> If the role is to distinguish Narrow A
On 14/01/2008, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Jan 13, 2008 7:40 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> > And, as I indicated, my particular beef was with Shane Legg's paper,
> > which I found singularly content-free.
>
> Shane Legg and Marcus Hutter have a recent publication
Something I noticed while trying to fit my definition of AI into the
categories given.
There is another way that definitions can be principled.
This similarity would not be on the function of percepts to action.
Instead it would require a similarity on the function of percepts to
internal state a