Re: [agi] hello

2008-08-15 Thread Joel Pitt
On Wed, Aug 13, 2008 at 6:31 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> To use Thornton's example, he demonstrated that a "checkerboard" pattern can
> be learned easily using logic, but it will drive an NN learner crazy.

Note that neural networks are a broad field that includes not only
perceptrons but also self-organising maps and other connectionist
set-ups.

In particular, Hopfield networks are an associative memory system that
would have no problem learning/memorising a checkerboard pattern (or
any other pattern; trouble only arises when memorised patterns begin to
overlap).
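
For concreteness, here is a minimal sketch (not from the original post) of a
Hopfield net storing an 8x8 checkerboard via Hebbian learning and recovering
it from a corrupted cue; the pattern size and noise level are arbitrary
choices for illustration.

import numpy as np

SIZE = 8                                        # illustrative pattern size
pattern = np.array([1 if (i + j) % 2 == 0 else -1
                    for i in range(SIZE) for j in range(SIZE)])
n = pattern.size

W = np.outer(pattern, pattern).astype(float)    # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                        # no self-connections

rng = np.random.default_rng(0)
probe = pattern.copy()
flip = rng.choice(n, size=10, replace=False)    # corrupt 10 of the 64 units
probe[flip] *= -1

state = probe.copy()
for _ in range(10):                             # asynchronous updates until stable
    changed = False
    for i in rng.permutation(n):
        new = 1 if W[i] @ state >= 0 else -1
        if new != state[i]:
            state[i], changed = new, True
    if not changed:
        break

print("recovered checkerboard exactly:", bool(np.array_equal(state, pattern)))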

A logic system would be a lot more efficient, though.

J




[agi] Brains + Sleep, Bird Brains, Brain Rules

2008-08-15 Thread Brad Paulsen

(1) STUDY FINDS THAT SLEEP SELECTIVELY PRESERVES EMOTIONAL MEMORIES
http://www.physorg.com/news137908693.html

(2) BIG-BRAINED ANIMALS [BIRDS] EVOLVE FASTER
http://www.physorg.com/news138003096.html

(3) BRAIN RULES
Here's a guy selling a book/DVD ("Brain Rules") about how to improve your
mental performance.  Many people on this list will already be familiar with the
brain science behind the book.

Rule #1?  Exercise Boosts Brain Power.  On the site, the author gives a talk
on video (probably from the DVD) about how exercise can improve your brain's
performance.  There's, uh, just one problem: he does his own videos and I
would say he's morbidly obese himself.  Do as I say, not as I do?  Go figure.

Rule #7 - Sleep is Good For the Brain.  Given his weight, he probably suffers
from sleep apnea to some extent.  Geeze, this guy is breaking all of his own
rules!  But, note that he was still smart enough to earn (I presume) a PhD
(or MD) and write a book called "Brain Rules"?  Again, go figure.

To be fair, though, his science seems to be conservative and based on
peer-reviewed research, some of which is summarized in the PhysOrg link
above (1).

From the Web site:

"Dr. John Medina is a developmental molecular biologist and research
consultant.  He is an affiliate Professor of Bioengineering at the
University of Washington School of Medicine. He is also the director of the
Brain Center for Applied Learning Research at Seattle Pacific University."

The videos are well done and occasionally humorous (intentionally so, I
presume).
http://www.brainrules.net/?gclid=CPuLzubfkJUCFSAUagodXhqUPA

Here's the US Amazon page for the book (301 pages + DVD), published 2008,
$20 (hardcover).
http://www.amazon.com/gp/product/product-description/097904/ref=dp_proddesc_0?ie=UTF8&n=283155&s=books

Cheers,

Brad




Re: [agi] hello

2008-08-15 Thread rick the ponderer
On 8/15/08, rick the ponderer <[EMAIL PROTECTED]> wrote:
>
>
>
> On 8/13/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>> On Wed, Aug 13, 2008 at 4:14 AM, rick the ponderer <[EMAIL PROTECTED]>
>> wrote:
>> >
>> > Thanks for replying YKY
>> > Is the logic learning you are talking about inductive logic
>> > programming? If so, isn't ILP basically a search through the space of
>> > logic programs (I may be way off the mark here!), and wouldn't that be
>> > too large a search space to explore if you're trying to reach AGI?
>> >
>> > And if you're determined to learn a symbolic representation, wouldn't
>> > genetic programming be a better choice, since it won't get stuck in
>> > local minima?
>>
>>
>> There is no reason why symbolic reasoning could not incorporate some
>> kind of random combinatoric search methods like those used in GA
>> searches. Categorical imagination can be used to examine the possible
>> creation of new categories; the method does not have to be limited to
>> the examination of new combinations of previously derived categories.
>> And it does not have to be limited to incremental methods either.
>>
>> For example, the method might be used to combine fragments of surface
>> features observed in the IO data environment. Combinatoric search can
>> also be used with the creation and consideration of conjectures about
>> possible explanations of observed data events.  One of the most
>> important aspects of these kinds of searches is that they can be used
>> in serendipitous methods to detect combinations or conjectures that
>> might be useful in some other problem even when they don't solve the
>> current search goal that they were created for.
>>
>> While discussions about these subjects must utilize some traditional
>> frames of reference, the conventions of their use in conversation
>> should not be considered as absolute limitations on their possible
>> modifications.  They can be used as starting points of further
>> conversation.  YKY's and Ben Goertzel's recent comments sound as if
>> they are referring to strictly predefined categories when they talk
>> about symbolic methods, but I would be amazed if that represents their
>> ultimate goals in AI research.
>>
>> Similarly, other unconventional methods can be considered when
>> thinking about ANN's and GA's, but I think that novel approaches to
>> symbolic methods offer the best bet for some of the same reasons
>> that YKY mentioned.
>>
>> Jim Bromer
>>
>>
> "
> For example, the method might be used to combine fragments of surface
> features observed in the IO data environment. Combinatoric search can
> be also used with the creation and consideration of conjectures about
> possible explanations of observed data events. One of the most
> important aspects of these kinds of searches is that they can be used
> in serendipitous methods to detect combinations or conjectures that
> might be useful in some other problem even when they don't solve the
> current search goal that they were created for.
> "
> Is that any different to clustering?
>
Specifically, I mean the part where you talk about discovering new
categories from IO data.
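
As a toy illustration of the distinction being asked about (a sketch with
invented feature names, not anything from Jim's description): clustering
groups observations that already look alike, whereas the combinatoric step
proposes new conjunctions of observed surface features and keeps them as
reusable conjectures even when they don't serve the current goal.

from itertools import combinations
from collections import Counter

# Hypothetical observations, each a set of surface features.
observations = [
    {"red", "round", "rolls"},
    {"red", "round", "edible"},
    {"green", "square", "stacks"},
    {"green", "square", "rigid"},
]

def cluster(obs, threshold=2):
    """Clustering-style step: merge an observation into the first existing
    group it shares at least `threshold` features with."""
    groups = []
    for o in obs:
        for g in groups:
            if len(o & g) >= threshold:
                g |= o
                break
        else:
            groups.append(set(o))
    return groups

def conjectures(obs, keep=3):
    """Combinatoric step: enumerate feature pairs across observations and
    keep the most frequent ones as candidate 'new categories'."""
    counts = Counter()
    for o in obs:
        for pair in combinations(sorted(o), 2):
            counts[pair] += 1
    return counts.most_common(keep)

print(cluster(observations))      # groups of similar observations
print(conjectures(observations))  # reusable feature conjunctions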





Re: [agi] hello

2008-08-15 Thread rick the ponderer
On 8/13/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
> On Wed, Aug 13, 2008 at 4:14 AM, rick the ponderer <[EMAIL PROTECTED]>
> wrote:
> >
> > Thanks for replying YKY
> > Is the logic learning you are talking about inductive logic
> > programming? If so, isn't ILP basically a search through the space of
> > logic programs (I may be way off the mark here!), and wouldn't that be
> > too large a search space to explore if you're trying to reach AGI?
> >
> > And if you're determined to learn a symbolic representation, wouldn't
> > genetic programming be a better choice, since it won't get stuck in
> > local minima?
>
>
> There is no reason why symbolic reasoning could not incorporate some
> kind of random combinatoric search methods like those used in GA
> searches. Categorical imagination can be used to examine the possible
> creation of new categories; the method does not have to be limited to
> the examination of new combinations of previously derived categories.
> And it does not have to be limited to incremental methods either.
>
> For example, the method might be used to combine fragments of surface
> features observed in the IO data environment. Combinatoric search can
> be also used with the creation and consideration of conjectures about
> possible explanations of observed data events.  One of the most
> important aspects of these kinds of searches is that they can be used
> in serendipitous methods to detect combinations or conjectures that
> might be useful in some other problem even when they don't solve the
> current search goal that they were created for.
>
> While discussions about these subjects must utilize some traditional
> frames of reference, the conventions of their use in conversation
> should not be considered as absolute limitations on their possible
> modifications.  They can be used as starting points of further
> conversation.  YKY's and Ben Goertzel's recent comments sound as if
> they are referring to strictly predefined categories when they talk
> about symbolic methods, but I would be amazed if that represents their
> ultimate goals in AI research.
>
> Similarly, other unconventional methods can be considered when
> thinking about ANN's and GA's, but I think that novel approaches to
> symbolic methods offer the best bet for some of the same reasons
> that YKY mentioned.
>
> Jim Bromer
>
>
"
For example, the method might be used to combine fragments of surface
features observed in the IO data environment. Combinatoric search can
be also used with the creation and consideration of conjectures about
possible explanations of observed data events. One of the most
important aspects of these kinds of searches is that they can be used
in serendipitous methods to detect combinations or conjectures that
might be useful in some other problem even when they don't solve the
current search goal that they were created for.
"
Is that any different to clustering?





Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> The paradox seems trivial, of course. I generally agree with your
> analysis (describing how we consider the sentence, take into account
> its context, and so on). But the big surprise to logicians was that the
> paradox is not just a linguistic curiosity; it is an essential feature of
> any logic satisfying some broad, seemingly reasonable requirements.
>
> A logical "sentence" corresponds better to a concept/idea, so bringing
> in the linguistic context and so on does not help much in the logic-based
> version (although I readily admit that it solves the paradox in the
> linguistic form in which I presented it in my previous email). The question
> becomes, does the system allow "This thought is false" to be thought,
> and if so, how does it deal with it? Intuitively it seems that we
> cannot think such a silly concept.

> you said "I don't think the problem of self-reference is
> significantly more difficult than the problem of general reference",
> so I will say "I don't think the frame problem is significantly more
> difficult than the problem of general inference." And like I said, for
> the moment I want to ignore computational resources...

OK, but what are you getting at?  I don't want to stop you from going
on and explaining what it is that you are getting at, but I want to
tell you about another criticism I developed from talking to people
who asserted that everything could be logically reduced (and in
particular anything an AI program could do could be logically
reduced).  I finally realized that what they were saying could be
reduced to something along the lines of "If I could understand
everything then I could understand everything."  I mentioned that to
the guys I was talking to but I don't think that they really got it.
Or at least they didn't like it.  I think you might find yourself going
down the same road if you don't keep your eyes open.  But I really want
to know where it is you are going.

I just read the message that you referred to in the OpenCog Prime wikibook
and... I really didn't understand it completely but I still don't
understand what the problem is.  You should realize that you cannot
expect to use inductive processes to create a single logical theory
about everything that can be understood.  I once discussed things with
Pei and he agreed that the representational system that contains the
references to ideas can be logical even though the references may not
be.  So a debugged referential program does not mean that the system
that the references refer to has to be perfectly sound.  We can
consider paradoxes and the like.

Your argument sounds as if you are saying that a working AI system,
because it would be perfectly logical, would imply that Goedel's Theorem
and the Halting Problem weren't problems.  But I have already
expressed my point of view on this: I don't think that the ideas that
an AI program can create are going to be integrated into a perfectly
logical system.  We can use logical sentences to input ideas very
effectively as you pointed out. But that does not mean that those
logical sentences have to be integrated into a single sound logical
system.

Where are you going with this?
Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Abram Demski
> I don't think the problems of a self-referential paradox are
> significantly more difficult than the problems of general reference.
> Not only are there implicit boundaries, some of which have to be
> changed in an instant as the conversation develops, there are also
> multiple levels of generalization in conversation.  These multiple
> levels of generalization are not simple or even reliably constructive
> (reinforcing).  They are complex and typically contradictory.  In my
> opinion they can be understood because we are somehow able to access
> different kinds of relevant information necessary to decode them.

The paradox seems trivial, of course. I generally agree with your
analysis (describing how we consider the sentence, take into account
its context, and so on). But the big surprise to logicians was that the
paradox is not just a linguistic curiosity; it is an essential feature of
any logic satisfying some broad, seemingly reasonable requirements.

A logical "sentence" corresponds better to a concept/idea, so bringing
in the linguistic context and so on does not help much in the logic-based
version (although I readily admit that it solves the paradox in the
linguistic form in which I presented it in my previous email). The question
becomes, does the system allow "This thought is false" to be thought,
and if so, how does it deal with it? Intuitively it seems that we
cannot think such a silly concept. (Oh, and don't let the quotes
around it make you try to just think the sentence... I can say "This
thought is false" in my head, but can I actually think a thought that
asserts its own falsehood? Not so sure...)
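
To make the logical (rather than linguistic) form of the point concrete,
here is a minimal formal sketch (not from the thread): any proposition
asserted to be equivalent to its own negation yields a contradiction
outright, and this holds even intuitionistically.

-- Lean 4: the propositional core of "this thought is false".
-- If L is equivalent to its own negation, False follows.
example (L : Prop) (h : L ↔ ¬L) : False :=
  have hnl : ¬L := fun hl => (h.mp hl) hl
  hnl (h.mpr hnl)

One way to read the "essential feature" remark above: by Tarski's theorem, a
sufficiently expressive logic cannot define its own truth predicate, since
the diagonal construction would then produce exactly such an L.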

> This is one reason why I think that the Relevancy Problem of the Frame
> Problem is the primary problem of contemporary AI.  We need to be able
> to access relevant information even though the appropriate information
> may change dramatically in response to the most minor variations in
> the comprehension of a sentence or of a situation.

Well, you said "I don't think the problem of self-reference is
significantly more difficult than the problem of general reference",
so I will say "I don't think the frame problem is significantly more
difficult than the problem of general inference." And like I said, for
the moment I want to ignore computational resources...

On Fri, Aug 15, 2008 at 2:21 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Our ability to think about abstractions and extrapolations off of
> abstractions comes because we are able to create game boundaries
> around the systems that we think about.  So yes you can talk about
> infinite resources and compare it to the domain of the lambda
> calculus, but this kind of thinking is possible only because we are
> able to abstract ideas by creating rules and barriers for the games.
> People don't always think of these as games because they can be so
> effective at producing material change that they seem and can be as
> practical as a truck, or as armies of trucks.
>
>> It is possible that your logic, fleshed out, could circumnavigate the
>> issue. Perhaps you can provide some intuition about how such a logic
>> should deal with the following line of argument (most will have seen
>> it, but I repeat it for concreteness):
>>
>> "Consider the sentence "This sentence is false". It is either true or
>> false. If it is true, then it is false. If it is false, then it is
>> true. In either case, it is both true and false. Therefore, it is both
>> true and false."
>
> Why?  I mean that my imagined program is a little like a method actor
> (like Marlon Brando).  What is its motivation?  Is it a children's
> game?  A little like listening to ghost stories? Or watching movies
> about the undead?
>
> The sentence, 'this sentence is false,' obviously relates to a
> boundary around the sentence. However, that insight wasn't obvious to
> me every time I came across the sentence.  Why not?  I don't know, but
> I think that when statements like that are unfamiliar, you put them
> into their own abstracted place and wait to see how they are going
> to be used relative to other information.
>
> Let's go with your statement and suppose that the argument is
> unfamiliar.  Basically, the first step would be to interpret the
> elementary partial meanings of the sentences without necessarily
> integrating them.  Each sentence is put into a temporary boundary.
> 'It is either true or false.'  Ok got it, but since this kind of
> argument is unfamiliar to my imaginary program, it does not
> immediately realize that the second sentence is referring to the
> first.  Why not?  Because the first sentence creates an aura of
> reference, and if the self-reference that was intended is appreciated,
> then the sense that the second sentence is going to refer to the first
> sentence will - in some cases - be made less likely.  In other cases,
> the awareness that the first sentence is self referential might make
> it more likely that the next sentence will also be interpreted as
> referring to it.
>
> T

Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
Our ability to think about abstractions and extrapolations off of
abstractions comes because we are able to create game boundaries
around the systems that we think about.  So yes you can talk about
infinite resources and compare it to the domain of the lambda
calculus, but this kind of thinking is possible only because we are
able to abstract ideas by creating rules and barriers for the games.
People don't always think of these as games because they can be so
effective at producing material change that they seem and can be as
practical as a truck, or as armies of trucks.

> It is possible that your logic, fleshed out, could circumnavigate the
> issue. Perhaps you can provide some intuition about how such a logic
> should deal with the following line of argument (most will have seen
> it, but I repeat it for concreteness):
>
> "Consider the sentence "This sentence is false". It is either true or
> false. If it is true, then it is false. If it is false, then it is
> true. In either case, it is both true and false. Therefore, it is both
> true and false."

Why?  I mean that my imagined program is a little like a method actor
(like Marlon Brando).  What is its motivation?  Is it a children's
game?  A little like listening to ghost stories? Or watching movies
about the undead?

The sentence, 'this sentence is false,' obviously relates to a
boundary around the sentence. However, that insight wasn't obvious to
me every time I came across the sentence.  Why not?  I don't know, but
I think that when statements like that are unfamiliar, you put them
into their own abstracted place and wait to see how they are going
to be used relative to other information.

Let's go with your statement and suppose that the argument is
unfamiliar.  Basically, the first step would be to interpret the
elementary partial meanings of the sentences without necessarily
integrating them.  Each sentence is put into a temporary boundary.
'It is either true or false.'  Ok got it, but since this kind of
argument is unfamiliar to my imaginary program, it does not
immediately realize that the second sentence is referring to the
first.  Why not?  Because the first sentence creates an aura of
reference, and if the self-reference that was intended is appreciated,
then the sense that the second sentence is going to refer to the first
sentence will - in some cases - be made less likely.  In other cases,
the awareness that the first sentence is self referential might make
it more likely that the next sentence will also be interpreted as
referring to it.

The practical problems of understanding the elementary relations of
communication are so complicated that the problem of dealing with a
paradox is not as severe as you might think.

We are able to abstract and use those abstractions in processes that
can be likened to extrapolation because we have to be able to do that.

I don't think the problems of a self-referential paradox are
significantly more difficult than the problems of general reference.
Not only are there implicit boundaries, some of which have to be
changed in an instant as the conversation develops, there are also
multiple levels of generalization in conversation.  These multiple
levels of generalization are not simple or even reliably constructive
(reinforcing).  They are complex and typically contradictory.  In my
opinion they can be understood because we are somehow able to access
different kinds of relevant information necessary to decode them.

This is one reason why I think that the Relevancy Problem of the Frame
Problem is the primary problem of contemporary AI.  We need to be able
to access relevant information even though the appropriate information
may change dramatically in response to the most minor variations in
the comprehension of a sentence or of a situation.

I didn't write much about the self-referential paradox because I think
it is somewhat trivial. Although an AI program will be 'logical' in
the sense of the logic of computing machinery, that does not mean that
a computer program has to be strictly logical.  This means that
thinking can contain errors, but that is not front page news.  Man
bites dog!  Now that's news.

Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Abram Demski
That made more sense to me. Responses follow.

On Fri, Aug 15, 2008 at 10:57 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 5:05 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> But, I am looking for a system that "is" me.
>
> Your me, like everyone else's me, has its limitations.  So there is a
> difference between the potential of the system and the actual system.
> This point of stressing potentiality rather than casually idealizing
> all-inclusiveness, which I originally mentioned only out of technical
> feasibility, is significant because you are applying the idea to
> yourself.  You would not be able to achieve what you have achieved if
> you were busy trying to achieve what all humanity has achieved.  So,
> even the potential of the system is dependent on what has already been
> achieved.  That is, the true potential of the system (of one's
> existence or otherwise) is readjusted as the system evolves.  So a
> baby's potential is not greater than ours; the potential of his or her
> potential is. (This even makes greater sense when you consider the
> fact that individual potential must be within a common range.)
>
>> My only conclusion is that we are talking past each other because we
>> are applying totally different models to the problem.
>>
>> When I say "logic", I mean something quite general-- an ideal system
>> of mental operation. "Ideal" means that I am ignoring computational
>> resources.
>
> That is an example of how your ideal has gone beyond the feasible
> potential of an individual.

The idea is exactly like saying "computer" in the mathematical sense.
The theory of computation pretends that unbounded memory and time are
available. So, I feel a bit like I am talking about some issue in
lambda calculus and you are trying to tell me that the answer depends
on whether the processor is 32 bit or 64 bit. You do not think we can
abstract away from a particular person?

>
>> I  think what you are saying is that we can apply different
>> logics to different situations, and so we can at one moment operate
>> within a logic but at the next moment transcend that logic. This is
>> all well and good, but that system of operation in and of itself can
>> be seen to be a larger logical system, one that manipulates smaller
>> systems. This larger system, we cannot transcend; we *are* that
>> system.
>>
>> So, if no such logic exists, if there is no one "big" logic that
>> transcends all the "little" logics that we apply to individual
>> situations, then it makes sense to conclude that we cannot exist.
>> Right?
>> --Abram
>
> Whaaa?
>
> You keep talking about things like fantastic resources but then end up
> claiming that your ideal somehow proves that we cannot exist.  (Please
> leave me out of your whole non-existence thing by the way. I like
> existing and hope to continue at it for some time. I recommend that
> you take a similar approach to the problem too.)

OK, to continue the metaphor: I am saying that a sufficient theory of
computation must exist, because actual computers exist. At the very
least, for my mathematical ideal, I could simply take the best
computer around. This would not lead to a particularly satisfying
theory of computation, but it shows that if such an ideal were totally
impossible, we would have to be in a universe in which no computers
existed to serve as minimal examples.

>
> If it weren't for your conclusion I would be thinking that I
> understand what you are saying.
> The boundary issues of logic or of other bounded systems are not
> absolute laws that we have to abide by all of the time, they are
> designed for special kinds of thinking.  I believe they are useful
> because they can be used to illuminate certain kinds of situations so
> spectacularly.

What you are saying corresponds to what I called "little" logics, absolutely.

>
> As far as the logic of some kind of system of thinking, or potential
> of thought, I do not feel that the boundaries are absolutely fixed for
> all problems.  We can transcend the boundaries because they are only
> boundaries of thought.  We can for example create connections between
> separated groups of concepts (or whatever) and if these new systems
> can be used to effectively illuminate the workings of some problem and
> they require some additional boundaries in order to avoid certain
> errors, then new boundaries can be constructed for them over or with
> the previous boundaries.

I see what you are thinking now. The "big" logic that we use changes
over time as we learn, so as humans we escape Tarski's proof by being
an ever-moving target rather than one fixed logical system. However,
if this is the solution, there is a challenge that must be met: how,
exactly, do we change over time? Or, ideally speaking, how *should* we
change over time to optimally adapt? The problem is, *if* this
question is answered, then the answer provides another "big" logic for
Tarski's proof to aim at-- we are no longer a moving target.
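
For reference, the "Tarski's proof" being aimed at is the undefinability
theorem; a compact standard statement (not from the thread, written in
LaTeX) is:

\[
\neg\exists\, \mathrm{True}(x):\quad
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
\leftrightarrow \varphi
\ \text{ for every arithmetical sentence } \varphi ,
\]

with \(\ulcorner \varphi \urcorner\) the Goedel number of \(\varphi\).
Any fixed "big" logic expressive enough to arithmetize its own syntax is
subject to it, which is why a once-and-for-all answer to "how should we
change over time" would re-expose the system to the proof.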

This pr

Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
On Thu, Aug 14, 2008 at 5:05 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> But, I am looking for a system that "is" me.

Your me, like everyone else's me, has its limitations.  So there is a
difference between the potential of the system and the actual system.
This point of stressing potentiality rather than casually idealizing
all-inclusiveness, which I originally mentioned only out of technical
feasibility, is significant because you are applying the idea to
yourself.  You would not be able to achieve what you have achieved if
you were busy trying to achieve what all humanity has achieved.  So,
even the potential of the system is dependent on what has already been
achieved.  That is, the true potential of the system (of one's
existence or otherwise) is readjusted as the system evolves.  So a
baby's potential is not greater than ours; the potential of his or her
potential is. (This even makes greater sense when you consider the
fact that individual potential must be within a common range.)

> My only conclusion is that we are talking past each other because we
> are applying totally different models to the problem.
>
> When I say "logic", I mean something quite general-- an ideal system
> of mental operation. "Ideal" means that I am ignoring computational
> resources.

That is an example of how your ideal has gone beyond the feasible
potential of an individual.

> I  think what you are saying is that we can apply different
> logics to different situations, and so we can at one moment operate
> within a logic but at the next moment transcend that logic. This is
> all well and good, but that system of operation in and of itself can
> be seen to be a larger logical system, one that manipulates smaller
> systems. This larger system, we cannot transcend; we *are* that
> system.
>
> So, if no such logic exists, if there is no one "big" logic that
> transcends all the "little" logics that we apply to individual
> situations, then it makes sense to conclude that we cannot exist.
> Right?
> --Abram

Whaaa?

You keep talking about things like fantastic resources but then end up
claiming that your ideal somehow proves that we cannot exist.  (Please
leave me out of your whole non-existence thing by the way. I like
existing and hope to continue at it for some time. I recommend that
you take a similar approach to the problem too.)

If it weren't for your conclusion I would be thinking that I
understand what you are saying.
The boundary issues of logic or of other bounded systems are not
absolute laws that we have to abide by all of the time, they are
designed for special kinds of thinking.  I believe they are useful
because they can be used to illuminate certain kinds of situations so
spectacularly.

As far as the logic of some kind of system of thinking, or potential
of thought, I do not feel that the boundaries are absolutely fixed for
all problems.  We can transcend the boundaries because they are only
boundaries of thought.  We can for example create connections between
separated groups of concepts (or whatever) and if these new systems
can be used to effectively illuminate the workings of some problem and
they require some additional boundaries in order to avoid certain
errors, then new boundaries can be constructed for them over or with
the previous boundaries.

As far as I can tell, the kind of thing that you are talking about
would be best explained by saying that there is only one kind of
'logical' system at work, but it can examine problems using
abstraction by creating theoretical boundaries around the problem.
Why does it have to be good at that?  Because we need to be able to
take information about a single object like a building without getting
entangled in all the real-world interrelations. We can abstract
because we have to.

I see that you weren't originally talking about whether "you" could
exist, you were originally talking about whether an AI program could
exist.

To be honest, I don't see how my idea of multiple dynamic bounded systems
fails to provide an answer to your question.  The problem with
multiple dynamic bounded systems is that it can accept illusory
conclusions.  But these can be controlled, to some extent, by
examining a concept from numerous presumptions and interrelations and
by examining the results of these points of view as they can be interrelated
with other concepts including some of which are grounded on the most
reliable aspects of the IO data environment.

Jim Bromer




Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-15 Thread Matt Mahoney
Mike Tintner <[EMAIL PROTECTED]> wrote:


> http://www.wired.com/wired/archive/8.02/warwick.html


An interesting perspective. Instead of brain tissue controlling a machine, we 
have a brain wanting to be controlled by a machine.
 -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-15 Thread Mike Tintner

http://www.wired.com/wired/archive/8.02/warwick.html





Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-15 Thread Bob Mottram
2008/8/15 Ed Porter <[EMAIL PROTECTED]>:
> The training issue is a real one, but presumably over time electronics that
> would be part of these wetware/hardware combination brains could be
> developed to train the wetware/hardware machines --- under the control
> and guidance of external systems at the factory --- relatively rapidly, so that
> in say one year the brains know as much as a bright teenager.  Since all
> this training could take place in parallel, it would not be that great of an
> overall cost.
>
> I personally would prefer all electronic brains where you could have much
> more of an ability to examine virtually any part of the system.  But it is
> possible that it will be quite a while before we can develop electronics as
> cheap and as power efficient as neurons. (Of course it is also possible that
> electronic advances will happen so fast that there is no real reason for
> using wetware.)


I doubt that breeding rats and then training their brains under
controlled conditions for a whole year is going to be a cheap
exercise.  You need appropriate facilities and equipment, food,
probably expert technicians to remove neural tissue overseen by
ethics committees and all the other accompanying bureaucracy.

Compared to a pure machine solution, biological or partially biological
(cyborgian) systems are going to be expensive and time-consuming to
produce.  A year's worth of training can be transmitted between
machines without loss of information in a very small amount of time.

