Re: [agi] Identity abstraction

2009-01-10 Thread Harry Chesley
Thanks for the more specific answer. It was the most illuminating of the
ones I've gotten. I realize that this isn't really the right list for
questions about human subjects experiments; just thought I'd give it a try.

Richard Loosemore wrote:
 Harry Chesley wrote:
 On 1/9/2009 9:45 AM, Richard Loosemore wrote:
  There are certainly experiments that might address some of your
  concerns, but I am afraid you will have to acquire a general
  knowledge of what is known, first, to be able to make sense of what
  they might tell you.  There is nothing that can be plucked and
  delivered as a direct answer.

 I was not asking for a complete answer. I was asking for experiments
 that shed light on the area. I don't expect a mature answer, only
 more food for thought. Your answer that there are such experiments,
 but you're not going to tell me what they are is not a useful one.
 Don't worry about whether I can digest the experimental context.
 Maybe I know more than you assume I do.

 What I am trying to say is that you will find answers that are
 partially relevant to your question scattered across about a third of
 the chapters of any comprehensive introduction to cognitive
 psychology.  And then, at a deeper level, you will find something of
 relevance in numerous more specialized documents.  But they are so
 scattered that I could not possibly start to produce a comprehensive
 list!

 For example, the easiest things to mention are object perception
 within a developmental psychology framework (see a dev psych textbook
 for entire chapters on that);  the psychology of concepts will
 involve numerous experiments that require judgements of whether
 objects are same or different (but in each case the experiment will
 not be focussed on answering the direct question you might be
 asking);  the question of how concepts are represented sometimes
 involves the dialectic between the prototype and exemplar camps
 (see book by Smith and Medin), which partially touches on the
 question;  there are discussions in the connectionist literature about
 the problem of type-token discrimination (see Norman's chapter at the
 end of the second PDP volume - McClelland and Rumelhart 1986/7);  then
 there is the neuropsychology of naming... (see books on psycholinguistics
 like the one written by Trevor Harley for a comprehensive introduction
 to that area);  there are also vast numbers of studies to do with
 recognition of abstract concepts using neural nets (you could pick up
 three or four papers that I wrote in the 1990s which center on the
 problem of extracting the spelled form of words using phoneme clusters
 if you look at the publications section of my website, susaro.com, but
 there are thousands of others).

 Then, you could also wait for my own textbook (in preparation) which
 treats the formation of concepts and the mechanisms of abstraction
 from the Molecular perspective.


 These are just examples picked at random.  None of them answer your
 question, they just give you pieces of the puzzle, for you to assemble
 into a half-working answer after a couple of years of study ;-).


 Anyone who knew the field would say, in response to your inquiry, "But
 what exactly do you mean by the question?", and they would say
 this because your question touches upon about six or seven major areas
 of inquiry, in the most general possible terms.





 Richard Loosemore






Re: [agi] Identity abstraction

2009-01-09 Thread Harry Chesley

On 1/9/2009 9:28 AM, Vladimir Nesov wrote:

 You need to name those parameters in a sentence only because it's
 linear, in a graph they can correspond to unnamed nodes. Abstractions
 can have structure, and their applicability can depend on how their
 structure matches the current scene. If you retain in a scene graph
 only relations you mention, that'd be your abstraction.


I'm not sure if you mean a graph in the sense of nodes and edges, or in 
a visual sense.


If the former, any implementation requires that the edges identify or 
link somehow to the appropriate nodes -- so how is this done in humans 
and what experiments reveal it? If the latter, the location in space of 
the node in the abstract graph is effectively its identity -- are you 
suggesting that human abstraction is always visual, and if so what 
experimental evidence is there?
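
(To be concrete about what I mean by the implementation side, here's a toy
sketch in Python -- purely my own illustration, not a claim about how brains
do it -- in which nodes carry no names at all, and an edge identifies its
target by object reference alone:

class Node:
    def __init__(self):
        self.edges = []                  # outgoing links to other Node objects

    def link(self, other, relation):
        self.edges.append((relation, other))

# Two unnamed nodes; the edge picks out its target purely by reference.
a = Node()
b = Node()
a.link(b, "part-of")

assert a.edges[0][1] is b                # identity is just object identity

The question is what plays the role of that reference in humans.)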


I don't mean to include or exclude your theory of abstraction, but the 
question is whether you know of experiments that shed light on this area.






Re: [agi] Identity abstraction

2009-01-09 Thread Harry Chesley

On 1/9/2009 9:45 AM, Richard Loosemore wrote:

 There are certainly experiments that might address some of your
 concerns, but I am afraid you will have to acquire a general
 knowledge of what is known, first, to be able to make sense of what
 they might tell you.  There is nothing that can be plucked and
 delivered as a direct answer.


I was not asking for a complete answer. I was asking for experiments 
that shed light on the area. I don't expect a mature answer, only more 
food for thought. Your answer that there are such experiments, but 
you're not going to tell me what they are is not a useful one. Don't 
worry about whether I can digest the experimental context. Maybe I know 
more than you assume I do.






Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Harry Chesley

On 12/3/2008 8:11 AM, Richard Loosemore wrote:

 Am I right in thinking that what these people:



http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html


 are saying is that memories can be stored as changes in the DNA
 inside neurons?

 If so, that would upset a few apple carts.


Yes, but it obviously needs a lot more confirmation first. :-)


 Would it mean that memories (including cultural adaptations) could be
 passed from mother to child?


No. As far as I understand it, they are proposing changes to the DNA in 
the neural cells only, so it wouldn't be passed on. And I would expect 
that the changes are specific to the neural structure of the subject, so 
even if you moved the changes to DNA in another subject, it wouldn't work.



 Implication for neuroscientists proposing to build a WBE (whole brain
 emulation):  the resolution you need may now have to include all the
 DNA in every neuron.  Any bets on when they will have the resolution
 to do that?


No bets here. But they are proposing that elements are added onto the 
DNA, not that changes are made in arbitrary locations within the DNA, so 
it's not /quite/ as bad as you suggest.






Re: [agi] To what extent can our minds experience the consciousness of external reality?

2008-11-21 Thread Harry Chesley
Ben Goertzel wrote:
 ...my own belief that consciousness is the underlying
 reality, and physical and computational systems merely *focus* this
 consciousness in particular ways, is also not something that can be
 proven empirically or logically...

For what it's worth, let me throw out a random thought I had some time
ago regarding consciousness. It's half formed and barely alive, so be
nice to it, but it resonates (for me at least) with what Ben has said:

You can think of information as being orthogonal to matter. Matter is
used to represent or embody information, but it is not information. A
pile of rocks may represent some quantity -- say if you add one every
time someone comes into the room -- or it may be just a random pile. In
the same way, could consciousness be orthogonal to information?

Without a lot more work, the idea seems just so much half-assed
pseudo-science, and I haven't had time/energy to work it out further.
But I thought people here might have ideas -- either to flesh it out or
give it a quick and merciful death.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Richard Loosemore wrote:
 Harry Chesley wrote:
 Richard Loosemore wrote:
 I completed the first draft of a technical paper on consciousness
 the other day.   It is intended for the AGI-09 conference, and it
 can be found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


 One other point: Although this is a possible explanation for our
 subjective experience of qualia like "red" or "soft", I don't see
 it explaining "pain" or "happy" quite so easily. You can
 hypothesize a sort of mechanism-level explanation of those by
 relegating them to the older or lower parts of the brain (i.e.,
 they're atomic at the conscious level, but have more effects at the
 physiological level (like releasing chemicals into the system)),
 but that doesn't satisfactorily cover the subjective side for me.

 I do have a quick answer to that one.

 Remember that the core of the model is the *scope* of the analysis
 mechanism.  If there is a sharp boundary (as well there might be),
 then this defines the point where the qualia kick in.  Pain receptors
 are fairly easy:  they are primitive signal lines.  Emotions are, I
 believe, caused by clusters of lower brain structures, so the
 interface between lower brain and foreground is the place where
 the foreground sees a limit to the analysis mechanisms.

 More generally, the significance of the foreground is that it sets
 a boundary on how far the analysis mechanisms can reach.

 I am not sure why that would seem less satisfactory as an explanation
 of the subjectivity.  It is a "raw feel", and that is the key idea,
 no?

My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to "searing hot", then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-18 Thread Harry Chesley
Trent Waddington wrote:
 As I believe the "is that consciousness?" debate could go on forever,
 I think I should make an effort here to save this thread.

 Setting aside the objections of vegetarians and animal lovers, many
 hard nosed scientists decided long ago that jamming things into the
 brains of monkeys and the like is justifiable treatment of creatures
 suspected by many to have similar experiences to humans.

 If you're in agreement with these practices then I think you should
 be in agreement with any and all experimentation on simulated
 networks of complexity up to and including these organisms.

Yes, my intent on starting this thread was not to define consciousness,
but rather to ask: how do we make ethical choices with regard to AGI
before we are able to define it?

I agree with your points above. However, I am not entirely sanguine
about animal experiments. I accept that they're sometimes OK, or at
least the lesser of two evils, but I would prefer to avoid even that
level of compromise when experimenting on AGIs. And, given that we have
the ability to design the AGI experimental subject -- as opposed to
being stuck with a pre-designed animal -- it /should/ be possible.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Mark Waser wrote:
 My problem is if qualia are atomic, with no differentiable details,
 why do some feel different than others -- shouldn't they all be
 separate but equal? "Red" is relatively neutral, while "searing
 hot" is not. Part of that is certainly lower brain function, below
 the level of consciousness, but that doesn't explain to me why it
 feels qualitatively different. If it was just something like
 increased activity (franticness) in response to "searing hot", then
 fine, that could just be something like adrenaline being pumped
 into the system, but there is a subjective feeling that goes beyond
 that.

 Maybe I missed it but why do you assume that because qualia are
 atomic that they have no differentiable details?  Evolution is, quite
 correctly, going to give pain qualia higher priority and less ability
 to be shut down than red qualia.  In a good representation system,
 that means that "searing hot" is going to be *very* whatever and very
 tough to ignore.

I thought that was the meaning of "atomic" as used in the paper. Maybe I
got it wrong.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote:
 Harry Chesley wrote:
 A related question: How do you explain the fact that we sometimes
 are aware of qualia and sometimes not? You can perform the same
 actions paying attention or on auto pilot. In one case, qualia
 manifest, while in the other they do not. Why is that?

 I actually *really* like this question:  I was trying to compose an
 answer to it while lying in bed this morning.

 ...

 So when I don't remember anything about those towns, from a few
 minutes ago on my road trip, is it because (a) the attentional
 mechanism did not bother to lay down any episodic memory traces, so I
 cannot bring back the memories and analyze them, or (b) that I was
 actually not experiencing any qualia during that time when I was on
 autopilot?

 I believe that the answer is (a), and that IF I had stopped at any
 point during the observation period and thought about the experience
 I just had, I would be able to appreciate the last few seconds of
 subjective experience.

 ...

 Does this seem to make sense so far, though?

It sounds reasonable. I would suspect (a) also, and that the reason is
that these are circumstances where remembering is a waste of resources,
either because the task being done on auto-pilot is so well understood
that it won't need to be analyzed later, and/or because there is another
task in the works at the same time that has more need for the memory
resources.

Note that your supposition about remembering the last few seconds if
interrupted during an auto-pilot task is experimentally verifiable
fairly easily.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote:

 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft", I don't see it
explaining "pain" or "happy" quite so easily. You can hypothesize a sort
of mechanism-level explanation of those by relegating them to the older
or lower parts of the brain (i.e., they're atomic at the conscious
level, but have more effects at the physiological level (like releasing
chemicals into the system)), but that doesn't satisfactorily cover the
subjective side for me.





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
This thread has gone back and forth several times concerning the reality 
of consciousness. So at the risk of extending it further unnecessarily, 
let me give my view, which seems self-evident to me, but I'm sure isn't 
to others (meaning they may reasonably disagree with me, not that 
they're idiots (though I'm open to that possibility too)).


1) I'm talking about the hard question of consciousness.

2) It is real, as it clearly influences our thoughts. On the other hand, 
though it feels subjectively like it is qualitatively different from 
other aspects of the world, it probably isn't (but I'm open to being 
wrong here).


3) We cannot currently define or measure it, but some day we will.

4) Until that day comes, it's really hard to have a non-trivial 
discussion of it, and too easy to fly off into wild theories concerning it.


An analogy: How do you know that humans have blood flowing through their 
veins? Looking at them, you can't tell. Dissecting them after death, you 
can't tell -- they have blood, but it's not moving. Cutting them while 
alive produces spurts of blood, but that could be just because the body 
is generally pressurized, not because there's any on-going flow through 
the veins. It requires observing the internals of the body while alive 
to determine that blood actually flows all the time. And it also helps a 
lot to have a model of the circulatory system that includes the heart as 
a pump, etc.


With consciousness, we're at the pre-scientific stage, because we know 
so little about cognition that we're not yet able to open it up and 
observe it as it operates. This will change, hopefully in my lifetime.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote:
 2) It is real, as it clearly influences our thoughts. On the other
 hand, though it feels subjectively like it is qualitatively
 different from other aspects of the world, it probably isn't (but
 I'm open to being wrong here).

 The correct statement is that you believe it is real. Everybody does.
 Those who didn't, did not pass on their DNA.

No, the correct statement is the one I made. It is real. We have
empirical evidence that it is real since it influences observable actions.

Consciousness *may* be a belief. But we have no empirical evidence for
or against that statement, so it's too early to make blanket statements
like yours.

 3) We cannot currently define or measure it, but some day we will.

 You can define it any time you want, or use the existing common
 definition.

No, you can't define it any way you want. I am talking about a specific
phenomenon that has been observed but not understood. And the
definitions from others that I've seen may allow us to identify shared
experiences of the phenomenon, but don't provide either a good model or
empirical tests, so they're less than I, for one, want in order to say
we've defined it.

 Blood flow can be directly observed, for example, by x-rays during an
 angioplasty. But that isn't the point. Even without direct
 observation, blood flow is supported by a lot of indirect evidence,
 for example, when you inject a drug into a vein it quickly spreads to
 other parts of the body. Even theories for which evidence is harder
 to observe, for example, the existence of fractional electric charges
 in quarks, are accepted because the theory makes predictions that can
 be tested.

So far we're in complete agreement. Concluding that blood flows requires
observation which requires technology applicable to the phenomenon
(x-rays, needles, tests to see if the drug spread, etc.).

 But there are absolutely no testable predictions that can
 be made from a theory of consciousness.

But here you suddenly jump from saying we have no empirical tests to
saying there can be no empirical tests. This makes no sense to me.

Even if consciousness is only a belief with no real substance, there are
testable predictions that follow from its existence, and perhaps tests
to determine that it is limited to being only a belief.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote:
 If you don't define consciousness in terms of an objective test, then
 you can say anything you want about it.

We don't entirely disagree about that. An objective test is absolutely
crucial. I believe where we disagree is that I expect there to be such a
test one day, while you claim there can never be.

(I say don't /entirely/ agree because I think we can talk about things
that are not completely defined -- in this case, I believe most people
reading this do know the subjective feeling of consciousness and
recognize that that's what I mean. A scientific exploration requires a
more thorough definition, but we can still have some meaningful
discourse without it, though we do risk running off into wildly
unsubstantiated theories when we do.)





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley

On 11/4/2008 2:53 PM, YKY (Yan King Yin) wrote:

 Personally, I'm not making an AGI that has emotions...


So you take the view that, despite our minimal understanding of the 
basis of emotions, they will only arise if designed in, never 
spontaneously as an emergent property? So you can safely ignore the 
ethics question.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley

On 11/4/2008 3:31 PM, Matt Mahoney wrote:

 To answer your (modified) question, consciousness is detected by the
 activation of a large number of features associated with living
 humans. The more of these features are activated, the greater the
 tendency to apply ethical guidelines to the target that we would
 normally apply to humans. For example, monkeys are more like humans
 than mice, which are more like humans than insects, which are more
 like humans than programs. It does not depend on a single feature.


If I understand correctly, you're saying that there is no such thing as 
objective ethics, and that our subjective ethics depend on how much we 
identify/empathize with another creature. I grant this as a possibility, 
in which case I guess my question should be viewed as subjective. I.e., 
how do I tell when something is sufficiently close to me, without being 
able to see all the features directly, that I need to worry about the 
ethics subjectively?


Let me give an example: If I take a person and put them in a box, so 
that I can see none of their features or know how similar they are to 
me, I still consider it unethical to conduct certain experiments on 
them. This is because I believe those important similar features are 
there, I just can't see them.


Similarly, I believe at some point in AGI development, features similar 
to my own mind will arise, but since they will be obscured by a very 
different (and incomplete) implementation from my own, they may not be 
obvious, even though I believe they are there.


So although you've changed the phrasing of the question to a degree, the 
question remains.


(Note: You could argue that ethics, being subjective, are irrelevant, 
and while that may be true, I'm too squeamish to take that view, which 
also leads to allowing arbitrary experiments on people.)






[agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Harry Chesley
The question of when it's ethical to do AGI experiments has bothered me
for a while. It's something that every AGI creator has to deal with
sooner or later if you believe you're actually going to create real
intelligence that might be conscious. The following link is a blog essay
on the subject, which describes my current thinking, such
as it is. There's clearly much more that needs to be worked out.
Comments, either here or at the blog, would be appreciated.

http://www.mememotes.com/meme_motes/2008/11/ethical-experimentation-on-cognitive-entities.html





Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Harry Chesley

On 10/18/2008 9:27 AM, Mike Tintner wrote:

 What rational computers can't do is find similarities between
 disparate, irregular objects - via fluid transformation - the essence
 of imagination.


So you don't believe that this is possible by finding combinations of 
abstract shapes (lines, squares, circles, etc.) within a scene and 
mapping or spatially transforming those shapes? This was my 
understanding of how human vision works. I had thought that was fairly 
well established, but it's not my area -- personally, I'm betting that a 
purely symbolic approach is workable.


And I may be missing the importance of your emphasis on "fluid". I 
generally find that people think in more discrete jumps -- for example, 
the jump to Italy being a boot, rather than a series of smaller 
transformational steps from the map to the abstraction.






[agi] Reasoning by analogy recommendations

2008-10-17 Thread Harry Chesley
I find myself needing to more thoroughly understand reasoning by 
analogy. (I've read/thought about it to a degree, but would like more.) 
Anyone have any recommendation for books and/or papers on the subject?


Thanks.





Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Harry Chesley

On 10/15/2008 8:01 AM, Ben Goertzel wrote:

 What are your thoughts on this?


A narrower focus of the list would be better for me personally.

I've been convinced for a long time that computer-based AGI is possible, 
and am working toward it. As such, I'm no longer interested in arguments 
about whether it is feasible or not. I skip over those postings in the list.


I also skip over postings which are about a pet theory rather than a 
true reply to the original post. They tend to have the form "your idea x 
will not work because it is in opposition to my theory y, which states 
[insert complex description here]." Certainly one's own ideas and 
theories should contribute to a reply, but they should not /be/ the reply.


And the last category that I skip are discussions that have gone far 
into an area that I don't consider relevant to my own line of inquiry. 
But I think those are valuable contributions to the list, just not of 
immediate interest to me. Like a typical programmer, I tend to 
over-focus on what I'm working on. But what I find irrelevant may be 
spot on for someone else, or for me at some other time.






[agi] Context

2008-08-28 Thread Harry Chesley
I think we would all agree that context is crucial to understanding. 
"Kill them!" means something quite different if you're at a soccer game, 
in a military battle, or playing a FPS video game.


But in a pragmatic, "let's implement it" sense, I'm not as clear what 
"context" means. Let me try to enumerate some options and see if anyone 
has any dramatic insights (or pointers to existing work).


1) Context = the immediate container. This is the simplest to implement, 
where the context is simply whatever the object (or action) is directly 
within. There are times when the relevant context seems to be at a 
higher level, such as the reality/fantasy distinction between the battle 
and FPS above. That could be resolved by variants on the lower level 
containers that inherit from /their/ containers -- not a "fight" but a 
"fantasy fight". But this proliferation of container variants seems 
inefficient and over-complex.


2) Context = the highest level container. Clearly sometimes context is 
not at the highest level only, but it's all a part of the top level. But 
this, in a sense, doesn't solve anything. It just says that everything 
is the context.


3) Context = a middle container. This is similar to nouns, in that for 
any given object, there is usually a middle-level "is-a" noun that we 
prefer to use (e.g., "dog" rather than "mammal"). Maybe there is 
similarly a middle level container that's the preferred context. But 
this has many of the same problems as both 1 and 2.


4) Context = a search up the container hierarchy. You just look upwards 
until you find the relevant context. But this pushes a lot of the 
semantic complexity into "find the relevant context", without answering 
what that really means. (A rough sketch of this option appears after the 
list.)


5) Context = depends on the thing. Each distinct object may have a 
different context. There's no one-size-fits-all context. But this gives 
very little guidance on how to implement context.


6) Context = multiple items. A given object may have multiple contexts. 
For example, I may be a father in the context of my daughter, but a 
husband in the context of my wife. But this option is not mutually 
exclusive with the others, so you still have to pick one of them as well.
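
Here's a rough Python sketch of option 4, just to make it concrete. The names 
and the relevance test are invented, and all the real difficulty is hidden 
inside the is_relevant predicate -- which is exactly the problem noted above:

class Thing:
    def __init__(self, name, container=None):
        self.name = name
        self.container = container       # parent in the containment hierarchy

def find_context(thing, is_relevant):
    # Walk upward from `thing` until some container satisfies is_relevant.
    node = thing.container
    while node is not None:
        if is_relevant(node):
            return node
        node = node.container
    return None                          # no relevant context found

world = Thing("fantasy world")
game = Thing("FPS video game", container=world)
fight = Thing("fight", container=game)
shout = Thing("Kill them!", container=fight)

print(find_context(shout, lambda c: "game" in c.name).name)   # FPS video game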


Thoughts?





Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-09 Thread Harry Chesley

On 8/9/2008 12:43 AM, Brad Paulsen wrote:

 Mike Tintner wrote:
 That illusion is partly the price of using language, which
 fragments into pieces what is actually a continuous common sense,
 integrated response to the world.

 Excellent observation.  I've said it many times before: language is
 analog human experience digitized.  And every time I do, people look
 at me funny.


I dunno about that. When I walk into my dining room, I don't see a 
continuous experience, I see a table and chairs and plates, etc. I clump 
the world into objects that have discrete boundaries. Isn't that 
digitization in the sense you mean?


I think of language more as serializing something that's parallel 
internally, and saving communications bandwidth by supplying enough 
information to uniquely identify an already known concept rather than 
fully describing it -- part of which is the use of symbols.
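
A toy sketch of that bandwidth point (my own illustration; the concept table 
is invented): if both sides already share the store, the message only needs 
enough to pick out the entry, not to describe it.

shared_concepts = {
    "chair": {"parts": ["seat", "back", "legs"], "function": "sitting"},
    "table": {"parts": ["top", "legs"], "function": "holding things"},
}

def transmit(concept_name):
    return concept_name                  # the utterance is just the symbol

def receive(symbol):
    return shared_concepts[symbol]       # expanded from the receiver's own store

assert receive(transmit("chair"))["function"] == "sitting"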


As a side note: There's some evidence that dolphins communicate by 
making sounds that imitate what their sonar would return. It's somewhat 
equivalent to me being able to wave my hands and make an image appear in 
the air. Thus there's no need for symbols, because they can reproduce 
the sensory input of the original object. If it had been easier to do 
the same thing in our sensory environment (vision rather than sonar), we 
might never have evolved symbolic language and all that led to.






Re: [agi] Groundless reasoning

2008-08-07 Thread Harry Chesley

James Ratcliff wrote:

 Every AGI, but the truly most simple AI must run in a simulated
 environment of some sort.


Not necessarily, but in most cases yes. To give a counter example, a 
human scholar reads Plato and publishes an analysis of what he has read. 
There is no interaction with the environment in the sense I believe you 
mean -- there is input and output, but the two are disconnected, and the 
output doesn't affect the input -- yet it's clearly a human-level 
intellectual activity. But interacting with an environment is often more 
interesting.



 There must be structure to its internal information nodes, some level
 of hierarchy for storage and usage correct? Many and most nodes will
 contain base nodes such as color or weight or position. How can a
 network be created without these?  The AGI may not have direct
 experential sampling of these concepts thru an input device, but the
 concepts must still be there.


Three points: 1) The main thing I was arguing is that the base nodes do 
not need to be different from the rest, other than their position in the 
network. There is no need for the equivalent of software primitive 
functions. 2) A hierarchical network needs base nodes, but a graph does 
not. It can be circular. 3) There are no true primitive concepts in the 
real world. Or rather, primitives only exist within a given perspective; 
you can change the perspective and define the previous primitives in 
terms of other concepts. Color seems primitive from a vision 
perspective, but if you change to a physics perspective, you can talk 
about photons; or if you change to a cultural perspective, you can talk 
about it being "warm" or "earthy" or "bold", etc.
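
To illustrate the circularity point, here's a toy sketch (the particular 
definitions are invented): every concept below is defined only in terms of 
the others, there is no base layer, and yet you can still run relational 
queries over the graph.

definitions = {
    "color":    ["property", "light"],
    "light":    ["photon", "color"],
    "photon":   ["particle", "light"],
    "particle": ["matter", "photon"],
    "property": ["particle", "color"],
    "matter":   ["particle", "property"],
}

def related(a, b, max_hops=3):
    # Is concept b reachable from a within a few definitional hops?
    frontier, seen = {a}, set()
    for _ in range(max_hops):
        frontier = {n for c in frontier for n in definitions.get(c, [])} - seen
        if b in frontier:
            return True
        seen |= frontier
    return False

print(related("color", "matter"))        # True, despite the circularity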



 Data sources: But there is one sense in which a system must be
 grounded to provide useful results for the real world. The
 connections between concepts, and the statistics regarding those
 connections need to come from the real world. A dictionary is
 circular, but the connections between the nodes are set based on
 corresponding connections in the external world. Or to put that
 another way, you can't reason about something you have no data about.

 This seems to contradict the second notion.


The point I was trying to make is that the internal data must be 
influenced by the real world, but that's different than having to have a 
direct, primitive connection to the world. If by "grounded" you mean 
"influenced by the world", then yes, an AI needs to be grounded to reason 
about the world. I was concerned with a definition that requires direct 
connections from the data to the world, which is not needed.






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley

Terren Suydam wrote:

 Harry,

 --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
 I'll take a stab at both of these...

 The Chinese Room to me simply states that understanding cannot be
 decomposed into sub-understanding pieces. I don't see it as
 addressing grounding, unless you believe that understanding can
 only come from the outside world, and must become part of the
 system as atomic pieces of understanding. I don't see any reason to
 think that, but proving it is another matter -- proving negatives
 is always difficult.

 The argument is only implicitly about the nature of understanding. It
 is explicit about the agent of understanding. It says that something
 that moves symbols around according to predetermined rules - if
 that's all it's doing - has no understanding. Implicitly, the
 assumption is that understanding must be grounded in experience, and
 a computer cannot be said to be experiencing anything.


But it's a "preaching to the choir" argument: Is there anything more to 
the argument than the intuition that automatic manipulation cannot 
create understanding? I think it can, though I have yet to show it.


Take it from another perspective: Is it possible to make a beer can out 
of atoms? An aluminum atom is in no way a beer can. It doesn't look like 
one. It can't hold beer. You can't drink from it. Perhaps the key aspect 
of a beer can is "containment". An atom has no "containment". So clearly 
no collection of atoms can invoke "containment".



 It really helps here to understand what a computer is doing when it
 executes code, and the Chinese Room is an analogy to that which makes
 a computer's operation expressible in terms of human experience -
 specifically, the experience of incomprehensible symbols like Chinese
 ideograms. All a computer really does is apply rules determined in
 advance to manipulate patterns of 1's and 0's. No comprehension is
 necessary, and invoking that at any time is a mistake.


I totally agree with all but the last sentence. The Chinese Room does 
provide a simple but accurate analogy to what a computer does. As such, 
it's excellent for helping non-computer types understand this issue in 
AI/philosophy. But I know of no definition of "comprehension" that is 
impossible to create using a program or a Chinese Room -- of course, I 
don't know /any/ complete definition of "comprehension", and maybe when 
I do, it will have the feature you believe it has.



 Fortunately, that does not rule out embodied AI designs in which the
 agent is simulated. The processor still has no understanding - it
 just facilitates the simulation.


That sounds like agreement with my point (we might be arguing two 
aspects of the same side): If the processor has no understanding, but 
the simulation does, then it must be possible to compose understanding 
using a non-understanding processor.



 As to philosophy, I tend to think of its relationship to AI as
 somewhat the same as alchemy's relationship to chemistry. That is,
 it's one of the origins of the field, and has some valid ideas, but
 it lacks the hard science and engineering needed to get things
 actually working. This is admittedly perhaps a naive view, and
 reflects the traditional engineering distrust of the humanities. I
 state it not to be critical of philosophy, but to give you an idea
 how some of us think of the area.

 As an engineer who builds things everyday (in software), I can
 appreciate the *limits* of philosophy. Spending too much time in that
 domain can lead to all sorts of excesses of thought, castles in the
 sky, etc. However, any good engineer will tell you how important
 theory is in the sense of creating and validating design. And while
 the theory behind rocket science involves physics, chemistry, and
 fluid dynamics (and others no doubt), the theory of AI involves
 information theory, computer science, and philosophy of mind &
 knowledge, like it or not. If you want to be a good AI engineer, you
 better be comfortable with all of the above.


Yes, I don't mean to dismiss philosophy. In some areas of AI, there is 
far more understanding within philosophy than within computer science. 
But there's also lots of angels dancing on pins, so it can take a lot of 
time to find it. In some ways it's like having a domain expert, always a 
good thing when writing a program.






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley

Terren Suydam wrote:

 Harry,

 --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
 But it's a preaching to the choir argument: Is there anything more
 to the argument than the intuition that automatic manipulation
 cannot create understanding? I think it can, though I have yet to
 show it.

 The burden is on you, or anyone pursuing purely logical approaches,
 to show how you can cross the chasm from syntax to semantics - from
 form to meaning. How does your intuition of automatic understanding
 inform a design that does nothing but manipulate symbols? At what
 point does your design cross the boundary from simply manipulating
 data automatically to understanding it? To me, the real problem here
 is projecting your own understanding onto a machine that appears to
 be doing something intelligent.


I guess I'll settle for the pragmatic answer: When (if) I (we) get it 
working and it produces useful real world results, I'll be happy, 
without worrying specifically whether it understands.



 If your intuition is correct, than it's not a big leap to say that
 today's chess programs comprehend chess. Do you agree?


Yes. Though in a much narrower sense than we do, since they have no 
larger context of things like games, competition, war, etc.



 I totally agree with all but the last sentence. The Chinese Room
 does provide a simple but accurate analogy to what a computer does.
 As such, it's excellent for helping non-computer types understand
 this issue in AI/philosophy. But I know of no definition of
 "comprehension" that is impossible to create using a program or a
 Chinese Room -- of course, I don't know /any/ complete definition
 of "comprehension", and maybe when I do, it will have the feature
 you believe it has.

 I think your problems here are due to lack of clarity about what it
 means for some kind of agent to understand something. For starters,
 understanding is done by something - it doesn't exist in a vacuum.
 What is the nature of that something?


Certainly there is lack of clarity about understanding, at least on my 
part. Some day we'll all look back and laugh at our misconceptions about 
the topic.


I'm not at all sure that understanding must be active. It may be that a 
textbook on physics understands physics. But it doesn't do anything 
with that understanding, which is how we're used to seeing understanding 
expressed, so we don't think of it as understanding.



 Yes, I don't mean to dismiss philosophy. In some areas of AI, there
 is far more understanding within philosophy than within computer
 science. But there's also lots of angels dancing on pins, so it can
 take a lot of time to find it. In some ways it's like having a
 domain expert, always a good thing when writing a program.

 Totally agree! But it is so valuable to have your beliefs
 challenged, which is why we should not rely on others to do the heavy
 lifting.


Very true. Which is why this list is great when it sticks to challenging 
rather than insulting. (Which you've done perfectly, BTW.)






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley

Terren Suydam wrote:

 Unfortunately, I have to take a break from the list (why are people
 cheering??).


No cheering here.

Actually, I'd like to say thanks to everyone. This thread has been very 
interesting. I realize that much of it is old hat and boring to some of 
you, but it's been useful to me. Even the parts that I've been over 
before can be interesting to rehash occasionally. And there were some 
variations I hadn't thought about. Also, everyone has been quite civil, 
and there wasn't overly much unnecessary repetition within the thread 
itself. It's the sort of thing I had hoped for initially when joining 
the list.






Re: [agi] Understanding (was: Chinese Room)

2008-08-06 Thread Harry Chesley

Vladimir Nesov wrote:

 I think [having a general (causal) model] is a good fit for
 understanding. When you understand a phenomenon, you can model it
 in many different contexts (environments), including those never
 encountered before neither by the phenomenon, nor by you observing
 the phenomenon. Rote learning doesn't generalize, it just represents
 isolated data points.


Generally, I agree. However, rote learning can be a part of modeling. We 
learn arithmetic by rote, but then apply it to non-rote models, for 
example. Rote learning can provide parts of the model. Taken to extremes 
(as in an AI program), rote can conceivably provide everything.



 On Wed, Aug 6, 2008 at 11:36 PM, Harry Chesley [EMAIL PROTECTED]
 wrote:
 I'm not at all sure that understanding must be active. It may be
 that a textbook on physics understands physics. But it doesn't do
 anything with that understanding, which is how we're used to seeing
 understanding expressed, so we don't think of it as understanding.


 A book is a request for understanding, it can be converted into a
 model if read by someone. I think about meaning as a target of
 optimization process permitted by a given model of environment. When
 you have a question, it creates a process of arriving at an answer,
 and so the meaning of this question is in the shape of your activity
 about finding the answer, in the target of this process. If it is
 expected that a book gets read, it is a part of optimization process
 in the model that anticipates that. If the book is currently burning, and
 is expected to be reduced to ashes, it is not a part of such process
 and it has no understanding or meaning relevant to what's written in
 it.


Here and above, I think you need to distinguish between understanding 
and expressing or using understanding. You seem to be saying that 
understanding exists only when being expressed or used, and I wouldn't 
agree with that, though the point is subtle enough that it probably 
doesn't matter, since unused understanding is functionally irrelevant.


You say a book...can be converted into a model if read by someone, but 
what does reading do other than convert from one representation (printed 
words) to another (neural connections). (It also presumably connects the 
new knowledge to previously acquired knowledge, but that prior knowledge 
/could/ have been in the book too.) The only difference is that the new 
representation is more ready to be used.


Then you get asked a question and the neural mechanism goes to work and 
uses the knowledge to produce an answer showing your understanding. But 
you still had the understanding before you used it, and you still have 
it now even though you're not using that part of your brain at the moment.






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley

Mark Waser wrote:

 The critical point that most people miss -- and what is really
 important for this list (and why people shouldn't blindly dismiss
 Searle) is that it is *intentionality* that defines understanding.
 If a system has goals/intentions and its actions are modified by the
 external world (i.e. it is grounded), then, to the extent to which
 its actions are *effectively* modified (as judged in relation to
 its intentions) is the extent to which it understands.  The most
 important feature of an AGI is that it has goals and that it modifies
 its behavior (and learns) in order to reach them.  The Chinese Room
 is incapable of these behaviors since it has no desires.


I think this is an excellent point, so long as you're careful to define 
"intention" simply in terms of goals that the system is attempting to 
satisfy/maximize, and not in terms of conscious desires. As you point 
out, the former provides a context in which to define understanding and 
to measure it. The latter leads off into further undefined terms and 
concepts -- I mention this rather than just agreeing outright mainly 
because of your use of the word "desire" in the last sentence, which 
/could/ be interpreted anthropomorphically.






Re: [agi] a fuzzy reasoning problem

2008-08-05 Thread Harry Chesley

On 8/5/2008 6:53 AM, YKY (Yan King Yin) wrote:


On 8/5/08, Mike Tintner [EMAIL PROTECTED] wrote:


 Jeez, there is NO concept that is not dependent on context. There is 
NO concept that is not infinitely fuzzy and open-ended in itself, 
period - which is the principal reason why language is and has to be 
grounded (although that needs demonstration).
 
I see...
 
My current approach is to use fuzzy rules to model these concepts.  In 
some cases it seems to work but in other cases it seems problematic...
 
For example I can give a definition of the concept chair:
 
chair(X) :-

X has leg #1,
X has leg #2,
X has leg #3,
X has leg #4,
X has a horizontal seat area,
X has a vertical back area,
leg #1 is connected to seat at position #1,
etc,
etc
 
But what if a chair has one leg missing?  Using fuzzy logic (fuzzy 
AND), the missing leg will result in a fuzzy value close to 0, which 
is not quite right.
 
Probabilistic logic is also inappropriate.  I know *every* time that a 
chair missing a leg is somewhat a chair -- there is no probability 
involved here.
 
YKY


My tendency is to say that you're trying to make a single definition 
cover too much. I think of a chair as being a collection of 
semi-overlapping sets of predicates. You can have a three-legged chair, 
a backless chair (stool), a legless chair (seen 'em at the beach), etc. 
There is no subset of all chairs that defines a chair. Rather, "chair" is 
the collection of predicate sets for different variants of a chair. And 
I would also say that part of "chair" is also a memory of all the actual 
chairs you've encountered.


Which leaves the question of how you categorize a new object that 
doesn't precisely match any prior chair. If it's sufficiently close to 
one of the priors, it's easy. If it has nothing in common with any 
prior, it's easy. In between can be more subtle.
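
Here's a rough sketch of the kind of matching I have in mind (the feature 
names and threshold are invented): each variant is a predicate set, and a 
new object is categorized by its best overlap.

chair_variants = {
    "standard chair": {"seat", "back", "four legs"},
    "stool":          {"seat", "four legs"},
    "beach chair":    {"seat", "back", "fabric sling"},
}

def best_match(observed, variants, threshold=0.6):
    # Score each variant by the fraction of its predicates the object satisfies.
    score, name = max((len(observed & preds) / len(preds), name)
                      for name, preds in variants.items())
    return (name, score) if score >= threshold else (None, score)

# A chair with a missing leg still overlaps strongly with one variant.
print(best_match({"seat", "back", "three legs"}, chair_variants))
# -> ('standard chair', 0.666...)

That way the missing leg lowers the score a little instead of driving it 
to zero the way a fuzzy AND does.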


And, to return to the original topic, part of each chair predicate set 
is the relevant context. (I agree that every meaning has context.)






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Harry Chesley

I'll take a stab at both of these...

The Chinese Room to me simply states that understanding cannot be 
decomposed into sub-understanding pieces. I don't see it as addressing 
grounding, unless you believe that understanding can only come from the 
outside world, and must become part of the system as atomic pieces of 
understanding. I don't see any reason to think that, but proving it is 
another matter -- proving negatives is always difficult.


As to philosophy, I tend to think of its relationship to AI as somewhat 
the same as alchemy's relationship to chemistry. That is, it's one of 
the origins of the field, and has some valid ideas, but it lacks the 
hard science and engineering needed to get things actually working. This 
is admittedly perhaps a naive view, and reflects the traditional 
engineering distrust of the humanities. I state it not to be critical of 
philosophy, but to give you an idea how some of us think of the area.


Terren Suydam wrote:

Abram,

If that's your response then we don't actually agree. 


I agree that the Chinese Room does not disprove strong AI, but I think it is a 
valid critique for purely logical or non-grounded approaches. Why do you think 
the critique fails on that level?  Anyone else who rejects the Chinese Room 
care to explain why?

(I know this has been discussed ad nauseam, but that should only make it easier 
to point to references that clearly demolish the arguments. It should be noted, 
however, that relatively recent advances regarding complexity and emergence 
aren't quite as well hashed out with respect to the Chinese Room. In the 
document you linked to, mention of emergence didn't come until a 2002 reference 
attributed to Kurzweil.)

If you can't explain your dismissal of the Chinese Room, it only reinforces my 
earlier point that some of you who are working on AI aren't doing your homework 
with the philosophy. It's OK to reject the Chinese Room, so long as you have 
arguments for doing so (and if you do, I'm all ears!). But if you don't think 
the philosophy is important, then you're more than likely doing Cargo Cult AI.

(http://en.wikipedia.org/wiki/Cargo_cult)

Terren

--- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:

  

From: Abram Demski [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Tuesday, August 5, 2008, 9:49 PM
Terren,
I agree. Searle's responses are inadequate, and the
whole thought
experiment fails to prove his point. I think it also fails
to prove
your point, for the same reason.

--Abram


[agi] Groundless reasoning

2008-08-04 Thread Harry Chesley
As I've come out of the closet over the list tone issues, I guess I 
should post something AI-related as well -- at least that will make me 
net neutral between relevant and irrelevant postings. :-)


One of the classic current AI issues is grounding, the argument being 
that a dictionary cannot be complete because it is only 
self-referential, and *has* to be grounded at some point to be truly 
meaningful. This argument is used to claim that abstract AI can never 
succeed, and that there must be a physical component of the AI that 
connects it to reality.


I have never bought this line of reasoning. It seems to me that meaning 
is a layered thing, and that you can do perfectly good reasoning at one 
(or two or three) levels in the layering, without having to go all the 
way down. And if that layering turns out to be circular (as it is in a 
dictionary in the pure sense), that in no way invalidates the reasoning 
done.
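
(To make that concrete, a toy sketch in Python: a tiny, fully 
self-referential "dictionary" in which every symbol is defined only by 
links to other symbols, yet transitive reasoning over it still goes 
through. The terms and the single is-a relation are made up for 
illustration.)

# A closed, circular "dictionary": every symbol points only at other symbols.
IS_A = {
    "poodle": "dog",
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
    "animal": "organism",
    "organism": "animal",   # the chain loops back; it never bottoms out
}

def is_a(x: str, y: str) -> bool:
    """Follow is-a links transitively, stopping if a symbol repeats."""
    seen = set()
    while x in IS_A and x not in seen:
        seen.add(x)
        x = IS_A[x]
        if x == y:
            return True
    return False

print(is_a("poodle", "mammal"))   # True -- derived purely from symbol-to-symbol links

(The loop at the bottom doesn't block the derivation; circularity just 
means the chain of definitions never reaches a primitive.)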


My own AI work makes no attempt at grounding, so I'm really hoping I'm 
right here.






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-04 Thread Harry Chesley

Terren Suydam wrote:

 ...
 Without an internal
 sense of meaning, symbols passed to the AI are simply arbitrary data
 to be manipulated. John Searle's Chinese Room (see Wikipedia)
 argument effectively shows why manipulation of ungrounded symbols is
 nothing but raw computation with no understanding of the symbols in
 question.


Searle's Chinese Room argument is one of those things that makes me 
wonder if I'm living in the same (real or virtual) reality as everyone 
else. Everyone seems to take it very seriously, but to me, it seems like 
a transparently meaningless argument.


It's equivalent to saying that understanding cannot be decomposed; that 
you don't get understanding (the external perspective) without using 
understanding (the person or computer inside the room). I don't see any 
reason why this should be true. How to do it is what AI research is all 
about.


To look at it another way, it seems to me that the Chinese Room is 
exactly equivalent to saying AI is impossible. Until we actually get 
AI working, I can't really disprove that statement, but there's no 
reason I should accept it either.


Yet smarter people than I seem to take the Chinese Room completely 
seriously, so maybe I'm just not seeing it.






Re: [agi] Groundless reasoning

2008-08-04 Thread Harry Chesley

Vladimir Nesov wrote:

 It's too fuzzy an argument.


You're right, of course. I'm not being precise, and though I'll try to 
improve on that here, I probably still won't be. But here's my attempt:


There are essentially three types of grounding: embodiment, hierarchy 
base nodes, and pattern/data sources.


Embodiment: I don't think AIs need to have a connection to a real or 
simulated environment. Yes, we get a lot of our information that way, 
and yes, human meaning/understanding probably evolved out of that 
connection initially. But no, AI doesn't require it to do useful thinking.


Hierarchy base nodes: I don't think a hierarchy of concepts or a 
semantic network needs to have a set of base nodes that connect to 
something outside the system (primitives). Meaning arises out of the 
network of connections, and doesn't need some basic unit of meaningful 
nodes.


Data sources: But there is one sense in which a system must be grounded 
to provide useful results for the real world. The connections between 
concepts, and the statistics regarding those connections, need to come 
from the real world. A dictionary is circular, but the connections 
between its nodes are set based on corresponding connections in the 
external world. Or to put that another way, you can't reason about 
something you have no data about.
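
(A hedged sketch in Python of that third sense: the network is still 
just symbols linked to symbols, but the strength of each link is counted 
from observations of the world. The observation list and the counting 
scheme are purely illustrative.)

from collections import Counter
from itertools import combinations

# Stand-in for real-world data: each observation is a set of co-occurring concepts.
observations = [
    {"fire", "smoke", "heat"},
    {"fire", "heat"},
    {"smoke", "grey"},
    {"fire", "smoke"},
]

# The network itself is just symbol-to-symbol links, but each link's weight
# is set by how often the pair co-occurs in the data.
link_strength = Counter()
for obs in observations:
    for a, b in combinations(sorted(obs), 2):
        link_strength[(a, b)] += 1

print(link_strength[("fire", "smoke")])   # 2 -- a number that came from the world, not the graph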


It was the first two senses that I meant when I said an AI doesn't need 
to be grounded.


P.S. You can think of embodiment as an example of hierarchy base 
nodes, or you can think of it as a data source. In the latter case, it 
can be useful (as others have pointed out on the list), but isn't necessary.






Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Harry Chesley
In my experience, online communities are like offline communities: their 
tone and spirit depend on their members. Moderation seldom fixes 
anything, and content-based moderation only works if the community is 
intended to reflect the ideas and values of the moderator. But sometimes 
a respected moderator (and I think Ben is very respected here) can act 
as a sort of father figure, encouraging a particular style of interaction.


I have been a member of many online communities where the interactions 
were friendly, supportive, and productive, and where negative language and 
attitudes were the rare exception. So I don't think that style of 
interaction *has* to be there. It may be that I just need to keep 
looking for such a community of AI researchers. I sure hope it isn't 
inherent in AI work itself -- though the intellectually abstract and 
scientifically unsettled aspects of it do make it the sort of field 
that can attract people who believe they know more than they do and who 
are insecure enough to need to disparage others around them. 
(Personally, I'm not at all sure I know anything, as I've found it's an 
area where I can *so* easily fool myself; and I believe that virtually 
anyone's approach on this list *might* be of great value.)


(Credentials: I've been involved in online communities since the '70s, 
occasionally working as an expert in the field, most recently as manager 
of the Social Computing Group at Microsoft Research, which I left in 
2001 to work on AI.)


Terren Suydam wrote:
Just to throw my 2 cents in here. The short version: if you want to improve the list, look to yourself. Don't rely on moderation. 


If you have something worth posting, post it without fear of rude responses. If 
people are rude, don't be rude back. Resist the urge to fire off the quick 
reply and score points (I often write the inflammatory reply and then delete 
it, just to get it out of my system). Don't feed the trolls. Thicken your skin: 
see personal attacks for what they are - refuge for someone without a 
reasonable rebuttal.

I've been participating in online forums of various sorts basically since the 
internet began in earnest and there is nothing unique about the behavior here. 
People are rude. The anonymity and discorporate nature of virtual communication 
lowers inhibitions in a big way. Moderation for anything but clear-cut 
violations of established rules is almost never helpful because it either 
stifles discussion or the forum devolves into trials about the fairness of the 
moderation.

Moderation based on subjective quality of content is a terrible idea, imo. I 
would never agree to moderate a forum based on anything but etiquette or 
on-topic-ness. Assuming the rules are spelled out and warnings are given and 
behavior is enforced fairly and consistently, moderation can help. But it takes 
a fairly proactive moderator to do all that.

Terren


--- On Sun, 8/3/08, Harry Chesley [EMAIL PROTECTED] wrote:

  

From: Harry Chesley [EMAIL PROTECTED]
Subject: Re: [agi] META: do we need a stronger politeness code on this list?
To: agi@v2.listbox.com
Date: Sunday, August 3, 2008, 12:52 PM
I have never posted to the list before for exactly the reasons under 
discussion. It seems to me that the list is dominated, in terms of 
volume, not, I think, in terms of people, by two types of posts: 1) You 
don't understand theory x, which explains why your idea or approach is 
unworkable; you need to spend hours (perhaps days) reading about that 
(my) theory. Or 2) You're an idiot and your ideas are trash.

I am pursuing a line of research that I believe has potential. It would 
be useful to have a place I could float ideas and get some feedback. 
While I'm not particularly thin skinned, I don't have the time to deal 
with excursions into entirely different theories or to deal with the 
distractive emotional baggage that's so common here. I would also be 
happy to provide feedback on posts by others, but I don't want to get 
dragged into heated and often content-sparse threads of discussion.

I have seen very good and productive threads on this list, but they tend 
to be the exception. Hence I mostly just delete the items from the list, 
and follow the occasional thread that looks interesting or involves 
people who have posted more reasonable items in the past. As with most 
lists, 90% of the content is generated by 10% of the members. In this 
case, that involves much unnecessary distraction and unpleasantness.

Giving posters time outs for personal attacks might go a long way toward 
calming the list down and encouraging some of the people like me to 
become more involved. Also, a list FAQ that includes pointers to some of 
the theories that get repeated endlessly, together with encouragement to 
posters to just post the FAQ's URL rather than repeating the entire 
theory, might reduce the repetition. (Wasn't there a wiki area started a 
while ago for exactly that?)

Anyway, that's my two cents