Sorry for the late reply.  Got interrupted.


Vladimir Nesov wrote:
(I'm sorry that I made some unclear statements on semantics/meaning. I'll
probably get to a description of this perspective later on the blog (or
maybe it'll become obsolete before that), but it's a long story, and
writing it up on the spot isn't an option.)

On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Taking the position that consciousness is an epiphenomenon and is therefore
meaningless has difficulties.

Rather, p-zombieness in an atom-by-atom identical environment is the epiphenomenon.

By saying that it is an epiphenomenon, you actually do not answer the
questions about intrinsic qualities and how they relate to other things in
the universe.  The key point is that we do have other examples of
epiphenomena (e.g. smoke from a steam train),

What do you mean by smoke being epiphenomenal?

The standard philosophical term, no?  A phenomenon that is associated
with something, but which plays no causal role in the functioning of
that something.

Thus:  smoke coming from a steam train is always there when the train is
running, but the smoke does not cause the train to do anything.  It is just
a byproduct.




but their ontological status
is very clear:  they are things in the world.  We do not know of other
things with such a puzzling ontology as consciousness that we can use as
a clear analogy to explain what consciousness is.

Also, it raises the question of *why* there should be an epiphenomenon.
Calling it an epiphenomenon does not tell us why such a thing should
happen.  And it leaves us in the dark about whether or not to believe that
other systems that are not atom-for-atom identical with us should also have
this epiphenomenon.

I don't know how to parse the word "epiphenomenon" in this context. I
use it to describe reference-free, meaningless concepts, so you can't
say that some epiphenomenon is present here or there; that would be
meaningless.

I think the problem is that you are confusing "epiphenomenon" with something else.

Where did you get the idea that an epiphenomenon was a reference-free, meaningless concept? Not from Eliezer's reference-free, meaningless ramblings on his blog, I hope? ;-)


Jumping into a molecular framework as a description of human cognition is
unwarranted. It could be a description of an AGI design, or it could be a
theoretical description of a more general epistemology, but as presented
it's not general enough to automatically correspond to the brain.
Also, the semantics of atoms is tricky business; for all I know it keeps
shifting with the focus of attention, often dramatically. Saying that
"self is a cluster of atoms" doesn't cut it.
I'm not sure what you are saying, exactly.

The framework is general in this sense:  its components have *clear*
counterparts in all models of cognition, both human and machine.  So, for
example, if you look at a system that uses logical reasoning and bare
symbols, that formalism will differentiate between the symbols that are
currently active, and playing a role in the system's analysis of the world,
and those that are not active.  That is the distinction between foreground
and background.
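
To make that distinction concrete, here is a minimal toy sketch (the names
and the data layout are my own invention for this message, not anything
from the paper): the foreground is simply the subset of concept atoms that
are currently active, and the background is everything else sitting in
memory.

    from dataclasses import dataclass

    @dataclass
    class Atom:
        name: str
        active: bool = False   # is this atom taking part in current processing?

    atoms = [Atom("apple", active=True),
             Atom("red", active=True),
             Atom("self", active=True),
             Atom("steam train")]         # dormant background knowledge

    foreground = [a for a in atoms if a.active]      # symbols doing work right now
    background = [a for a in atoms if not a.active]  # everything else in memory

    print([a.name for a in foreground])   # ['apple', 'red', 'self']

Obviously a real cognitive system would use graded activation and rich
structure rather than a boolean flag, but the foreground/background split
itself is that simple.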

Without a working, functional theory of cognition, this high-level
descriptive picture has little explanatory power. It might be a step
towards developing a useful theory, but it doesn't explain anything.
There is a set of states of mind that correlates with the experience of
apples, etc. So what? You can't build a detailed edifice on general
principles and claim that far-reaching conclusions apply to the actual
brain. They might, but you need a semantic link from theory to
described functionality.

Sorry, I don't follow you here.

If you think that there was some aspect of the framework that might not show up in some architecture for a thinking system, you should probably point to it.

I think that the architecture was general, but it referred to a specific component (the analysis mechanism) that was well-specified enough to be usable in the theory. And that was all I needed.

If there is some specific way that it doesn't work, you will probably have to pin it down and tell me, because I don't see it.




As for the self symbol, there was no time to go into detail.  But there
clearly is an atom that represents the self.

*shrug*
It only stands as a definition: there is no "self" neuron, nothing
easily identifiable as "self"; it's a complex thing. I'm not sure I
even understand what "self" refers to subjectively. I don't feel any
clear focus of self-perception; my experience is filled with thoughts
on many things, some of them involving management of the thought process,
some of external concepts, but no unified center to speak of...

No, no: what I meant by "self" was that somewhere in the system there must be a representation of its own self, or it will have a missing concept. Also, in any system there is a "basic source of action" .... some place that is the original source of the system's moment-by-moment actions. The system will be aware of the fact that it has such a thing, and it will be aware of its own existence, so the "self" concept is a combination of at least those two things.

There is nothing terribly mysterious about those two things, surely. They must be present in a moderately advanced AGI.
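
As a toy illustration of just those two ingredients (every name below is
hypothetical, not drawn from any actual AGI design): a structure that holds
the system's representation of itself plus a pointer to the place its
actions originate, both of which it can report on.

    class SelfModel:
        # Toy combination of the two ingredients described above.
        def __init__(self):
            self.self_atom = {"kind": "self", "refers_to": "this system"}
            self.action_source = "deliberation loop"   # the "basic source of action"

        def introspect(self):
            # The system can notice both facts about itself.
            return {"I exist": True,
                    "my actions originate in": self.action_source,
                    "self concept": self.self_atom}

    print(SelfModel().introspect())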


Bottoming out the explanation of experience is a good answer, but you
don't need to point to specific moving parts of a specific cognitive
architecture to give it (I don't see how it helps with the argument).
If you have a belief (generally, a state of mind), it may indicate
that the world has a certain property (the world having that property
caused you to have this belief), or it can indicate that you have a
certain cognitive quirk that caused this belief, a loophole in
cognition. There is always a cause; the trick is in correctly
dereferencing the belief.
http://www.overcomingbias.com/2008/03/righting-a-wron.html
Not so fast.  There are many different types of "mistaken beliefs". Most of
these are so shallow that they could not possibly explain the
characteristics of consciousness that need to be explained.

And, as I point out in the second part, it is not at all clear that this
particular issue can be given the status of "mistaken" or "failure".  It
simply does not fit with all the other known examples of "failures" of the
cognitive system, such as hallucinations, etc.

I think it would be intellectually dishonest to try to sweep it under the rug
with those other things, because those are clearly breakdowns that, with a
little care, could all be avoided.  But this issue is utterly different:  by
making the argument that I did, I think I showed that it was a kind of
"failure" that is intrinsic to the design of the system, and not avoidable.

Part 2 of the paper is, I agree, much more subtle.  But I think it is
important.

The point is that in general, any experience at all can be reduced to
its causal history, and can be given semantics as a model of that
history. That applies to correct beliefs, to shallow errors, and to the
deepest mysteries of subjective experience. It's a blanket
explanation; it doesn't go into the details of those causal histories, of
models of different kinds of experience, but it's an important point
to keep in mind, to avoid saying that some kinds of experience are
inherently mysterious or unexplainable, or beyond the reach of
science. The argument answers this particular limitation, and isn't
intended to explain away specific characterizations of different kinds
of experience.

I'm sorry, you lost me there.




Subjective phenomena might be unreachable for meta-introspection, but
that doesn't place them on a different level, making them "unanalyzable";
you can in principle inspect them from outside, using tools other than
one's mind itself. You yourself just presented a model of what's
happening.
No, I don't think so.  Most philosophers would ask you what you meant by
"inspecting them from outside", and then when you gave an answer they would
say that you had changed the subject to a Non-Hard aspect of consciousness.

Maybe I have, and maybe they didn't have a meaningful explanation of
what the hard problem is, or of whether it exists at all.

Now, what I did was not to inspect them from the outside, but to
*circumscribe* them.  I did not breach the wall of subjectivity, did I?  I
do not think anyone can.

I think the trick is that meaning can't escape from the frame of reference
of a mind, but you can describe any phenomenon, including subjective
ones, from a physical level, making it effectively objective, even if
in principle you can't ground objectivity completely (but that's a
problem at the level of obtaining absolute certainty in something, not
really a practical issue). You can start from a subjective frame of
reference, present your subjective experience in it, then present the
semantics of the physical world in the same basis, and convert the meaning
of experience to the semantics of the physical world. It's counterintuitive,
the descriptions are very different, but they are descriptions of the same
event, by construction.

So, you can't break out of subjectivity, but in the same sense
everything is subjective, including objectivity. Objectivity provides
a different basis, from which again nothing can break out, including
subjectivity.


Meaning/information is relative: it can be represented within a basis,
for example within a mind, and communicated to another mind. Like
speed, it has no absolute value, but the laws of relativity, of conversion
between frames of reference, between minds, are precise and not
arbitrary. Possible-worlds semantics is one way to establish a basis,
allowing concepts to be communicated, but maybe not a very good one.
Grounding in a common cognitive architecture is probably a good move,
but it doesn't have fundamental significance.
This is a deeper issue than we can probably address here.  But the point
that an Extreme Cognitive Semanticist would make is that the System Is The
Semantics.

That is very different from claiming that some other semantics exists,
except as a weak approximation.  Possible-worlds semantics is incredibly
weak:  it cannot work for most of the concepts that we use in our daily
lives, and that is why there are whole books on Cognitive Semantics, such as
the one I referenced.


Stopping the recursive buck is important here, but when you are
describing a model of semantics, you can't escape from presenting
information as ultimately interpreted by you. Most of the internal
semantics, of details present in the model, can be closed within a
described mind, stopping the regress, but understanding the mind would
require linking at least some of the semantics of what's going on
inside it, at the level of physical processes maybe, to what you
understand when you describe it.

"Predictions" are not described carefully enough to appear as
following from your theory. They use some terminology, but on a level
that allows literal translation to a language of perceptual wiring,
with correspondence between qualia and areas implementing
modalities/receiving perceptual input.
I agree that they could be better worded, but do you not think the intention
is clear?  The intention is that, in the future, we look for the analysis
mechanisms, and then we look for the boundaries beyond which they cannot go.
 At that point we conduct our test.


No, I don't see that. For me, it sounds like suggesting to test
general relativity by throwing apples and measuring their trajectories
with a clepsydra.

Now you are just saying nothing. Be specific about where the test breaks down :-).

I propose that when we look we will find an "analysis mechanism". That is prediction 1.

I propose that we will clearly see that it does something strange ("returns a kind of null result") when it tries to analyze those concepts on the very edge of its scope (the "boundary of the foreground"). That is prediction 2.

I propose that when you do certain manipulations of the connections going into the edge of the scope of those analysis mechanisms, you will cause *specific* subjective changes, as reported by the intelligent system experiencing them. That is prediction 3.

Those are at least three places where my predictions could go wrong.

Now tell me where, in that set, is something so vague that it looks like throwing apples and measuring their trajectories with a clepsydra. Surely that is unfair?
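
For what it is worth, prediction 2 can even be rendered as a toy computation
(the dictionary and the function below are mine, purely to show the shape of
the claim): the analysis mechanism unpacks ordinary concepts into their
parts, but for an atom sitting at the edge of the foreground there is
nothing further for it to reach, and all it can do is return a null result.

    # Illustrative stand-in for the "analysis mechanism".
    DEFINITIONS = {
        "apple": ["fruit", "red", "round"],
        "fruit": ["plant", "edible", "has seeds"],
        "red":   None,   # boundary atom: nothing beyond it for analysis to unpack
    }

    def analyze(concept):
        parts = DEFINITIONS.get(concept)
        if parts is None:
            return "null result"   # the system can only say "it just is what it is"
        return parts

    print(analyze("apple"))   # ['fruit', 'red', 'round']
    print(analyze("red"))     # 'null result'

Prediction 3 then amounts to rewiring the entries at that boundary and
checking that the system's reports of its subjective experience change in
the corresponding way.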






You didn't argue about the general case of AGI, so how does it follow
that any AGI is bound to be conscious?
But I did, because I argued that there will always be an "analysis
mechanism" that allows the system to unpack its own concepts.  Even though I
gave a visualization of how it works in my own AGI design, that was just for
convenience, because exactly the same *type* of mechanism must exist in any
AGI that is powerful enough to do extremely flexible things with its
thoughts.

Basically, if a system can "reflect" on the meanings of its own concepts, it
will be aware of its consciousness.

I will take that argument further in another paper, because we need to
understand animal minds, for example.

It's a hard and iffy business trying to recast a different architecture
in a language that involves these bottomless concepts and qualia.
How do you apply your argument to AIXI?

Are you joking?! AIXI isn't an intelligent system, it is a mathematical fantasy of the worst sort. Of course it doesn't apply to AIXI.


It doesn't map even onto my
design notes, where the architecture looks much more like yours, with
elements of description flying around and composing a scene or a plan
(in one of the high-level perspectives). In my case, the problem is
that the semantics of elements of description is too fleeting and
context-dependent, and that the description is not hierarchical, so
that when you get to the bottom, you find yourself at the top, in the
description of the same scene seen now from a different aspect.
Inference goes across the events in the environment+mind system
considered in time, so there is no intuitive counterpart to unpacking;
it all comes down to inference of events: what connects to what, what
can be inferred from what, what indicates what.


But .... do you really suggest that you have worked out an AGI design that includes all the *really* powerful machinery that I am implying in this paper? This is the point: you probably do not have the kind of powerful "analysis mechanism" I refer to in this paper, for the simple reason that something that far-ranging has barely even been thought of by most AGI people yet. It sounds like you have not, and that is no criticism, but neither is it a weakness of the theory I propose here.

In other words, yes, if you look in existing architectures you will see only a pale shadow of the thing I call an "analysis mechanism" because most AGI architectures do not have the ability to grab ANY of their internal concepts and make them the object of a thinking episode ... they cannot introspect on their concepts and invent new concepts on the fly.
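
A rough sketch of the capability I mean (the function names are invented for
this message, not taken from any actual system): the system can take any
internal concept, make that concept itself the object of a thinking episode,
and compose a brand-new concept out of existing ones on the fly.

    concepts = {
        "apple": {"is_a": "fruit", "color": "red"},
        "fruit": {"is_a": "food"},
    }

    def reflect_on(name):
        # Turn the concept itself into something the system can think about.
        return {"topic": name, "content": concepts[name], "kind": "thought episode"}

    def invent_concept(name, **features):
        concepts[name] = dict(features)   # a new concept, built at run time
        return reflect_on(name)

    print(reflect_on("apple"))
    print(invent_concept("blue apple", is_a="apple", color="blue"))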

I only suggest that we will all eventually have such a thing, and that humans have it already.

For what it is worth, my own architecture (Safaire) does have this. But then I have been thinking of that as an important thing all along.

But it should be no surprise if you do not yet see this in other architectures.






Richard Loosemore








