Vladimir Nesov wrote:
Some notes/review.

Whether AGI is "conscious" is independent of whether it'll
"rebel"/be dangerous. Answering any kind of question about
consciousness doesn't answer a question about safety.

Ah, but notice that in the opening of the paper I explain that this consciousness theory is part of a multi-part research attack on the entire problem of understanding motivation and emotion.

So, yes, I could get away without this, but in the public mind these issues are related, because *if* someone claims that AGIs will be conscious, people will immediately begin to believe that AGIs must necessarily have human-like motivations and emotions. That would not be a valid deduction, but it does happen. So in that sense, they are connected issues that I want to deal with as a package.

How is the situation with p-zombies that are atom-by-atom identical
to conscious beings not resolved by saying that in this case
consciousness is an epiphenomenon, and hence meaningless?
http://www.overcomingbias.com/2008/04/zombies.html
http://www.overcomingbias.com/2008/04/zombies-ii.html
http://www.overcomingbias.com/2008/04/anti-zombie-pri.html

Taking the position that consciousness is an epiphenomenon, and therefore meaningless, has difficulties.

By saying that it is an epiphenomenon, you still do not answer the questions about intrinsic qualities and how they relate to other things in the universe. The key point is that we do have other examples of epiphenomena (e.g. the smoke from a steam train), but their ontological status is very clear: they are things in the world. We do not know of any other thing with such a puzzling ontology as consciousness that we could use as a clear analogy to explain what consciousness is.

Also, it raises the question of *why* there should be an epiphenomenon at all. Calling it an epiphenomenon does not tell us why such a thing should happen. And it leaves us in the dark about whether or not to believe that other systems that are not atom-for-atom identical with us should also have this epiphenomenon.


Jumping into the molecular framework as a description of human
cognition is unwarranted. It could be a description of an AGI design,
or it could be a theoretical description of more general epistemology,
but as presented it's not general enough to automatically correspond
to the brain. Also, the semantics of atoms is tricky business; for all
I know it keeps shifting with the focus of attention, often
dramatically. Saying that "self is a cluster of atoms" doesn't cut it.

I'm not sure what you are saying, exactly.

The framework is general in this sense: its components have *clear* counterparts in all models of cognition, both human and machine. So, for example, if you look at a system that uses logical reasoning over bare symbols, that formalism will still differentiate between the symbols that are currently active, playing a role in the system's analysis of the world, and those that are not active. That is the distinction between foreground and background.

As for the self symbol, there was no time to go into detail. But there clearly is an atom that represents the self.
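
To make the foreground/background distinction concrete, here is a minimal toy sketch in Python. The names (Atom, Workspace, and so on) are hypothetical and chosen only for illustration; this is not code from my actual architecture.

    # Toy sketch: atoms, with the foreground/background distinction.
    # All names here are illustrative only.

    class Atom:
        def __init__(self, label):
            self.label = label
            self.active = False   # is this atom currently in the foreground?

    class Workspace:
        def __init__(self, atoms):
            self.atoms = {a.label: a for a in atoms}

        def activate(self, label):
            self.atoms[label].active = True

        def foreground(self):
            # Atoms currently playing a role in the system's analysis of the world.
            return [a for a in self.atoms.values() if a.active]

        def background(self):
            # Everything else: the dormant store.
            return [a for a in self.atoms.values() if not a.active]

    # Every such system includes some atom that stands for the system itself.
    ws = Workspace([Atom("self"), Atom("cat"), Atom("red")])
    ws.activate("self")
    ws.activate("cat")
    print([a.label for a in ws.foreground()])   # ['self', 'cat']

The details do not matter; the point is only that any formalism, symbolic or otherwise, makes some version of this active/inactive distinction, and contains some element that plays the role of the self atom.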


Bottoming out of the explanation of experience is a good answer, but
you don't need to point to specific moving parts of a specific
cognitive architecture to give it (I don't see how it helps with the
argument). If you have a belief (generally, a state of mind), it may
indicate that the world has a certain property (the world having that
property caused you to have this belief), or it may indicate that you
have a certain cognitive quirk that caused this belief, a loophole in
cognition. There is always a cause; the trick is in correctly
dereferencing the belief.
http://www.overcomingbias.com/2008/03/righting-a-wron.html

Not so fast. There are many different types of "mistaken beliefs". Most of these are so shallow that they could not possibly explain the characteristics of consciousness that need to be explained.

And, as I point out in the second part, it is not at all clear that this particular issue can be given the status of "mistake" or "failure". It simply does not fit with all the other known examples of "failures" of the cognitive system, such as hallucinations, etc.

I think it would be intellectually dishonest to try to sweep it under the rug with those other things, because those are clearly breakdowns that, with a little care, could all be avoided. But this issue is utterly different: by making the argument that I did, I think I showed that it is a kind of "failure" that is intrinsic to the design of the system, and not avoidable.

Part 2 of the paper is, I agree, much more subtle. But I think it is important.



Subjective phenomena might be unreachable for meta-introspection, but
that doesn't place them on a different level, making them
"unanalyzable"; you can in principle inspect them from outside, using
tools other than one's mind itself. You yourself just presented a
model of what's happening.

No, I don't think so. Most philosophers would ask you what you meant by "inspecting them from outside", and then, when you gave an answer, they would say that you had changed the subject to a Non-Hard aspect of consciousness.

Now, what I did was not to inspect them from the outside, but to *circumscribe* them. I did not breach the wall of subjectivity, did I? I do not think anyone can.



Meaning/information is relative; it can be represented within a basis,
for example within a mind, and communicated to another mind. Like
speed, it has no absolute, but the laws of relativity, of conversion
between frames of reference, between minds, are precise and not
arbitrary. Possible-worlds semantics is one way to establish a basis,
allowing concepts to be communicated, but maybe not a very good one.
Grounding in a common cognitive architecture is probably a good move,
but it doesn't have fundamental significance.

This is a deeper issue than we can probably address here. But the point that an Extreme Cognitive Semanticist would make is that the System Is The Semantics.

That is very different from claiming that some other semantics exists, except as a weak approximation. Possible-worlds semantics is incredibly weak: it cannot work for most of the concepts that we use in our daily lives, and that is why there are whole books on Cognitive Semantics, such as the one I referenced.



"Predictions" are not described carefully enough to appear as
following from your theory. They use some terminology, but on a level
that allows literal translation to a language of perceptual wiring,
with correspondence between qualia and areas implementing
modalities/receiving perceptual input.

I agree that they could be better worded, but do you not think the intention is clear? The intention is that, in the future, we look for the analysis mechanisms, and then we look for the boundaries beyond which analysis cannot go. At that point we conduct our test.


You didn't argue about the general case of AGI, so how does it follow
that any AGI is bound to be conscious?

But I did, because I argued that there will always be an "analysis mechanism" that allows the system to unpack its own concepts. Even though I gave a visualization of how it works in my own AGI design, that was just for convenience, because exactly the same *type* of mechanism must exist in any AGI that is powerful enough to do extremely flexible things with its thoughts.

Basically, if a system can "reflect" on the meanings of its own concepts, it will be aware of its consciousness.
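
As a rough illustration of what I mean by an analysis mechanism, here is a toy sketch in Python: the system unpacks a concept into sub-concepts until it reaches primitives that cannot be unpacked any further, and that is the boundary at which the analysis bottoms out. The concept names and the table structure are hypothetical, chosen only for illustration.

    # Toy sketch of an "analysis mechanism" that unpacks concepts.
    # All concept names here are hypothetical.

    DEFINITIONS = {
        "breakfast": ["meal", "morning"],
        "meal":      ["eating", "food"],
        # Primitives: the mechanism can return nothing further for these,
        # so the analysis bottoms out here.
        "eating":    [],
        "food":      [],
        "morning":   [],
    }

    def unpack(concept, depth=0):
        parts = DEFINITIONS.get(concept, [])
        if not parts:
            print("  " * depth + concept + "   <- boundary: no further analysis")
            return
        print("  " * depth + concept)
        for part in parts:
            unpack(part, depth + 1)

    unpack("breakfast")

The real mechanism is nothing like a lookup table, of course, but any system able to reflect on its own concepts must have some equivalent of unpack(), and some point at which unpack() has nothing more to say.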

I will take that argument further in another paper, because we need to understand animal minds, for example.




Richard Loosemore


