Hector,
I skimmed your paper, linked in the post below.
From my quick read it appears the only meaningful way it suggests a brain
might be infinite was that since the brain used analogue values --- such as
synaptic weights, or variable time intervals between spikes (and presumably
On Dec 2, 2008, at 8:31 AM, Ed Porter wrote:
From my quick read it appears the only meaningful way it suggests a
brain might be infinite was that since the brain used analogue
values --- such as synaptic weights, or variable time intervals
between spikes (and presumably since those
J.,
Your arguments seem to support my intuitive beliefs, so my instinctual
response is to be thankful for them.
But I have to sheepishly admit I don't totally understand them.
Could you please give me a simple explanation for why it is an obvious
argument against infinite values ...
Hi Ed,
I am glad you have read the paper in such detail. You have
summarized quite well what it is about. I have no objection to the
points you make. It is only important to bear in mind that the paper
is about studying the possible computational power of the mind by
using the model of an
Suppose that the gravitational constant is a non-computable number (it
might be, we don't know because as you say, we can only measure with
finite precision). Planets compute G as part of the law of gravitation
that governs their movement (you can of course object that G is part of
a model that has
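For concreteness, the law Hector is invoking here is Newton's law of
gravitation (the standard formula, not something quoted from his paper):

F = G \frac{m_1 m_2}{r^2}

If G were an uncomputable real, then every trajectory governed by this law
would, in Hector's sense, be an analogue computation involving an
uncomputable constant.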
Hector,
Yes, it's possible that the brain uses uncomputable neurons to predict
uncomputable physical dynamics in the observed world.
However, even if this is the case, **there is no possible way to
verify or falsify this hypothesis using science**, if science is
construed to involve evaluation of
On Wed, Dec 3, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hector,
Yes, it's possible that the brain uses uncomputable neurons to predict
uncomputable physical dynamics in the observed world.
However, even if this is the case, **there is no possible way to
verify or falsify this
Hi Hector,
You may say the hypothesis of neural hypercomputing is valid in the sense
that it helps guide you to interesting, falsifiable theories. That's
fine. But, then you must admit that the hypothesis of souls could be
valid in the same sense, right? It could guide some other people to
We cannot
ask Feynman, but I actually asked Deutsch. He not only thinks QM
is our most basic physical reality (he thinks math and computer
science are grounded in quantum mechanics), but he even takes quite
seriously his theory of parallel universes! And he is not alone. Speaking
for myself, I
2008/12/1 Ben Goertzel [EMAIL PROTECTED]:
And, science cannot tell us whether QM or some empirically-equivalent,
wholly randomness-free theory is the right one...
If two theories give identical predictions under all circumstances
about how the real world behaves, then they are not two separate
If two theories give identical predictions under all circumstances
about how the real world behaves, then they are not two separate
theories, they are merely rewordings of the same theory. And choosing
between them is arbitrary; you may prefer one to the other because
human minds can
Ed, they used to combine Ritalin with LSD for psychotherapy. It
assists in absorbing insights achieved from psycholytic doses, which
is the term for doses that are not fully psychedelic. Those are edifying
on their own but are less organized. I don't know if you can get this
in a clinical setting
Ed,
Unfortunately to reply to your message in detail would absorb a lot of
time, because there are two issues mixed up:
1) you don't know much about computability theory, and educating you
on it would take a lot of time (and is not best done on an email list)
2) I may not have expressed some of
But quantum theory does appear to be directly related to limits of the
computations of physical reality. The uncertainty principle and the
quantization of quantum states are limitations on what can be computed by
physical reality.
Not really. They're limitations on what measurements of
On Mon, Dec 1, 2008 at 11:19 AM, Ed Porter [EMAIL PROTECTED] wrote:
You said QUANTUM THEORY REALLY HAS NOTHING DIRECTLY TO DO WITH
UNCOMPUTABILITY.
Please don't quote people using this style; it hurts my eyes.
But quantum theory does appear to be directly related to limits of the
Regarding the uncertainty principle, Wikipedia says:
In quantum physics, the Heisenberg uncertainty principle states that the
values of certain pairs of conjugate variables (position and momentum, for
instance) cannot both be known with arbitrary precision. That is, the more
precisely one
Hi,
In quantum physics, the Heisenberg uncertainty principle states that the
values of certain pairs of conjugate variables (position and momentum, for
instance) cannot both be known with arbitrary precision. That is, the more
precisely one variable is known, the less precisely the other is known.
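For reference, the formal statement for the position/momentum pair (the
standard textbook form, not part of the quoted Wikipedia text) is

\Delta x \, \Delta p \ge \frac{\hbar}{2}

where \Delta x and \Delta p are the standard deviations of position and
momentum, and \hbar is the reduced Planck constant.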
OTOH, there is no possible real-world test to distinguish a true
random sequence from a high-algorithmic-information quasi-random
sequence
So I don't find this argument very convincing...
On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote:
On Mon, Dec 1, 2008 at 3:09 AM,
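Ben's point is easy to illustrate, though of course not to prove, with a
small sketch. The following assumes only the Python standard library; the
Mersenne Twister stands in for a high-algorithmic-information quasi-random
source, os.urandom for a putatively true random one, and a simple
frequency (monobit) test fails to separate them:

# Illustrative sketch: a basic statistical test cannot distinguish a
# deterministic pseudo-random bit source from an OS entropy source.
import math
import os
import random

def monobit_z(bits):
    # Frequency (monobit) test: z-score of the ones/zeros imbalance.
    # For a random sequence this is approximately standard normal.
    s = sum(1 if b else -1 for b in bits)
    return abs(s) / math.sqrt(len(bits))

n = 100_000
prng_bits = [random.getrandbits(1) for _ in range(n)]              # deterministic
urandom_bits = [(byte >> i) & 1
                for byte in os.urandom(n // 8) for i in range(8)]  # OS entropy

print("PRNG z-score:   ", monobit_z(prng_bits))
print("urandom z-score:", monobit_z(urandom_bits))
# Both z-scores are typically below 2; the test sees no difference.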
On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But quantum theory does appear to be directly related to limits of the
computations of physical reality. The uncertainty principle and the
quantization of quantum states are limitations on what can be computed by
physical
On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
OTOH, there is no possible real-world test to distinguish a true
random sequence from a high-algorithmic-information quasi-random
sequence
I know, but the point is not whether we can distinguish it, but that
quantum
On Mon, Dec 1, 2008 at 4:53 AM, Hector Zenil [EMAIL PROTECTED] wrote:
On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
OTOH, there is no possible real-world test to distinguish a true
random sequence from a high-algorithmic-information quasi-random
sequence
I know,
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical
reality... true random numbers are uncomputable entities which can
never be exhibited, and any finite series of observations can be modeled
equally well as the first N bits of an
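One standard way to make Ben's point precise is via Kolmogorov complexity
K (this is the usual algorithmic-information formulation, not a quote from
the thread): an infinite binary sequence \omega is algorithmically random
when

\exists c \; \forall n : \; K(\omega_1 \dots \omega_n) \ge n - c

and since K is itself uncomputable, no effective test applied to finitely
many observed bits can certify that this condition holds.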
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical
reality...
It has everything to do with physical reality when it comes to quantum
mechanics. Quantum mechanics is non-deterministic
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical
reality...
I don't get it. You don't think that quantum mechanics is part of our
physical reality (if
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical
reality...
I don't get it.
On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But I don't get your point at all, because the whole idea of
nondeterministic
Hector Zenil wrote:
On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
But I don't get your point at all, because the
Eric,
Without knowing the scientifically measurable effects of the substance your
post mentioned on the operation of the brain --- I am hypothesizing that the
subjective experience you described could be caused, for example, by a
greatly increased activation of neurons, or by a great decrease in
I remember reading that LSD caused a desegregation of brain faculties,
so that patterns of activity produced by normal operation in one
region can spill over into adjacent ones, where they're interpreted
bizarrely. However, the brain does not turn to soup or static, but
rather explodes with novel
Eric: I think your idea that ego loss is induced by a swelling of abstract
senses, squeezing out the structures that deal with your self in an
identificatory way, rings true.
I haven't followed this thread closely, but there is an aspect to it, I
would argue, which is AGI-relevant. It's
Hey, ego loss is attendant on even modest doses of LSD or
psilocybin. At ~ 700 mics I found that effect to be very much
background
On 11/21/08, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
Entheogens!
What a great word/euphemism.
Is it pronounced like Inns (where travelers sleep) + Theo
Eric,
If, as your post below implies, you have experienced ego loss, --- please
tell me --- how, if at all, was it different than the sense of oneness with
the surrounding world that I described in my post of Fri 11/21/2008 8:02 PM
which started this named thread.
That is, how was it
I don't feel motivated to kill this thread in my role as list
moderator, and I agree that what's on or off topic is fairly fuzzy ...
but I just have a sense that discussions of various varieties of
drug-induced (or otherwise induced) states of exalted consciousness are
a bit off-topic for an AGI
Wannabe,
If you read my post of Fri 11/21/2008 8:02 PM in this thread, you will see
that I said the sense of oneness with the external world many of us feel may
just be sensory experience and perception of the external world,
uninterrupted by thoughts of oneself or our brain's chatbot.
This
Ben,
Entheogens!
What a great word/euphemism.
Is it pronounced like Inns (where travelers sleep) + Theo (short for
Theodore) + gins (a subset of liquors I normally avoid like the plague,
except in the occasional summer gin and tonic with lime)?
What is the respective emphasis given to each
Ed Porter wrote:
Richard,
In response to your below copied email, I have the following response to
the below quoted portions:
### My prior post
That aspects of consciousness seem real does not provide much of an
“explanation for consciousness.” It
Hmmm...
I don't agree w/ you that the hard problem of consciousness is
unimportant or non-critical in a philosophical sense. Far from it.
However, from the point of view of this list, I really don't think it
needs to be solved (whatever that might mean) in order to build AGI.
Of course, I
Ben: I'm a panpsychist ...
You think that all things are sentient/ conscious?
(I argue that consciousness depends on having a nervous system and being
able to feel - and if we could understand the mechanics of that, we would
probably have solved the hard problem and be able to give something
well, what does feel mean to you ... what is feeling that a slug can
do but a rock or an atom cannot ... are you sure this is an absolute
distinction rather than a matter of degree?
On Thu, Nov 20, 2008 at 6:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben: I'm a panpsychist ...
You think that
Ben,
If you accept the limitations Richard places on what is part of the hard
problem, most of what you consider part of the hard problem would
probably cease to be part of it. In one argument he
eliminated things relating to lateral or upward associative connections from
being
Ben,
I suspect you're being evasive. You and I know what feel means. When I feel
the wind, I feel cold. When I feel tea poured on my hand, I/it feel/s
scalding hot. And we can trace the line of feeling to a considerable
extent - no? - through the nervous system and brain. Not only do I feel
On Fri, Nov 21, 2008 at 2:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
well, what does feel mean to you ... what is feeling that a slug can
do but a rock or an atom cannot ... are you sure this is an absolute
distinction rather than a matter of degree?
Does a rock compute Fibonacci numbers?
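For what it's worth, here is the minimal content of "computing Fibonacci
numbers" -- a state evolving by the recurrence F(n) = F(n-1) + F(n-2).
This is only an illustrative sketch of what the rhetorical question asks a
rock to do:

# Minimal Fibonacci computation: two state variables updated by the
# recurrence F(n) = F(n-1) + F(n-2).
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]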
When I was in college and LSD was the rage, one of the main goals of the
heavy-duty heads was ego loss, which was to achieve a sense of cosmic
oneness with all of the universe. It was commonly stated that 1000
micrograms was the ticket to ego loss. I never went there. Nor have I
ever
Matt Mahoney wrote:
Autobliss...
Imagine that there is another human language which is the same as
English, except that the pain/pleasure-related words have the opposite
meaning. Then consider what that would mean for your Autobliss.
My definition of pain is negative reinforcement in a system that learns.
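Matt's definition is concrete enough to sketch in code. The following is a
minimal illustration in the spirit of that definition -- explicitly not
Matt's actual Autobliss program, whose source is not quoted in this
thread: a system that learns a two-input boolean function, where "pain" is
nothing but a negative reinforcement signal.

# A minimal "system that learns" under reward (+1) and pain (-1) signals.
# Illustrative only; not the real Autobliss code.
import random

class TinyLearner:
    def __init__(self):
        # One adjustable preference per input pair; positive means "respond 1".
        self.q = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

    def act(self, a, b):
        if random.random() < 0.1:            # occasional exploration
            return random.randint(0, 1)
        return 1 if self.q[(a, b)] > 0 else 0

    def reinforce(self, a, b, response, reward):
        # Negative reinforcement pushes the taken response away;
        # positive reinforcement pulls it closer.
        delta = reward if response == 1 else -reward
        self.q[(a, b)] += 0.1 * delta

learner = TinyLearner()
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    r = learner.act(a, b)
    target = a & b                           # teach it the AND function
    learner.reinforce(a, b, r, 1 if r == target else -1)

print([learner.act(a, b) for a in (0, 1) for b in (0, 1)])
# Usually prints [0, 0, 0, 1]; exploration adds occasional noise.

Whether adjusting four numbers away from a punishment signal amounts to
feeling pain is, of course, exactly what the thread is arguing about.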
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
Trent Waddington wrote:
Apparently, it was Einstein who said that if you can't explain it to
your grandmother then you don't understand it.
That was Richard Feynman
When? I don't really know who said it... but everyone else
I completed the first draft of a technical paper on consciousness
the other day. It is intended for the AGI-09 conference, and it
can be found at:
Ben: Hi Richard,
Ben: I don't have any comments yet about what you have written,
Ben: because I'm not sure I fully understand what you're trying
--- On Wed, 11/19/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
My definition of pain is negative reinforcement in a system that learns.
IMO, pain is more like data with the potential to cause disorder in
hard-wired algorithms. I'm not saying this fully covers it but it's
IMO already out of
Ben Goertzel wrote:
Richard,
I re-read your paper and I'm afraid I really don't grok why you think it
solves Chalmers' hard problem of consciousness...
It really seems to me like what you're suggesting is a cognitive
correlate of consciousness, to morph the common phrase neural
correlate
Trent,
Feynman's page on Wikipedia has it as: "If you can't explain something
to a first-year student, then you haven't really understood it." But
Feynman reportedly said it in a number of ways, including the
grandmother variant. I learned about it when taking physics classes a
while ago so I don't
Lastly, about your question re. consciousness of extended objects that are
not concept-atoms.
I think there is some confusion here about what I was trying to say (my
fault perhaps). It is not just the fact of those concept-atoms being at the
end of the line; it is actually about what
Richard,
My first response to this is that you still don't seem to have taken account
of what was said in the second part of the paper - and, at the same time,
I can find many places where you make statements that are undermined by that
second part.
To take the most significant example:
Ben Goertzel wrote:
Richard,
My first response to this is that you still don't seem to have taken
account of what was said in the second part of the paper - and, at
the same time, I can find many places where you make statements that
are undermined by that second part.
To
Richard,
So are you saying that: According to the ordinary scientific standards of
'explanation', the subjective experience of consciousness cannot be
explained ... and as a consequence, the relationship between subjective
consciousness and physical data (as required to be elucidated by any
Ed,
I'd be curious for your reaction to
http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html
which explores the limits of scientific and linguistic explanation, in
a different but possibly related way to Richard's argument.
Science and language are
Ben Goertzel wrote:
Richard,
So are you saying that: According to the ordinary scientific standards
of 'explanation', the subjective experience of consciousness cannot be
explained ... and as a consequence, the relationship between subjective
consciousness and physical data (as required to
Ok, well I read part 2 three times and I seem not to be getting the
importance or the crux of it.
I hate to ask this, but could you possibly summarize it in some
different way, in the hopes of getting through to me??
I agree that the standard scientific approach to explanation breaks
when
Ed Porter wrote:
Richard,
/(the second half of this post, the part starting with the all-capitalized
heading, is the most important)/
I agree with your extreme cognitive semantics discussion.
I agree with your statement that one criterion for “realness” is the
directness and immediacy
From: Trent Waddington [mailto:[EMAIL PROTECTED]
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED]
wrote:
I mean that people are free to decide if others feel pain. For
example, a scientist may decide that a mouse does not feel pain when it
is stuck in the eye with a needle
I mean that people are free to decide if others feel pain.
Wow! You are one sick puppy, dude. Personally, you have just hit my "Do
not bother debating with" list.
You can decide anything you like -- but that doesn't make it true.
- Original Message -
From: Matt Mahoney [EMAIL
Richard Loosemore wrote:
Harry Chesley wrote:
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness
the other day. It is intended for the AGI-09 conference, and it
can be found at:
My problem is: if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that
Mark Waser wrote:
My problem is: if qualia are atomic, with no differentiable details,
why do some feel different than others -- shouldn't they all be
separate but equal? Red is relatively neutral, while searing
hot is not. Part of that is certainly lower brain function, below
the level of
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:
I mean that people are free to decide if others feel pain.
Wow! You are one sick puppy, dude. Personally, you have
just hit my "Do not bother debating with" list.
You can decide anything you like -- but that
doesn't make it true.
Harry Chesley wrote:
Richard Loosemore wrote:
Harry Chesley wrote:
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness
the other day. It is intended for the AGI-09 conference, and it
can be found at:
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:
Autobliss has no grounding, no internal feedback, and no
volition. By what definitions does it feel pain?
Now you are making up new rules to decide that autobliss doesn't feel pain. My
definition of pain is negative reinforcement in a
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:
Autobliss has no grounding, no internal feedback, and no
volition. By what definitions does it feel pain?
Now you are making up new rules to decide that
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Clearly, this can be done, and has largely been done already ... though
cutting and pasting or summarizing the relevant literature in emails would
not be a productive use of time
Apparently, it was Einstein who said that if
Richard,
I re-read your paper and I'm afraid I really don't grok why you think it
solves Chalmers' hard problem of consciousness...
It really seems to me like what you're suggesting is a cognitive correlate
of consciousness, to morph the common phrase neural correlate of
consciousness ...
You
Now you are making up new rules to decide that autobliss doesn't feel
pain. My definition of pain is negative reinforcement in a system that
learns. There is no other requirement.
I made up no rules. I merely asked a question. You are the one who makes a
definition like this and then says
I am just trying to point out the contradictions in Mark's sweeping
generalizations about the treatment of intelligent machines
Huh? That's what you're trying to do? Normally people do that by pointing to
two different statements and arguing that they contradict each other. Not by
Colin: right or wrong...I have a working physical model for
consciousness.
Just so. Serious scientific study of consciousness entails *models* not
verbal definitions. The latter are quite hopeless. Richard opined that
there is a precise definition of the hard problem of consciousness.
There
--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote:
I wrote:
I think the reason that the hard question is
interesting at all is that it would presumably be OK to
torture a zombie because it doesn't actually experience
pain, even though it would react exactly like a human being
How do you propose grounding ethics?
Ethics is building and maintaining healthy relationships for the betterment
of all. Evolution has equipped us all with a good solid moral sense that
we frequently don't/can't override even with our short-sighted selfish
desires (that, more frequently
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Three things.
First, David Chalmers is considered one of the world's foremost
researchers in the consciousness field (he is certainly now the most
celebrated). He has read the argument presented in my paper, and he
has
Ben Goertzel wrote:
Sorry to be negative, but no, my proposal is not in any way a
modernization of Peirce's metaphysical analysis of awareness.
Could you elaborate on the difference? It seems very similar to me.
You're saying that consciousness has to do with the bottoming-out of
Trent Waddington wrote:
Richard,
After reading your paper and contemplating the implications, I
believe you have done a good job at describing the intuitive notion of
consciousness that many lay-people use the word to refer to. I
don't think your explanation is fleshed out enough for those
Benjamin Johnston wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
Hi Richard,
I don't have any comments yet about what you have written, because I'm
not sure I fully understand
Colin Hales wrote:
Dear Richard,
I have an issue with the 'falsifiable predictions' being used as
evidence of your theory.
The problem is that right or wrong...I have a working physical model for
consciousness. Predictions 1-3 are something that my hardware can do
easily. In fact that kind
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:
How do you propose testing whether a model is correct or not?
By determining whether it is useful and predictive -- just
like what we always do when we're practicing science (as
opposed to spouting BS).
An ethical model tells you
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:
What I am claiming (and I will make this explicit in a
revision of the paper) is that these notions of
explanation, meaning, solution
to the problem, etc., are pushed to their breaking
point by the problem of consciousness. So
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
For example, in
fifty years, I think it is quite possible we will be able to say with some
confidence whether certain machine intelligences we design are conscious or not,
and whether their pain is as real as the pain of another type of
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Okay, let me phrase it like this: I specifically say (or
rather I should have done... this is another thing I need to
make more explicit!) that the predictions are about making
alterations at EXACTLY the boundary of the analysis
Ben Goertzel wrote:
Ed,
BTW on this topic my view seems closer to Richard's than yours, though
not anywhere near identical to his either. Maybe I'll write a blog post
on consciousness to clarify; it's too much for an email...
I am very familiar with Dennett's position on consciousness, as
I have no doubt that if you did the experiments you describe, the
brains would be rearranged consistently with your predictions. But what
does that say about consciousness?
What are you asking about consciousness?
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To:
On 11/14/2008 9:27 AM, Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
Good paper.
A
Matt Mahoney wrote:
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Okay, let me phrase it like this: I specifically say (or rather I
should have done... this is another thing I need to make more
explicit!) that the predictions are about making alterations at
EXACTLY the
Harry Chesley wrote:
On 11/14/2008 9:27 AM, Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:
No it won't, because people are free to decide what makes pain real.
What? You've got to be kidding . . . . What makes
pain real is how the sufferer reacts to it -- not some
abstract wishful thinking that we use to justify our
Matt,
First, it is not clear people are free to decide what makes pain real,
at least subjectively real. If I zap you with a horrible electric shock of
the type Saddam Hussein might have used when he was the chief
interrogator/torturer of Iraq's Baathist party, it is not clear exactly how
much
An excellent question from Harry . . . .
So when I don't remember anything about those towns, from a few minutes
ago on my road trip, is it because (a) the attentional mechanism did not
bother to lay down any episodic memory traces, so I cannot bring back the
memories and analyze them, or (b)
Thanks Richard ... I will re-read the paper with this clarification in mind.
On the face of it, I tend to agree that the concept of explanation is
fuzzy and messy and probably is not, in its standard form, useful for
dealing with consciousness
However, I'm still uncertain as to whether your
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.
I mean that people are free to decide if others feel pain. For example, a
scientist may decide that a mouse does not feel pain when it is
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I mean that people are free to decide if others feel pain. For example, a
scientist may decide that a mouse does not feel pain when it is stuck in the
eye with a needle (the standard way to draw blood) even though it
There are procedures in place for experimenting on humans. And the
biologies of people and animals are orthogonal! Much of this will be
simulated soon
On 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote:
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I mean that
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:
Autobliss responds to pain by changing its behavior to
make it less likely. Please explain how this is different
from human suffering. And don't tell me its because one
is human and the other is a simple program, because...
Why
Matt,
With regard to your first point I largely agree with you. I would, however,
qualify it with the fact that many of us find it hard not to sympathize with
people or animals, such as a dog, under certain circumstances when we
directly sense outward manifestations that they are experiencing
--- On Mon, 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote:
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:
I mean that people are free to decide if others feel
pain. For example, a scientist may decide that a mouse does
not feel pain when it is stuck in the eye
--- On Mon, 11/17/08, Eric Burton [EMAIL PROTECTED] wrote:
There are procedures in place for experimenting on humans. And the
biologies of people and animals are orthogonal! Much of this will be
simulated soon
When we start simulating people, there will be ethical debates about that. And
Before you can start searching for consciousness, you need to describe
precisely what you are looking for.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
From: Ed Porter [EMAIL PROTECTED]
Subject: RE: FW: [agi] A paper that actually does solve
Richard Loosemore wrote:
Harry Chesley wrote:
A related question: How do you explain the fact that we sometimes
are aware of qualia and sometimes not? You can perform the same
actions paying attention or on autopilot. In one case, qualia
manifest, while in the other they do not. Why is that?