> From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
>
> For general intelligence some components and sub-components of
> consciousness
> need to be there and some don't. And some could be replaced with a human
> operator as in an augmentation-like system. Also some components could
> be
> designe
> From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
>
> The problem of consciousness is not only a hard problem because of
> unknown
> mechanisms in the brain but it is a problem of finding the DEFINITION of
> necessary conditions for consciousness.
> I think, consciousness without intelligence i
--- On Sun, 6/1/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> OK How about this. A CAPTCHA that combines human audio and
> visual illusion that evokes a realtime reaction only in a conscious
> physical human. Can audio visual illusion be used as a test for
> consciousness? Could it be used to evok
--- On Sun, 6/1/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> AI has a long way to go to thwart CAPTCHAs altogether.
> There are math CAPTCHAs (MAPTCHAs), 3-D CAPTCHAs, image rec CAPTCHAs,
> audio and I can think
> of some that are quite difficult for AI. Actually coming up
> with new CAPTCHAs wou
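For concreteness, a math CAPTCHA (MAPTCHA) of the simplest kind could be
sketched like this. This is a hypothetical toy with invented function names,
not any deployed scheme:

```python
import random

def maptcha():
    """Generate a simple arithmetic CAPTCHA: returns (question, answer)."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    op = random.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return f"What is {a} {op} {b}?", answer

def check(answer, reply):
    """Accept the reply if it matches the stored answer as a string."""
    return str(answer).strip() == str(reply).strip()

q, ans = maptcha()
print(q)                      # e.g. "What is 3 * 7?"
print(check(ans, str(ans)))   # True
```

A real MAPTCHA would at least render the question as a distorted image or a
word problem; as plain text it is trivially machine-solvable, which is Matt's
point elsewhere in the thread that AI will eventually make such tests useless.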
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> You are - if I've understood you - talking about the machinery and
> programming that produce and help to process the movie of
> consciousness.
>
> I'm not in any way denying all that or its complexity. But the first
> thing
> is to define and mo
On Saturday 31 May 2008 10:23:15 pm, Matt Mahoney wrote:
> Unfortunately AI will make CAPTCHAs useless against spammers. We will need
to figure out other methods. I expect that when we have AI, most of the
world's computing power is going to be directed at attacking other computers
and defend
John: When you describe this you have to be careful how much computation your
mind is doing and taking for granted. You make many assumptions just by looking
at the pic and saying these are signs that this man is conscious. And saying
that a handheld TV is some sort of model, ya that's making mass
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> you utterly refused to answer my question re: what is your model? It's
> not a
> hard question to start answering - i.e. either you do have some kind of
> model or you don't. You simply avoided it. Again.
I have some models that I feel confident
John,
I'm going to stop here (unless you want to continue) - and not hound you :).
But I would like you to see something -
you utterly refused to answer my question re: what is your model? It's not a
hard question to start answering - i.e. either you do have some kind of
model or you don't. Yo
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > I don't believe you are conscious. I believe you
> > are a zombie. Prove me wrong.
>
> I am a zombie. Prove to me that I am not. Otherwise I will
> accuse you of being conscious.
--- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
I wrote:
> > What internal properties of a Turing machine
> > distinguish one that has subjective experiences from an
> > equivalent machine (implementing the same function) that
> > only pretends to have subjective experience?
>
>
> Y
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
> That's correct. The model of consciousness should be the self [brain-
> body]
> watching and physically interacting with the movie [that is in a sense
> an
> "open movie" - rather than on a closed screen - projected all over the
> world
> outside, an
John: A movie implies someone or something watching it. Too simplistic. A
rock is getting the world movie played upon it ad infinitum.
That's correct. The model of consciousness should be the self [brain-body]
watching and physically interacting with the movie [that is in a sense an
"open movie
On Sun, Jun 1, 2008 at 2:15 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> But in future, there could be impostor agents that act like
>> they have humanlike subjective experience but don't ... and we
>> could uncover them by analyzin
--- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> But in future, there could be impostor agents that act like
> they have humanlike subjective experience but don't ... and we
> could uncover them by analyzing their internals...
What internal properties of a Turing machine distinguish
If by "conscious" you mean "having a humanlike subjective experience",
I suppose that in future we will infer this about intelligent agents
via a combination of observation of their behavior, and inspection of
their internal construction and dynamics.
As right now the only intelligent agents that
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> If something is pretending, at first it may dupe others into thinking
> that it is conscious. But as time goes on and other conscious
> agents detect and suspect an imposter their behavior will change
> towards it and the resultant beh
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
> No, I believe I'm right here. Maths is only quantification - the
> question is: what are you quantifying? Programs are only recipes to construct
> something
> or a sequence of behaviour. The question again is: what are you
> constructing?
>
Math
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> --- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> > People believe they are conscious. Why? Because they are.
>
> No, because people that didn't believe it did not pass on their genes.
>
Also people that didn't believe that they ha
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> People believe they are conscious. Why? Because they are.
No, because people that didn't believe it did not pass on their genes.
> Is there more than just a belief that we are conscious? Sure some
> rare individuals can block pain.
John, The reason why people are thinking about all this stuff in terms of
maths is because it is not all just fluffy philosophizing: you have to have at
least minimalistic math models in order to build software. So when you say
iTheatre or iMovie I'm thinking bits per second, compression, color dep
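John's bits-per-second framing can be made concrete with a back-of-envelope
calculation. The resolution, color depth, and frame rate below are assumed
illustration values, not anything from the thread:

```python
# Raw bandwidth of an uncompressed visual stream.
# All parameters are assumed values, for illustration only.
width, height = 1920, 1080   # pixels per frame
color_depth = 24             # bits per pixel
fps = 30                     # frames per second

bits_per_frame = width * height * color_depth      # 49,766,400 bits
bits_per_second = bits_per_frame * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s raw")   # 1.49 Gbit/s raw
```

Compression changes this figure by two orders of magnitude or more, which is
why bits per second, compression, and color depth are exactly the variables a
minimal model of any "world-movie" has to pin down.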
> From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
>
> read http://cs-www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf
>
You can come up with different models of consciousness. And the more models
you think up, the more variables creep into the equation. So you have to
fight to keep ones out
Here are some examples of consciousness as world-movie -
http://www.youtube.com/watch?v=_kbNv3vBvcI
http://www.youtube.com/watch?v=aiLvLsBjFwc
http://www.youtube.com/watch?v=2qPC4Ty0tlY
http://www.youtube.com/watch?v=Jwkm-IcBMy0
There are obvious limitations of this model - the pov camera only c
You guys are seriously irritating me.
You are talking such rubbish. But it's collective rubbish - the
collective *non-sense* of AI. And it occurs partly because our culture
doesn't offer a simple definition of consciousness. So let me have a crack
at one.
First off, let's remove consciousness-as
What many people call consciousness is qualia, that which distinguishes
you from a philosophical zombie, http://en.wikipedia.org/wiki/P-zombie
There is no test for consciousness in this sense, but humans universally
believe that they are conscious, and this belief is testable. Just ask
someone.
read http://cs-www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
> From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
>
> And if a spherical cow is suspended in a vacuum for 1 billion years,
> will it dream of intelligent robots? I have a distinct impression that
> people participating in this thread want to shroud themselves and
> others in opaque and hopeless mys
On Thu, May 29, 2008 at 8:08 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> If an agent is shielded from memories about the processes going on in its
> own mind that are related to itself, if it is unaware of itself within its
> environment is it impossible for it to learn? Does it have to know ab
Consciousness opens a major can of worms and has religious issues. And a God
could be an emergence from the complex system of distributed consciousness
of multi-agent intelligent lifeforms (figure I'd throw that out there).
But avoiding consciousness and keeping intelligent agents unawares of
t
John, Matt, et al,
On 5/29/08, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > --- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> >
> > > Consciousness with minimal intelligence may be easier to build than
> > general
> > > intelligence. General intellig
On Thu, May 29, 2008 at 6:41 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> How can the two terms be equivalent? Some may think that they are
> inseparable, or that one cannot exist without the other, I can understand
> that perspective. But there is a quantitative relationship between the two.
>
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> Consciousness with minimal intelligence may be easier to build than
general
> intelligence. General intelligence is the one that takes the resources.
> A general consciousness algorithm, one that creates a consciousness in
any
> environment may be s
John Rose communicated:
>
> Consciousness with minimal intelligence may be easier
> to build than general intelligence. [...]
IMHO consciousness emerges from any level of intelligence.
Please see
http://mentifex.virtualentity.com/conscius.html
"Is MindForth conscious?"
http://mentifex.virtualenti