On Sun, 20 Jun 2021 at 05:48, John Clark <johnkcl...@gmail.com> wrote:

> On Sat, Jun 19, 2021 at 11:36 AM Jason Resch <jasonre...@gmail.com> wrote:
>
> >> I'm enormously impressed with DeepMind and I'm an optimist regarding
>>> AI, but I'm not quite that optimistic.
>>>
>>
>> *>Are you familiar with their Agent 57? -- a single algorithm that
>> mastered all 57 Atari games at a superhuman level, with no outside
>> direction, no specification of the rules, and whose only input was the "TV
>> screen" of the game.*
>>
>
> As I've said, that is very impressive, but even more impressive would be
> winning a Nobel prize, or even just being able to diagnose that the problem
> with your old car is a broken fan belt and then remove the bad belt and
> install a good one. We're not quite there yet.
>
> *> Also, because of chaos, predicting the future to any degree of accuracy
>> requires exponentially more information about the system for each finite
>> amount of additional time to simulate, and this does not even factor in
>> quantum uncertainty,*
>>
>
> And yet humans can often make predictions that turn out to be better than
> random guessing, and a computer should be able to do at least as well; I'm
> certain they eventually will.
>
> >  Being unable to predict the future isn't a good definition of the
>> singularity, because we already can't.
>>
>
> Not true; we can often make very good predictions, but that will be
> impossible during the singularity.
>
>  > *We are getting very close to that point. *
>>
>
> Maybe, but even if the singularity won't happen for 1000 years, then 999
> years from now it will still seem like a long way off, because more
> progress will be made in that last year than in the previous 999 combined.
> That's the nature of exponential growth, and it's why predictions are
> virtually impossible during that time: the tiniest uncertainty in the
> initial conditions gets magnified into a huge difference in the final
> outcome.
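>
> A minimal toy illustration of that sensitivity (my own sketch, using the
> standard logistic map x -> 4x(1-x) as a stand-in chaotic system; nothing
> here is specific to the singularity):
>
>     # Sensitive dependence on initial conditions in the logistic map.
>     def logistic(x, steps):
>         for _ in range(steps):
>             x = 4 * x * (1 - x)  # chaotic regime (r = 4)
>         return x
>
>     x0 = 0.3
>     for eps in (1e-6, 1e-9, 1e-12):
>         gap = abs(logistic(x0, 60) - logistic(x0 + eps, 60))
>         print(f"initial gap {eps:.0e} -> gap after 60 steps {gap:.3f}")
>
> Even a perturbation of one part in a trillion grows to order one within a
> few dozen iterations, since the error roughly doubles at every step.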
>
> *> There may be valid logical arguments that disprove the consistency of
>> zombies. For example, can something "know without knowing?" It seems not.*
>>
>
> Even if that's true I don't see how that would help me figure out if
> you're a zombie or not.
>
>
>> > So how does a zombie "know" where to place its hand to catch a ball,
>> if it doesn't "know" what it sees?
>>
>
> If catching a ball is your criterion for consciousness then computers are
> already conscious, and you don't even need a supercomputer; you can make
> one in your own home for a few hundred dollars and some spare parts. Well,
> maybe so: I have always maintained that consciousness is easy but
> intelligence is hard.
>
> Moving hoop won't let you miss
> <https://www.youtube.com/watch?v=myO8fxhDRW0>
>
> *> For example, we could rule out many theories and narrow down on those
>> that accept "organizational invariance" as Chalmers defines it. This is the
>> principle that if one entity is conscious, and another entity is
>> organizationally and functionally equivalent, preserving all the parts and
>> relationships among its parts, then that second entity must be equivalently
>> conscious to the first.*
>>
>
> Personally I think that principle sounds pretty reasonable, but I can't
> prove it's true and never will be able to.
>

Chalmers presents a proof of this in the form of a reductio ad absurdum:
his "fading qualia" and "dancing qualia" arguments.

>> I know I can suffer, can you?
>>
>>
>> *>I can tell you that I can.*
>>
>
> So now I know you can generate the ASCII sequence "*I can tell you that
> I can*", but that doesn't answer my question: can you suffer? I don't
> even know if you and I mean the same thing by the word "suffer".
>
>
>> *> You could verify via functional brain scans that I wasn't
>> preprogrammed like an Eliza bot to say I can. You could trace the neural
>> firings in my brain to uncover the origin of my belief that I can suffer,
>> and I could do the same for you.*
>>
>
> No, I cannot. Theoretically I could trace the neural firings in your brain
> and figure out how they stimulated the muscles in your hand to type out "*I
> can tell you that I can*"  but that's all I can do. I can't see suffering
> or unhappiness on an MRI scan, although I may be able to trace the nerve
> impulses that stimulate your tear glands to become more active.
>
> *> Could a zombie write a book like Chalmers's "The Conscious Mind"?*
>>
>
> I don't think so, because it takes intelligence to write a book and my
> axiom is that consciousness is the inevitable byproduct of intelligence. I
> can give reasons why I think the axiom is reasonable and probably true,
> but they fall short of a proof; that's why it's an axiom.
>
>
>>
>> *> Some have proposed writing philosophical texts on the philosophy of
>> mind as a kind of super-Turing test for establishing consciousness.*
>>
>
> I think you could do much better than that, because it takes only a
> minimal amount of intelligence to dream up a new consciousness theory;
> they're a dime a dozen, and any one of them is as good, or as bad, as
> another. Good intelligence theories, on the other hand, are hard as hell
> to come up with, but if you do find one you're likely to become the
> world's first trillionaire.
>
> *Wouldn't you prefer the anesthetic that knocks you out vs. the one that
>> only blocks memory formation? Wouldn't a theory of consciousness be
>> valuable here to establish which is which?*
>>
>
> Such a theory would be utterly useless because there would be no way to
> tell if it was correct. If one consciousness theory says you were conscious
> and a rival theory says you were not, there is no way to tell which one
> was right.
>
> *> You appear to operate according to a "mysterian" view of consciousness,
>> which is that we cannot ever know. *
>>
>
> There is no mystery; I just operate in the certainty that there are only
> two possibilities: a chain of "why" questions either goes on forever or
> the chain terminates in a brute fact. In this case I think termination is
> more likely, so I think it's a brute fact that consciousness is the way
> data feels when it is being processed.
>
> Of my own free will, I consciously decide to go to a restaurant.
> *Why? *
> Because I want to.
> *Why ? *
> Because I want to eat.
> *Why?*
> Because I'm hungry.
> *Why ?*
> Because a lack of food triggered nerve impulses in my stomach; my brain
> interpreted these signals as pain, and I can only stand so much pain
> before I try to stop it.
> *Why?*
> Because I don't like pain.
> *Why? *
> Because that's the way my brain is constructed.
> *Why?*
> Because my body and the hardware of my brain were made from the
> information in my genetic code (let's see: 6 billion base pairs, 2 bits
> per base pair, 8 bits per byte; that comes out to about 1.5 gigabytes),
> the programming of my brain came from the environment, add a little
> quantum randomness perhaps, and of my own free will I consciously decide
> to go to a restaurant.
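>
> (A quick sanity check of that arithmetic, taking the round numbers above
> at face value:
>
>     # Back-of-the-envelope size of the information in the genetic code.
>     base_pairs = 6_000_000_000        # ~6 billion base pairs
>     bits = base_pairs * 2             # 4 possible bases -> 2 bits each
>     print(bits / 8 / 1e9, "GB")       # -> 1.5 GB
>
> so the 1.5 gig figure checks out.)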
>
> *> You could have been a mysterian about how life reproduces itself or why
>> the stars shine, until a few hundred years ago, but you would have been
>> proven wrong. Why do you think these questions below are intractable?*
>>
>
> Because there are objective experiments you can perform and things you
> can observe that will give you information on how organisms reproduce
> themselves and how stars shine, but there is nothing comparable with
> regard to consciousness; there is no way to bridge the
> objective/subjective divide without making use of unproven and unprovable
> assumptions or axioms. That's why the field of consciousness research has
> not progressed one nanometer in the last century, or even the last
> millennium.
>
> >> I have no proof and never will have any; however, I must assume that the
>>> above is true because I simply could not function if I really believed that
>>> solipsism was correct and I was the only conscious being in the
>>> universe. Therefore I take it as an axiom that intelligent behavior implies
>>> consciousness.
>>>
>>
>> *> This itself is a theory of consciousness.*
>>
>
> Yep, and it's just as good, and just as bad, as every other theory of
> consciousness.
>
> *> You must have some reason to believe it, even if you cannot yet prove
>> it.*
>>
>
> I do. I know Darwinian Evolution produced me, and I know for a fact that
> I am conscious, but Natural Selection can't see consciousness any better
> than we can directly see consciousness in other people; Evolution can
> only see intelligent behavior, and it can't select for something it can't
> see. And yet Evolution managed to produce consciousness at least once,
> and probably many billions of times. I therefore conclude that either
> Darwin was wrong or consciousness is an inevitable byproduct of
> intelligence. I don't think Darwin was wrong.
>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
-- 
Stathis Papaioannou
