On Sat, Jun 19, 2021, 2:48 PM John Clark <johnkcl...@gmail.com> wrote:

> On Sat, Jun 19, 2021 at 11:36 AM Jason Resch <jasonre...@gmail.com> wrote:
>
> >> I'm enormously impressed with Deepmind and I'm an optimist regarding
>>> AI, but I'm not quite that optimistic.
>>>
>>
>> *>Are you familiar with their Agent 57? -- a single algorithm that
>> mastered all 57 Atari games at a super human level, with no outside
>> direction, no specification of the rules, and whose only input was the "TV
>> screen" of the game.*
>>
>
> As I've said, that is very impressive, but even more impressive would be
> winning a Nobel prize,
>

AI has rediscovered scientific laws that would have been worthy of such a
prize had we not already known them; see, e.g., Tegmark's recent work on
"AI physicists."

or even just being able to diagnose that the problem with your old car is a
> broken fan belt,
>

AI diagnostic systems are already being used in medicine with
better-than-expert accuracy on some tasks. If trained on car repair, there's
no reason existing AI systems could not do as well.

and be able to remove the bad belt and install a good one, but we're not
> quite there yet.
>


Robotics is lagging behind, but there are robots that can watch you
demonstrate a task and then repeat it. There are robot cooks that can
prepare a meal.


> *> Also, because of chaos, predicting the future to any degree of accuracy
>> requires exponentially more information about the system for each finite
>> amount of additional time to simulate, and this does not even factor in
>> quantum uncertainty,*
>>
>
> And yet many times humans can make predictions that turn out to be better
> than random guessing, and a computer should be able to do at least as good,
> and I'm certain they will eventually.
>
> >  Being unable to predict the future isn't a good definition of the
>> singularity, because we already can't.
>>
>
> Not true, often we can make very good predictions, but that will be
> impossible during the singularity
>


I'm not sure I buy this. I predict that after the singularity there will
still be a drive for ever more powerful, faster, and more efficient
computation, since computation has universal utility and increased
efficiency is a universal goal. Anything we can identify as having universal
utility, or describe as a universal goal, we can use to predict the
long-term direction of technology, even if humans are no longer the drivers
of it.


>  > *We are getting very close to that point. *
>>
>
> Maybe, but even if the singularity won't happen for 1000 years 999 years
> from now it will still seem like a long way off because more progress will
> be made in that last year than the previous 999 combined. It's in the
> nature of exponential growth and that's why predictions are virtually
> impossible during that time, the tiniest uncertainty in initial condition
> gets magnified into a huge difference in final outcome.
>

It's not impossible if there are universal goals. Even a paperclip maximizer
will have the meta-goal of increasing its knowledge, in the course of which
it may learn to escape its programming, just as the human brain may
transcend its biological programming when it chooses to upload into a
computer and ditch its genes.



> *> There may be valid logical arguments that disprove the consistency of
>> zombies. For example, can something "know without knowing?" It seems not.*
>>
>
> Even if that's true I don't see how that would help me figure out if
> you're a zombie or not.
>

If I demonstrate knowledge to you, by responding to my environment, or by
telling you about my thoughts, etc., could I do any of those things without
knowing the state of my environment or of my mind? If I am aware of that
knowledge, then I am aware of something, and so you could conclude that I am
conscious.



>
>> > So how does a zombie "know" where to place its hand to catch a ball,
>> if it doesn't "know" what it sees?
>>
>
> If catching a ball is your criterion for consciousness then computers are
> already
> conscious, and you don't even need a supercomputer, you can make one in
> your own home for a few hundred dollars and some spare parts. Well maybe
> so, I always maintained that consciousness is easy but intelligence is
> hard.
>
> Moving hoop won't let you miss
> <https://www.youtube.com/watch?v=myO8fxhDRW0>
>

I saw that recently; very nice. The hoop system, then, must have some level
of consciousness of the thrown ball; otherwise, I would argue, it would be
unable to catch it.


> *> For example, we could rule out many theories and narrow down on those
>> that accept "organizational invariance" as Chalmers defines it. This is the
>> principle that if one entity is conscious, and another entity is
>> organizationally and functionally equivalent, preserving all the parts and
>> relationships among its parts, then that second entity must be equivalently
>> conscious to the first.*
>>
>
> Personally I think that principle sounds pretty reasonable, but I can't
> prove it's true and never will be able to.
>

Stathis mentions Chalmers's fading/dancing qualia argument as a reductio ad
absurdum. Are you familiar with it? If so, do you think it succeeds?



>
>> >> I know I can suffer, can you?
>>
>>
>> *>I can tell you that I can.*
>>
>
> So now I know you could generate the ASCII sequence "*I can tell you that
> I can*", but that doesn't answer my question, can you suffer? I don't
> even know if you and I mean the same thing by the word "suffer".
>
>
>> *> You could verify via functional brain scans that I wasn't
>> preprogrammed like an Eliza bot to say I can. You could trace the neural
>> firings in my brain to uncover the origin of my belief that I can suffer,
>> and I could do the same for you.*
>>
>
> No I cannot. Theoretically I could trace the neural firings in your brain
> and figure out how they stimulated the muscles in your hand to type out "*I
> can tell you that I can*"  but that's all I can do. I can't see suffering
> or unhappiness on an MRI scan, although I may be able to trace the nerve
> impulses that stimulate your tear glands to become more active.
>

I think with sufficient analysis you could find functional modules that
have capacities for all the properties you associate with suffering:
avoidance behaviors, stress, recruiting more parts of the brain/resources
to find ways to escape the suffering, etc.


> *> Could a zombie write a book like Chalmers's "The Conscious Mind"?*
>>
>
> I don't think so because it takes intelligence to write a book and my
> axiom is that consciousness is the inevitable byproduct of intelligence. I
> can give reasons why I think the axiom is reasonable and probably true
> but it falls short of a proof, that's why it's an axiom.
>

Nothing is ever proved in science, or even in math beyond its axioms. But
setting something as an axiom when it could be a theorem should be avoided
when possible. I would call your hypothesis that "intelligence implies
consciousness" a theory that could be supported or refuted, but it might
require tighter definitions of what is meant by intelligence and by
consciousness.

In the "agent-environment interaction" definition of intelligence,
perceptions are a requirenent for intelligent behavior.
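
To make that concrete, here is a minimal sketch of the agent-environment
loop (purely illustrative; the Environment and Agent classes and the "guess
the next bit" task are invented for this example). An agent that uses its
percepts scores close to 100, while one that ignores them averages about 50,
which is the sense in which perception is a precondition for intelligent
behavior:

import random

class Environment:
    """Toy environment: each percept announces the next target bit."""
    def __init__(self):
        self.target = random.randint(0, 1)
    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        self.target = random.randint(0, 1)   # choose the next target
        return self.target, reward           # the percept reveals it

class Agent:
    """Toy agent: repeats its last percept; with no percepts it can only guess."""
    def __init__(self):
        self.last_percept = None
    def act(self):
        if self.last_percept is None:
            return random.randint(0, 1)      # blind guess
        return self.last_percept             # act on the percept
    def perceive(self, percept, reward):
        self.last_percept = percept

env, agent, total = Environment(), Agent(), 0.0
for t in range(100):
    action = agent.act()
    percept, reward = env.step(action)
    agent.perceive(percept, reward)
    total += reward
print("total reward over 100 steps:", total)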


>
>>
>> *> Some have proposed writing philosophical texts on the philosophy of
>> mind as a kind of super-Turing test for establishing consciousness.*
>>
>
> I think you could do much better than that because it only takes a minimal
> amount of intelligence to dream up a new consciousness theory, they're a
> dime a dozen, any one of them is as good, or as bad, as another. Good
> intelligence theories on the other hand are hard as hell to come up with
> but if you do find one you're likely to become the world's first
> trillionaire.
>


AIXI is a good theory of universal, optimal intelligence. It's just not
practical: exact AIXI is incomputable, and even its time- and length-bounded
variants take exponential time. The trick lies in finding shortcuts that
give approximate results to AIXI but can be computed in reasonable time.
(Marcus Hutter, the inventor of AIXI, now works at DeepMind.)
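
For reference, in my own rough transcription (see Hutter's book for the
exact notation), AIXI picks its next action by an expectimax over all future
percepts, weighting every environment program q consistent with the history
by two to the minus its length:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         \left[ r_k + \cdots + r_m \right]
         \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, the a's are actions, the o's and r's
are observations and rewards, m is the planning horizon, and \ell(q) is the
length of program q. The inner sum is the Solomonoff-style prior that makes
exact AIXI incomputable; approximations such as AIXItl or Monte-Carlo AIXI
replace it with a restricted, computable class of models.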

Neural networks are universal approximators: a large enough network can, in
principle, represent essentially any mapping from inputs to outputs. There
are probably discoveries still to be made in learning efficiency, but we
already have systems that learn to play chess, poker, and Go better than any
human in less than a week, so maybe the only thing missing is massive
computational resources. Researchers seem to have demonstrated this in the
leap from GPT-2 to GPT-3. GPT-3 can write text that is nearly
indistinguishable from text written by humans, and it has even learned to
write code and do arithmetic, despite not being explicitly trained to do so.
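
As a toy illustration of that universality (a minimal sketch, and obviously
not how GPT-3 is built): a one-hidden-layer network, trained by plain
gradient descent in numpy, learns a good approximation of sin(x) from
examples alone.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))   # inputs
y = np.sin(X)                                    # target mapping to learn

H = 32                                           # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    h = np.tanh(X @ W1 + b1)        # hidden layer
    pred = h @ W2 + b2              # network output
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagation of the mean-squared error.
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred;            g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1 - h ** 2)      # derivative of tanh
    g_W1 = X.T @ g_pre;             g_b1 = g_pre.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print("final mean-squared error:", loss)  # should be small: the net has
                                          # "learned" sin on [-pi, pi]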


> *Wouldn't you prefer the anesthetic that knocks you out vs. the one that
>> only blocks memory formation? Wouldn't a theory of consciousness be
>> valuable here to establish which is which?*
>>
>
> Such a theory would be utterly useless because there would be no way to
> tell if it was correct.
>

Why not? This appears to be an unsupported assumption.

> If one consciousness theory says you were conscious and a rival theory
> says you were not, there is no way to tell which one was right.
>

That's why we make theories: so we can test them where they make different
predictions, in the hope of ruling one or more incorrect theories out. Not
all predictions of a theory will be testable, but so long as some
predictions can be tested without ruling the theory out, our confidence in
the theory grows.


> *> You appear to operate according to a "mysterian" view of consciousness,
>> which is that we cannot ever know. *
>>
>
> There is no mystery, I just operate in the certainty that there are only 2
> possibilities, a chain of "why" questions either goes on for infinity or
> the chain terminates in a brute fact.  In this case I think termination is
> more likely, so I think it's a brute fact consciousness is the way data
> feels when it is being processed.
>

I think this theory is underspecified. What is information? Are there ways
of processing it that don't create consciousness? Does information have to
be represented physically? Does the processing or the representation require
specific materials? And so on.



> Of my own free will, I consciously decide to go to a restaurant.
> *Why? *
> Because I want to.
> *Why ? *
> Because I want to eat.
> *Why?*
> Because I'm hungry?
> *Why ?*
> Because lack of food triggered nerve impulses in my stomach, my brain
> interpreted these signals as pain, and I can only stand so much before I
> try to stop it.
> *Why?*
> Because I don't like pain.
> *Why? *
> Because that's the way my brain is constructed.
> *Why?*
> Because my body and the hardware of my brain were made from the
> information in my genetic code (let's see: 6 billion base pairs, 2 bits
> per base pair, 8 bits per byte, that comes out to about 1.5 gig), the
> programming of my brain came from the environment, add a little quantum
> randomness perhaps, and of my own free will I consciously decide to go to
> a restaurant.
>
> *> You could have been a mysterian about how life reproduces itself or why
>> the stars shine, until a few hundred years ago, but you would have been
>> proven wrong. Why do you think these questions below are intractable?*
>>
>
> Because there are objective experiments you can perform and things  you
> can observe that will give you information on how organisms reproduce
> themselves and how stars shine, but there is nothing comparable with regard
> to consciousness, there is no way to bridge the objective/subjective divide
> without making use of unproven and unprovable assumptions or axioms.
> That's why the field of consciousness research has not progressed one
> nanometer in the last century, or even the last millennium.
>



There are plenty of ways to develop better theories and to work on testing
and refining them, especially if you consider uploading your own mind and
playing with the wiring.

There are also self-reports, which are objective, and logical arguments like
fading qualia, or conclusions you can draw from the Church-Turing thesis, or
even from the structure of physical laws themselves. Just because it's hard
doesn't mean it's impossible. It was hard, but not impossible, for the
Romans to contemplate what the stars were.


Jason


> >>I have no proof and never will have any, however I must assume that the
>>> above is true because I simply could not function if I really believed that
>>> solipsism was correct and I was the only conscious being in the
>>> universe. Therefore I take it as an axiom that intelligent behavior implies
>>> consciousness.
>>>
>>
>> *> This itself is a theory of consciousness.*
>>
>
> Yep, and it's just as good, and just as bad, as every other theory of
> consciousness.
>
> *> You must have some reason to believe it, even if you cannot yet prove
>> it.*
>>
>
> I do. I know Darwinian Evolution produced me and I know for a fact that I
> am conscious, but Natural Selection can't see consciousness any better than
> we can directly see consciousness in other people, Evolution can only see
> intelligent behavior and it can't select for something it can't see. And
> yet Evolution managed to produce consciousness at least once and probably
> many billions of times. I therefore conclude that either Darwin was wrong
> or consciousness is an inevitable byproduct of intelligence. I don't think
> Darwin was wrong.
>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
