Re: Are Philosophical Zombies possible?

2024-07-14 Thread Jason Resch
On Sun, Jul 14, 2024, 11:36 AM PGC  wrote:

>
>
> On Sunday, July 14, 2024 at 5:42:23 AM UTC+2 Jason Resch wrote:
>
>
>
> On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:
>
>
>
> On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
>
> Yes it's possible to have a universal Turing machine in the sense that you
> can run any program by just changing the tape, however ONLY if that tape
> has instructions for changing the set of states  that the machine can be
> in.
>
>
>
> It still boggles my mind that matter is Turing-complete.
>
>
> Turing completeness, as incredible as it is, is (remarkably) easy to come
> by. You can achieve it with addition and multiplication, with billiard
> balls, with finite automata (rule 110, or game of life), with artificial
> neurons, etc. That something as sophisticated as matter could achieve it is
> to me less surprising than the fact that these far simpler things can.
>
>
> In hindsight, every result is easy to come by. You assume sophistication
> to beat simplicity. That's just weird, given how little we actually know.
> Without that simplicity for example, we wouldn't have discovered computers.
>

When I say that matter is more sophisticated than, say, the cells in the Game
of Life, I mean matter is more flexible. So if something as limited as GoL is
flexible enough to host a Turing machine within it, then it is less surprising
to me that our (even more flexible) physics allows Turing machines to be
constructed.
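
To make that concrete, here is a minimal sketch in Python (just an
illustration, not part of the original post) of the Game of Life's entire
"physics": one update rule applied to a grid of cells. It is from rules this
simple that people have built working Turing machines inside GoL.

# A minimal sketch (illustration only): one generation of Conway's Game of
# Life on a small wrapping grid. Despite rules this simple, GoL is known to
# be Turing complete -- universal machines have been constructed inside it.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# A "blinker": three live cells that oscillate with period 2.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
for _ in range(2):
    grid = step(grid)
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    print()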



>
>
>
> And this despite parts of physics being not Turing emulable.
>
> Finite physical systems can be simulated to any desired degree of
> accuracy, and moreover all known laws of physics are computable. Which
> parts of physics do you refer to when you say there are parts that aren't
> Turing emulable?
>
>
> ? You write so much about these topics, I cannot understand how you make
> that statement. Many of the known laws are
>

I am not aware of any exceptions, apart from the hypothesized objective wave
function collapse, which is a rather ridiculous theory for which we have no
evidence.

> but there is so much more to physics than known laws and their solutions.
> And to any desired degree of accuracy?
>

When I say this, I quote the Church-Turing-Wolfram-Deutsch principle:
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle

"One expects in fact that universal computers are as powerful in their
computational capabilities as any physically realizable system can be, so
that they can simulate any physical system. This is the case if in all
physical systems there is a finite density of information, which can be
transmitted only at a finite rate in a finite-dimensional space."
— Stephen Wolfram in “Undecidability and Intractability in Theoretical
Physics” (1985)

To my knowledge, this principle remains an open conjecture in physics.



> I'll write fast and clumsily as I am by no means an expert and gotta go:
>
> Some finite-state physical phenomena present significant challenges to
> computational simulation due to their inherent complexity and the
> limitations of current computational models.
>

This is due to the time and space limits of our computer hardware, not due
to any assumed inherent non-computable processes in physics.


> One example is quantum entanglement and superposition. In quantum
> mechanics, particles can exist in multiple states simultaneously, which you
> know, and influence each other instantaneously at a distance, a phenomenon
> known as entanglement.
>

There are no non-local influences unless one believes there is objective
wave function collapse. Entanglement is no more mysterious than the
consistency of measurements; both are the same phenomenon.


Simulating these quantum behaviors on classical Turing machines is
> inherently difficult because it requires representing exponentially growing
> state spaces.
>

Again this is a practical limitation of our hardware.
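
To make the "exponentially growing state spaces" point concrete, here is a
minimal sketch in Python/numpy (an illustration of my own, not from the
thread): a brute-force classical simulation of n qubits is perfectly
possible, it just needs 2**n complex amplitudes, which is a resource problem
rather than a computability problem.

# A minimal sketch (illustration only): brute-force state-vector simulation
# of n qubits. The state needs 2**n complex amplitudes, so the cost grows
# exponentially, but the simulation itself is an ordinary computation.
import numpy as np

def apply_gate(state, gate, target, n):
    # Build the full 2**n x 2**n operator as I (x) ... (x) gate (x) ... (x) I
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                   # start in |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
state = apply_gate(state, H, 0, n)               # (|00> + |10>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state                             # Bell state (|00> + |11>)/sqrt(2)
print(state)   # 4 amplitudes here; for n qubits there would be 2**n of them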



> Turbulence in fluid dynamics is another challenging phenomenon. Turbulent
> flow in fluids features chaotic and unpredictable patterns, including
> vortices and eddies.
>

Chaotic behavior means a system's future state cannot be predicted by
analytic means (there is no equation we can plug a time variable into to get
a result arbitrarily far into the future). Rather, chaotic systems must be
simulated step by step. Such systems can still be simulated to any desired
degree of accuracy, though measurement limitations will impose limits on how
much we can know about the initial state of a system we intend to simulate.
Again, the existence of chaotic systems is not an example of uncomputable
physical laws.
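
As a small illustration (mine, not from the thread): the logistic map is a
textbook chaotic system. Two nearly identical starting points diverge
quickly, so there is no analytic shortcut to step N, yet each step is a
trivial, perfectly computable update.

# A minimal sketch (illustration only): chaos in the logistic map. There is
# no closed-form shortcut to step N, but simulating step by step is easy --
# chaos does not imply uncomputability, only sensitivity to initial data.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.200000, 0.200001      # two nearly identical initial conditions
for n in range(1, 31):
    a, b = logistic(a), logistic(b)
    if n % 10 == 0:
        print(f"step {n}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.6f}")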


> Although Navier-Stokes equations describe fluid flow, solving these
> equations accurately (really accurately, beyond engineering application)
> for turbulent system

Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:

>
>
> On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
>
> Yes it's possible to have a universal Turing machine in the sense that you
> can run any program by just changing the tape, however ONLY if that tape
> has instructions for changing the set of states  that the machine can be
> in.
>
>
>
> It still boggles my mind that matter is Turing-complete.
>

Turing completeness, as incredible as it is, is (remarkably) easy to come
by. You can achieve it with addition and multiplication, with billiard
balls, with finite automata (rule 110, or game of life), with artificial
neurons, etc. That something as sophisticated as matter could achieve it is
to me less surprising than the fact that these far simpler things can.
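
To illustrate just how little machinery suffices, here is a minimal sketch in
Python (my own illustration) of rule 110, the one-dimensional automaton
mentioned above, which Cook proved Turing complete: the entire "machine" is
an 8-entry lookup over three neighboring cells.

# A minimal sketch (illustration only): one step of the Rule 110 cellular
# automaton. The whole update rule is this 8-entry table, yet the automaton
# is known to be Turing complete.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # Advance one generation (the row wraps around at the edges).
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 31 + [1]           # a single live cell at the right edge
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)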


> And this despite parts of physics being not Turing emulable.
>
Finite physical systems can be simulated to any desired degree of
accuracy, and moreover all known laws of physics are computable. Which
parts of physics do you refer to when you say there are parts that aren't
Turing emulable?

Jason

We can implement Turing Machines with matter, and even with constraints in
> the physical world, it appears to be the basic principle of brains, cells,
> and computers.
>
> Just for clarity’s sake, we should distinguish the idea of
> Turing/universal machine with some demonstrative physical implementation,
> like some computer, tape machine, or LLM running on my table/in the cloud:
> By Turing machine, I mean a T machine u such that phi_u(x, y) = phi_x(y).
> We call “u” the computer, x is named the program, and y is the data. Of
> course, (x, y) is supposed to be a number (coding the two numbers x and y).
> And yeah, you can specify it with infinite tape, print, read, write heads,
> and many other formalisms that have proven equivalent etc. but the class of
> functions is the same. The set of partially computable functions from N to
> N with the standard definitions and axioms.
>
> There are a lot of posts distinguishing this computer here, that LLM
> there, that brain in my head etc. ostensively, as if we knew what we were
> talking about. If we believe we are Turing emulable at some level of
> description, then we are not able to distinguish between ourselves and our
> experiences when emulated in say Python, which is emulated by Rust, which
> is emulated by Swift, which is emulated by Kotlin, which is emulated by Go,
> which is emulated by Elixir, which is emulated by Julia, which is emulated
> by TypeScript, which is emulated by R, which is emulated by a physical
> universe, itself emulated by arithmetic (e.g. assuming arithmetical realism
> like Russell and Bruno), from “our self” emulated in Rust, emulated by
> Python, emulated by Go, emulated by Swift, emulated by Julia, emulated by
> Elixir, emulated by Kotlin, emulated by R, emulated by TypeScript, emulated
> by arithmetic, emulated by a physical universe…
>
> That’s the difficulty of defining what a physical instantiation of a
> computation is (See Maudlin and MGA). For if we could distinguish those
> computations, we’d have something funky in consciousness, which would not
> be Turing emulable, falsifying the arithmetical realism type approaches.
> And if you have that, I’d like to know everything about you, your diet,
> reading habits, pets, family, beverages, medicines etc. and whether
> something like gravity is Turing emulable, even if I guess it isn’t. Send
> me that message in private though and don’t publish anything.
>



Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 6:22 PM John Clark  wrote:

> On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker 
> wrote:
>
> *> All Turing machines have the same computational capability. *
>
>
> Well that certainly is not true! There is a Turing Machine for any
> computable task, but any PARTICULAR  Turing Machine has a finite number of
> internal states and can only do one thing. If you want something else done
> then you are going to have to use a Turing Machine with a different set
> of internal states.
>

The number of internal states a Turing machine has is unrelated to a Turing
machine's universality. Think of internal states as the instruction set in
a CPU. A CPU can only be in so many states, but pair it with a memory and a
loop, and it can compute anything.

I think what you are saying makes sense if you consider a Turing machine
running a particular fixed program. Then the Turing machine acts like some
particular machine. And if you want it to act differently, you need to
provide a different program.
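
To make the CPU analogy concrete, here is a minimal sketch in Python (my own
illustration, not part of the original exchange) of a machine with a fixed
three-instruction set operating on unbounded registers. Minsky-style counter
machines of exactly this flavor are a classic Turing-complete model, so a
small fixed "instruction set" plus memory and a loop really is enough.

# A minimal sketch (illustration only): a Minsky-style counter machine. The
# instruction set is fixed and tiny (INC, DECJZ, HALT), but with unbounded
# registers and a program loop this model is Turing complete.
def run(program, registers):
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "INC":            # ("INC", r): increment register r
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DECJZ":        # ("DECJZ", r, t): jump to t if r is 0,
            r, target = op[1], op[2]  # otherwise decrement r
            if registers[r] == 0:
                pc = target
            else:
                registers[r] -= 1
                pc += 1
        elif op[0] == "HALT":
            return registers

# Example program: add register 1 into register 0 (register 2 stays 0 and is
# used only to get an unconditional jump out of DECJZ).
add = [
    ("DECJZ", 1, 3),   # 0: if r1 == 0 jump to HALT, else r1 -= 1
    ("INC", 0),        # 1: r0 += 1
    ("DECJZ", 2, 0),   # 2: unconditional jump back to 0
    ("HALT",),         # 3
]
print(run(add, {0: 2, 1: 3, 2: 0}))   # -> {0: 5, 1: 0, 2: 0}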

Jason


>
> The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n).
> This is because for each of the 2n (state, symbol) combinations there are n+1
> choices for the next state (including the halt state), 2 choices for which
> symbol to write, and 2 choices for which direction to move the read head. So
> for example there are 16,777,216 different three-state Turing Machines, and
> 25,600,000,000 different four-state Turing Machines.
>
>John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> nrp
>



Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 4:18 PM Brent Meeker  wrote:

>
>
> On 7/13/2024 4:07 AM, John Clark wrote:
>
> On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker 
> wrote:
>
>>> An AI needs to play dumb in order to fool a human into thinking it is
>>> human. Don't you find that fact to be compelling?
>>
>>
>> *No, it only passed because the human interlocutor didn't ask the right
>> questions; like, "Where are you?" and  "Is it raining outside?". *
>>
>
> If the AI  was trying to deceive the human into believing it was not a
> computer then it would simply say something like "*I am in Vancouver
> Canada and it's not raining outside it's snowing*".
>
> Which could easily be checked in real time.  Any one question won't resolve
> whether it's a person or not but a sequence can provide good evidence.
> Next question, "Is there a phone in your room."  Answer, "Yes"  Call the
> number and see if anyone answers.  etc.  The point is a human IS in a
> specific place and can act there.  An LLM AI isn't anyplace in particular.
>


The reason for conducting the test by text (rather than in person with an
android body) was to prevent external clues from spoiling the result. To be
completely fair, perhaps the test needs to be amended to judge between an
AI and an uploaded human brain.

Jason


> And I don't see how a question like that could help you figure out the
> nature of an AI's mind, or any mind for that matter, even if the AI was
> ordered to tell the truth. The position of a mind in 3D space is a nebulous
> concept; if your brain is in one place and your sense organs are in another
> place, and you're thinking
>
> At other times you say consciousness is just how data feels when being
> processed.  It's processed in your brain...which has a definite location.
>
> about yet another place, then where exactly is the position of your mind?
>
> I just asked "Where are you?"  Not "Where is your mind?"
>
> I think it's a nonsense question because  "you" should not be thought of
> as a pronoun but as an adjective.  You are the way atoms behave when they
> are organized in a Brentmeekerian way.
>
> And those atoms have a location in order to interact.
>
> Brent
>
> So asking a question like that is like asking where is "big" located or
> the color yellow.
>
>  See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> y11
>
>
>
>



Re: Are Philosophical Zombies possible?

2024-07-12 Thread Jason Resch
On Fri, Jul 12, 2024, 7:02 AM John Clark  wrote:

> On Thu, Jul 11, 2024 at 7:01 PM Jason Resch  wrote:
>
> >> Who judges if the "phenomenal judgments" of the machine are correct or
>>> incorrect? Even humans can't agree among themselves about most
>>> philosophical matters, certainly that's true of members of this list.
>>>
>>
>> *> They don't have to be correct, as far as I know. The machine just has
>> to make phenomenal judgements (without prior training on such topics).*
>>
>
> The AI's responses don't have to be correct?!  Generating philosophical 
> blather
> about consciousness is the easiest thing in the world because there is
> nothing to work on, there are no facts that the blather must fit. For it to
> rise a little above the level of blather you've got to start with an
> unproven axiom such as "*consciousness is the way data feels when it is
> being processed and thus I am not the only conscious being in the universe*".
>
>
>
>> *> Failing the test doesn't imply a lack of consciousness. But passing
>> the test implies the presence of consciousness.*
>>
>
> So the Argonov Test has the same flaw that the Turing Test has, and is far
> easier to pass. For a computer to pass the Turing Test it must be able to
> converse intelligently, but not too intelligently, ON ANY SUBJECT, but to
> pass the  Argonov Test it only needs to be able to prattle on about
> consciousness.
>
>
> *> there must be a source of information to permit the making of
>> phenomenal judgements, and since the machine was not trained on them, what
>> else, would you propose that source could be, other than consciousness?*
>>
>
> From your questions to the AI. When I meet someone we don't spontaneously
> start talking about consciousness, it only happens when one of us steers
> the conversation into that direction, and that seldom happens (except on
> this list) because usually both of us would rather talk about other things.
>
>

Do you think that passing the Argonov test would constitute positive proof
of consciousness?

Jason



>   See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> ubu
>



Re: Are Philosophical Zombies possible?

2024-07-12 Thread Jason Resch
On Fri, Jul 12, 2024, 6:25 AM John Clark  wrote:

> On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker 
> wrote:
>
> >> Sometimes on some problems the human brain could be considered as
>>> being Turing Complete, otherwise we would never be able to do anything that
>>> was intelligent.
>>
>>
>> *> ??? How on Earth do you reach that conclusion.*
>>
>
> I reached that conclusion because I know that anything that can process
> data, and the human brain can process data, can be emulated by a Turing
> Machine. And a Turing Machine is Turing Complete.
>


Perhaps you mean the brain is "Turing emulable" (i.e., computable) here,
rather than "Turing complete" (which means having the capacity to emulate
any other Turing machine).

Jason


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> mnl
>
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 6:00 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 5:33 PM Jason Resch  wrote:
>
> *> Consider a deterministic intelligent machine having no innate
>> philosophical knowledge or philosophical discussions while learning. Also,
>> the machine does not contain informational models of other creatures (that
>> may implicitly or explicitly contain knowledge about these creatures’
>> consciousness). If, under these conditions, the machine produces phenomenal
>> judgments on all problematic properties of consciousness, then, according
>> to [the postulates], materialism is true and the machine is conscious.*
>>
>
> Who judges if the "phenomenal judgments" of the machine are correct or
> incorrect? Even humans can't agree among themselves about most
> philosophical matters, certainly that's true of members of this list.
>

They don't have to be correct, as far as I know. The machine just has to
make phenomenal judgements (without prior training on such topics). If a
machine said "I think, therefore I am", or proposed epiphenomenalism,
without having been trained on any philosophical topics, those would
constitute phenomenal judgements that suggest the machine possesses
consciousness.


And the fact is many, perhaps most, human beings don't think about deep
> philosophical questions at all, they find it all to be a big bore, so does
> that mean they're philosophical zombies?
>

Failing the test doesn't imply a lack of consciousness. But passing the
test implies the presence of consciousness.

And just because a machine can pontificate about consciousness, what
> reason, other than Argonov's authority, would I have for believing the
> machine was conscious?
>

That there must be a source of information to permit the making of
phenomenal judgements, and since the machine was not trained on them, what
else would you propose that source could be, other than consciousness?

Jason


> I'm going to take a break from the list right now because I wanna watch
> Joe Biden's new press conference  ah... I think I think I wanna watch
> it it
>
>  See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> bfq
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 5:28 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 5:01 PM Jason Resch  wrote:
>
>
>> *> There are easier and harder tests than the Turing test. I don't know
>> why you say it's the only test we have. Also: would passing the Argonov
>> test (which I described in my document on whether zombies are possible) not
>> be a sufficient proof of consciousness? Note that the Argonov test is much
>> harder to pass than the Turing test.*
>>
>
> I have a clear understanding of exactly what the Turing Test is, but I am
> unable to get a clear understanding of exactly, or even approximately, what
> the Argonov test is. I know it has something to do with "phenomenal
> judgments" but I don't know what that means and I don't know what I need to
> do to pass the Argonov Test, so I guess I'd fail it. And because of my
> failure to understand the test it seems that I've been wrong all my life
> about being conscious and really I am a philosophical zombie.
>


“Phenomenal judgments” are the words, discussions, and texts about
consciousness, subjective phenomena, and the mind-body problem. […]

In order to produce detailed phenomenal judgments about problematic
properties of consciousness, an intelligent system must have a source of
knowledge about the properties of consciousness. [...]

Consider a deterministic intelligent machine having no innate philosophical
knowledge or philosophical discussions while learning. Also, the machine does
not contain informational models of other creatures (that may implicitly or
explicitly contain knowledge about these creatures’ consciousness). If, under
these conditions, the machine produces phenomenal judgments on all
problematic properties of consciousness, then, according to [the postulates],
materialism is true and the machine is conscious.
— Victor Argonov in “Experimental Methods for Unraveling the Mind-Body
Problem: The Phenomenal Judgment Approach” (2014)



Jason



>
>   See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> pzx
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 4:57 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker 
> wrote:
>
> *> A rock, along with many other things, can't pass a first grade
>> arithmetic test either; but that doesn't show that anything that can't pass
>> a first grade arithmetic test is unintelligent or unconscious, as for
>> example an octopus or a 3yr old child.*
>>
>
> And because of their failure to pass a first year arithmetic test we would
> say that a rock, an octopus and a three year old child are not behaving
> very intelligently. But as I said before, the Turing Test is not perfect,
> however it's all we've got. If something passes the test then it's
> intelligent and conscious. If it fails the test then it may or may not be
> intelligent and/or conscious.
>

There are easier and harder tests than the Turing test. I don't know why
you say it's the only test we have.

Also: would passing the Argonov test (which I described in my document on
whether zombies are possible) not be a sufficient proof of consciousness?
Note that the Argonov test is much harder to pass than the Turing test.

Jason



> See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> asb
>
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 2:39 PM Brent Meeker  wrote:

> I stand corrected.  But that just means I chose a bad example.  My point
> was that consciousness doesn't require Turing completeness.  You agreed
> with me about the paramecium.
>


I agree Turing completeness is not required for consciousness. The human
brain (given its limited and faulty memory) wouldn't even meet the
definition of being Turing complete.

Jason


> Brent
>
> On 7/10/2024 7:24 AM, Jason Resch wrote:
>
> There was a study done in the 1950s on probabilistic Turing machines (
> https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en
> ) that found what they could compute is no different than what a
> deterministic Turing machine can compute.
>
> "The computing power of Turing machines
> provided with a random number generator was
> studied in the classic paper [Computability by
> Probabilistic Machines]. It turned out that such
> machines could compute only functions that are already computable by
> ordinary Turing machines."
> — Martin Davis in “The Myth of Hypercomputation” (2004)
>
> To see why, consider that programs can similarly split themselves and run
> in parallel with each of the possible values. To each instance of the split
> program, the value it is provided will seem random. But importantly: what
> the program computes with this value is the same as what it would compute
> had the value come from a "truly random" quantum measurement.
>
>> It would make a difference if it were a quantum computer or not.
>>
>
> For us observing the program run from the outside, it would make a
> difference. But the program itself has no way of distinguishing if it is
> receiving a value that came from a real measurement of a quantum system, or
> if it was provided the result of a simulated quantum system.
>
>
>> And going the other way, what if it didn't have a multiply operation.
>> We're so accustomed to the standard Turing-complete von Neumann computer
>> that we take it for granted.
>>
>
> A program will crash if it's run on hardware that it's not compatible
> with. This is why you can't take a .exe from Windows and run it on a Mac.
> But if you run a Windows emulator on the Mac you can then run the .exe
> within it.
>
> The program then has no idea it is running on a Mac; it has every reason
> to believe it is running on a real Windows computer, but it is fooled by
> the emulation layer (this emulation layer is what I refer to as the "Turing
> firewall"). That such layers can be created is a direct consequence of the
> fact that all Turing machines are capable of emulating each other.
>
> Jason
>
>



Re: Are Philosophical Zombies possible?

2024-07-10 Thread Jason Resch
On Tue, Jul 9, 2024, 7:22 PM Brent Meeker  wrote:

>
>
> On 7/8/2024 1:20 PM, Jason Resch wrote:
>
>
>
> On Mon, Jul 8, 2024, 4:01 PM John Clark  wrote:
>
>> On Mon, Jul 8, 2024 at 2:23 PM Jason Resch  wrote:
>>
>> *> If you believe mental states do not cause anything, then you believe
>>> philosophical zombies are logically possible (since we could remove
>>> consciousness without altering behavior).*
>>>
>>
>> Not if consciousness is the inevitable byproduct of intelligece, and I'm
>> almost certain that it is.
>>
>
> If consciousness is necessary for intelligence, then it's not a byproduct.
> If, on the other hand, consciousness is just a useless byproduct, then it
> could (logically if not nomologically) be eliminated without affecting
> intelligence.
>
> You seem to want it to be both necessary but also be something that makes
> no difference to anything (which makes it unnecessary).
>
> I would be most curious to hear your thoughts  regarding the section of my
> article on "Conscious behaviors" -- that is, behaviors which (seem to)
> require consciousness in order to do them.
>
>
>> *> I view mental states as high-level states operating in their own
>>> regime of causality (much like a Java computer program).*
>>>
>>
>> I have no problem with that, actually it's very similar to my view.
>>
>
> That's good to hear.
>
>
>>
>>> *> The java computer program can run on any platform, regardless of the
>>> particular physical nature of it.*
>>>
>>
>> Right. You could even say that "computer program" is not a noun, it is an
>> adjective, it is the way a computer will behave when the machine's  logical
>> states are organized in a certain way.  And "I" is the way atoms behave
>> when they are organized in a Johnkclarkian way, and "you" is the way atoms
>> behave when they are organized in a Jasonreschian way.
>>
>
> I'm not opposed to that framing.
>
>>
>> *> I view consciousness as like that high-level control structure. It
>>> operates within a causal realm where ideas and thoughts have causal
>>> influence and power, and can reach down to the lower level to do things
>>> like trigger nerve impulses.*
>>>
>>
>> Consciousness is a high-level description of brain states that can be
>> extremely useful, but that doesn't mean that lower level and much more
>> finely grained description of brain states involving nerve impulses, or
>> even more finely grained descriptions involving electrons and quarks are
>> wrong, it's just that such level of detail is unnecessary and impractical
>> for some purposes.
>>
>
> I would even say that, at a certain level of abstraction, they become
> irrelevant. This is the result of what I call "a Turing firewall": software
> has no ability to know its underlying hardware implementation. It is an
> inviolable separation of layers of abstraction, which makes the lower
> levels invisible to the layers above.
>
> That's roughly true, but not exactly.  If you think of intelligence
> implemented on a computer it would make a difference if it had a true
> random number generator (hardware) or not.
>

There was a study done in the 1950s on probabilistic Turing machines (
https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en
) that found what they could compute is no different than what a
deterministic Turing machine can compute.

"The computing power of Turing machines
provided with a random number generator was
studied in the classic paper [Computability by
Probabilistic Machines]. It turned out that such
machines could compute only functions that are already computable by
ordinary Turing machines."
— Martin Davis in “The Myth of Hypercomputation” (2004)

To see why, consider that programs can similarly split themselves and run in
parallel with each of the possible values. To each instance of the split
program, the value it is provided will seem random. But importantly: what the
program computes with this value is the same as what it would compute had the
value come from a "truly random" quantum measurement.

> It would make a difference if it were a quantum computer or not.
>

For us observing the program run from the outside, it would make a
difference. But the program itself has no way of distinguishing if it is
receiving a value that came from a real measurement of a quantum system, or
if it was provided the result of a simulated quantum system.
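
A minimal sketch in Python of that point (my own illustration; the bit
sources are stand-ins): the same program, fed bits through an identical
interface, cannot tell whether its "measurements" come from genuine hardware
entropy or from a deterministic simulation of one.

# A minimal sketch (illustration only): the program body is identical in
# both runs; only the bit source differs, and nothing inside the program
# can detect which source it was handed.
import os, random

def program(next_bit):
    # Some computation that consumes 8 "measured" bits.
    return sum(next_bit() << i for i in range(8))

def hardware_bit():
    return os.urandom(1)[0] & 1          # OS-provided entropy

prng = random.Random(42)                 # deterministic pseudo-random source
def simulated_bit():
    return prng.getrandbits(1)

print(program(hardware_bit), program(simulated_bit))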


> And going the other way, what if it didn't have a multiply operation.
> We're so accustomed the standard Turing-complete von Neum

Re: Are Philosophical Zombies possible?

2024-07-10 Thread Jason Resch
On Tue, Jul 9, 2024, 6:59 PM Brent Meeker  wrote:

>
>
> On 7/8/2024 11:12 AM, Jason Resch wrote:
>
>
>
> On Mon, Jul 8, 2024 at 10:29 AM John Clark  wrote:
>
>>
>> On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker 
>> wrote:
>>
>> *>I thought it was obvious that foresight requires consciousness. It
>>> requires the ability to think in terms of future scenarios*
>>>
>>
>> The keyword in the above is "think". Foresight means using logic to
>> predict, given current starting conditions, what the future will likely be,
>> and determining how a change in the initial conditions will likely
>> affect the future.  And to do any of that requires intelligence. Both Large
>> Language Models and picture to video AI programs have demonstrated that
>> they have foresight ; if you ask them what will happen if you cut the
>> string holding down a helium balloon they will tell you it will flow away,
>> but if you add that the instant string is cut an Olympic high jumper will
>> make a grab for the dangling string they will tell you what will likely
>> happen then too. So yes, foresight does imply consciousness because
>> foresight demands intelligence and consciousness is the inevitable
>> byproduct of intelligence.
>>
>
> Consciousness is a prerequisite of intelligence. One can be conscious
> without being intelligent, but one cannot be intelligent without being
> conscious.
> Someone with locked-in syndrome can do nothing, and can exhibit no
> intelligent behavior. They have no measurable intelligence. Yet they are
> conscious. You need to have perceptions (of the environment, or the current
> situation) in order to act intelligently. It is in having perceptions that
> consciousness appears. So consciousness is not a byproduct of, but an
> integral and necessary requirement for intelligent action.
>
> And not necessarily a high-level language based consciousness.  Paramecia
> act intelligently based on perception of chemical gradients.  So one would
> say they are conscious of said gradients.
>


Yes, I agree.

Jason


> Brent
>
>
> Jason
>
>
>>
>>
>>> *> in which you are an actor*
>>>
>>
>> Obviously any intelligence will have to take its own actions in account
>> to determine what the likely future will be. After a LLM gives you an
>> answer to a question, based on that answer I'll bet an AI  will be able to
>> make a pretty good guess what your next question to it will be.
>>
>> John K ClarkSee what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>> ods
>>
>>
>>
>>
>>
>>
>>>
>



Re: AI hype

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 11:50 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Jason. Recursion is not self-reference. If you would have read my paper
> you would have seen that.
>

You alluded to a familiarity with computer programming. Have you studied
computer science?

Is the classical way the Fibonacci sequence is defined not an example of
self-reference as you use the term? If it's not, then you are using
"self-reference" in a very non-standard way that is sure to confuse a lot of
people, especially computer scientists.
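
For concreteness, the classical recursive definition I have in mind is just
this (a minimal sketch in Python): the function is defined in terms of
itself, which is exactly the self-reference at the heart of recursion.

# A minimal sketch (illustration only): the classical recursive definition
# of the Fibonacci sequence -- the function refers to itself in its own body.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]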

Jason


> On Tuesday 9 July 2024 at 14:42:42 UTC+3 Jason Resch wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 4:04 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> lol ? By knowing that all AI does is to follow deterministic
>>> instructions such as
>>>
>>> if (color == white) {
>>>print ("Is day");
>>> } else {
>>>print ("Is night");
>>> }
>>>
>>
>> This was an objection first made by Ada Lovelace. But Turing showed that
>> even deterministic processes can often surprise us, and behave in ways that
>> aren't predictable (without running the computation until it finishes).
>> E.g., does a machine halt or not?
>>
>> If I give you the program, can you tell me from looking at it what it
>> will do? You might think you can, but consider if I gave you a program that
>> looked for a counterexample to Goldbach's conjecture. If and when it finds
>> it, the program prints it and then halts. Does the machine given this
>> program halt or not?
>>
>> You might have an opinion, but if you can't prove it, then you really
>> don't know. So far no one has been able to prove it one way or the other.
>>
>>
>>
>>
>>> There is no reason involved. Just blindly following instructions. Do
>>> people that believe in the AI believe that computers are magical entities
>>> where fairies live and they sprout rainbows ?
>>>
>>
>> The Turing machine (and computability generally) is built on the notion
>> of recursion, i.e. self-reference. If we are conscious due to
>> self-reference, then why shouldn't recursive computer programs be conscious
>> too?
>>
>>
>> Jason
>>
>>
>>> On Tuesday 9 July 2024 at 06:19:30 UTC+3 Terren Suydam wrote:
>>>
>>>> How has your understanding of computer programming helped you avoid
>>>> being victimized by AI hype?
>>>>
>>>> On Mon, Jul 8, 2024 at 5:19 PM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> People that are victims of the AI hype neither understand computer
>>>>> programming nor consciousness.
>>>>>
>



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 10:50 AM John Clark  wrote:

> On Tue, Jul 9, 2024 at 8:31 AM Jason Resch  wrote:
>
> >> My dictionary says the definition of "*prerequisite*"  is  "*a thing
>>> that is required as a prior condition for something else to happen or 
>>> exist*". And
>>> it says the definition of "*cause*" is "*a person or thing that gives
>>> rise to an action, phenomenon, or condition*". So cause and
>>> prerequisite are synonyms.
>>>
>>
>> *> There's a subtle distinction. Muscles and bones are prerequisites for
>> limbs, but muscles and bones do not cause limbs.*
>>
>
> There are many things that caused limbs to come into existence, one of
> them was the existence of muscles, another was the existence of bones,
> and yet another was the help limbs gave to organisms in getting genes into
> the next generation.
>
> *> Lemons are a prerequisite for lemonade, but do not cause lemonade.*
>>
>
> You can't make lemonade without lemons, and lemons can't make lemonade
> without you.
>

And this highlights the distinction between a prerequisite and a cause.



>
>> *> I define intelligence by something capable of intelligent action.*
>>
>
> Intelligent action is what drove evolution to amplify intelligence, but if
> Stephen Hawking's voice generator had broken down for one hour I would
> still say I have  reason to believe that he remained intelligent during
> that hour.
>


Sure, but that is just delayed action. Would he still be intelligent if he
were never able to speak again (even with the help of a machine)? He
wouldn't be, according to evolution.


>
> *> Intelligent action requires non random choice:*
>>
>
> If it's non-random then by definition it is deterministic.
>

We aren't debating free will here. Not sure why you mention this.


> > *Having information about the environment (i.e. perceptions) is
>> consciousness.*
>>
>
> But you can't have perceptions without intelligence, sight and sound
> would just be meaningless gibberish.
>

How do you define intelligence?


> > *You cannot have perceptions without there being some process or thing
>> to perceive them.*
>>
>
> Yes, and that thing is intelligence.
>
> *> Therefore perceptions (i.e. consciousness) is a requirement and
>> precondition of being able to perform intelligent actions.*
>>
>
> The only perceptions we have firsthand experience with are our own, so
> investigating perceptions is not very useful in Philosophy or in trying to
> figure out how the world works, but intelligence is another matter
> entirely.
>

It is if we want to answer the question of why consciousness evolved.


That's why in the last few years there has been enormous progress in
> figuring out how intelligence works, but nobody has found anything new to
> say about consciousness in centuries.
>

You don't think functionalism is progress?

Jason



> John K Clark
>
>>



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 11:18 AM Stathis Papaioannou 
wrote:

>
>
> Stathis Papaioannou
>
>
> On Wed, 10 Jul 2024 at 00:34, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> Stathis Papaioannou
>>>
>>>
>>> On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Jul 7, 2024 at 3:14 PM John Clark 
>>>>>> wrote:
>>>>>>
>>>>>>> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch 
>>>>>>> wrote:
>>>>>>>
>>>>>>> *>>> ** I think such foresight is a necessary component of
>>>>>>>>>> intelligence, not a "byproduct".*
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> >>I agree, I can detect the existence of foresight in others and
>>>>>>>>> so can natural selection, and that's why we have it.  It aids in 
>>>>>>>>> getting
>>>>>>>>> our genes transferred into the next generation. But I was talking 
>>>>>>>>> about
>>>>>>>>> consciousness not foresight, and regardless of how important we 
>>>>>>>>> personally
>>>>>>>>> think consciousness is, from evolution's point of view it's
>>>>>>>>> utterly useless, and yet we have it, or at least I have it.
>>>>>>>>>
>>>>>>>>
>>>>>>>> *> you don't seem to think zombies are logically possible,*
>>>>>>>>
>>>>>>>
>>>>>>> Zombies are possible, it's philosophical zombies, a.k.a. smart
>>>>>>> zombies, that are impossible because it's a brute fact that 
>>>>>>> consciousness
>>>>>>> is the way data behaves when it is being processed intelligently,
>>>>>>> or at least that's what I think. Unless you believe that all
>>>>>>> iterated sequences of "why" or "how" questions go on forever then
>>>>>>> you must believe that brute facts exist; and I can't think of a better
>>>>>>> candidate for one than consciousness.
>>>>>>>
>>>>>>> *> so then epiphenomenalism is false*
>>>>>>>>
>>>>>>>
>>>>>>> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
>>>>>>> is a position in the philosophy of mind according to which mental 
>>>>>>> states or
>>>>>>> events are caused by physical states or events in the brain but do not
>>>>>>> themselves cause anything*". If that is the definition then I
>>>>>>> believe in Epiphenomenalism.
>>>>>>>
>>>>>>
>>>>>> If you believe mental states do not cause anything, then you believe
>>>>>> philosophical zombies are logically possible (since we could remove
>>>>>> consciousness without altering behavior).
>>>>>>
>>>>>
>>>>> Mental states could be necessarily tied to physical states without
>>>>> having any separate causal efficacy, and zombies would not be logically
>>>>> possible. Software is necessarily tied to hardware activity: if a computer
>>>>> runs a particular program, it is not optional that the program is
>>>>> implemented. However, the software does not itself have causal efficacy,
>>>>> causing current to flow in wires and semiconductors and so on: there is
>>>>> always a sufficient explanation for such activity in purely physical 
>>>>> terms.
>>>>>
>>>>
>>>> I don't disagree that there is sufficient explanation in all the
>>>> particle movements all following physical laws.
>>>>
>>>> But then consider the question, how do we decide what level is in
>>>> control? You make the case that we should consider the quantum field lev

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou 
wrote:

>
>
> Stathis Papaioannou
>
>
> On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>>>
>>>>
>>>>
>>>> On Sun, Jul 7, 2024 at 3:14 PM John Clark  wrote:
>>>>
>>>>> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch 
>>>>> wrote:
>>>>>
>>>>> *>>> ** I think such foresight is a necessary component of
>>>>>>>> intelligence, not a "byproduct".*
>>>>>>>
>>>>>>>
>>>>>>> >>I agree, I can detect the existence of foresight in others and so
>>>>>>> can natural selection, and that's why we have it.  It aids in getting 
>>>>>>> our
>>>>>>> genes transferred into the next generation. But I was talking about
>>>>>>> consciousness not foresight, and regardless of how important we 
>>>>>>> personally
>>>>>>> think consciousness is, from evolution's point of view it's utterly
>>>>>>> useless, and yet we have it, or at least I have it.
>>>>>>>
>>>>>>
>>>>>> *> you don't seem to think zombies are logically possible,*
>>>>>>
>>>>>
>>>>> Zombies are possible, it's philosophical zombies, a.k.a. smart
>>>>> zombies, that are impossible because it's a brute fact that consciousness
>>>>> is the way data behaves when it is being processed intelligently, or
>>>>> at least that's what I think. Unless you believe that all iterated
>>>>> sequences of "why" or "how" questions go on forever then you must
>>>>> believe that brute facts exist; and I can't think of a better candidate 
>>>>> for
>>>>> one than consciousness.
>>>>>
>>>>> *> so then epiphenomenalism is false*
>>>>>>
>>>>>
>>>>> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
>>>>> is a position in the philosophy of mind according to which mental states 
>>>>> or
>>>>> events are caused by physical states or events in the brain but do not
>>>>> themselves cause anything*". If that is the definition then I believe
>>>>> in Epiphenomenalism.
>>>>>
>>>>
>>>> If you believe mental states do not cause anything, then you believe
>>>> philosophical zombies are logically possible (since we could remove
>>>> consciousness without altering behavior).
>>>>
>>>
>>> Mental states could be necessarily tied to physical states without
>>> having any separate causal efficacy, and zombies would not be logically
>>> possible. Software is necessarily tied to hardware activity: if a computer
>>> runs a particular program, it is not optional that the program is
>>> implemented. However, the software does not itself have causal efficacy,
>>> causing current to flow in wires and semiconductors and so on: there is
>>> always a sufficient explanation for such activity in purely physical terms.
>>>
>>
>> I don't disagree that there is sufficient explanation in all the particle
>> movements all following physical laws.
>>
>> But then consider the question, how do we decide what level is in
>> control? You make the case that we should consider the quantum field level
>> in control because everything is ultimately reducible to it.
>>
>> But I don't think that's the best metric for deciding whether it's in
>> control or not. Do the molecules in the brain tell neurons what to do, or do
>> neurons tell molecules what to do (e.g. when they fire)? Or is it some
>> mutually conditioned relationship?
>>
>> Do neurons fire on their own and tell brains what to do, or do neurons
>> only fire when other neurons of the whole brain stimulate them
>> appropriately so they have to fire? Or is it again, another case of
>> mutualism?
>>
>> When two people are discussing ideas, are the ideas determining how each
>> brain thinks and responds, or are the brains determining the ideas by
>> virtue of generating the words through which they are expressed?
>>
>> Though in each of these cases, we can always drop a layer and explain all
>> the events at that layer, that is not (in my view) enough of a reason to
>> argue that the events at that layer are "in charge."

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024 at 8:17 AM Jason Resch  wrote:

>
>
> On Tue, Jul 9, 2024, 7:03 AM 'Cosmin Visan' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> Physical doesn't exist. "Physical" is just an idea in consciousness.
>>
>
>
> Do you see this reality as in any way shared?
>

If you don't then why are you arguing with a figment of your imagination?
If you do, then what name should we give to this shared reality?

Jason



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 8:18 AM John Clark  wrote:

>
>
> On Tue, Jul 9, 2024 at 7:54 AM Jason Resch  wrote:
>
> >>Consciousness is the inevitable product of intelligence, it is not the
>>> cause of intelligence.
>>>
>>
>>
>> *> **I didn't say it was the cause, I said it is a prerequisite.*
>>
>
> My dictionary says the definition of "*prerequisite*"  is  "*a thing that
> is required as a prior condition for something else to happen or exist*". And
> it says the definition of "*cause*" is "*a person or thing that gives
> rise to an action, phenomenon, or condition*". So cause and prerequisite
> are synonyms.
>

There's a subtle distinction.

Muscles and bones are prerequisites for limbs, but muscles and bones do not
cause limbs.

Lemons are a prerequisite for lemonade, but do not cause lemonade.

Intelligence is what you get when you combine perception with action, so
that actions can be selected in a manner guided by perceptions.

Perception (i.e. consciousness) and action are prerequisites for
intelligence. But perception alone does not cause and will not provide
intelligence.


>
>> *> You conveniently (for you but not for me) ignored and deleted my
>> explanation in your reply.*
>>
>
> Somehow I missed that "detailed explanation" you refer to.
>


I copy it here:

I define intelligence by something capable of intelligent action.

Intelligent action requires non random choice: choice informed by
information from the environment.

Having information about the environment (i.e. perceptions) is
consciousness. You cannot have perceptions without there being some process
or thing to perceive them.

Therefore perceptions (i.e. consciousness) is a requirement and
precondition of being able to perform intelligent actions.
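To make that concrete, here is a toy Python sketch (my own illustration, with
made-up names, not a formal model): an agent whose action is a non-random
function of what it perceives, next to one that chooses blindly.

    import random

    def perceive(environment):
        # The agent's "perception": the information it has about its situation.
        return environment["temperature"]

    def act_intelligently(percept):
        # A non-random choice, informed by the percept.
        return "turn_on_heater" if percept < 18 else "do_nothing"

    def act_blindly():
        # A choice made with no perception of the environment at all.
        return random.choice(["turn_on_heater", "do_nothing"])

    env = {"temperature": 15}
    print(act_intelligently(perceive(env)))  # reliably appropriate to the situation
    print(act_blindly())                     # appropriate only by luck

The second agent can only be right by accident; the first can be reliably
right, but only because it has a perception to inform its choice.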



Jason


>



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 7:03 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> Physical doesn't exist. "Physical" is just an idea in consciousness.
>

Then so is this e-mail. Does that mean I should ignore it? Is it of no
relevance?

Or is it part of a vast (apparent) reality we are each trying to navigate?
What's wrong with calling this reality we are each trying to navigate
(where this email exists) physical?

Do you see this reality as in any way shared?

Jason



> On Tuesday 9 July 2024 at 11:33:33 UTC+3 Stathis Papaioannou wrote:
>
>> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>>
>>>
>>>
>>> On Sun, Jul 7, 2024 at 3:14 PM John Clark  wrote:
>>>
>>>> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch  wrote:
>>>>
>>>> *>>> ** I think such foresight is a necessary component of
>>>>>>> intelligence, not a "byproduct".*
>>>>>>
>>>>>>
>>>>>> >>I agree, I can detect the existence of foresight in others and so
>>>>>> can natural selection, and that's why we have it.  It aids in getting our
>>>>>> genes transferred into the next generation. But I was talking about
>>>>>> consciousness not foresight, and regardless of how important we 
>>>>>> personally
>>>>>> think consciousness is, from evolution's point of view it's utterly
>>>>>> useless, and yet we have it, or at least I have it.
>>>>>>
>>>>>
>>>>> *> you don't seem to think zombies are logically possible,*
>>>>>
>>>>
>>>> Zombies are possible, it's philosophical zombies, a.k.a. smart
>>>> zombies, that are impossible because it's a brute fact that consciousness
>>>> is the way data behaves when it is being processed intelligently, or
>>>> at least that's what I think. Unless you believe that all iterated
>>>> sequences of "why" or "how" questions go on forever then you must
>>>> believe that brute facts exist; and I can't think of a better candidate for
>>>> one than consciousness.
>>>>
>>>> *> so then epiphenomenalism is false*
>>>>>
>>>>
>>>> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
>>>> is a position in the philosophy of mind according to which mental states or
>>>> events are caused by physical states or events in the brain but do not
>>>> themselves cause anything*". If that is the definition then I believe
>>>> in Epiphenomenalism.
>>>>
>>>
>>> If you believe mental states do not cause anything, then you believe
>>> philosophical zombies are logically possible (since we could remove
>>> consciousness without altering behavior).
>>>
>>
>> Mental states could be necessarily tied to physical states without having
>> any separate causal efficacy, and zombies would not be logically possible.
>> Software is necessarily tied to hardware activity: if a computer runs a
>> particular program, it is not optional that the program is implemented.
>> However, the software does not itself have causal efficacy, causing current
>> to flow in wires and semiconductors and so on: there is always a sufficient
>> explanation for such activity in purely physical terms.
>>
>> I view mental states as high-level states operating in their own regime
>>> of causality (much like a Java computer program). The java computer program
>>> can run on any platform, regardless of the particular physical nature of
>>> it. It has in a sense isolated itself from the causality of the electrons
>>> and semiconductors, and operates in its own realm of the causality of if
>>> statements, and for loops. Consider this program, for example:
>>>
>>> [image: twin-prime-program2.png]
>>>
>>> What causes the program to terminate? Is it the inputs, and the logical
>>> relation of primality, or is it the electrons flowing through the CPU? I
>>> would argue that the higher-level causality, regarding the logical
>>> relations of the inputs to the program logic is just as important. It
>>> determines the physics of things like when the program terminates. At this
>>> level, the microcircuitry is relevant only to its support of the higher
>>> level causal structures, but the program doesn't need to be aware of nor
>>> consider those low-level things. It operates the same regardless.

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou  wrote:

>
>
> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>
>>
>>
>> On Sun, Jul 7, 2024 at 3:14 PM John Clark  wrote:
>>
>>> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch  wrote:
>>>
>>> *>>> ** I think such foresight is a necessary component of
>>>>>> intelligence, not a "byproduct".*
>>>>>
>>>>>
>>>>> >>I agree, I can detect the existence of foresight in others and so
>>>>> can natural selection, and that's why we have it.  It aids in getting our
>>>>> genes transferred into the next generation. But I was talking about
>>>>> consciousness not foresight, and regardless of how important we personally
>>>>> think consciousness is, from evolution's point of view it's utterly
>>>>> useless, and yet we have it, or at least I have it.
>>>>>
>>>>
>>>> *> you don't seem to think zombies are logically possible,*
>>>>
>>>
>>> Zombies are possible, it's philosophical zombies, a.k.a. smart zombies,
>>> that are impossible because it's a brute fact that consciousness is the way
>>> data behaves when it is being processed intelligently, or at least
>>> that's what I think. Unless you believe that all iterated sequences of
>>> "why" or "how" questions go on forever then you must believe that brute
>>> facts exist; and I can't think of a better candidate for one than
>>> consciousness.
>>>
>>> *> so then epiphenomenalism is false*
>>>>
>>>
>>> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
>>> is a position in the philosophy of mind according to which mental states or
>>> events are caused by physical states or events in the brain but do not
>>> themselves cause anything*". If that is the definition then I believe
>>> in Epiphenomenalism.
>>>
>>
>> If you believe mental states do not cause anything, then you believe
>> philosophical zombies are logically possible (since we could remove
>> consciousness without altering behavior).
>>
>
> Mental states could be necessarily tied to physical states without having
> any separate causal efficacy, and zombies would not be logically possible.
> Software is necessarily tied to hardware activity: if a computer runs a
> particular program, it is not optional that the program is implemented.
> However, the software does not itself have causal efficacy, causing current
> to flow in wires and semiconductors and so on: there is always a sufficient
> explanation for such activity in purely physical terms.
>

I don't disagree that there is sufficient explanation in all the particle
movements all following physical laws.

But then consider the question, how do we decide what level is in control?
You make the case that we should consider the quantum field level in
control because everything is ultimately reducible to it.

But I don't think that's the best metric for deciding whether it's in
control or not. Do the molecules in the brain tell neurons what to do, or do
neurons tell molecules what to do (e.g. when they fire)? Or is it some
mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do neurons only
fire when other neurons of the whole brain stimulate them appropriately so
they have to fire? Or is it again, another case of mutualism?

When two people are discussing ideas, are the ideas determining how each
brain thinks and responds, or are the brains determining the ideas by
virtue of generating the words through which they are expressed?

Though in each of these cases, we can always drop a layer and explain all
the events at that layer, that is not (in my view) enough of a reason to
argue that the events at that layer are "in charge." Control structures,
such as whole brain regions, or complex computer programs, can involve and
be influenced by the actions of billions of separate events and separate
parts, and as such, they transcend the behaviors of any single physical
particle or physical law.

Consider: whether or not a program halts might only be determinable by some
rules and proof in a mathematical system, and in this case no physical law
will reveal the answer to that physical system's (the computer's) behavior.
So if higher level laws are required in the explanation, does it still make
sense to appeal to the lower level (physical) laws as providing the
explanation?

Given the generality of computers, they can also simulate any imaginable
set of physical laws. In such simulations, again I think appealing to our
physical la

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 7:48 AM John Clark  wrote:

> On Mon, Jul 8, 2024 at 4:20 PM Jason Resch  wrote:
>
> *> If consciousness is necessary for intelligence* [...]
>>
>
> Consciousness is the inevitable product of intelligence, it is not the
> cause of intelligence.
>


I didn't say it was the cause, I said it is a prerequisite. You
conveniently (for you but not for me) ignored and deleted my explanation in
your reply.

Jason

And as I cannot emphasize enough, natural selection can't select for
> something it can't see and it can't see consciousness, but natural
> selection CAN see intelligent actions. And you know for a fact that natural
> selection has managed to produce at least one conscious being and probably
> mini billions of them.
> Don't you understand how those two facts are telling you something that is
> philosophically important?
>
>
>> > *If on the other hand, consciousness is just a useless byproduct, then
>> it could (logically if not nomologically) be eliminated without affecting
>> intelligent.*
>>
>
> That would not be possible if it's a brute fact that consciousness is the
> way data feels when it is being processed.
>
> John K Clark
>


Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 4:05 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> So, where is Santa Claus ?


If he's possible in this universe he exists very far away. If he's not
possible in this universe but possible in other universes then he exists in
some subset of those universes where he is possible. If he's not logically
possible he doesn't exist anywhere.


Also, does he bring presents to all the children in the world in 1 night ?
> How does he do that ?
>

He sprinkles fairy dust (nanobot swarms) all over the planet; the swarms
travel down chimneys to self-assemble presents from ambient matter, after
scanning the brains of sleeping children to see if they are naughty or nice
and what present they hoped for.

Jason



> On Tuesday 9 July 2024 at 07:31:46 UTC+3 Jason Resch wrote:
>
>>
>>
>> On Mon, Jul 8, 2024, 6:38 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> So based on your definition, Santa Claus exists.
>>>
>>
>> I believe everything possible exists.
>>
>> That is the idea this mail list was created to discuss, after all. (That
>> is why it is called the "everything list")
>>
>> Jason
>>
>>
>>
>>> On Tuesday 9 July 2024 at 00:47:28 UTC+3 Jason Resch wrote:
>>>
>>>>
>>>>
>>>> On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> Brain doesn't exist.
>>>>
>>>>
>>>> Then it exists as an object in consciousness, which is as much as exist
>>>> would mean under idealism. Rather than say things don't exist, I think it
>>>> would be better to redefine what is meant by existence.
>>>>
>>>>
>>>> "Brain" is just an idea in consciousness.
>>>>
>>>>
>>>> Sure, and all objects exist in the mind of God. So "exist" goes back to
>>>> meaning what it has always meant, as Markus Mueller said (roughly): "A
>>>> exists for B, when changing the state of A can change the state of B, and
>>>> vice versa, under certain auxiliary conditions."
>>>>
>>>>
>>>> See my papers, like "How Self-Reference Builds the World":
>>>>> https://philpeople.org/profiles/cosmin-visan
>>>>
>>>>
>>>>>
>>>> I have, and replied with comments and questions. You, however,
>>>> dismissed them as me not having read your paper.
>>>>
>>>> Have you seen my paper on how computational observers build the world?
>>>> It reaches a similar conclusion to yours:
>>>>
>>>> https://philpeople.org/profiles/jason-k-resch
>>>>
>>>> Jason
>>>>
>>>>
>>>>
>>>>> On Monday 8 July 2024 at 23:35:12 UTC+3 Jason Resch wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:
>>>>>>
>>>>>>>
>>>>>>> On Mon, Jul 8, 2024 at 2:12 PM Jason Resch 
>>>>>>> wrote:
>>>>>>>
>>>>>>> *>Consciousness is a prerequisite of intelligence.*
>>>>>>>>
>>>>>>>
>>>>>>> I think you've got that backwards, intelligence is a prerequisite
>>>>>>> of consciousness. And the possibility of intelligent ACTIONS is a
>>>>>>> prerequisite for Darwinian natural selection to have evolved it.
>>>>>>>
>>>>>>
>>>>>> I disagree, but will explain below.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> *> One can be conscious without being intelligent,*
>>>>>>>>
>>>>>>>
>>>>>>> Sure.
>>>>>>>
>>>>>>
>>>>>> I define intelligence by something capable of intelligent action.
>>>>>>
>>>>>> Intelligent action requires non random choice: choice informed by
>>>>>> information from the environment.
>>>>>>
>>>>>> Having information about the environment (i.e. perceptions) is
>>>>>> consciousness. You cannot have perceptions without there being some 
>>>>>> process
>>>>>> or thing to perceive them.

Re: AI hype

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 4:04 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> lol ? By knowing that all AI does is to follow deterministic instructions
> such as
>
> if (color == white) {
>print ("Is day");
> } else {
>print ("Is night");
> }
>

This was an objection first made by Ada Lovelace. But Turing showed that
even deterministic processes can often surprise us, and behave in ways that
aren't predictable (without running the computation until it finishes).
E.g., does a machine halt or not?

If I give you the program, can you tell me from looking at it what it will
do? You might think you can, but consider if I gave you a program that
looked for a counterexample to Goldbach's conjecture. If and when it finds
it, the program prints it and then halts. Does the machine given this
program halt or not?

You might have an opinion, but if you can't prove it, then you really don't
know. So far no one has been able to prove it one way or the other.
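
For concreteness, here is a minimal Python sketch of the kind of program I
mean (a toy of my own, not optimized): it halts only if it ever finds an even
number that is not the sum of two primes.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def has_goldbach_partition(n):
        # True if some prime p exists with n - p also prime.
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not has_goldbach_partition(n):
            print("Counterexample found:", n)
            break  # the program halts only if Goldbach's conjecture is false
        n += 2

Whether this short, fully deterministic program ever halts is exactly the open
question: no one can currently prove it either way.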




> There is no reason involved. Just blindly following instructions. Do
> people that believe in the AI believe that computers are magical entities
> where fairies live and they sprout rainbows ?
>

The Turing machine (and computability generally) is built on the notion of
recursion, i.e. self-reference. If we are conscious due to self-reference,
then why shouldn't recursive computer programs be conscious too?
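
As a small, concrete taste of self-reference in a program, here is a classic
two-line Python quine, a program whose entire output is its own source code:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and it prints exactly those two lines: the program contains and
reproduces a complete description of itself, which is the flavor of
self-reference that computability is built on.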


Jason


> On Tuesday 9 July 2024 at 06:19:30 UTC+3 Terren Suydam wrote:
>
>> How has your understanding of computer programming helped you avoid being
>> victimized by AI hype?
>>
>> On Mon, Jul 8, 2024 at 5:19 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> People that are victims of the AI hype neither understand computer
>>> programming nor consciousness.
>>>


Re: AI hype

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 7:01 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Quentin. Of course I am a magical entity. I am God. And so are you. We
> are are all one and the same God dreaming infinite dreams.
>
> @Quentin @Stathis. That's where the whole magical belief in AI comes from,
> from believing that you are robots. Well.. breaking news: you are not! You
> are God. "Brain" is just a picture that you as God dreams in this dream. It
> doesn't actually exist.
>

Is not self-reference a rule? One on which, according to your own views,
all of reality is based (and which, therefore, all of reality follows)?

Jason




> On Tuesday 9 July 2024 at 11:24:45 UTC+3 Stathis Papaioannou wrote:
>
>> On Tue, 9 Jul 2024 at 18:04, 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> lol ? By knowing that all AI does is to follow deterministic
>>> instructions such as
>>>
>>> if (color == white) {
>>>print ("Is day");
>>> } else {
>>>print ("Is night");
>>> }
>>>
>>> There is no reason involved. Just blindly following instructions. Do
>>> people that believe in the AI believe that computers are magical entities
>>> where fairies live and they sprout rainbows ?
>>>
>>
>> This is what humans do also: their brains follow deterministic rules, and
>> it results in the complex behaviour that we see.
>>
>>
>> --
>> Stathis Papaioannou
>>


Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Mon, Jul 8, 2024, 6:38 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> So based on your definition, Santa Claus exists.
>

I believe everything possible exists.

That is the idea this mail list was created to discuss, after all. (That is
why it is called the "everything list")

Jason



> On Tuesday 9 July 2024 at 00:47:28 UTC+3 Jason Resch wrote:
>
>>
>>
>> On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> Brain doesn't exist.
>>
>>
>> Then it exists as an object in consciousness, which is as much as exist
>> would mean under idealism. Rather than say things don't exist, I think it
>> would be better to redefine what is meant by existence.
>>
>>
>> "Brain" is just an idea in consciousness.
>>
>>
>> Sure, and all objects exist in the mind of God. So "exist" goes back to
>> meaning what it has always meant, as Markus Mueller said (roughly): "A
>> exists for B, when changing the state of A can change the state of B, and
>> vice versa, under certain auxiliary conditions."
>>
>>
>> See my papers, like "How Self-Reference Builds the World":
>>> https://philpeople.org/profiles/cosmin-visan
>>
>>
>>>
>> I have, and replied with comments and questions. You, however, dismissed
>> them as me not having read your paper.
>>
>> Have you seen my paper on how computational observers build the world? It
>> reaches a similar conclusion to yours:
>>
>> https://philpeople.org/profiles/jason-k-resch
>>
>> Jason
>>
>>
>>
>>> On Monday 8 July 2024 at 23:35:12 UTC+3 Jason Resch wrote:
>>>
>>>>
>>>>
>>>> On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:
>>>>
>>>>>
>>>>> On Mon, Jul 8, 2024 at 2:12 PM Jason Resch  wrote:
>>>>>
>>>>> *>Consciousness is a prerequisite of intelligence.*
>>>>>>
>>>>>
>>>>> I think you've got that backwards, intelligence is a prerequisite of
>>>>> consciousness. And the possibility of intelligent ACTIONS is a
>>>>> prerequisite for Darwinian natural selection to have evolved it.
>>>>>
>>>>
>>>> I disagree, but will explain below.
>>>>
>>>>
>>>>>
>>>>>> *> One can be conscious without being intelligent,*
>>>>>>
>>>>>
>>>>> Sure.
>>>>>
>>>>
>>>> I define intelligence by something capable of intelligent action.
>>>>
>>>> Intelligent action requires non random choice: choice informed by
>>>> information from the environment.
>>>>
>>>> Having information about the environment (i.e. perceptions) is
>>>> consciousness. You cannot have perceptions without there being some process
>>>> or thing to perceive them.
>>>>
>>>> Therefore perceptions (i.e. consciousness) is a requirement and
>>>> precondition of being able to perform intelligent actions.
>>>>
>>>> Jason
>>>>
>>>> The Turing Test is not perfect, it has a lot of flaws, but it's all
>>>>> we've got. If something passes the Turing Test then it's intelligent and
>>>>> conscious, but if it fails the test then it may or may not be intelligent
>>>>> and or conscious.
>>>>>
>>>>>  *You need to have perceptions (of the environment, or the current
>>>>>> situation) in order to act intelligently. *
>>>>>
>>>>>
>>>>> For intelligence to have evolved, and we know for a fact that it has,
>>>>>  you not only need to be able to perceive the environment you also
>>>>> need to be able to manipulate it. That's why zebras didn't evolve great
>>>>> intelligence, they have no hands, so a brilliant zebra wouldn't have a
>>>>> great advantage over a dumb zebra, in fact he'd probably be at a
>>>>> disadvantage because a big brain is a great energy hog.
>>>>>   John K ClarkSee what's on my new list at  Extropolis
>>>>> <https://groups.google.com/g/extropolis>
>>>>> 339
>>>>>
>>>>> 3b4
>>>>>
>>>>>

Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> Brain doesn't exist.


Then it exists as an object in consciousness, which is as much as exist
would mean under idealism. Rather than say things don't exist, I think it
would be better to redefine what is meant by existence.


"Brain" is just an idea in consciousness.


Sure, and all objects exist in the mind of God. So "exist" goes back to
meaning what it has always meant, as Markus Mueller said (roughly): "A
exists for B, when changing the state of A can change the state of B, and
vice versa, under certain auxiliary conditions."


See my papers, like "How Self-Reference Builds the World":
> https://philpeople.org/profiles/cosmin-visan


>
I have, and replied with comments and questions. You, however, dismissed
them as me not having read your paper.

Have you seen my paper on how computational observers build the world? It
reaches a similar conclusion to yours:

https://philpeople.org/profiles/jason-k-resch

Jason



> On Monday 8 July 2024 at 23:35:12 UTC+3 Jason Resch wrote:
>
>>
>>
>> On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:
>>
>>>
>>> On Mon, Jul 8, 2024 at 2:12 PM Jason Resch  wrote:
>>>
>>> *>Consciousness is a prerequisite of intelligence.*
>>>>
>>>
>>> I think you've got that backwards, intelligence is a prerequisite of
>>> consciousness. And the possibility of intelligent ACTIONS is a
>>> prerequisite for Darwinian natural selection to have evolved it.
>>>
>>
>> I disagree, but will explain below.
>>
>>
>>>
>>>> *> One can be conscious without being intelligent,*
>>>>
>>>
>>> Sure.
>>>
>>
>> I define intelligence by something capable of intelligent action.
>>
>> Intelligent action requires non random choice: choice informed by
>> information from the environment.
>>
>> Having information about the environment (i.e. perceptions) is
>> consciousness. You cannot have perceptions without there being some process
>> or thing to perceive them.
>>
>> Therefore perceptions (i.e. consciousness) is a requirement and
>> precondition of being able to perform intelligent actions.
>>
>> Jason
>>
>> The Turing Test is not perfect, it has a lot of flaws, but it's all we've
>>> got. If something passes the Turing Test then it's intelligent and
>>> conscious, but if it fails the test then it may or may not be intelligent
>>> and or conscious.
>>>
>>>  *You need to have perceptions (of the environment, or the current
>>>> situation) in order to act intelligently. *
>>>
>>>
>>> For intelligence to have evolved, and we know for a fact that it has,
>>>  you not only need to be able to perceive the environment you also need
>>> to be able to manipulate it. That's why zebras didn't evolve great
>>> intelligence, they have no hands, so a brilliant zebra wouldn't have a
>>> great advantage over a dumb zebra, in fact he'd probably be at a
>>> disadvantage because a big brain is a great energy hog.
>>>   John K ClarkSee what's on my new list at  Extropolis
>>> <https://groups.google.com/g/extropolis>
>>> 339
>>>
>>> 3b4
>>>
>>>


Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:

>
> On Mon, Jul 8, 2024 at 2:12 PM Jason Resch  wrote:
>
> *>Consciousness is a prerequisite of intelligence.*
>>
>
> I think you've got that backwards, intelligence is a prerequisite of
> consciousness. And the possibility of intelligent ACTIONS is a
> prerequisite for Darwinian natural selection to have evolved it.
>

I disagree, but will explain below.


>
>> *> One can be conscious without being intelligent,*
>>
>
> Sure.
>

I define intelligence by something capable of intelligent action.

Intelligent action requires non random choice: choice informed by
information from the environment.

Having information about the environment (i.e. perceptions) is
consciousness. You cannot have perceptions without there being some process
or thing to perceive them.

Therefore perceptions (i.e. consciousness) is a requirement and
precondition of being able to perform intelligent actions.

Jason

The Turing Test is not perfect, it has a lot of flaws, but it's all we've
> got. If something passes the Turing Test then it's intelligent and
> conscious, but if it fails the test then it may or may not be intelligent
> and or conscious.
>
>  *You need to have perceptions (of the environment, or the current
>> situation) in order to act intelligently. *
>
>
> For intelligence to have evolved, and we know for a fact that it has, you
> not only need to be able to perceive the environment you also need to be
> able to manipulate it. That's why zebras didn't evolve great intelligence,
> they have no hands, so a brilliant zebra wouldn't have a great advantage
> over a dumb zebra, in fact he'd probably be at a disadvantage because a big
> brain is a great energy hog.
>   John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 339
>
> 3b4
>
>


Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Mon, Jul 8, 2024, 4:01 PM John Clark  wrote:

> On Mon, Jul 8, 2024 at 2:23 PM Jason Resch  wrote:
>
> *> If you believe mental states do not cause anything, then you believe
>> philosophical zombies are logically possible (since we could remove
>> consciousness without altering behavior).*
>>
>
> Not if consciousness is the inevitable byproduct of intelligece, and I'm
> almost certain that it is.
>

If consciousness is necessary for intelligence, then it's not a byproduct.
If on the other hand, consciousness is just a useless byproduct, then it
could (logically if not nomologically) be eliminated without affecting
intelligence.

You seem to want it to be both necessary and also something that makes no
difference to anything (which would make it unnecessary).

I would be most curious to hear your thoughts  regarding the section of my
article on "Conscious behaviors" -- that is, behaviors which (seem to)
require consciousness in order to do them.


> *> I view mental states as high-level states operating in their own regime
>> of causality (much like a Java computer program).*
>>
>
> I have no problem with that, actually it's very similar to my view.
>

That's good to hear.


>
>> *> The java computer program can run on any platform, regardless of the
>> particular physical nature of it.*
>>
>
> Right. You could even say that "computer program" is not a noun, it is an
> adjective, it is the way a computer will behave when the machine's  logical
> states are organized in a certain way.  And "I" is the way atoms behave
> when they are organized in a Johnkclarkian way, and "you" is the way atoms
> behave when they are organized in a Jasonreschian way.
>

I'm not opposed to that framing.

>
> *> I view consciousness as like that high-level control structure. It
>> operates within a causal realm where ideas and thoughts have causal
>> influence and power, and can reach down to the lower level to do things
>> like trigger nerve impulses.*
>>
>
> Consciousness is a high-level description of brain states that can be
> extremely useful, but that doesn't mean that lower level and much more
> finely grained description of brain states involving nerve impulses, or
> even more finely grained descriptions involving electrons and quarks are
> wrong, it's just that such level of detail is unnecessary and impractical
> for some purposes.
>

I would even say that, at a certain level of abstraction, they become
irrelevant. This is the result of what I call "a Turing firewall": software
has no way to know its underlying hardware implementation. It is an
inviolable separation of layers of abstraction, which makes the lower
levels invisible to the layers above. So the neurons and molecular forces
aren't in the driver's seat for what goes on in the brain. That is the
domain of higher-level structures and forces. We cannot completely ignore
the lower levels, since they provide the substrate upon which the higher
levels are built, but I think it is an abuse of reductionism that leads
people to say consciousness is an epiphenomenon that doesn't do anything.
No one would try to apply reductionism to explain why a glider in the Game
of Life that hits a block and causes it to self-destruct does so because of
the quantum mechanics of our universe, rather than as a consequence of the
very different rules of the Game of Life as they operate in the Game of
Life universe.
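
To make the Game of Life point concrete, here is a minimal Python sketch (my
own toy code): the fate of a glider headed toward a block is settled entirely
by the Life rules below, whatever hardware, and whatever physics, happens to
be running them.

    from collections import Counter

    def step(live):
        # One generation of Conway's Game of Life; 'live' is a set of (x, y) cells.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        # A cell is alive next generation if it has exactly 3 live neighbors,
        # or if it is alive now and has exactly 2 live neighbors.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # drifts toward +x, +y
    block = {(8, 8), (9, 8), (8, 9), (9, 9)}           # a stable 2x2 block
    world = glider | block
    for generation in range(40):
        world = step(world)
    print(sorted(world))  # whatever remains is dictated by the rules above

Nothing about electrons or quarks enters into that explanation; the same
pattern unfolds identically on any substrate that implements the rules.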

Jason



> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> qb2
>


Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Sun, Jul 7, 2024 at 3:14 PM John Clark  wrote:

> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch  wrote:
>
> *>>> ** I think such foresight is a necessary component of intelligence,
>>>> not a "byproduct".*
>>>
>>>
>>> >>I agree, I can detect the existence of foresight in others and so can
>>> natural selection, and that's why we have it.  It aids in getting our genes
>>> transferred into the next generation. But I was talking about consciousness
>>> not foresight, and regardless of how important we personally think
>>> consciousness is, from evolution's point of view it's utterly useless,
>>> and yet we have it, or at least I have it.
>>>
>>
>> *> you don't seem to think zombies are logically possible,*
>>
>
> Zombies are possible, it's philosophical zombies, a.k.a. smart zombies,
> that are impossible because it's a brute fact that consciousness is the way
> data behaves when it is being processed intelligently, or at least that's
> what I think. Unless you believe that all iterated sequences of "why" or
> "how" questions go on forever then you must believe that brute facts
> exist; and I can't think of a better candidate for one than consciousness.
>
> *> so then epiphenomenalism is false*
>>
>
> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
> is a position in the philosophy of mind according to which mental states or
> events are caused by physical states or events in the brain but do not
> themselves cause anything*". If that is the definition then I believe in
> Epiphenomenalism.
>

If you believe mental states do not cause anything, then you believe
philosophical zombies are logically possible (since we could remove
consciousness without altering behavior).

I view mental states as high-level states operating in their own regime of
causality (much like a Java computer program). The java computer program
can run on any platform, regardless of the particular physical nature of
it. It has in a sense isolated itself from the causality of the electrons
and semiconductors, and operates in its own realm of the causality of if
statements, and for loops. Consider this program, for example:

[image: twin-prime-program2.png]
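
(In case the image doesn't come through: here is a minimal Python sketch of
the kind of program I have in mind, assuming a simple search that halts once
it finds a twin-prime pair above its input; it is an illustration, not the
exact code in the image.)

    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def first_twin_prime_above(n):
        # Search upward until two primes exactly 2 apart are found.
        p = n + 1
        while True:
            if is_prime(p) and is_prime(p + 2):
                return (p, p + 2)
            p += 1

    print(first_twin_prime_above(1000))  # halts at (1019, 1021)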

What causes the program to terminate? Is it the inputs, and the logical
relation of primality, or is it the electrons flowing through the CPU? I
would argue that the higher-level causality, regarding the logical
relations of the inputs to the program logic is just as important. It
determines the physics of things like when the program terminates. At this
level, the microcircuitry is relevant only to its support of the higher
level causal structures, but the program doesn't need to be aware of nor
consider those low-level things. It operates the same regardless.

I view consciousness as like that high-level control structure. It operates
within a causal realm where ideas and thoughts have causal influence and
power, and can reach down to the lower level to do things like trigger
nerve impulses.


Here is a quote from Roger Sperry, who eloquently describes what I am
speaking of:


"I am going to align myself in a counterstand, along with that
approximately 0.1 per cent mentalist minority, in support of a hypothetical
brain model in which consciousness and mental forces generally are given
their due representation as important features in the chain of control.
These appear as active operational forces and dynamic properties that
interact with and upon the physiological machinery. Any model or
description that leaves out conscious forces, according to this view, is
bound to be pretty sadly incomplete and unsatisfactory. The conscious mind
in this scheme, far from being put aside and dispensed with as an
"inconsequential byproduct," "epiphenomenon," or "inner aspect," as is the
customary treatment these days, gets located, instead, front and center,
directly in the midst of the causal interplay of cerebral mechanisms.

Mental forces in this particular scheme are put in the driver's seat, as it
were. They give the orders and they push and haul around the physiology and
physicochemical processes as much as or more than the latter control them.
This is a scheme that puts mind back in its old post, over matter, in a
sense-not under, outside, or beside it. It's a scheme that idealizes ideas
and ideals over physico-chemical interactions, nerve impulse traffic-or
DNA. It's a brain model in which conscious, mental, psychic forces are
recognized to be the crowning achievement of some five hundred million
years or more of evolution.

[...] The basic reasoning is simple: First, we contend that conscious or
mental phenomena are dynamic, emergent, pattern (or configurational)
properties of the living brain

Re: Are Philosophical Zombies possible?

2024-07-08 Thread Jason Resch
On Mon, Jul 8, 2024 at 10:29 AM John Clark  wrote:

>
> On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker  wrote:
>
> *>I thought it was obvious that foresight requires consciousness. It
>> requires the ability of think in terms of future scenarios*
>>
>
> The keyword in the above is "think". Foresight means using logic to
> predict, given current starting conditions, what the future will likely be,
> and determining how a change in the initial conditions will likely affect
> the future.  And to do any of that requires intelligence. Both Large
> Language Models and picture to video AI programs have demonstrated that
> they have foresight; if you ask them what will happen if you cut the
> string holding down a helium balloon they will tell you it will float away,
> but if you add that the instant string is cut an Olympic high jumper will
> make a grab for the dangling string they will tell you what will likely
> happen then too. So yes, foresight does imply consciousness because
> foresight demands intelligence and consciousness is the inevitable
> byproduct of intelligence.
>

Consciousness is a prerequisite of intelligence. One can be conscious
without being intelligent, but one cannot be intelligent without being
conscious.
Someone with locked-in syndrome can do nothing, and can exhibit no
intelligent behavior. They have no measurable intelligence. Yet they are
conscious. You need to have perceptions (of the environment, or the current
situation) in order to act intelligently. It is in having perceptions that
consciousness appears. So consciousness is not a byproduct of, but an
integral and necessary requirement for intelligent action.

Jason


>
>
>> *> in which you are an actor*
>>
>
> Obviously any intelligence will have to take its own actions in account to
> determine what the likely future will be. After a LLM gives you an answer
> to a question, based on that answer I'll bet an AI  will be able to make a
> pretty good guess what your next question to it will be.
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> ods
>
>
>
>
>
>
>>


Re: Are Philosophical Zombies possible?

2024-07-07 Thread Jason Resch
On Sun, Jul 7, 2024, 11:58 AM John Clark  wrote:

> On Sat, Jul 6, 2024 at 3:03 PM Brent Meeker  wrote:
>
> *> ** I think such foresight is a necessary component of intelligence,
>> not a "byproduct".*
>
>
> I agree, I can detect the existence of foresight in others and so can
> natural selection, and that's why we have it.  It aids in getting our genes
> transferred into the next generation. But I was talking about consciousness
> not foresight, and regardless of how important we personally think
> consciousness is, from evolution's point of view it's utterly useless,
> and yet we have it, or at least I have it.
>

This is the position of epiphenomenalism: that consciousness has no effects. It
is what makes zombies logically possible. But you don't seem to think
zombies are logically possible, so then epiphenomenalism is false, and
consciousness does have effects. As you said previously, if consciousness
had no effects, there would be no reason for it to evolve in the first
place.


Why? It must be because consciousness is the byproduct of something else
> that is not useless, there are no other possibilities.
>

There is another possibility: consciousness is not useless.

Jason



Incidentally, GPT has demonstrated foresight, when shown a picture of
> somebody holding a pair of scissors next to a string holding down a helium
> balloon and  asked "what comes next?" it replies that the string is about
> to be cut by the scissors and then the balloon will float away.
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> hbf
>
>
>
>
>
>> Anybody who claims that philosophical zombies are possible needs to ask
>> themselves one question. Natural selection cannot select for something
>> it cannot see, and it can't directly see consciousness any better than we
>> can, except in ourselves; so how did Evolution manage to produce at least
>> one conscious being, and probably many billions of them? I think the answer
>> is that although Evolution can't see consciousness it can certainly see
>> intelligent activity, so consciousness must be an inevitable byproduct of
>> intelligence.
>>
>>
>> Or to put it another way, it's a brute fact that consciousness is the way
>> data feels when it is being processed. After all, without exception, every
>> iterated sequence of "why" or "how" questions either goes on forever or
>> terminates in a brute fact.
>>
>> John K ClarkSee what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>> wfn


Re: Are Philosophical Zombies possible?

2024-07-06 Thread Jason Resch
On Sat, Jul 6, 2024 at 2:52 PM Brent Meeker  wrote:

> You emphasize that a Zombie would assert that he had a consciousness, but
> what about the converse?  Suppose you met someone who simply denied that
> he had a consciousness.  When he stubs his toe and says "OUCH!" and hops
> around on one foot he says yes that was my reaction but I wasn't conscious
> of pain.  Can you prove him wrong or do you just DEFINE him as wrong?
>

As Chalmers writes, even the statement "Consciousness does not exist" is a
third-order phenomenal judgement, of the kind that seems to imply the
presence of consciousness in those that come to such conclusions. It seems
to me that it is neither the assertion of having consciousness, nor the
denial of having it, which proves its presence; rather, it is having a
source of knowledge from which to draw such conclusions in the first place
that should be taken as the evidence for the presence of a mind.

As to the example of denying a particular perception like pain, there are
people who have no sense of pain, and there is also pain dissociation,
where the pain's intensity and locus are known, but the experience has no
noxiousness. I don't think such denials of pain would constitute evidence of
having pain, in the same way denying that one is conscious could be taken
as evidence of being conscious (as you have to have some self-awareness to
be in a position to deny what aspects of yourself you possess or don't
possess).

Jason



>
> Brent
>
> On 7/5/2024 10:41 AM, Jason Resch wrote:
>
> I finished this section for my article on consciousness:
>
>
> https://drive.google.com/file/d/1jq3uOucSStCPe5TQnUv-8YWvGUW05Enr/view?usp=sharing
>
> It is an important question, because if zombies are not possible, then
> consciousness is not optional. Rather, consciousness would be logically
> necessary, in any system having the right configuration.
>
> (Whether that configuration is functional/organizational/causal/or
> physical is a separate question).
>
> Jason


Are Philosophical Zombies possible?

2024-07-05 Thread Jason Resch
I finished this section for my article on consciousness:

https://drive.google.com/file/d/1jq3uOucSStCPe5TQnUv-8YWvGUW05Enr/view?usp=sharing

It is an important question, because if zombies are not possible, then
consciousness is not optional. Rather, consciousness would be logically
necessary, in any system having the right configuration.

(Whether that configuration is functional/organizational/causal/or physical
is a separate question).

Jason



Re: Claude 3.5 sonnet

2024-07-03 Thread Jason Resch
On Wed, Jul 3, 2024, 6:20 PM PGC  wrote:

>
>
> On Tuesday, July 2, 2024 at 6:52:28 PM UTC+2 Jason Resch wrote:
>
> On Fri, Jun 28, 2024 at 11:57 AM PGC  wrote:
>
>
> I'm not trying to play jargon police or anything—everyone has a right to
> take part in the intelligence discussion. But imho it's misleading to
> associate developments in machine learning through hardware advances with
> true intelligence.
>
> I also see it as surprising that through hardware improvements alone, and
> without specific breakthroughs in algorithms, we should see such great
> strides in AI.
>
>
> Without knowing what goes on under the hood... Perhaps it's just my
> impression, but a few years ago, I felt that ML was more of an open field,
> where everybody had some idea what they were working on. There would be
> Silicon Valley guys from big tech firms having conferences with the more
> independent side of the research community. Take this with a grain of salt,
> as I am not an insider but an observer. It seems that, since the hype +
> influx of money, this trend has reversed and the more economic idea of
> trade secrets has become more prominent. I feel it's harder to know what
> "state-of-the-art" is, these days, with exception to marketing stats and
> bragging.
>


As I see it, Google's 2017 publication of "Attention Is All You Need" is
what ultimately led to OpenAI's rise (after GPT-2 made waves). GPT-2 was
also the first time OpenAI said they would keep a model private (on the
grounds that public disclosure could lead to harm). Note that this is also
around the time they received major private investment (I think Microsoft
gave them a billion USD), and the investors essentially took OpenAI
private. OpenAI was previously an organization founded on the principle of
keeping advances in AI open and public.



> Some time ago, there was the sentiment with RL "Your algorithm doesn't
> matter the way it did during your PhD anymore, what matters is how much
> data you can throw at it, hardware constraints, whether you have legal
> access to that data and hardware." Then OpenAI got the hype and investment
> interest to skyrocket with its GPT iterations and - I'm speculating - I'm
> not sure that it was hardware alone. Other Silicon Valley players had the
> toys/hardware, so I'm guessing some data curation in combination with
> software development might have been responsible for the initial advantage.
>

I don't think there is anything algorithmically special about OpenAI. There
are open-source language models, as well as many privately developed ones,
of equivalent (if not superior) quality to ChatGPT.

OpenAI's GPT-4o and Anthropic's Claude 3.5 are considered among the best
available today, but the others (such as these:
https://mindsdb.com/blog/navigating-the-llm-landscape-a-comparative-analysis-of-leading-large-language-models
) are probably not more than 6-12 months behind.

There will be advances in figuring out how to train AIs more efficiently,
in using AI to train AI and generate training data, in making models
smaller and more efficient to run, and so on, but I don't think there's any
monopoly on (or shortage of) ideas for how to do this.



>
> But I also see a possible explanation. Nature has likewise discovered
> something, which is relatively simple in its behavior and capabilities,
> yet, when aggregated into ever larger collections yields greater and
> greater intelligence and capability: the neuron.
>
> There is relatively little difference in neurons across mammals. A rat
> neuron is little different from a mouse neuron, for example. Yet a human
> brain has several thousand times more of them than a mouse brain does, and
> this difference in scale, seems to be the only meaningful difference
> between what mice and humans have been able to accomplish.
>
> Deep learning, and the progress in that field, is a microcosm of this
> example from nature. The artificial neuron is proven to be "a universal
> function learner." So the more of them there are aggregated together in one
> network, the more rich and complex functions they can learn to approximate.
> Humans no longer write the algorithms these neural networks derive, the
> training process comes up with them. And much like the algorithms
> implemented in the human brain, they are in a representation so opaque
> that they escape our capacity to understand.
>
> So I would argue, there have been massive breakthroughs in the algorithms
> that underlie the advances in AI, we just don't know what those
> breakthroughs are.
>
> These algorithms are products of systems which have (now) trillions of
> parts. Even the best human programmers can't know the complete details of
projects with around a million lines of code (nevermind a trillion).

Re: Claude 3.5 sonnet

2024-07-02 Thread Jason Resch
On Tue, Jul 2, 2024, 4:00 PM John Clark  wrote:

> On Tue, Jul 2, 2024 at 12:52 PM Jason Resch  wrote:
>
> *> I also see it as surprising that through hardware improvements alone,
>> and without specific breakthroughs in algorithms, we should see such great
>> strides in AI.*
>
>
> I was not surprised because the entire human genome only has the capacity
> to hold 750 MB of information; that's about the amount of information you
> could fit on an old-fashioned CD, not a DVD, just a CD. The true number
> must be considerably less than that because that's the recipe for building
> an entire human being, not just the brain, and the genome contains a huge
> amount of redundancy, 750 MB is just the upper bound.
>

That the initial code to write a "seed AI" algorithm could take less than
750 MB is, as you say, not surprising.

My comment was more to reflect the fact that there has been no great
breakthrough in solving how human neurons learn. We're still using the same
method of backpropagation invented in the 1970s, with the same neuron model
of the 1960s. Yet simply scaling this same approach up, with more training
data and training time, and with more neurons arranged in more layers, has
produced all the advances we've seen: image and video generators, voice
cloning, language models, and master-level players of Go, poker, chess,
Atari games, StarCraft, etc.



>
>> *> Humans no longer write the algorithms these neural networks derive,
>> the training process comes up with them. And much like the algorithms
>> implemented in the human brain, they are in a representation so opaque
>> that they escape our capacity to understand. So I would argue, there have
>> been massive breakthroughs in the algorithms that underlie the advances in
>> AI, we just don't know what those breakthroughs are.*
>
>
> That is a very interesting way to look at it, and I think you are
> basically correct.
>

Thank you. I thought you might appreciate it. ☺️



>
>> *> I think the human brain, with its 600T connections might signal an
>> upper bound for how many are required, but the brain does a lot of other
>> things too, so the bound could be lower.*
>>
>
> The human brain has about 86 billion neurons with 7*10^14 synaptic
> connections (a more generous estimate than yours), but the largest
> supercomputer in the world,
>

I think that figure comes from multiplying the ~100 billion neurons by the
average of 7,000 synaptic connections per neuron. If you multiply your 86
billion figure by 7,000 synapses per neuron, you get my figure.


> the Frontier Computer at Oak Ridge, has 2.5*10^15 transistors, over three
> times as many. And we know from experiments that a typical synapse in the
> human brain "fires" between 1 and 50 times per second, but a typical
> transistor in a computer "fires" about 4 billion times a second (4*10^9).
> It also has 9.2*10^15 bytes of fast memory. That's why the Frontier
> Computer can perform 1.1*10^18 double precision floating point
> calculations per second and why the human brain can not.
>

The human brain's computational capacity is estimated to be around the
exaop range (assuming ~10^15 synapses firing at an upper bound of 1,000
times per second). So I agree with your point that we have the computation
necessary; the question now is whether we have the software. Some assumed
we would have to upload a brain to reverse engineer its mechanisms, but it
now seems the techniques of machine learning will reproduce these
algorithms well before we apply the resources necessary to scan a human
brain at synaptic resolution.


> By way of comparison, Ray Kurzweil estimates that the hardware needed to
> emulate a human mind would need to be able to perform 10^16 calculations
> per second and have 10^12 bytes of memory.
>

Those numbers assume the brain is about 100-1000 times less efficient than
it could be. It very well might be that much less efficient, but we should
treat those estimates as optimistic lower bounds.
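
To make the arithmetic explicit, here is a quick back-of-the-envelope
sketch in Python (all of the figures are the rough estimates quoted in this
thread, not measurements):

# Rough estimates quoted above (all approximate).
neurons = 86e9            # ~86 billion neurons
syn_per_neuron = 7e3      # ~7,000 synaptic connections per neuron
synapses = neurons * syn_per_neuron       # ~6.0e14, i.e. ~600T connections

firing_rate_hz = 1e3      # generous upper-bound firing rate assumed above
brain_ops = 1e15 * firing_rate_hz         # ~1e18 ops/s, the "exaop" range

frontier_flops = 1.1e18   # Frontier's double-precision FLOPS
kurzweil_ops = 1e16       # Kurzweil's estimate for emulating a mind

print(f"synapses         ~ {synapses:.1e}")                   # 6.0e+14
print(f"brain ops/s      ~ {brain_ops:.0e}")                  # 1e+18
print(f"brain / Kurzweil ~ {brain_ops / kurzweil_ops:.0f}x")  # 100x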

> And the calculations would not need to be 64 bit double precision floating
> point, 8 bit or perhaps even 4 bit precision would be sufficient. So in
> the quest to develop a superintelligence, insufficient hardware is no
> longer a barrier.
>

There are various kinds of superintelligence, as defined by Bostrom: depth
of thinking, speed of thinking, and breadth of knowledge. I think current
language models are on the precipice (if not past it) of superintelligence
in terms of speed and breadth of knowledge. But it seems to me that AI is
still behind humans in terms of depth of thinking (e.g. how deeply they can
go in following a sequence of logical inferences). This may be limited by
the existing architecture of LLMs, which have a neural network with only so
many layers.

Re: Claude 3.5 sonnet

2024-07-02 Thread Jason Resch
On Fri, Jun 28, 2024 at 11:57 AM PGC  wrote:

> Jason,
>
> There's no universal consensus on intelligence beyond the broad outlines
> of the narrow vs general distinction. This is reflected in our informal
> discussion: some emphasize that effective action should be the result and
> are satisfied with a certain set and level of capabilities. However, I'm
> less sure whether that paints a complete picture. "General" should mean
> what it means. Brent talks about an integration system that does the
> modelling. But reflection, even the redundant kind that doesn't immediately
> yield anything may lead to a Russell coffee break moment. That seems to
> play a role, with people taking years, decades, generations, and even
> entire civilizations to discover that a problem may be ill-posed,
> unsolvable, or solvable.
>
> We look at historical developments and ask whether all of it is required
> to have one Newton appear every now and then. Or whether we could've had 10
> every generation with different values or politics, for instance. Those
> would be gigantic simulations to run, but who knows? Maybe we could get to
> Euclidean geometry far more cheaply than we did. Instead, we are making
> gigantic investments into known machine learning techniques with huge
> hardware boosts, calling it AI for marketing reasons (with many marketing
> MBA types becoming "Chief AI Officer" because they have a chatGPT
> subscription), to build robots to be our servants, maids, assistants, and
> secretaries.
>
> I'm not trying to play jargon police or anything—everyone has a right to
> take part in the intelligence discussion. But imho it's misleading to
> associate developments in machine learning through hardware advances with
> true intelligence.
>
I also see it as surprising that through hardware improvements alone, and
without specific breakthroughs in algorithms, we should see such great
strides in AI. But I also see a possible explanation. Nature has likewise
discovered something, which is relatively simple in its behavior and
capabilities, yet, when aggregated into ever larger collections yields
greater and greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A rat
neuron is little different from a mouse neuron, for example. Yet a human
brain has several thousand times more of them than a mouse brain does, and
this difference in scale, seems to be the only meaningful difference
between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this
example from nature. The artificial neuron is proven to be "a universal
function learner." So the more of them there are aggregated together in one
network, the more rich and complex functions they can learn to approximate.
Humans no longer write the algorithms these neural networks derive; the
training process comes up with them. And much like the algorithms
implemented in the human brain, they are in a representation so opaque
that they escape our capacity to understand.

So I would argue, there have been massive breakthroughs in the algorithms
that underlie the advances in AI, we just don't know what those
breakthroughs are.

These algorithms are products of systems which have (now) trillions of
parts. Even the best human programmers can't know the complete details of
projects with around a million lines of code (nevermind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true
intelligence? How would we know once they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human
brain, with its 600T connections might signal an upper bound for how many
are required, but the brain does a lot of other things too, so the bound
could be lower.



> Of course, there can be synergistic effects that Minsky speculates about,
> but we can hardly manage resource allocation for all persons with actual
> AGI abilities globally alive today, which makes me pretty sure that this
> isn't what most people want. They want servants that are maximally
> intelligent to do what they are told, revealing something about our own
> desires. This is the desire for people as tools.
>
> Personally, I lean towards viewing intelligence as the potential to
> reflect plus remaining open to novel approaches to any problem. Sure,
> capability/ability is needed to solve a problem, and intelligence is
> required to see that, but at some point in acquiring abilities, folks seem
> to lose the ability to consider fundamentally novel approaches, often
> ridiculing them etc. There seems to be a point where ability limits the
> potential for new approaches to a problem.
>

Yes, this is what Bruno considers the "competence" vs. "intelligence"
distinction. 

Re: Claude 3.5 sonnet

2024-06-26 Thread Jason Resch
On Wed, Jun 26, 2024 at 3:33 PM PGC  wrote:

> Your excitement about Claude 3.5 Sonnet's performance is understandable.
> It's an impressive development, but it's crucial to remember that beating
> benchmarks or covering a wide range of conversational topics does not
> equate to general intelligence. I wish we lived in a context where I could
> encourage you to provide evidence for your claims about AI capabilities and
> future predictions but Claude, OpenAI, etc are... not exactly open.
>
> Then we could discuss empirical data and trends instead of betting: I
> don't know what the capability ceiling is, for narrow AI development behind
> closed doors now or in the next years, nor have I pretended to.
> Wide/general is not narrow/specific and brittle. But I am happy for you if
> you feel that you can converse intelligently with it; I know what you mean.
> For my taste its a tad obsequious and not very original, i.e. I am
> providing all the originality of the conversation that some large
> corporation is sucking up without getting paid for it.
>
>
> *I don't want clever conversation / I never want to work that hard, mmm*
> - Billy Joel
>

PGC,

Would you consider the aggregate capabilities of all AIs that have been
created to date, as a general intelligence? In the spirit of what Minsky
said here:

"Each practitioner thinks there’s one magic way to get a machine to be
smart, and so they’re all wasting their time in a sense. On the other hand,
each of them is improving some particular method, so maybe someday in the
near future, or maybe it’s two generations away, someone else will come
around and say, ‘Let’s put all these together,’ and then it will be smart."
-- Marvin Minsky

I wrote that human general intelligence consists of the following
abilities:

   - Communicate via natural language
   - Learn, adapt, and grow
   - Move through a dynamic environment
   - Recognize sights and sounds
   - Be creative in music, art, writing and invention
   - Reason with logic and rationality to solve problems

I think progress exists across each of these domains. While the best humans
in their area of expertise may beat the best AIs, it is arguable that the
AI systems which exist in these domains are better than the average human
in that area.

This article I wrote in 2020 is quite dated, but it shows that even back
then, we have machines that could be called creative:

https://alwaysasking.com/when-will-ai-take-over/#Creative_abilities_of_AI

If we could somehow cobble together all the AIs that we have made so far
and integrate them into a robot body, would that be something we could
regard as generally intelligent? And if not, what else would need to be
done?

Jason



> On Monday, June 24, 2024 at 11:02:05 PM UTC+2 John Clark wrote:
>
>> On Mon, Jun 24, 2024 at 10:00 AM PGC  wrote:
>>
>>
>>> *> And for everybody here assuming the Mechanist ontology, which implies
>>> the Strong AI thesis, i.e. the assertion that a machine can think,*
>>>
>>
>> I don't know about everybody but I certainly have that view because the
>> only alternative is vitalism, the idea that only life, especially human
>> life, has a special secret sauce that is not mechanistic, that is to say
>> does not follow the same laws of physics as non-living things.  And that
>> view has been thoroughly discredited since 1859 when Darwin wrote "The
>> Origin Of Species".
>>
>>
>>
>>> *> I am curious as to why any of you would assume that general
>>> intelligence and mind would arise from a narrow AI.*
>>>
>>
>> If a human could converse with you as intelligently as Claude can in such
>> a wide number of unrelated topics you would never call his range of
>> interest narrow, but because Claude's brain is hard and dry and not soft
>> and squishy you do.  I'll tell you what let's make a bet, I bet that an AI
>> will win the International Mathematical Olympiad in less than 3 years,
>> perhaps much less. I also bet that in less than 3 years the main political
>> issue in every major country will not be unlawful immigration or crime or
>> even an excess in wokeness, it will be what to do about AI which is taking
>> over jobs at an accelerating rate.  What do you bet?
>>
>>
>> John K Clark    See what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>> bwu
>>
>>

Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Jason Resch
On Tue, Jun 25, 2024 at 2:01 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Jason. You say:
>
> ""Every rule has an exception"
> This is a self referential sentence"
>
> But from my paper:
>
> "In “This sentence is false”, a 3rd person “sentence” is imagined to
> exist, and to that imagined
> “sentence”, the property of “is false” is added, and a weird combination
> of 3rd person entity “This
> sentence is false” masquerading as 1st person entity is created, and from
> this the apparent
> paradox, which ultimately is nothing but an incoherent worlds-play,
> appears. Self-reference on
> the other hand, is a 1st person entity all-throughout. It is not a 3rd
> person entity like “sentence”
> that we can point outside of ourselves and to which we can add properties.
> Self-reference is itself
> and is for itself. Its “looking-back-at-itself” happens from the inside.
> Because of this, the paradox
> doesn’t take place as it happens for “This sentence is false” and any
> other words-play that can be
> made at the 3rd person, including Russell’s paradox."
>
> So how can you claim you read it, when I say clearly in the paper that
> such "self-referential sentences" are just incoherent words-play ?
>

"The sentence is a lie" may be incoherent word play. But if there are any
self-existing absolute truths, they must consist in truths whose denial
leads to inconsistency. I think the sentence you gave as an example of
incoherent word play is just an example of inconsistency. It is different
from the example I provided, which I intended to show leads to an absolute
truth: the existence of rules that have no exceptions. If such absolute
truths exist, then the idea of an absolute nothing (devoid of even truths
and relations) cannot be.
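
For what it's worth, the argument can be put semi-formally (a rough sketch;
writing $\mathrm{Exc}(r)$ for "rule $r$ has an exception" is just my
shorthand):

Let $S$ denote $\forall r\, \mathrm{Exc}(r)$, i.e. "every rule has an
exception."

If $\neg S$, then $\exists r\, \neg\mathrm{Exc}(r)$: some rule has no
exception.

If $S$, then since $S$ is itself a rule, $\mathrm{Exc}(S)$ holds; but an
exception to "every rule has an exception" is precisely a rule without an
exception, so again $\exists r\, \neg\mathrm{Exc}(r)$.

Either way, $\exists r\, \neg\mathrm{Exc}(r)$: at least one exceptionless
rule (a law) exists, which is the absolute truth I referred to above.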

Jason


>
> On Tuesday 25 June 2024 at 20:48:56 UTC+3 Jason Resch wrote:
>
>> On Tue, Jun 25, 2024 at 12:54 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> When will that day come when people actually first read the papers and
>>> then comment ? Oh, God!
>>>
>>
>> I read your paper. I am sorry if you did not find my comments or
>> references helpful.
>>
>> Jason
>>
>>
>>>
>>> On Tuesday 25 June 2024 at 19:18:25 UTC+3 Jason Resch wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jun 25, 2024, 9:09 AM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> I invite you to discover my paper "How Self-Reference Builds the
>>>>> World" which is the theory of everything that people searched for
>>>>> millennia. It can be found on my philpeople profile:
>>>>> https://philpeople.org/profiles/cosmin-visan
>>>>>
>>>>
>>>> Hi Cosmin,
>>>>
>>>> Very nice, and very original work.
>>>>
>>>> A few comments and questions, written as they occurred to me:
>>>>
>>>>
>>>> The idea of self reference being larger and smaller than itself made me
>>>> think of how the universe can be thought of as much larger than us, but all
>>>> our thoughts and ideas about the universe are contained within our skulls.
>>>> I am not sure if this is an example of the kind of paradox of self
>>>> reference that you describe but I thought I would ask.
>>>>
>>>>
>>>> Your bootstrapping of nothing into something via self reference made me
>>>> think of the following example. Start with the sentence:
>>>>
>>>> "Every rule has an exception"
>>>> This is a self referential sentence, which can be either true or false.
>>>> If it is false, then there are rules without exceptions (i.e. laws). If it
>>>> is true, then "every rule has an exception" would also be a rule, and if it
>>>> has an exception, then again we reach the conclusion that there are some
>>>> rules without exceptions (i.e. laws), so this self refuting sentence
>>>> implies a universal truth, the existence of laws.
>>>>
>>>>
>>>> Another comment:
>>>> Fractals are objects defined through their self reference, is any
>>>> special attention owed to them? What about numbers such as e? Or steps in a
>>>> recursive computational relation (steps of the evolving game of life
>>>> universe might be conceived of as a recursive function, for example).
>>>>
>>>

Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Jason Resch
On Tue, Jun 25, 2024 at 12:54 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> When will that day come when people actually first read the papers and
> then comment ? Oh, God!
>

I read your paper. I am sorry if you did not find my comments or references
helpful.

Jason


>
> On Tuesday 25 June 2024 at 19:18:25 UTC+3 Jason Resch wrote:
>
>>
>>
>> On Tue, Jun 25, 2024, 9:09 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> I invite you to discover my paper "How Self-Reference Builds the World"
>>> which is the theory of everything that people searched for millennia. It
>>> can be found on my philpeople profile:
>>> https://philpeople.org/profiles/cosmin-visan
>>>
>>
>> Hi Cosmin,
>>
>> Very nice, and very original work.
>>
>> A few comments and questions, written as they occurred to me:
>>
>>
>> The idea of self reference being larger and smaller than itself made me
>> think of how the universe can be thought of as much larger than us, but all
>> our thoughts and ideas about the universe are contained within our skulls.
>> I am not sure if this is an example of the kind of paradox of self
>> reference that you describe but I thought I would ask.
>>
>>
>> Your bootstrapping of nothing into something via self reference made me
>> think of the following example. Start with the sentence:
>>
>> "Every rule has an exception"
>> This is a self referential sentence, which can be either true or false.
>> If it is false, then there are rules without exceptions (i.e. laws). If it
>> is true, then "every rule has an exception" would also be a rule, and if it
>> has an exception, then again we reach the conclusion that there are some
>> rules without exceptions (i.e. laws), so this self refuting sentence
>> implies a universal truth, the existence of laws.
>>
>>
>> Another comment:
>> Fractals are objects defined through their self reference, is any special
>> attention owed to them? What about numbers such as e? Or steps in a
>> recursive computational relation (steps of the evolving game of life
>> universe might be conceived of as a recursive function, for example).
>>
>>
>> What would you consider the simplest possible program that had
>> consciousness to be? That is, what is the shortest bit of code that would
>> manifest consciousness of something (even a single bit)?
>>
>>
>> I agree that the difficulty of explaining or communicating qualia stems
>> from what we might call self-reference islands. Each of us is trapped
>> within an isolated context, from which we have qualia of various kinds but
>> no common framework established between other minds that enables
>> communication beyond this island. Think of the analogous situation of
>> people in two different universes, or AIs in two different computer
>> simulations, trying to define what they mean by a meter or a kilogram.
>> These terms are meaningless and incommunicable outside the particular
>> universe, since they are terms wholly defined by relationships that exist
>> only within a particular universe or simulation. There not only can be no
>> agreement on what is meant by those terms, but they aren't even definable
>> (outside the contextual island that exists only within that universe). For
>> us conscious beings, we each have such a universe of qualia in our own
>> heads, and these are similarly undefinable beyond the context of our inner
>> view.
>>
>>
>>
>>
>> As for the ontology that results, your work reminded me of these works
>> that contain related ideas (of self-reference, observer-centric,
>> nothing-based means of bootstrapping reality):
>>
>>
>> Bruno Marchal's "The computationalist reformulation of the mind-body
>> problem"
>>
>> https://www.researchgate.net/publication/236138701_The_computationalist_reformulation_of_the_mind-body_problem
>>
>>
>> Mark F. Sharlow's "Can Machines Have First-Person Properties?"
>> https://archive.is/rDP33
>>
>>
>> Markus Muller's
>> "Law without law: from observer states to physics via algorithmic
>> information theory"
>> https://arxiv.org/abs/1712.01826
>>
>> David Pearce's "The Zero Ontology"
>> https://www.hedweb.com/witherall/zero.htm
>>
>> Stephen Wolfram's "The Concept of the Ruliad"
>> https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ru

Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Jason Resch
esentation of the system
together with its representations of all the rest of the world. Which
“I” you are is determined by the WAY you carry out that cycling,
and the way you represent the world.”

“In a sense, Gödel’s Theorem is a mathematical analogue of the fact that I
cannot understand what it is like not to like chocolate, or to be a bat,
except by an infinite sequence of ever-more-accurate simulation processes
that converge toward, but never reach, emulation. I am trapped inside
myself and therefore can’t see how other systems are. Gödel’s Theorem
follows from a consequence of the general fact: I am trapped inside myself
and therefore can’t see how other systems see me. Thus the
objectivity-subjectivity dilemmas that Nagel has sharply posed are somehow
related to epistemological problems in both mathematical logic, and as we
saw earlier, the foundations of physics.” (Hofstadter in Mind’s I)
-- Douglas Hofstadter and Daniel Dennett in "The Mind’s I" (1981)



“There was a man who said though,
it seems that I know that I know,
what I would like to see,
is the eye that knows me,
when I know that I know that I know.”
-
“This is the human problem, we know that we know.”
-- Alan Watts
https://www.youtube.com/watch?v=I_Q2xNqKvnE


“Even for the universal machine doing nothing more than self-introspection,
her consciousness (related to []p & p) is not definable, for reason related
to the fact that knowledge and truth are not definable by any machine, when
the range of that knowledge and truth is vast enough to encompass the
machine itself.”
-- Bruno Marchal


“You need self-reference ability for the notion of belief, together with a
notion of reality or truth, which the machine cannot define.
To get immediate knowledgeability you need to add consistency ([]p & <>t),
to get ([]p & <>t & p) which prevents transitivity, and gives to the
machine a feeling of immediacy.”
-- Bruno Marchal

“It is not because some “information processing” could support
consciousness that we can conclude that all information processing can
support consciousness. You need at least one reflexive loop. You need two
reflexive loop for having self-consciousness (Löbianity)."
-- Bruno Marchal


“The appearance of a universe, or even universes, must be explained by the
geometry of possible computations of possible machines, seen by these
machines".”
-- The Amoeba’s Secret - Bruno Marchal 2014
https://www.hpcoders.com.au/docs/amoebassecret.pdf page 140


“To exist, it must have cause–effect power; to exist from its own intrinsic
perspective, independent of extrinsic factors, it must have cause–effect
power upon itself: its present mechanisms and state must ‘make a
difference’ to the probability of some past and future state of the system
(its cause–effect space)”
https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.2014.0167 (Tononi
Koch, IIT paper)


“More broadly one could say that, through the human being, the universe has
created a mirror to observe itself.” - David Bohm, The Undivided Universe,
Routledge, 2002, pp. 389

“A many minds theory, like a many worlds theory, supposes that, associated
with a sentient being at any given time, there is a multiplicity of
distinct conscious points of view. But a many minds theory holds that it is
these conscious points of view or ‘minds,’ rather than ‘worlds’, that are
to be conceived as literally dividing or differentiating over time.”
– Michael Lockwood in “‘Many Minds’. Interpretations of Quantum Mechanics”
(1995)


“It is sometimes suggested within physics that information is fundamental
to the physics of the universe, and even that physical properties and laws
may be derivative from informational properties and laws. This “it from
bit” view is put forward by Wheeler (1989, 1990) and Fredkin (1990), and
is also investigated by papers in Zurek (1990) and Matzke (1992, 1994). If
this is so, we may be able to give information a more serious role in our
ontology. [...]
This approach stems from the observation that in physical theories,
fundamental physical states are effectively individuated as information
states. When we look at a feature such as mass or charge, we find simply a
brute space of differences that make a difference. Physics tells us nothing
about what mass is, or what charge is: it simply tells us the range of
different values that these features can take on, and it tells us their
effects on other features. As far as physical theories are concerned,
specific states of mass or charge might as well be pure information states:
all that matters is their location within an information space.”
-- David Chalmers in "The Conscious Mind" (1996)



"A cat.
A cat is seen.
Something seen, must be a seer.
I see a cat.
I exist.
What is I?"
-- Jason


"Perhaps consciousness arises when the brain’s simulation of the world
becomes so complete that it must include a model of itself. Obviously the
limbs and body of a surviv

Re: Situational Awareness

2024-06-21 Thread Jason Resch
On Fri, Jun 21, 2024, 8:48 AM PGC  wrote:

>
>
> On Thursday, June 20, 2024 at 4:13:25 AM UTC+2 Jason Resch wrote:
>
> On Wed, Jun 19, 2024 at 6:05 PM Brent Meeker  wrote:
>
> You can always add some randomness to a computer program.  LLM's aren't
> deterministic now.  Human intelligence may very well be memory plus
> randomness, although I'd bet on the inclusion of some inference
> algorithms.  The randomness doesn't even have to be in the brain.  People
> interact with their environment which provides a lot of effective
> randomness plus some relevant prompts.
>
>
> Yes, I think there is no great mystery to creativity. It requires only 1.
> random permutation/combination, and 2. an evaluation function: *how much
> better is this new thing compared to the previous thing?* This is the
> driver behind all the innovation in biology produced by natural selection.
> And this same mechanism is replicated in the technique of "genetic
> programming <https://en.wikipedia.org/wiki/Genetic_programming>." Koza,
> who invented genetic programming, used it to create his "invention machine
> <https://www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine/>"
> which has created patent-worthy improvements across multiple domains of
> technology.
>
> I use genetic programming to evolve bots, and in only a few generations,
> they move from stumbling around at random, to deriving unique,
> environment-specific strategies to maximize their ability to feed
> themselves while avoiding obstacles:
>
>
> https://www.youtube.com/watch?v=InBsqlWQTts=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX=2
>
> There is no intelligence imparted to the design of the bots. They evolve
> purely based on random variation of traits of the top performers (as
> evaluated based on how much they ate during their life).
>
>
> Your addition about randomness is interesting. It’s true that LLMs
> incorporate some degree of randomness, and human intelligence might also be
> influenced by randomness and inference algorithms. The interaction with our
> environment introduces effective randomness contributing to our
> decision-making processes. The notion that creativity stems from random
> permutation/combination and an evaluation function resonates with the
> principles of natural selection and genetic programming. The example of
> genetic programming evolving bots to optimize their behavior through random
> variation and evaluation showcases this mechanism effectively.
>
> However, we should differentiate between speculation and facts in your
> statements. While randomness and evaluation are essential components of
> genetic programming, the assertion that there is "no great mystery to
> creativity" oversimplifies: what you're bringing up is a kind of
> creativity, which is constrained by its iterative limitations. A change
> here, a small new feature there... it's clear that this is creativity on a
> budget, making only the smallest adaptations necessary for survival instead
> of yielding radically new designs from the ground up. The kind that is
> found and most sought after in boundary-breaking science and/or art, even
> if everybody stands on shoulders: not every PhD has a Newtonian impact on
> the world.
>
> Randomness + evaluation = creativity looks rhetorically simple and clear.
> However, there are two problems I see:
>
> 1. Who/What is Evaluating? Evaluation can be completely deterministic and
> mechanical, it can be effective on levels like natural selection, or it can
> result from a subject with intuition, experience, and a refined sense of
> taste or a more rudimentary one. It can involve a particular psychology,
> some world or even multiverse-based ontology to embed said subject, and
> more. The questions raised encompass our entire history and all qualia, if
> not more. Therefore, evaluation is not as simple or clear as that seemingly
> factual statement suggests. "Evaluation," as you sketch out rather
> unclearly, merely hides the problem of subject and reality for a rhetorical
> mirage of clarity.
>

Evaluation functions can be arbitrarily complex. It could be the aesthetic
sense of an artist, or a mathematical function devised by an engineer to
evaluate a jet engine's weight and efficiency.

People have studied creativity in humans and found that it consists of two
parts, as it does in genetic programming:

There is the open-ended ideation, where the brain comes up with as many
ideas as possible, without concern for their practicality or feasibility.
This part is called "divergent thinking"; human children are often rated at
genius levels compared to adults in this domain (
https://twentyonetoys.com/blogs/teaching-21st-century-skills/creative-

Re: Situational Awareness

2024-06-19 Thread Jason Resch
On Wed, Jun 19, 2024 at 6:05 PM Brent Meeker  wrote:

> You can always add some randomness to a computer program.  LLM's aren't
> deterministic now.  Human intelligence may very well be memory plus
> randomness, although I'd bet on the inclusion of some inference
> algorithms.  The randomness doesn't even have to be in the brain.  People
> interact with their environment which provides a lot of effective
> randomness plus some relevant prompts.
>

Yes, I think there is no great mystery to creativity. It requires only 1.
random permutation/combination, and 2. an evaluation function: *how much
better is this new thing compared to the previous thing?* This is the
driver behind all the innovation in biology produced by natural selection.
And this same mechanism is replicated in the technique of "genetic
programming <https://en.wikipedia.org/wiki/Genetic_programming>." Koza, who
invented genetic programming, used it to create his "invention machine
<https://www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine/>"
which has created patent-worthy improvements across multiple domains of
technology.

I use genetic programming to evolve bots, and in only a few generations,
they move from stumbling around at random, to deriving unique,
environment-specific strategies to maximize their ability to feed
themselves while avoiding obstacles:

https://www.youtube.com/watch?v=InBsqlWQTts=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX=2

There is no intelligence imparted to the design of the bots. They evolve
purely based on random variation of traits of the top performers (as
evaluated based on how much they ate during their life).

Jason


>
>
> On 6/19/2024 5:55 AM, PGC wrote:
>
> I'm hypothesizing here, as the nature of intelligence is still a mystery.
> Thank you, Terren, for your thoughtful contribution. You aptly highlight
> the confusion between skill and intelligence. Jason and John could be
> right; intelligence might emerge from advanced LLMs. The recent
> achievements are impressive. The differences between models like Gemini and
> ChatGPT might stem from better data curation rather than compute power.
>
> However, I see LLMs currently more as assistants that help us organize and
> structure our work more efficiently. Terence Tao isn't talking about
> replacing mathematicians but about enhancing collaboration and
> verification. If LLMs were truly intelligent, all jobs, including AI
> researchers', would soon vanish. But I don't foresee real engineers, AI
> researchers, or IT departments being replaced in the short to mid-term.
> There's too much novelty and practical knowledge involved in complex human
> work that LLMs can't replicate.
>
> Take engineers, for example. Much of their work relies on practical
> experience and intuition developed over years. LLMs aren't producing
> groundbreaking results like Ramanujan's infinite series etc; they're more
> about aiding in tasks like automated theorem proving. Intelligence might
> just be memory and vast training data, but I believe there's an element of
> freedom in human reasoning that leads to novel ideas.
>
> Consider Russell's best ideas coming while walking to the coffee machine.
> This unstructured thinking grants fresh perspectives. Creativity often
> involves discarding old approaches, a process that presupposes freedom.
> Machines would need to run long or even endlessly, reasoning in inscrutable
> code, which is neither practical nor desirable. Or somebody finds something
> that would bring inference to LLMs to effectively reduce the infinite space
> of all possible programs for effective synthesis of new programs. Fully
> deterministic and static programs are not enough to deal with the complex
> situations we face everyday. There's always some element of novelty that we
> have to deal with, combining reasoning and memory.
>
> Ultimately, while everyone appreciates a helpful assistant, few truly seek
> machines that challenge our understanding or autonomy. That's why I find
> the way we talk about LLMs and AGI a bit disingenuous. And no this is not a
> case of setting the bar higher and higher to preserve some kind of notion
> of human superiority. If all those jobs are replaced in short order, I'll
> just be wrong empirically speaking, and you can all make fun of these posts
> and yell "told you so".
>
> On Tuesday, June 18, 2024 at 9:24:07 PM UTC+2 Jason Resch wrote:
>
>>
>>
>> On Sun, Jun 16, 2024, 10:26 PM PGC  wrote:
>>
>>> A lot of the excitement around LLMs is due to confusing skill/competence
>>> (memory based) with the unsolved problem of intelligence, its most
>>> optimal/perfect test etc. There is a difference between completing strings
>>> of words/prompts re

Re: Situational Awareness

2024-06-19 Thread Jason Resch
On Wed, Jun 19, 2024, 12:48 PM John Clark  wrote:

> On Wed, Jun 19, 2024 at 12:33 PM Jason Resch  wrote:
>
> *> **Just the other day (on another list), I proposed that the problem
>> "hallucination" is not really a bug, but rather, it is what we have
>> designed LLMs to do (when we consider the training regime we subject them
>> to). We train these models to produce the most probable extrapolations of
>> text given some sample. Now consider if you were placed in a box and
>> rewarded or punished based on how accurately you guessed the next character
>> in a sequence.*
>>
>> *You are given the following sentence and asked to guess the next
>> character:*
>> *"Albert Einstein was born on March, "*
>>
>> *True, you could break the fourth wall and protest "But I don't know! Let
>> me out of here!"*
>>
>> *But that would only lead to your certain punishment. Or: you could take
>> a guess, there's a decent chance the first digit is a 1 or 2. You might
>> guess one of those and have at least a 1/3 chance of getting it right.*
>> *This is how we have trained the current crop of LLMs. We don't reward
>> them for telling us they don't know, we reward them for having the highest
>> accuracy possible in making educated guesses.*
>>
>
> Damn, I wish I'd said that! Very clever.
>


Thank you! Feel welcome to use it. :-)

Jason

> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> mze
>
>
>



Re: Situational Awareness

2024-06-19 Thread Jason Resch
On Wed, Jun 19, 2024, 10:59 AM Terren Suydam 
wrote:

>
>
> On Tue, Jun 18, 2024 at 3:24 PM Jason Resch  wrote:
>
>>
>>
>> On Sun, Jun 16, 2024, 10:26 PM PGC  wrote:
>>
>>> A lot of the excitement around LLMs is due to confusing skill/competence
>>> (memory based) with the unsolved problem of intelligence, its most
>>> optimal/perfect test etc. There is a difference between completing strings
>>> of words/prompts relying on memorization, interpolation, pattern
>>> recognition based on training data and actually synthesizing novel
>>> generalization through reasoning or synthesizing the appropriate program on
>>> the fly. As there isn't a perfect test for intelligence, much less
>>> consensus on its definition, you can always brute force some LLM through
>>> huge compute and large, highly domain specific training data, to "solve" a
>>> set of problems; even highly complex ones. But as soon as there's novelty
>>> you'll have to keep doing that. Personally, that doesn't feel like
>>> intelligence yet. I'd want to see these abilities combined with the program
>>> synthesis ability; without the need for ever vaster, more specific
>>> databases etc. to be more convinced that we're genuinely on the threshold.
>>
>>
>> I think there is no more to intelligence than pattern recognition and
>> extrapolation (essentially, the same techniques required for improving
>> compression). It is also the same thing science is concerned with:
>> compressing observations of the real world into a small set of laws
>> (patterns) which enable predictions. And prediction is the essence of
>> intelligent action, as all goal-centered action requires predicting
>> probable outcomes that may result from any of a set of possible behaviors
>> that may be taken, and then choosing the behavior with the highest expected
>> reward.
>>
>> I think this can explain why even a problem as seemingly basic as "word
>> prediction" can (when mastered to a sufficient degree) break through into
>> general intelligence. This is because any situation can be described in
>> language, and being asked to predict next words requires understanding the
>> underlying reality to a sufficient degree to accurately model the things
>> those words describe. I confirmed this by describing an elaborate physical
>> setup and asked GPT-4 to predict and explain what it thought would happen
>> over the next hour. It did so perfectly, and also explained the
>> consequences of various alterations I later proposed.
>>
>> Since any of thousands, or perhaps millions, of patterns exist in the
>> training corpus, language models can come to learn, recognize, and
>> extrapolate all of those thousands or millions of patterns. This is what we
>> think of as generality (a sufficiently large repertoire of pattern
>> recognition that it appears general).
>>
>> Jason
>>
>
> Hey Jason,
>
> You've articulated this idea before, that the result of the training on
> such large amounts of data may result in the ability of LLMs to create
> models of reality and simulate minds and so forth, and it's an intriguing
> possibility.  However, one fact of how current LLMs operate is that they
> don't know when they're wrong. If what you're saying is true, shouldn't an
> LLM be able to model its own state of knowledge?
>

Just the other day (on another list), I proposed that the problem of
"hallucination" is not really a bug, but rather, it is what we have
designed LLMs to do (when we consider the training regime we subject them
to). We train these models to produce the most probable extrapolations of
text given some sample.

Now consider if you were placed in a box and rewarded or punished based on
how accurately you guessed the next character in a sequence.

You are given the following sentence and asked to guess the next character:
"Albert Einstein was born on March, "

True, you could break the fourth wall and protest "But I don't know! Let me
out of here!"

But that would only lead to your certain punishment. Or: you could take a
guess, there's a decent chance the first digit is a 1 or 2. You might guess
one of those and have at least a 1/3 chance of getting it right.

This is how we have trained the current crop of LLMs. We don't reward them
for telling us they don't know, we reward them for having the highest
accuracy possible in making educated guesses.
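
To make that concrete, here is a tiny sketch in Python of the objective
such a training regime optimizes (the probabilities are made up; the point
is only that "I don't know" isn't in the label space, so abstaining can
never score better than a calibrated guess):

import math

# Suppose the true continuation is the day "14", so the next character is "1".
true_char = "1"

# An "educated guess": spread probability over the plausible digits.
guess = {"1": 0.5, "2": 0.3, "0": 0.1, "3": 0.1}

# Cross-entropy loss only cares how much probability landed on the character
# that actually came next; there is no reward for saying "I don't know".
loss_guess = -math.log(guess.get(true_char, 1e-9))
print(f"educated guess:  loss = {loss_guess:.2f}")   # ~0.69

# Refusing to commit (uniform over ten digits) scores strictly worse.
loss_uniform = -math.log(0.1)
print(f"uniform 'dunno': loss = {loss_uniform:.2f}") # ~2.30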

We can develop more elaborate training processes that punish wrong answers
and reward statements that they are unsure, don't know and are making an
educated guess, but that would be something other than a pure decod

Re: Situational Awareness

2024-06-19 Thread Jason Resch
That process would find a program that has ideal models of our physical
reality, including laws not yet discovered.

True: it is not computationally feasible to do the brute-force search in
this way, but there are heuristics we can use for finding better ways of
compressing the datasets that we have. In fact, this is what I see
ourselves doing when we train models to predict text more accurately (that
is, make them better at compressing text), which is the same thing as
making them better understand the processes that underlie that text: the
universe, and the human brain that operates within and observes that
universe and writes those works.

As to the limitations of LLMs, they have a finite and fixed depth. This
means they are only capable of computing functions they can complete within
that fixed time (unless you augment them with a loop and memory). This is
like considering the limits of a human brain that was only given, say, 10
seconds to solve any problem. This is why an LLM fails at multiplying long
numbers, which we might consider easy for a computer: if you have a
fixed-depth circuit, there are only so many times you can shift and add,
and thus only so big a multiplicand you can handle.
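
As a rough illustration (a toy sketch, not a claim about how a transformer
actually does arithmetic), the number of sequential shift-and-add steps in
long multiplication grows with the length of the operands, so any circuit
limited to a fixed number of steps caps the size of the numbers it can
multiply:

def shift_and_add(a: int, b: int):
    # Toy binary long multiplication; also counts the sequential steps used.
    product, steps = 0, 0
    while b > 0:
        if b & 1:
            product += a
        a <<= 1        # shift
        b >>= 1
        steps += 1     # one more sequential shift-and-add step
    return product, steps

for digits in (2, 8, 32):
    n = 10 ** digits - 1
    _, steps = shift_and_add(n, n)
    print(f"{digits}-digit operands -> {steps} sequential steps")
# The step count grows with the number of digits, so a circuit limited to a
# fixed number of steps can only handle multiplicands up to a bounded size.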

Jason



> Terren
>



Re: Situational Awareness

2024-06-18 Thread Jason Resch
On Sun, Jun 16, 2024, 10:26 PM PGC  wrote:

> A lot of the excitement around LLMs is due to confusing skill/competence
> (memory based) with the unsolved problem of intelligence, its most
> optimal/perfect test etc. There is a difference between completing strings
> of words/prompts relying on memorization, interpolation, pattern
> recognition based on training data and actually synthesizing novel
> generalization through reasoning or synthesizing the appropriate program on
> the fly. As there isn't a perfect test for intelligence, much less
> consensus on its definition, you can always brute force some LLM through
> huge compute and large, highly domain specific training data, to "solve" a
> set of problems; even highly complex ones. But as soon as there's novelty
> you'll have to keep doing that. Personally, that doesn't feel like
> intelligence yet. I'd want to see these abilities combined with the program
> synthesis ability; without the need for ever vaster, more specific
> databases etc. to be more convinced that we're genuinely on the threshold.


I think there is no more to intelligence than pattern recognition and
extrapolation (essentially, the same techniques required for improving
compression). It is also the same thing science is concerned with:
compressing observations of the real world into a small set of laws
(patterns) which enable predictions. And prediction is the essence of
intelligent action, as all goal-centered action requires predicting
probable outcomes that may result from any of a set of possible behaviors
that may be taken, and then choosing the behavior with the highest expected
reward.
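
A minimal sketch of that last step (the actions, outcome probabilities, and
rewards are made-up numbers, only to make "choose the behavior with the
highest expected reward" concrete):

# Predicted outcome distributions for each candidate behavior,
# and the reward attached to each outcome (all values hypothetical).
actions = {
    "go_left":  {"find_food": 0.2, "hit_wall": 0.8},
    "go_right": {"find_food": 0.7, "hit_wall": 0.3},
}
rewards = {"find_food": 10.0, "hit_wall": -1.0}

def expected_reward(outcome_probs):
    return sum(p * rewards[o] for o, p in outcome_probs.items())

best = max(actions, key=lambda a: expected_reward(actions[a]))
print(best)  # "go_right" -- the prediction is what drives the choice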

I think this can explain why even a problem as seemingly basic as "word
prediction" can (when mastered to a sufficient degree) break through into
general intelligence. This is because any situation can be described in
language, and being asked to predict next words requires understanding the
underlying reality to a sufficient degree to accurately model the things
those words describe. I confirmed this by describing an elaborate physical
setup and asked GPT-4 to predict and explain what it thought would happen
over the next hour. It did so perfectly, and also explained the
consequences of various alterations I later proposed.

Since any of thousands, or perhaps millions, of patterns exist in the
training corpus, language models can come to learn, recognize, and
extrapolate all of those thousands or millions of patterns. This is what we
think of as generality (a sufficiently large repertoire of pattern
recognition that it appears general).

Jason



> John, as you enjoyed that podcast with Aschenbrenner, you might find the
> following one with Chollet interesting. Imho you cannot scale past not
> having a more advanced approach to program synthesis (which nonetheless
> could be informed or guided by LLMs to deal with the combinatorial
> explosion of possible program synthesis).
>
> https://www.youtube.com/watch?v=UakqL6Pj9xo
> On Friday, June 14, 2024 at 7:28:50 PM UTC+2 John Clark wrote:
>
>> Sabine Hossenfelder came out with a video attempting to discredit Leopold
>> Aschenbrenner. She failed.
>>
>> Is the Intelligence-Explosion Near? A Reality Check
>> <https://www.youtube.com/watch?v=xm1B3Y3ypoE=553s>
>>
>> I wrote this in the comment section of the video:
>>
>> "You claim that AI development will slow because we will run out of
>> data, but synthetic data is already being used to train AIs and it actually
>> works! AlphaGo was able to go from knowing nothing about the most
>> complicated board game in the world called "GO" to being able to play it at
>> a superhuman level in just a few hours by using synthetic data, it played
>> games against itself. As for power, during the last decade the total power
>> generation of the US has remained flat, but during that same decade the
>> power generation of China has not, in just that same decade China
>> constructed enough new power stations to equal power generated by the
>> entire US. So a radical increase in electrical generation capacity is
>> possible, the only thing that's lacking is the will to do so. When it
>> becomes obvious to everybody that the first country to develop a super
>> intelligent computer will have the capability to rule the world there
>> will be a will to build those power generating facilities as fast as
>> humanly possible. Perhaps they will use natural gas, perhaps they will use
>> nuclear fission."
>>
>>   John K Clark    See what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>> hid
>>
>>
>>

Re: Will Australia’s giant Quantum Computer bring militaries fears to life?

2024-05-06 Thread Jason Resch
On Mon, May 6, 2024 at 10:09 AM Brent Meeker  wrote:

> But it was my understanding that encryption is being changed to methods
> for which a quantum computer is no better than a classical computer and are
> effectively secure.
>

That's correct. Many quantum-secure algorithms have been invented already,
and NIST is in the process of standardizing them. The issue is that until
we move to those algorithms, existing and past communications remain
exposed once a quantum computer capable of breaking today's algorithms is
developed. Consider, for example, a government agency that stores all
intercepted encrypted communications long-term. Once a quantum computer of
sufficient power is created, it can go back and decrypt this archive of
intercepted traffic.
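
(One way to see the urgency is Mosca's rule of thumb: if x is how long the
data must stay secret, y is how long the migration to quantum-safe
algorithms takes, and z is how long until a cryptographically relevant
quantum computer exists, there is a problem whenever x + y > z. A minimal
sketch, with the numbers below purely illustrative:)

    def harvest_now_decrypt_later_risk(secrecy_years, migration_years, years_to_qc):
        """Mosca's inequality: data is at risk if it must stay secret longer
        than the time remaining after migration completes."""
        return secrecy_years + migration_years > years_to_qc

    # Illustrative numbers only: 20-year secrecy requirement, 8-year migration,
    # a quantum computer assumed 15 years away.
    print(harvest_now_decrypt_later_risk(20, 8, 15))  # True -> already exposed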

Jason


>
>
>
> On 5/6/2024 6:16 AM, Jason Resch wrote:
>
> While adopting new algorithms will secure future communications, anyone
> with the capacity to intercept and record messages now can hold on to them
> until the time large scale quantum computers can be developed to break the
> old encryption. There will be some advantage to the first one to get such a
> computer (assuming that one also has the recorded communications protected
> with current algorithms).
>
> Jason
>
> On Sun, May 5, 2024, 5:02 PM Brent Meeker  wrote:
>
>> The article implies that if China gets big quantum computers before we do
>> they'll be able to read all our messages.  But us getting big QC first
>> doesn't affect that.  What we need to do is change to encryption not
>> susceptible to QCs, something we are already doing.  It has nothing to do
>> with how fast we make big QCs.
>>
>> Brent
>>
>> On 5/5/2024 5:58 AM, John Clark wrote:
>>
>> *Will Australia’s giant Quantum Computer bring militaries’ fears to life?*
>> <https://www.defenseone.com/technology/2024/05/will-australias-giant-quantum-project-bring-militaries-fears-life/396312/>
>>
>> John K ClarkSee what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>> aqp

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUg3rPDBvZ3fTp3QahoiG3CouTAP1e9mXfSaizO-mn5p7Q%40mail.gmail.com.


Re: Will Australia’s giant Quantum Computer bring militaries fears to life?

2024-05-06 Thread Jason Resch
While adopting new algorithms will secure future communications, anyone
with the capacity to intercept and record messages now can hold on to them
until the time large scale quantum computers can be developed to break the
old encryption. There will be some advantage to the first one to get such a
computer (assuming that one also has the recorded communications protected
with current algorithms).

Jason

On Sun, May 5, 2024, 5:02 PM Brent Meeker  wrote:

> The article implies that if China gets big quantum computers before we do
> they'll be able to read all our messages.  But us getting big QC first
> doesn't affect that.  What we need to do is change to encryption not
> susceptible to QCs, something we are already doing.  It has nothing to do
> with how fast we make big QCs.
>
> Brent
>
> On 5/5/2024 5:58 AM, John Clark wrote:
>
> *Will Australia’s giant Quantum Computer bring militaries’ fears to life?*
> <https://www.defenseone.com/technology/2024/05/will-australias-giant-quantum-project-bring-militaries-fears-life/396312/>
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> aqp

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjq1R7Of5-%3D6fnRqdujzBBaeCpXnXuvihe_X%2Bz45uMZUA%40mail.gmail.com.


Re: Coming Singularity

2024-04-03 Thread Jason Resch
On Tue, Apr 2, 2024, 7:18 PM 'spudboy...@aol.com' via Everything List <
everything-list@googlegroups.com> wrote:

> Opinion on what occurs when we load, not just an LLM, but an LLM + a neural
> net on a low-error, high-entanglement quantum computer. Will this create a
> mind?
>


If you're not careful, you could create 2^N minds, where N is the number of
qubits.
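
(The 2^N here is just the dimension of an N-qubit state space: N qubits are
described by 2^N complex amplitudes, so a branching-minds reading of such a
computation scales the same way. A rough sketch of that growth:)

    def n_qubit_amplitudes(n):
        """An n-qubit register is described by a vector of 2**n complex amplitudes."""
        return 2 ** n

    for n in (10, 50, 300):
        print(n, "qubits ->", n_qubit_amplitudes(n), "amplitudes")
    # 300 qubits already exceed the ~10^80 atoms in the observable universe.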

Jason


> On Saturday, March 30, 2024 at 08:31:25 AM EDT, John Clark <
> johnkcl...@gmail.com> wrote:
>
>
> On Fri, Mar 29, 2024 at 10:28 PM Russell Standish 
> wrote:
>
>
> * >"There is a big difference between the way transistors are wired in
> a CPU and the way neurons are wired up in a brain."*
>
>
> Yes, but modern chipmakers like NVIDIA, Cerebras and Groq don't just make
> CPUs or even GPUs; they make Tensor Processing Units, or in Groq's case
> Language Processing Units: chips optimized not for general floating point
> operations but for the large neural networks that all current AI programs
> are built on. In the recent press conference where Nvidia introduced their
> new 208 billion transistor Blackwell B200 tensor chip, they pointed out
> that when used for neural nets, AI chips have increased their performance
> by a factor of 1 million over the last 10 years. That's
> far faster than Moore's Law, and that was possible because Moore's Law is
> about transistor density, but they were talking about AI workloads, and
> doing well at AI is what NVIDIA's chips are specialized to do. I also found
> it interesting that their new Blackwell chip, when used for AI, needed 25
> times less energy than the current AI chip champion,  NVIDIA's Hopper chip,
> which the company introduced just 2 years ago.  And I do not think it's a
> coincidence that this huge increase in hardware capability coincided with
> the current explosion in AI improvement.
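
(A quick arithmetic check of the quoted comparison, taking the quoted
1,000,000x-in-10-years figure at face value and assuming Moore's-law doubling
every ~2 years:)

    import math

    # Moore's law alone over 10 years: 2**(10/2) = 32x.
    moore_gain = 2 ** (10 / 2)
    # Doubling time implied by a 10^6x gain in 10 years.
    implied_doubling_time = 10 / math.log2(1e6)
    print(moore_gain, round(implied_doubling_time, 2))   # 32.0, ~0.5 years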
>
>
>
> *> "In the future, I would expect we'd have dedicate neural processing
> units, based on memristors"*
>
>
> If memristor technology ever becomes practical that would speed things up
> even more, but it's not necessary to achieve superhuman performance in an
> AI in the very near future.
>
>
>
> > *"**The comparing synapses with ANN parameters is only relevant for
> the statement "we can simulate a human brain sized ANN by X date"."*
>
>
> I don't see how comparing the two things can produce anything useful
> because one is concerned with software and the other is concerned with
> hardware. Comparing transistors to synapses may not be perfect but it's a
> much better analogy than comparing program parameters with brain synapses;
> at least transistors and synapses are both hardware. Comparing hardware
> with software will only produce a muddle.
>
>
>
> > *"**he [Kurzweil] said human intelligence parity (which I supose could
> be taken to be avergae intelligence, or an IQ of 100) [...]*
>
>
>
> *AI passes 100 IQ for first time, with release of Claude-3
> <https://secretive-tarsal-fe8.notion.site/AIs-ranked-by-IQ-AI-passes-100-IQ-for-first-time-with-release-of-Claude-3-f67219c7ccb44e4da1707d54ef1df8f1>
>  *
>
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> lnm
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUg0Aq%3D6HaAPEBD8OLS-D-d57325BEob2jj5OaOHBd6YeQ%40mail.gmail.com.


Re: Coming Singularity

2024-03-29 Thread Jason Resch
On Fri, Mar 29, 2024, 1:42 AM Dylan Distasio  wrote:

> I think we need to be careful with considering LLM parameters as analogous
> to synapses.   Biological neuronal systems have very significant
> differences in terms of structure, complexity, and operation compared to
> LLM parameters.
>
> Personally, I don't believe it is a given that simply increasing the
> parameters of a LLM is going to result in AGI or parity with overall human
> potential.
>

I agree it may not be apples to apples to compare synapses to parameters,
but of all the comparisons to make it is perhaps the closest one there is.


> I think there is a lot more to figure out before we get there, and LLMs
> (assuming variations on current transformer based architectures) may end up
> a dead end without other AI breakthroughs combining them with other
> components and inputs (as in sensory inputs).
>

Here is where I think we may disagree. I think the basic LLM model, as
currently used, is all we need to achieve AGI.

My motivation for this belief is that all forms of intelligence reduce to
prediction (that is, given a sequence of observables, determining what is
the most likely next thing to see).

Take any problem that requires intelligence to solve and I can show you how
it is a subset of the skill of prediction.
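
(A schematic sketch of that reduction, under the assumption that some
predictive model is already available; the function names here are
illustrative, not any particular system's API. Any choice problem can be
recast as "predict the outcome of each candidate action, then pick the
action whose predicted outcome scores best".)

    def act_by_prediction(state, actions, predict_outcome, score):
        """Reduce decision-making to prediction: evaluate each action by the
        predicted outcome it leads to, and pick the best-scoring one."""
        return max(actions, key=lambda a: score(predict_outcome(state, a)))

    # Toy usage: 'predict' the resulting number and prefer larger results.
    outcome = lambda s, a: s + a          # stand-in predictive model
    value = lambda o: o                   # stand-in scoring function
    print(act_by_prediction(10, [-1, 0, 5], outcome, value))  # -> 5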

Since human language is universal in the forms and types of patterns it can
express, there is no limit to the kinds of patterns an LLM can learn to
recognize and predict. Think of all the thousands, if not millions, of types
of patterns that exist in the training corpus. The LLM can learn them all.

We have already seen this. Despite not being trained for anything beyond
prediction, modern LLMs have learned to write code, perform arithmetic,
translate between languages, play chess, summarize text, take tests, draw
pictures, etc.

The "universal approximation theorem" (UAT) is a result in the field of
neural networks which says that with a large enough neural network, and
with enough training, a neural network can learn any function. Given this,
the UAT, and the universality of language to express any pattern, I believe
the only thing holding back LLMs today is their network size and amount of
training. I think the language corpus is sufficiently large and diverse in
the patterns it contains that it isn't what's holding us back.
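
(A quick numerical illustration of the flavor of the UAT, a minimal sketch
rather than anything about transformers: a single hidden layer of random
tanh units, with only its output weights fitted by least squares, already
approximates a smooth target function closely.)

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]
    target = np.sin(x).ravel()

    # Random hidden layer; only the linear output weights are fitted.
    hidden = np.tanh(x @ rng.normal(size=(1, 200)) + rng.normal(size=200))
    weights, *_ = np.linalg.lstsq(hidden, target, rcond=None)
    error = np.max(np.abs(hidden @ weights - target))
    print(f"max approximation error: {error:.2e}")  # typically very small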

An argument could be made that we have already achieved AGI. We have AI
that passes the bar exam in the 90th percentile, passes math olympiad tests
in the 99th percentile, programs better than the average Google coder,
scores 155 on a verbal IQ test, etc. If we took GPT-4 back to the 1980s to show
it off, would anyone at the time say it is not AGI? I think we are only
blinded to the significance of what has happened because we are living
through history now and the history books have not yet covered this time.

Jason



> We may find out that the singularity is a lot further away than it seems,
> but I guess time will tell.Personally, I would be very surprised to see
> it within the next decade.
>
> On Thu, Mar 28, 2024 at 9:27 PM Russell Standish 
> wrote:
>
>>
>> So to compare apples with apples - the human brain contains around 700
>> trillion (7E14) synapses, which would roughly correspond to an AI's
>> parameter count. GPT5 (due to be released sometime next year) will
>> have around 2E12 parameters, still 2-3 orders of magnitude to
>> go. Assuming current rates of AI improvement continue (GPT3->GPT5 in 4
>> years is one order of magnitude increase in parameter count), it will
>> take until about 2033 for AI to achieve human parity.
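
(Checking the quoted back-of-envelope arithmetic, using Russell's own
figures and with the caveat that synapse/parameter comparisons are rough at
best:)

    import math

    synapses = 7e14          # quoted figure for the human brain
    gpt5_params = 2e12       # quoted assumed GPT-5 parameter count
    years_per_order = 4      # one order of magnitude per ~4 years, per the quote

    orders_to_go = math.log10(synapses / gpt5_params)   # ~2.5 orders
    years_to_go = orders_to_go * years_per_order        # ~10 years
    print(round(orders_to_go, 2), round(years_to_go, 1))
    # ~2.54 orders, ~10 years: parameter parity somewhere in the early-to-mid 2030s.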
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUia%2BsapAjkzLRzU9xAAepPhhXCJwuntSp%3D64sCbW%2BCFVA%40mail.gmail.com.


Re: The physical limits of computation

2024-01-21 Thread Jason Resch
On Sat, Jan 20, 2024 at 1:46 AM 'scerir' via Everything List <
everything-list@googlegroups.com> wrote:

> Interesting quote about all that (and information)
> Frank Wilczek: "Information is another dimensionless quantity that plays a
> large and increasing role in our description of the world. Many of the
> terms that arise naturally in discussions of information have a distinctly
> physical character. For example we commonly speak of density of information
> and flow of information. Going deeper, we find far-reaching analogies
> between information and (negative) entropy, as noted already in Shannon's
> original work. Nowadays many discussions of the microphysical origin of
> entropy, and of foundations of statistical mechanics in general, start from
> discussions of information and ignorance. I think it is fair to say that
> there has been a unification fusing the physical quantity (negative)
> entropy and the conceptual quantity information. A strong formal connection
> between entropy and action arises through the Euclidean, imaginary-time
> path integral formulation of partition functions. Indeed, in that framework
> the expectation value of the Euclideanized action essentially is the
> entropy. The identification of entropy with Euclideanized action has been
> used, among other things, to motivate an algebraically simple (but deeply
> mysterious "derivation" of black hole entropy. If one could motivate the
> imaginary-time path integral directly and insightfully, rather than
> indirectly through the apparatus of energy eigenvalues, Boltzmann factors,
> and so forth, then one would have progressed toward this general prediction
> of unification: Fundamental action principles, and thus the laws of
> physics, will be re-interpreted as statements about information and its
> transformations." http://arxiv.org/pdf/1503.07735v1.pdf
>

Interesting quote and reference, I appreciate them!

I especially like: "the laws of physics, will be reinterpreted as
statements about information and its transformations."

I think I will include that in my write up. :-)

Jason


>
>
> Il 20/01/2024 01:10 +01 Jason Resch  ha scritto:
>
>
> I put together a short write up on the relationship between physics,
> information, and computation, drawing heavily from the work of Seth Lloyd
> and others:
>
>
> https://drive.google.com/file/d/124q3ni51E3sf9kMC_sNKgP3ikcl8ou1t/view?usp=sharing
>
> I thought it might be interesting to members of this list who often debate
> whether our reality is fundamentally computational/informational.
>
> Jason
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUg8Px8e4QLzYer0G2MgM4k1V97YBXC%3DVNQ-x5W65m70Ww%40mail.gmail.com.


The physical limits of computation

2024-01-19 Thread Jason Resch
I put together a short write up on the relationship between physics,
information, and computation, drawing heavily from the work of Seth Lloyd
and others:

https://drive.google.com/file/d/124q3ni51E3sf9kMC_sNKgP3ikcl8ou1t/view?usp=sharing

I thought it might be interesting to members of this list who often debate
whether our reality is fundamentally computational/informational.

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgRo-xNors%2BWZbDVpboT3QwiHC_NS24_uQ9_QkiTd3fyQ%40mail.gmail.com.


Re: Watch "Can Many Worlds Solve The Measurement Problem?" on YouTube

2023-12-06 Thread Jason Resch
On Wed, Dec 6, 2023, 5:40 PM Tomas Pales  wrote:

> A split into a finite number of worlds would solve the measure problem but
> where did he get his finite number?


My guess is he is using something like the number of distinguishable
quantum states given by the Bekenstein bound, or the total number of degrees
of freedom for the roughly 10^25 molecules of gas in a cubic meter of air.
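
(For a sense of the first of those numbers, a rough sketch of the Bekenstein
bound, S <= 2*pi*k*R*E/(hbar*c), applied to a cubic meter of air; the inputs
are order-of-magnitude only.)

    import math

    hbar = 1.055e-34      # J*s
    c = 2.998e8           # m/s

    radius = 0.62         # m, sphere with the volume of one cubic meter
    mass = 1.2            # kg of air in a cubic meter
    energy = mass * c**2  # total mass-energy, J

    # Maximum entropy in units of k, converted from nats to bits.
    bits = 2 * math.pi * radius * energy / (hbar * c) / math.log(2)
    print(f"~{bits:.1e} bits")   # on the order of 10^43 bits of distinguishable information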


> And why are physicists like Tegmark and Greene still talking about the
> measure problem if the number is finite?
>

I am not sure; perhaps they are considering it as infinite across a
spatially infinite universe. But we only have access to a finite portion of
the universe, so perhaps it is fine to ignore the rest of it (the infinite
space and the other universes), at least as it relates to the measure
problem.

Jason


> On Wednesday, December 6, 2023 at 2:52:31 PM UTC+1 Jason Resch wrote:
>
>>
>>
>> On Wed, Dec 6, 2023, 7:24 AM Tomas Pales  wrote:
>>
>>> But isn't there a problem when the number of worlds after the split is
>>> infinite? In popular science books they always write that if the number of
>>> worlds is infinite then there are different ways of counting the
>>> probabilities and so we can arrive at different probabilities than those
>>> given by the Born rule. They call it the "measure problem" (not measurement
>>> problem).
>>>
>>
>>
>> Here, at about 6 minutes and 30 seconds in, Deutsch is asked how many
>> universes there are. He gives a finite number:
>>
>> https://youtu.be/Kj2lxDf9R3Y
>>
>> Jason
>>
>>
>>> On Wednesday, December 6, 2023 at 7:28:54 AM UTC+1 Jason Resch wrote:
>>>
>>>> https://youtu.be/BU8Lg_R2DL0
>>>>
>>>> This is timely.
>>>>
>>>> Jason
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUj-Z4kGRhmLe-0VWJz%3DRkGk_oNKj_QoAr0mzZH5YDc7ZQ%40mail.gmail.com.


Re: Watch "Can Many Worlds Solve The Measurement Problem?" on YouTube

2023-12-06 Thread Jason Resch
On Wed, Dec 6, 2023, 7:24 AM Tomas Pales  wrote:

> But isn't there a problem when the number of worlds after the split is
> infinite? In popular science books they always write that if the number of
> worlds is infinite then there are different ways of counting the
> probabilities and so we can arrive at different probabilities than those
> given by the Born rule. They call it the "measure problem" (not measurement
> problem).
>


Here, at about 6 minutes and 30 seconds in, Deutsch is asked how many
universes there are. He gives a finite number:

https://youtu.be/Kj2lxDf9R3Y

Jason


> On Wednesday, December 6, 2023 at 7:28:54 AM UTC+1 Jason Resch wrote:
>
>> https://youtu.be/BU8Lg_R2DL0
>>
>> This is timely.
>>
>> Jason
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUh1sOsawbvCc0XvJv4tftrOVa0M5XrErjhkiukDL5w5WA%40mail.gmail.com.


Watch "Can Many Worlds Solve The Measurement Problem?" on YouTube

2023-12-05 Thread Jason Resch
https://youtu.be/BU8Lg_R2DL0

This is timely.

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUji%2Brr3fZ7AGgqjTyAPS%3D6B8-0%2BafvokW9AYcmzzF9URQ%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-12-03 Thread Jason Resch
On Sun, Dec 3, 2023, 4:40 PM Brent Meeker  wrote:

> I don't think the Born rule is implied by MWI; but it's already known to
> be the only rational way to define a probability measure on a Hilbert space
> (Gleason's theorem).  So in a sense it's implicit in QM regardless of
> interpretation.
>
> QBism, which is a version of CI+decoherence is at least as rational as
> MWI.  I think the proper measure of an interpretation is whether they
> suggest improvements and experiments.  MWI may be better in that respect.
>

QBism, like other non-realist theories, can't account for the effectiveness
of quantum computers, unless one believes that non-real things can have
real, detectable effects (like producing the factors of a large semiprime).
But if you are a realist about the wave function, then you are dealing with
MW, not QBism.
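
(For readers following the Born-rule discussion below: the rule at issue is
just that outcome probabilities are the squared moduli of the amplitudes. A
minimal sketch with a made-up two-amplitude state:)

    import numpy as np

    # Toy state: amplitudes for two measurement outcomes (not yet normalized).
    amplitudes = np.array([1 + 1j, 2 + 0j])
    amplitudes = amplitudes / np.linalg.norm(amplitudes)

    born_probabilities = np.abs(amplitudes) ** 2   # Born rule: p_i = |a_i|^2
    print(born_probabilities, born_probabilities.sum())  # [~0.333, ~0.667] 1.0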

Jason


> Brent
>
> On 11/29/2023 4:00 AM, John Clark wrote:
>
> On Tue, Nov 28, 2023 at 7:30 PM Brent Meeker 
> wrote:
>
> *> MWI fans assert that it is superior because it doesn't assume the Born
>> rule, only the Schroedinger equation.  I wouldn't claim that the (modern)
>> version of Copenhagen is superior to MWI, I'm just unconvinced of the
>> converse.*
>
>
> A pretty convincing argument can be made that if the Many Worlds idea is
> true then the Born Rule must have the ability to predict the most probable
> outcome of any quantum experiment and as an added bonus, unlike its
> competitors, it can do so without adding any random elements. However I
> admit nobody has ever been able to prove that Many Worlds is the only
> possible explanation of why the Born Rule works, and we already know from
> experiments that it does. Put it this way, if Many Worlds is true then the
> Born Rule works, and if the Born Rule works (and we know that it does) then
> Many Worlds MIGHT be true. But that's still a hell of a lot better than any
> other quantum interpretation anybody has managed to come up with, at least
> so far. I'm not certain Many Worlds is correct, but I am certain its
> competitors are wrong, or so bad they're not even wrong.
>
> And as far as assumptions are concerned, every scientist, not just
> physicists, has no choice but to assume that probability must be a real
> number between zero and one, and all the probabilities must add up to
> exactly one for any given situation, because otherwise the very concept
> of probability would make no sense. And we know that taking the square
> of the absolute value of the amplitude is the only way to get a number like
> that out of a complex function like Schrodinger's wave equation.  If Many
> Worlds is true, and if each version of Brent Meeker makes bets in accordance with the
> laws of probability so derived, then more Brent Meekers will make money
> by following the advice given by the Born Rule than if they followed any
> other betting strategy. Yes some Brent Meekers will still go broke even
> if they follow the Born Rule, but most will not.
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 7ff
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgA%3DxbyimvnxNUxf9ETaiSy0X%2BE4n4NtNKVw5wiHWCoBQ%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-30 Thread Jason Resch
On Thu, Nov 30, 2023, 4:02 PM Brent Meeker  wrote:

>
>
> On 11/29/2023 11:23 PM, Jason Resch wrote:
>
>
>
> On Thu, Nov 30, 2023, 12:19 AM Brent Meeker  wrote:
>
>>
>>
>> On 11/29/2023 8:21 PM, Jason Resch wrote:
>>
>>
>>
>> On Wed, Nov 29, 2023, 9:57 PM Brent Meeker  wrote:
>>
>>>
>>>
>>> On 11/29/2023 4:58 PM, Jason Resch wrote:
>>>
>>>
>>>
>>> On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett 
>>> wrote:
>>>
>>>> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou <
>>>> stath...@gmail.com> wrote:
>>>>
>>>>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>>>>> wrote:
>>>>>
>>>>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou <
>>>>>> stath...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>>> The Born rule allows you to calculate the probability of what
>>>>>>> outcome you will see in a Universe where all outcomes occur.
>>>>>>>
>>>>>>
>>>>>> You are still conflating incompatible theories. The Born rule is a
>>>>>> rule for calculating probabilities from the wave function -- it says
>>>>>> nothing about worlds or existence. MWI is a theory about the existence of
>>>>>> many worlds. These theories are incompatible, and should not be 
>>>>>> conflated.
>>>>>>
>>>>>
>>>>> “The Born rule is a rule for calculating probabilities from the wave
>>>>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>>>>> theory about the existence of many worlds” are not incompatible 
>>>>> statements.
>>>>>
>>>>
>>>> Perhaps that is the wrong way to look at it. The linearity of the
>>>> Schrodinger equation implies that the individuals on all branches are the
>>>> same: there is nothing to distinguish one of them as "you" and the others
>>>> as mere shadows or zombies. In other words, they are all "you". So you are
>>>> the person on the branch with all spins up and your probability of seeing
>>>> this result is one, since this branch certainly exists, and, by linearity,
>>>> "you" are the individual on that branch. This is inconsistent with the
>>>> claim that the Born rule gives the probability that "you" will see some
>>>> particular result. As we have seen, the probability that "you" will see all
>>>> ups is one, whereas the Born probability for this result is 1/2^N. These
>>>> probability estimates are incompatible.
>>>>
>>>
>>>
>>> According to relativity you exist in all times across your lifespan (and
>>> all times are equally real).
>>>
>>> Sez who?
>>>
>>
> Sez Einstein, Minkowski, C.W. Rietdijk, Kip Thorne, Brian Greene, and
>> Roger Penrose, to name a few.
>>
>>
>> Yes I'm sure you can find some Platonist to cite.
>>
>
> Are all of those physicists platonists?
>
> Do you think that your future world-line exists?
>>
>
> Yes, but I further believe there's not just one unique future (but many of
> them in the multiverse).
>
>
>
>>
>>
>> You take these images intended to help your mathematical intuition far too
>>> seriously.
>>>
>>
>> You agreed with this at one point in time.
>>
>>
>> Can you quote me?
>>
>
>
>
> From this email and the one that follows:
>
> https://groups.google.com/g/everything-list/c/jyB504QkIAs/m/0V0qGJO7Vj0J
>
> "Yes.  So why don't you recognize that "present place" is just a label,
> exactly like a latitude and longitude - and then that "present time" is a
> label, a coordinate time - which the diagrams I posted made perfectly
> clear.  The problem is that you seem to think "here and now" implies a
> "there and now"; but "there and now" is ambiguous and is RELATIVE to the
> state of motion."
>
> "And just like "here" is relative to state of motion, so is "now". SR
> isn't complicated, it
> just takes a little adjustment before it's intuitive."
>
>
>
> Perhaps I misinterpreted, but I took these quotes to mean you believed the
> present was an indexical like "here" and is in no way privileged.
>
>
> 

Re: The multiverse is unscientific nonsense??

2023-11-30 Thread Jason Resch
On Thu, Nov 30, 2023, 7:33 AM John Clark  wrote:

>
>
>
> On Wed, Nov 29, 2023 at 4:39 PM Brent Meeker 
> wrote:
>
>> On Tue, Nov 28, 2023 at 7:43 PM Brent Meeker 
>> wrote:
>>
>> *>>> For comparison you could posit a theory, MWI*, which is MWI plus the
>>> provision that only one exists with probability as defined by the Born
>>> rule.  Would MWI* be a different interpretation than modern-CI? *
>>
>>
>> >> In that case MWI* would be the same as CI in that neither could
>> explain why Schrodinger's equation and the Born rule treat one world
>> very differently from all the others, making it more real.  With MWI* we
>> have to start talking about measurement and observers and all that crap.
>>
>> >
>> *All that crap that makes up everything we observe, write down, report
>> and cite in papers?  That crap?*
>>
>
> Yes. If somebody proposes a theory that would have profound physical and
> philosophical implications and a key ingredient of that theory is something
> called "measurement " that seems to have magical abilities and nobody can
> even approximately explain what a measurement is, much less how it works
> it's magic, then that theory is 100% extra virgin triple distilled premium
> grade CRAP.
>
> Speaking of crap, Einstein once asked Niels Bohr a very interesting
> question, "*do you believe the moon doesn't exist when you're not looking
> at it?*". Apparently Bohr's response has been lost to history.
>


I believe it was Pais whom he asked this question, but Pais was in the same
camp as the non-realists, like Bohr.

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUh57ysoaFTUPyUWvog9k-oDf1xc5Je51jywcwgcL_aACQ%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Thu, Nov 30, 2023, 12:19 AM Brent Meeker  wrote:

>
>
> On 11/29/2023 8:21 PM, Jason Resch wrote:
>
>
>
> On Wed, Nov 29, 2023, 9:57 PM Brent Meeker  wrote:
>
>>
>>
>> On 11/29/2023 4:58 PM, Jason Resch wrote:
>>
>>
>>
>> On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett 
>> wrote:
>>
>>> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou 
>>> wrote:
>>>
>>>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>>>> wrote:
>>>>
>>>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou <
>>>>> stath...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>>> The Born rule allows you to calculate the probability of what
>>>>>> outcome you will see in a Universe where all outcomes occur.
>>>>>>
>>>>>
>>>>> You are still conflating incompatible theories. The Born rule is a
>>>>> rule for calculating probabilities from the wave function -- it says
>>>>> nothing about worlds or existence. MWI is a theory about the existence of
>>>>> many worlds. These theories are incompatible, and should not be conflated.
>>>>>
>>>>
>>>> “The Born rule is a rule for calculating probabilities from the wave
>>>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>>>> theory about the existence of many worlds” are not incompatible statements.
>>>>
>>>
>>> Perhaps that is the wrong way to look at it. The linearity of the
>>> Schrodinger equation implies that the individuals on all branches are the
>>> same: there is nothing to distinguish one of them as "you" and the others
>>> as mere shadows or zombies. In other words, they are all "you". So you are
>>> the person on the branch with all spins up and your probability of seeing
>>> this result is one, since this branch certainly exists, and, by linearity,
>>> "you" are the individual on that branch. This is inconsistent with the
>>> claim that the Born rule gives the probability that "you" will see some
>>> particular result. As we have seen, the probability that "you" will see all
>>> ups is one, whereas the Born probability for this result is 1/2^N. These
>>> probability estimates are incompatible.
>>>
>>
>>
>> According to relativity you exist in all times across your lifespan (and
>> all times are equally real).
>>
>> Sez who?
>>
>
> Sez Einstein, Minkowski, C.W. Rietdijk, Kip Thorne, Brian Greene, and
> Roger Penrose, to name a few.
>
>
> Yes I'm sure you can find some Platonist to cite.
>

Are all of those physicists platonists?

Do you think that your future world-line exists?
>

Yes, but I further believe there's not just one unique future (but many of
them in the multiverse).



>
>
> You take these images intended to help your mathematical intuition far too
>> seriously.
>>
>
> You agreed with this at one point in time.
>
>
> Can you quote me?
>



From this email and the one that follows:

https://groups.google.com/g/everything-list/c/jyB504QkIAs/m/0V0qGJO7Vj0J

"Yes.  So why don't you recognize that "present place" is just a label,
exactly like a latitude and longitude - and then that "present time" is a
label, a coordinate time - which the diagrams I posted made perfectly
clear.  The problem is that you seem to think "here and now" implies a
"there and now"; but "there and now" is ambiguous and is RELATIVE to the
state of motion."

"And just like "here" is relative to state of motion, so is "now". SR isn't
complicated, it
just takes a little adjustment before it's intuitive."



Perhaps I misinterpreted, but I took these quotes to mean you believed the
present is an indexical like "here" and in no way privileged.



>
> In any case, it's not a mere image, but a well accepted implication of
> relativity.
>
> Then you must believe that your future is as fixed as your past.
>

I have many futures and many pasts (compatible with my present state of
awareness).

Jason


> Brent
>
> See:
>
> https://alwaysasking.com/what-is-time/
>
> For references.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhMDvW6S8oNRXEitBCkXbW5Rh9P%3DYX6JmiA27VoDVmaFQ%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023, 10:45 PM Bruce Kellett  wrote:

> On Thu, Nov 30, 2023 at 12:46 PM Jason Resch  wrote:
>
>> On Wed, Nov 29, 2023, 8:39 PM Bruce Kellett 
>> wrote:
>>
>>> On Thu, Nov 30, 2023 at 11:59 AM Jason Resch 
>>> wrote:
>>>
>>>> On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett 
>>>> wrote:
>>>>
>>>>> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou <
>>>>> stath...@gmail.com> wrote:
>>>>>
>>>>>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>>>>>> wrote:
>>>>>>
>>>>>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou <
>>>>>>> stath...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>>> The Born rule allows you to calculate the probability of what
>>>>>>>> outcome you will see in a Universe where all outcomes occur.
>>>>>>>>
>>>>>>>
>>>>>>> You are still conflating incompatible theories. The Born rule is a
>>>>>>> rule for calculating probabilities from the wave function -- it says
>>>>>>> nothing about worlds or existence. MWI is a theory about the existence 
>>>>>>> of
>>>>>>> many worlds. These theories are incompatible, and should not be 
>>>>>>> conflated.
>>>>>>>
>>>>>>
>>>>>> “The Born rule is a rule for calculating probabilities from the wave
>>>>>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>>>>>> theory about the existence of many worlds” are not incompatible 
>>>>>> statements.
>>>>>>
>>>>>
>>>>> Perhaps that is the wrong way to look at it. The linearity of the
>>>>> Schrodinger equation implies that the individuals on all branches are the
>>>>> same: there is nothing to distinguish one of them as "you" and the others
>>>>> as mere shadows or zombies. In other words, they are all "you". So you are
>>>>> the person on the branch with all spins up and your probability of seeing
>>>>> this result is one, since this branch certainly exists, and, by linearity,
>>>>> "you" are the individual on that branch. This is inconsistent with the
>>>>> claim that the Born rule gives the probability that "you" will see some
>>>>> particular result. As we have seen, the probability that "you" will see 
>>>>> all
>>>>> ups is one, whereas the Born probability for this result is 1/2^N. These
>>>>> probability estimates are incompatible.
>>>>>
>>>>
>>>>
>>>> According to relativity you exist in all times across your lifespan
>>>> (and all times are equally real). Yet you are only ever aware of being in
>>>> one time and in one place. I think this tells us more about the limitations
>>>> of our neurology than it reveals about the extent or nature of reality. If
>>>> a copy of me is created on Mars, the me on Earth doesn't magically become
>>>> aware of it.
>>>>
>>>
>>> And how do we select out the present moment from the block universe?
>>>
>>
>> I believe all apparent selections are merely indexical illusions. 'Here'
>> is no more real than 'There', 'Now' is no more real than 'Then', 'I' is no
>> more real than 'Him'. We only consider these things special due to the
>> position we happen to be in at the time a consideration is made, but all
>> such considerations exist and are equally valid. All 'Heres' are real, all
>> 'Nows' are real, all points of view are 'Is'. Only, as Schrodinger says, we
>> aren't in a position to survey them all at once.
>>
>
> What a load of fanciful nonsense! This goes no way towards explaining our
> experience.
>

Think about it some more.

Jason



> Bruce
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhWtiY_DpgNEcX6nbTSw%2BajB0h27xch-EviY4N0RsQiCA%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023, 9:57 PM Brent Meeker  wrote:

>
>
> On 11/29/2023 4:58 PM, Jason Resch wrote:
>
>
>
> On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett  wrote:
>
>> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou 
>> wrote:
>>
>>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>>> wrote:
>>>
>>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou <
>>>> stath...@gmail.com> wrote:
>>>>
>>>>>
>>>>>>> The Born rule allows you to calculate the probability of what
>>>>> outcome you will see in a Universe where all outcomes occur.
>>>>>
>>>>
>>>> You are still conflating incompatible theories. The Born rule is a rule
>>>> for calculating probabilities from the wave function -- it says nothing
>>>> about worlds or existence. MWI is a theory about the existence of many
>>>> worlds. These theories are incompatible, and should not be conflated.
>>>>
>>>
>>> “The Born rule is a rule for calculating probabilities from the wave
>>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>>> theory about the existence of many worlds” are not incompatible statements.
>>>
>>
>> Perhaps that is the wrong way to look at it. The linearity of the
>> Schrodinger equation implies that the individuals on all branches are the
>> same: there is nothing to distinguish one of them as "you" and the others
>> as mere shadows or zombies. In other words, they are all "you". So you are
>> the person on the branch with all spins up and your probability of seeing
>> this result is one, since this branch certainly exists, and, by linearity,
>> "you" are the individual on that branch. This is inconsistent with the
>> claim that the Born rule gives the probability that "you" will see some
>> particular result. As we have seen, the probability that "you" will see all
>> ups is one, whereas the Born probability for this result is 1/2^N. These
>> probability estimates are incompatible.
>>
>
>
> According to relativity you exist in all times across your lifespan (and
> all times are equally real).
>
> Sez who?
>

Sez Einstein, Minkowski, C.W. Rietdijk, Kip Thorne, Brian Greene, and
Roger Penrose, to name a few.


You take these images intended to help your mathematical intuition far too
> seriously.
>

You agreed with this at one point in time.

In any case, it's not a mere image, but a well accepted implication of
relativity. See:

https://alwaysasking.com/what-is-time/

For references.

Jason


>
> Yet you are only ever aware of being in one time and in one place. I think
> this tells us more about the limitations of our neurology than it reveals
> about the extent or nature of reality. If a copy of me is created on Mars,
> the me on Earth doesn't magically become aware of it.
>
> Jason
>
>
>> Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUi4OvZa0Lh9_cHHP6wFgp8yzrsGOAjACavkpbbtJGRVdA%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023, 8:39 PM Bruce Kellett  wrote:

> On Thu, Nov 30, 2023 at 11:59 AM Jason Resch  wrote:
>
>> On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett 
>> wrote:
>>
>>> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou 
>>> wrote:
>>>
>>>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>>>> wrote:
>>>>
>>>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou <
>>>>> stath...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>>> The Born rule allows you to calculate the probability of what
>>>>>> outcome you will see in a Universe where all outcomes occur.
>>>>>>
>>>>>
>>>>> You are still conflating incompatible theories. The Born rule is a
>>>>> rule for calculating probabilities from the wave function -- it says
>>>>> nothing about worlds or existence. MWI is a theory about the existence of
>>>>> many worlds. These theories are incompatible, and should not be conflated.
>>>>>
>>>>
>>>> “The Born rule is a rule for calculating probabilities from the wave
>>>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>>>> theory about the existence of many worlds” are not incompatible statements.
>>>>
>>>
>>> Perhaps that is the wrong way to look at it. The linearity of the
>>> Schrodinger equation implies that the individuals on all branches are the
>>> same: there is nothing to distinguish one of them as "you" and the others
>>> as mere shadows or zombies. In other words, they are all "you". So you are
>>> the person on the branch with all spins up and your probability of seeing
>>> this result is one, since this branch certainly exists, and, by linearity,
>>> "you" are the individual on that branch. This is inconsistent with the
>>> claim that the Born rule gives the probability that "you" will see some
>>> particular result. As we have seen, the probability that "you" will see all
>>> ups is one, whereas the Born probability for this result is 1/2^N. These
>>> probability estimates are incompatible.
>>>
>>
>>
>> According to relativity you exist in all times across your lifespan (and
>> all times are equally real). Yet you are only ever aware of being in one
>> time and in one place. I think this tells us more about the limitations of
>> our neurology than it reveals about the extent or nature of reality. If a
>> copy of me is created on Mars, the me on Earth doesn't magically become
>> aware of it.
>>
>
> And how do we select out the present moment from the block universe?
>

I believe all apparent selections are merely indexical illusions. 'Here' is
no more real than 'There', 'Now' is no more real than 'Then', 'I' is no
more real than 'Him'. We only consider these things special due to the
position we happen to be in at the time a consideration is made, but all
such considerations exist and are equally valid. All 'Heres' are real, all
'Nows' are real, all points of view are 'Is'. Only, as Schrodinger says, we
aren't in a position to survey them all at once.

Jason


It seems that whatever line you take, there are an awful lot of
> supplementary assumptions needed before MWI gets off the ground.
>
> Bruce
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgbp3rvj_CWFpyE54oUXnmRhGUBKf6iUdzN-RUgfzkL_g%40mail.gmail.com.


Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023, 8:34 PM Brent Meeker  wrote:

>
>
> On 11/29/2023 4:17 PM, Bruce Kellett wrote:
>
> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou 
> wrote:
>
>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>> wrote:
>>
>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou 
>>> wrote:
>>>
>>>>
>>>>>> The Born rule allows you to calculate the probability of what outcome
>>>> you will see in a Universe where all outcomes occur.
>>>>
>>>
>>> You are still conflating incompatible theories. The Born rule is a rule
>>> for calculating probabilities from the wave function -- it says nothing
>>> about worlds or existence. MWI is a theory about the existence of many
>>> worlds. These theories are incompatible, and should not be conflated.
>>>
>>
>> “The Born rule is a rule for calculating probabilities from the wave
>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>> theory about the existence of many worlds” are not incompatible statements.
>>
>
> Perhaps that is the wrong way to look at it. The linearity of the
> Schrodinger equation implies that the individuals on all branches are the
> same: there is nothing to distinguish one of them as "you" and the others
> as mere shadows or zombies. In other words, they are all "you". So you are
> the person on the branch with all spins up and your probability of seeing
> this result is one, since this branch certainly exists, and, by linearity,
> "you" are the individual on that branch. This is inconsistent with the
> claim that the Born rule gives the probability that "you" will see some
> particular result. As we have seen, the probability that "you" will see all
> ups is one, whereas the Born probability for this result is 1/2^N. These
> probability estimates are incompatible.
>
>
> How is this different than throwing a die and seeing it came up 6.  Is
> that incompatible with that result having probability 1/6?  Why don't we
> have a multiple-worlds theory of classical probabilities?
>

It's interesting: Feynman and others had this exact debate in that
reference scerir provided (asking how quantum probabilities differ from
dice rolls; Feynman thought there was an important difference).
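
To make the disagreement concrete, here is a minimal sketch (plain Python,
standard library only; the bias p_up = 0.9 and N = 3 are made-up
illustrative numbers) of how naive branch counting and the Born rule come
apart for a run of N identical spin measurements:

    # Naive branch counting vs. Born weights for N spin measurements
    # (illustrative values only).
    from itertools import product
    from math import isclose

    N = 3
    p_up = 0.9                                # Born probability of "up" per measurement
    amp_up, amp_down = p_up ** 0.5, (1 - p_up) ** 0.5   # real amplitudes for simplicity

    branch_count_weight = 1 / 2 ** N          # every outcome sequence counted equally

    born = {}
    for outcome in product("UD", repeat=N):
        amp = 1.0
        for o in outcome:
            amp *= amp_up if o == "U" else amp_down
        born["".join(outcome)] = amp ** 2     # Born weight of this branch

    print("branch-counting weight per sequence:", branch_count_weight)  # 0.125
    print("Born weight of UUU:", born["UUU"])                           # ~0.729
    print("Born weight of DDD:", born["DDD"])                           # ~0.001
    print("Born weights sum to 1:", isclose(sum(born.values()), 1.0))   # True

The two assignments agree only in the special case of equal amplitudes,
which is why simply counting the branches that exist cannot by itself
reproduce the statistics of a biased measurement (or of a weighted die).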

Jason


> Brent
>



Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023, 7:17 PM Bruce Kellett  wrote:

> On Wed, Nov 29, 2023 at 10:49 PM Stathis Papaioannou 
> wrote:
>
>> On Wed, 29 Nov 2023 at 12:34, Bruce Kellett 
>> wrote:
>>
>>> On Wed, Nov 29, 2023 at 12:02 PM Stathis Papaioannou 
>>> wrote:
>>>
>>>>
>>>>>> The Born rule allows you to calculate the probability of what outcome
>>>> you will see in a Universe where all outcomes occur.
>>>>
>>>
>>> You are still conflating incompatible theories. The Born rule is a rule
>>> for calculating probabilities from the wave function -- it says nothing
>>> about worlds or existence. MWI is a theory about the existence of many
>>> worlds. These theories are incompatible, and should not be conflated.
>>>
>>
>> “The Born rule is a rule for calculating probabilities from the wave
>> function -- it says nothing about worlds or existence”  -and- “MWI is a
>> theory about the existence of many worlds” are not incompatible statements.
>>
>
> Perhaps that is the wrong way to look at it. The linearity of the
> Schrodinger equation implies that the individuals on all branches are the
> same: there is nothing to distinguish one of them as "you" and the others
> as mere shadows or zombies. In other words, they are all "you". So you are
> the person on the branch with all spins up and your probability of seeing
> this result is one, since this branch certainly exists, and, by linearity,
> "you" are the individual on that branch. This is inconsistent with the
> claim that the Born rule gives the probability that "you" will see some
> particular result. As we have seen, the probability that "you" will see all
> ups is one, whereas the Born probability for this result is 1/2^N. These
> probability estimates are incompatible.
>


According to relativity you exist at all times across your lifespan (and
all times are equally real). Yet you are only ever aware of being in one
time and in one place. I think this tells us more about the limitations of
our neurology than it reveals about the extent or nature of reality. If a
copy of me is created on Mars, the me on Earth doesn't magically become
aware of it.

Jason


> Bruce
>



Re: The multiverse is unscientific nonsense??

2023-11-29 Thread Jason Resch
On Wed, Nov 29, 2023 at 2:59 PM Brent Meeker  wrote:

>
>
> On 11/29/2023 4:00 AM, John Clark wrote:
>
> On Tue, Nov 28, 2023 at 7:30 PM Brent Meeker 
> wrote:
>
> *> MWI fans assert that it is superior because it doesn't assume the Born
>> rule, only the Schroedinger equation.  I wouldn't claim that the (modern)
>> version of Copenhagen is superior to MWI, I'm just unconvinced of the
>> converse.*
>
>
> A pretty convincing argument can be made that if the Many Worlds idea is
> true then the Born Rule must have the ability to predict the most probable
> outcome of any quantum experiment and as an added bonus, unlike its
> competitors, it can do so without adding any random elements. However I
> admit nobody has ever been able to prove that Many Worlds is the only
> possible explanation of why the Born Rule works, and we already know from
> experiments that it does. Put it this way, if Many Worlds is true then the
> Born Rule works, and if the Born Rule works (and we know that it does) then
> Many Worlds MIGHT be true. But that's still a hell of a lot better than any
> other quantum interpretation anybody has managed to come up with, at least
> so far. I'm not certain Many Worlds is correct, but I am certain its
> competitors are wrong, or so bad they're not even wrong.
>
> And as far as assumptions are concerned, every scientist, not just
> physicists, has no choice but to assume that probability must be a real
> number between zero and one, and all the probabilities must add up to
> exactly one for any given situation, because otherwise the very concept
> of probability would make no sense. And we know that taking the square
> of the absolute value is the only way to get a number like that out of a
> complex function like Schrodinger's wave equation.  If Many Worlds is
> true, and If each version of Brent Meeker makes bets In accordance with the
> laws of probability so derived, then more Brent Meekers will make money
> by following the advice given by the Born Rule than if they followed any
> other betting strategy. Yes some Brent Meekers will still go broke even
> if they follow the Born Rule, but most will not.
>
>
> Yes, I knew all that.  But does it follow from the Schroedinger equation
> alone?  Reading the Carroll/Sebens paper is suggestive, but it depends on
> transforming to a basis that makes the number of components match the Born
> rule.  But it seems to me that one could transform to a basis where the
> number of components did not match the Born rule.  Their example is chosen
> so that in the transformed basis each component has amplitude 1 ,  but
> that's just scaling.  They even start with eqn (33) which is not
> normalized.  So it shows how to convert a weighted superposition into a
> branch count.  But it appears to me that it could produce any number of
> branches.  The example is chosen to neatly produce all branches of
> amplitude 1, but that cannot be significant since eqn(35) is not
> normalized.  So the number of branches is not actually determined and could
> be anything.
>

I found this interesting, on comparing whether all bases are really on
equal footing or not:

https://www.lesswrong.com/posts/XDkeuJTFjM9Y2x6v6/which-basis-is-more-fundamental
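
On the branch-counting worry above, here is a minimal sketch (Python,
assuming numpy is available; the state is an arbitrary single-qubit example)
showing that the number of nonzero components, i.e. the naive "branch
count", depends on the basis chosen, while the Born weights are always just
the squared amplitudes in whichever basis you pick:

    # The same qubit state "counted" in two different bases.
    import numpy as np

    up = np.array([1.0, 0.0])                  # |0> in the Z basis
    plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2) # |-> = (|0> - |1>)/sqrt(2)

    state = up                                 # one definite outcome in the Z basis

    z_amps = state                                        # Z-basis components
    x_amps = np.array([plus @ state, minus @ state])      # X-basis components

    print("Z basis: amplitudes", z_amps,
          "-> nonzero components:", np.count_nonzero(z_amps))
    print("X basis: amplitudes", x_amps,
          "-> nonzero components:", np.count_nonzero(np.round(x_amps, 12)))
    print("Born weights in the X basis:", np.abs(x_amps) ** 2)   # [0.5, 0.5]

So "how many branches are there" is not basis-invariant, which is the point
at issue in the Carroll/Sebens construction discussed above.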

Jason



Re: The multiverse is unscientific nonsense??

2023-11-28 Thread Jason Resch
On Tue, Nov 28, 2023, 5:12 PM Brent Meeker  wrote:

>
>
> On 11/28/2023 1:57 PM, Jason Resch wrote:
>
>
>
> On Tue, Nov 28, 2023, 4:55 PM Brent Meeker  wrote:
>
>>
>>
>> On 11/28/2023 1:33 PM, John Clark wrote:
>>
>>
>>
>> On Tue, Nov 28, 2023 at 4:22 PM Brent Meeker 
>> wrote:
>>
>>
>>>
>>
>> That is incorrect.  Schrodinger's equation, the thing that generates the
>>>> complex wave function, says nothing, absolutely nothing, about that wave
>>>> function collapsing, So if you don't like philosophical paradoxes but still
>>>> want to use Schrodinger's equation because it always gives correct results,
>>>> you only have 2 options:
>>>> 1) You can stick on bells and whistles to Schrodinger's equation to
>>>> get rid of those other worlds that you find so annoying even though there's
>>>> no experimental evidence that they are needed.
>>>
>>>
>>> > *You can do exactly the same thing the MWI fans do and apply the Born
>>> rule to predict the probability of your world. *
>>>
>>
>> That is absolutely correct. If you're an engineer and are only
>> interested in finding the correct answer to a given problem then Shut Up
>> And Calculate works just fine.  MWI is only needed if you're curious and
>> want to look under the hood to figure out what could possibly make the
>> quantum realm behave so weirdly.
>>
>>
>> Except that in spite of many attempts the application of the Born rule
>> isn't found under the hood.
>>
>
>
> Is it found in Copenhagen?
>
> Yes, because Copenhagen explicitly included it and didn't pretend that the
> Schroedinger equation was everything.
>


If both Interpretations must assume it, I don't see how that's a special
weakness of MWI.

Jason


> Brent
>



Re: The multiverse is unscientific nonsense??

2023-11-28 Thread Jason Resch
On Tue, Nov 28, 2023, 4:55 PM Brent Meeker  wrote:

>
>
> On 11/28/2023 1:33 PM, John Clark wrote:
>
>
>
> On Tue, Nov 28, 2023 at 4:22 PM Brent Meeker 
> wrote:
>
>
>>
>
> That is incorrect.  Schrodinger's equation, the thing that generates the
>>> complex wave function, says nothing, absolutely nothing, about that wave
>>> function collapsing, So if you don't like philosophical paradoxes but still
>>> want to use Schrodinger's equation because it always gives correct results,
>>> you only have 2 options:
>>> 1) You can stick on bells and whistles to Schrodinger's equation to get
>>> rid of those other worlds that you find so annoying even though there's no
>>> experimental evidence that they are needed.
>>
>>
>> > *You can do exactly the same thing the MWI fans do and apply the Born
>> rule to predict the probability of your world. *
>>
>
> That is absolutely correct. If you're an engineer and are only interested
> in finding the correct answer to a given problem then Shut Up And Calculate
> works just fine.  MWI is only needed if you're curious and want to look
> under the hood to figure out what could possibly make the quantum realm
> behave so weirdly.
>
>
> Except that in spite of many attempts the application of the Born rule
> isn't found under the hood.
>


Is it found in Copenhagen?

Jason


> Brent
>



Re: The multiverse is unscientific nonsense??

2023-11-26 Thread Jason Resch
On Sun, Nov 26, 2023 at 8:07 PM Bruce Kellett  wrote:

> On Mon, Nov 27, 2023 at 9:55 AM John Clark  wrote:
>
>> On Sun, Nov 26, 2023 at 5:35 PM Bruce Kellett 
>> wrote:
>>
>> >>>
>>>>> *and how do they instantiate the probabilities that we measure.*
>>>>>
>>>>
>>>> >> There is one observer for every quantum state Schrodinger's cat is
>>>> in.
>>>>
>>>
>>> *>That is exactly the problem. That would suggest that the two outcomes
>>> (dead or alive) are equally likely. But it can easily be arranged that one
>>> outcome is more probable than the other. MWI cannot account for unequal
>>> probabilities.*
>>>
>>
>> There are a googolplex number of Bruce Kelletts, all of which are in very
>> slightly different quantum states but they all observe that, although
>> Schrodinger's cat is in slightly different quantum states, the cat is alive
>> in all of them. And there are 3 googolplexes of Bruce Kelletts, all of
>> which are in very slightly different quantum states but they all observe
>> that, although Schrodinger's cat is in slightly different quantum states,
>> the cat is dead in all of them. Therefore if Bruce Kellett had no other
>> information than before he opened the box he would bet that there is
>> only one chance in four he would see an alive cat when the box was opened.
>>
>
> Nonsense. Where did the 3:1 ratio come from? I know the decay rate of the
> radioactive source. I can arrange to open the box when there is only a 10%
> chance that the atom has decayed. In that case I clearly have a 90% chance
> of seeing a live cat when I open the box. Similarly, I can arrange for any
> probability between zero and one of seeing a live cat. Whereas, if there is
> always a live cat branch and a dead cat branch, my probability of seeing a
> live cat is always 50%, contrary to the laws of radioactive decay.
>

The time at which the decay occurs is effectively continuous over the hour
of the experiment. Thus the dead cat will have been dead for a random period
of between 0 and 1 hour from the time it entered the box. You will find that
the observed temperature of the cat is a continuous variable correlated with
the time of the decay, and this requires an infinity of possible observers.
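
As a worked illustration of both points, here is a minimal sketch (plain
Python, standard library only; the one-hour half-life is a made-up number)
showing how the opening time can be chosen to give a 10% decay probability,
and that among the dead-cat runs the time since decay is a continuous
variable:

    # Tuning the decay probability by the opening time, and sampling the
    # continuous "time since decay" among runs where the atom decayed.
    import math, random

    half_life = 1.0                          # hours (illustrative)
    lam = math.log(2) / half_life            # decay constant

    # Choose the opening time so that P(decayed by then) = 10%.
    t_open = -math.log(1 - 0.10) / lam
    print("open at t = %.3f h -> P(decayed) = %.2f"
          % (t_open, 1 - math.exp(-lam * t_open)))

    random.seed(0)
    dead_durations = []
    for _ in range(100_000):
        t_decay = random.expovariate(lam)    # exponentially distributed decay time
        if t_decay < t_open:
            dead_durations.append(t_open - t_decay)

    print("fraction of runs with a dead cat: %.3f"
          % (len(dead_durations) / 100_000))
    print("time-since-decay spans %.4f to %.4f hours"
          % (min(dead_durations), max(dead_durations)))

Each distinct decay time corresponds to a slightly different record (the
cat's temperature, for instance), which is why a simple two-branch count
cannot capture the statistics.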

Jason



Re: The multiverse is unscientific nonsense??

2023-11-22 Thread Jason Resch
Very well said!

On Wed, Nov 22, 2023, 7:23 AM John Clark  wrote:

> On Tue, Nov 21, 2023 at 7:45 PM Brent Meeker 
> wrote:
>
> >> There is plenty of direct evidence that quantum weirdness exists, even
>>> the father of the Copenhagen Interpretation Niels Bohr admitted that 
>>> "*Anyone
>>> who is not shocked by Quantum theory does not understand it *".
>>> Something must be behind all that strangeness and whatever it is it must be
>>> odd, very very odd. Yes, many world's idea is ridiculous, but is it
>>> ridiculous enough to be true? If it's not then something even more
>>> ridiculous is. As for the Copenhagen interpretation, I don't think it's
>>> ridiculous, I think it's incoherent, and if you ask 10 adherents what it's
>>> saying you'll get 12 completely different answers, but they all boil down
>>> to "*just give up, don't even try to figure out what's going on*". But
>>> I think one must try.
>>
>>
>
> * > I think that's very unfair to Bohr.  His basic observation was that we
>> do science in a classical world of necessity.*
>>
>
> Bohr was a great scientist but I think he was a lousy philosopher.  Bohr
> thought there was a mystical interface between quantum events and conscious
> awareness, some call it the "Heisenberg Cut", but neither Bohr nor
> Heisenberg could explain the mechanism behind this mysterious phenomenon
> nor could they say exactly, or even approximately, where the hell the
> dividing line between the classical world and the quantum world is. By
> contrast Many Worlds has no problem whatsoever explaining the mechanism
> behind the Heisenberg cut or where the dividing line is because the
> Heisenberg cut does not exist and there is no dividing line, everything is
> quantum mechanical including the entire universe.  I think this is the
> reason the Many Worlds interpretation is more popular among cosmologists
> than among scientists in general.
>
>  > *Only in a classical world can we make measurements and keep records
>> that we can agree on.  *
>
>
> But the Copenhagen adherents can't agree even among themselves what a
> "measurement" is or what a "record" means, but Many Worlds people are in
> agreement, all measurements are a change in a quantum state but a quantum
> change is not necessarily a measurement.
>
>
>> > *when we study the microscopic world we must use quantum mechanics,
>> but our instruments must be classical. *
>>
>
> We can pretend our instruments are classical, in our everyday life we can
> pretend that everything is classical, but we've known for nearly a century
> that is just a useful lie we tell ourselves because reality is not
> classical, it is quantum mechanical.
>
>
>> *> You can treat a baseball as a quantum system composed of elementary
>> particles; but your measurements on it must still give classical values. *
>>
>
> As I said before, you can live your entire life by pretending that
> classical physics is all there is and in fact billions of people have had
> successful lives doing so, but that doesn't make it true. In theory
> classical measurements can be exact, but quantum measurements cannot be
> even in theory. If we wish to study the fundamental nature of reality we're
> going to need to perform experiments with things when they are in very
> exotic conditions that we will never encounter in everyday life, and when
> we perform these difficult experiments we find the things get weird, very
> very weird, and that demands an explanation. And waving your hands and
> saying there is a Heisenberg cut is not an explanation.
>
>
> * > Since the development of decoherence theory this boundary can be
>> quantified in terms vanishing of cross-terms in a reduced density matrix. *
>>
>
> Forget theory, every time the precision of our quantum *EXPERIMENTS*
> improves the lower limit of this mythical boundary between the classical
> world and the quantum world gets larger, I think it's as large as the
> entire universe.
>
>
>> > *What is left unexplained, in MWI as well as Copenhagen, is the
>> instantiation of a random result with probability proportional to the
>> diagonal elements of the reduced density matrix.*
>>
>
> If the concept of "probability" is to make any sense and not be
> paradoxical it must be a real number between 0 and 1, and all the
> probabilities in a given situation must add up to exactly 1. Gleason's
> theorem proved that given those restraints, probability can always be
> expressed by the density matrix, that is to say the Born Rule. So the real
> question is; Schrodinger's equation is completely deterministic so why do
> we need probability at all? The Copenhagen people have a range of answers
> to that question, some say Schrodinger's equation needs to be modified by
> adding a random element, but they can't agree on exactly what it should be,
> others say it is improper to even ask that question, but they can't agree
> among themselves exactly why it is improper.  The Many Worlds people have a
> clear and simple explanation, 

Re: The multiverse is unscientific nonsense??

2023-11-21 Thread Jason Resch
On Tue, Nov 21, 2023, 11:17 AM 'scerir' via Everything List <
everything-list@googlegroups.com> wrote:

> Just an interesting quote.
> “The idea that they [measurement outcomes] be not alternatives but *all*
> really happen simultaneously seems lunatic to him [the quantum theorist],
> just *impossible*. He thinks that if the laws of nature took *this* form
> for, let me say, a quarter of an hour, we should find our surroundings
> rapidly turning into a quagmire, or sort of a featureless jelly or plasma,
> all contours becoming blurred, we ourselves probably becoming jelly fish.
> It is strange that he should believe this. For I understand he grants that
> unobserved nature does behave this way – namely according to the wave
> equation. The aforesaid *alternatives* come into play only when we make an
> observation - which need, of course, not be a scientific observation. Still
> it would seem that, according to the quantum theorist, nature is prevented
> from rapid jellification only by our perceiving or observing it. [...]
> The compulsion to replace the *simultaneous* happenings, as indicated
> directly by the theory, by *alternatives*, of which the theory is supposed
> to indicate the respective *probabilities*, arises from the conviction that
> what we really observe are particles - that actual events always concern
> particles, not waves."
>
> -Erwin Schroedinger, The Interpretation of Quantum Mechanics. Dublin
> Seminars (1949-1955) and Other Unpublished Essays (Ox Bow Press,
> Woodbridge, Connecticut, 1995), pages 19-20.
>

This is how David Deutsch interpreted these lectures:

"Schrödinger also had the basic idea of parallel universes shortly before
Everett, but he didn't publish it. He mentioned it in a lecture in Dublin,
in which he predicted that the audience would think he was crazy. Isn't
that a strange assertion coming from a Nobel Prize winner—that he feared
being considered crazy for claiming that his equation, the one that he won
the Nobel Prize for, might be true." -- David Deutsch


Jason


>
>
>
> On 21/11/2023 16:43 +01 Jason Resch wrote:
>
>
>
>
> On Mon, Nov 20, 2023, 3:32 PM John Clark  wrote:
>
> On Mon, Nov 20, 2023 at 1:22 PM Jesse Mazer  wrote:
>
>
> *> Depends what you mean by "couldn't be true"--my understanding is that
> Einstein's EPR paper was just asserting that there must be additional
> elements of reality beyond the quantum description*
>
>
> Yes, Einstein thought he had proven that quantum mechanics* must *be
> incomplete because nature just couldn't be that ridiculous. But it turned
> out nature *could* be that ridiculous. The moral of the story is that
> being ridiculous is not necessarily the same thing as being wrong.
>
>
> EPR was ultimately right. QM, as then understood, was incomplete, for it
> wasn't acknowledged that there was an infinity of simultaneously existing
> states, all of which persisted after measurement. It was the assumption
> that measurement somehow changed things and made states disappear, and did
> so faster than light, which the EPR authors couldn't swallow. Their
> intuition proved correct: there are no FTL influences.
>
> Jason
>
>
>
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> brw
>
>
>
>
>
>

Re: The multiverse is unscientific nonsense??

2023-11-21 Thread Jason Resch
On Mon, Nov 20, 2023, 3:32 PM John Clark  wrote:

> On Mon, Nov 20, 2023 at 1:22 PM Jesse Mazer  wrote:
>
> *> Depends what you mean by "couldn't be true"--my understanding is that
>> Einstein's EPR paper was just asserting that there must be additional
>> elements of reality beyond the quantum description*
>>
>
> Yes, Einstein thought he had proven that quantum mechanics* must *be
> incomplete because nature just couldn't be that ridiculous. But it turned
> out nature *could* be that ridiculous. The moral of the story is that
> being ridiculous is not necessarily the same thing as being wrong.
>

EPR was ultimately right. QM, as then understood, was incomplete, for it
wasn't acknowledged that there was an infinity of simultaneously existing
states, all of which persisted after measurement. It was the assumption that
measurement somehow changed things and made states disappear, and did so
faster than light, which the EPR authors couldn't swallow. Their intuition
proved correct: there are no FTL influences.

Jason



>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> brw
>
>
>
>



Re: The multiverse is unscientific nonsense??

2023-11-18 Thread Jason Resch
That's kind of him to reply.

Aren't functional quantum computers proof that atoms can be in two places
at once?

Jason

On Sat, Nov 18, 2023, 6:58 AM John Clark  wrote:

> *I read an article called The multiverse is unscientific nonsense
>  
> by Jacob
> Barandes, a lecturer in physics at Harvard University, and I wrote a letter
> to professor **Barandes commenting on it. He responded with a very polite
> letter saying he read it and appreciated what I said but didn't have time
> to comment further. This is the letter I sent: *
> ===
>
>
> *Hello Professor Barandes*
>
> *I read your article The multiverse is unscientific nonsense with interest
> and I have a few comments:*
>
> *Nobody is claiming that the existence of the multiverse is a
> proven fact, but I think the idea needs to be taken seriously because: *
>
> *1) Unlike Bohr's Copenhagen interpretation, the Many Worlds theory is
> clear about what it's saying. *
> *2) It is self consistent and conforms with all known experimental
> results. *
> *3) It has no need to speculate about new physics as objective wave
> collapse theories like GRW do.*
>
> *4) It doesn't have to explain what consciousness or a measurement is
> because they have nothing to do with it, all it needs is Schrodinger's
> equation.  *
>
>
> *I don't see how you can explain counterfactual quantum reasoning and such
> things as the Elitzur–Vaidman bomb tester without making use of many
> worlds. Hugh Everett would say that by having a bomb in a universe we are
> not in explode we can tell if a bomb that is in the branch of the
> multiverse that we are in is a dud or is a live fully functional bomb.  You
> say that many worlds needs to account for probability and that's true, but
> then you say many worlds demands that some worlds have “higher
> probabilities than others" but that is incorrect. According to many worlds
> there is one and only one universe for every quantum state that is not
> forbidden by the laws of physics. So when you flip a coin the universe
> splits many more times than twice because there are a vast number, perhaps
> an infinite number, of places where a coin could land, but you are not
> interested in exactly where the coin lands, you're only interested if it
> lands heads or tails. And we've known for centuries how to obtain a useful
> probability between any two points on the continuous bell curve even though
> the continuous curve is made up of an unaccountably infinite number of
> points, all we need to do is perform a simple integration to figure out
> which part of the bell curve we're most likely on.*
>
> *Yes, that's a lot of worlds, but you shouldn't object that the multiverse
> really couldn't be that big unless you are a stout defender of the idea
> that the universe must be finite, because even if many worlds turns out to
> be untrue the universe could still be infinite and an infinity plus an
> infinity is still the an infinity with the same Aleph number. Even if there
> is only one universe if it's infinite then a finite distance away there
> must be a doppelgänger of you because, although there are a huge number of
> quantum states your body could be in, that number is not infinite, but the
> universe is. *
>
>
> *And Occam's razor is about an economy of assumptions not an economy of
> results.  As for the "Tower of assumptions" many worlds is supposed to be
> based on, the only assumption that many worlds makes is that Schrodinger's
> equation means what it says, and it says nothing about the wave function
> collapsing. I would maintain that many worlds is bare-bones no-nonsense
> quantum mechanics with none of the silly bells and whistles that other
> theories stick on that do nothing but get rid of those  pesky other worlds
> that keep cropping up that they personally dislike for some reason. And
> since Everett's time other worlds do seem to keep popping up and in
> completely unrelated fields, such as string theory and inflationary
> cosmology.*
>
>
> *You also ask what a “rational observer” is and how they ought to behave,
> and place bets on future events, given their self-locating uncertainty. I
> agree with David Hume who said that "ought" cannot be derived from "is",
> but "ought" can be derived from "want". So if an observer is a gambler that
> WANTS to make money but is irrational then he is absolutely guaranteed to
> lose all his money if he plays long enough, while a rational observer who
> knows how to make use of continuous probabilities is guaranteed to make
> money, or at least break even. Physicists WANT their ideas to be clear,
> have predictive power, and to conform with reality as described by
> experiment; therefore I think they OUGHT to embrace the many world's idea.
>  *
>
>
> *And yes there is a version of you and me that flips a coin 1 million
> times and sees heads every single time even though the coin is 100% fair,
> however it is 

Re: Cryptography could help us figure out if a photograph is real or an AI fake

2023-11-07 Thread Jason Resch
On Tue, Nov 7, 2023, 3:04 PM John Clark  wrote:

>
>
> On Tue, Nov 7, 2023 at 1:59 PM Jason Resch  wrote:
>
>
> *> How does Apple (or whoever is signing the image and its metadata) know
>> it was taken by an iphone at a particular location?*
>>
>
> Regardless of how the picture was  produced, the GPS timestamp created by
> the GPS people can verify exactly when it was made, and can verify where
> the picture was claimed to have been made.
>

GPS works entirely passively on the receiver side. There would be no
external validation of the GPS coordinates.


And Apple Corporation can verify that the iPhone that was supposed to have
> taken the picture has been registered to Mr. Joe Blow. So if the picture is
> an embarrassing picture of a politician and if the picture is phony then
> Mr. Blow must be involved.  Mr. Blow is either an innocent bystander who
> got his iPhone hacked and his secret key stolen, or he is actively engaged
> in deception because he wants the politician to lose the next election.
> But if there's no evidence of any hacking and if Mr. Blow has no history of
> criminality and seems pretty apolitical and if it's not impossible that the
> politician could have been at that place at that time, then it would be
> reasonable to conclude that the photograph was real.
>

Yes, and note that again it reduces entirely to whatever trust you have or
don't have in Mr. Blow. Apple adds no additional trust to the veracity of
the images; it only serves to establish the identity of Mr. Blow. But there
are better, existing schemes for this (certificate authorities) which don't
require sending all your images to Apple.


>
> That's certainly an improvement to what we have now;  a photograph with
> no provenance at all, an anonymous person just posts a picture on the
> Internet with no hint about where or when the picture was taken or by who.
>

I agree. But it's important to recognize what problems cryptography does
and doesn't solve. It can solve the problem of provenance (who generated the
image) but it can't solve the more general problem of whether the image is a
deep fake or not.
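
For concreteness, here is a minimal sketch (Python, assuming the third-party
"cryptography" package; the key, image bytes and metadata are all made up
for illustration) of what a C2PA-style signature does and does not
establish: verification proves that a particular key holder signed these
bytes, not that the pixels depict a real scene.

    # Signing and verifying an image-plus-metadata payload with Ed25519.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signing_key = Ed25519PrivateKey.generate()   # stands in for a device/vendor key
    public_key = signing_key.public_key()

    image_bytes = b"\x89PNG...pixels..."         # could just as well be AI-generated
    metadata = json.dumps({"gps": [40.7, -74.0],
                           "time": "2023-11-07T12:00:00Z"}).encode()
    payload = image_bytes + metadata

    signature = signing_key.sign(payload)

    try:
        public_key.verify(signature, payload)    # raises if payload or signature altered
        print("valid: signed by the holder of this key (provenance established)")
    except InvalidSignature:
        print("invalid signature")

    # Nothing in the verification step inspects how image_bytes were produced;
    # whoever holds (or extracts) the signing key can sign arbitrary pixels
    # and metadata.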

Jason



>
>   John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> qoz
>
> q0z
>



Re: Cryptography could help us figure out if a photograph is real or an AI fake

2023-11-07 Thread Jason Resch
On Tue, Nov 7, 2023, 1:28 PM John Clark  wrote:

> On Tue, Nov 7, 2023 at 1:06 PM Jason Resch  wrote:
>
> >> I don't care if Joe Blow signs it or not with his private key that's
>>> on his iPhone because I have no reason to trust Mr. Blow. I want the Apple
>>> Corporation and the people who run the GPS satellites to sign a hash
>>> function of the picture and the GPS data with their private keys, and their
>>> private keys are not on anybody's phone, they're locked up somewhere in a
>>> deep underground vault, or the cyber security equivalent.
>>>
>>
>> *> But how would Apple, in your scenario, authenticate the picture was
>> really taken from the camera of an iPhone?*
>>
>
> The person claims the picture was taken by an iPhone, if he is lying about
> that then that is a very strong reason to suspect the picture is phony.
>

How does Apple (or whoever is signing the image and its metadata) know it
was taken by an iphone at a particular location?

Presumably, if the signing key is kept in some secure location, there will
have to be a remotely invocable API which accepts, from the sender, any
possible image data and any valid GPS coordinates, etc.

By what means can the signer verify that the data provided was captured
from a camera and not generated or manipulated? I see no way to solve this
problem.

Jason


Why else would he lie about it?  And even if I couldn't be sure how the
> picture was made I'd still know when and where it was made. So you couldn't
> claim to have a compromising picture of me when I was a teenager, or claim
> to have a picture of me taken in Bangkok the day before yesterday when I
> can prove that the day before yesterday I was in Las Vegas.
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> ilv
>
>
>



Re: Cryptography could help us figure out if a photograph is real or an AI fake

2023-11-07 Thread Jason Resch
On Tue, Nov 7, 2023, 12:31 PM John Clark  wrote:

> On Tue, Nov 7, 2023 at 11:54 AM Jason Resch  wrote:
>
> >> I agree, but I think most people, myself included, would trust that
>>> the entire GPS satellite system is unlikely to be part of some grand
>>> conspiracy of deception, nor is it likely that the Apple Corporation is
>>> stupid enough to do so either because if such deception was ever made
>>> public, and secrets that huge can never be kept for long, it would be the
>>> ruin of the trillion dollar company.  At any rate I'd certainly trust
>>> them more than I'd trust any politician. Or Fox News.
>>>
>>
>>
>> *> I don't know how feasible it would be for  any device maker to prevent
>> someone from extracting a private key from a hardware device which is
>> already in the hands of the person who seeks to extract it.*
>>
>
> I don't care if Joe Blow signs it or not with his private key that's on
> his iPhone because I have no reason to trust Mr. Blow. I want the Apple
> Corporation and the people who run the GPS satellites to sign a hash
> function of the picture and the GPS data with their private keys, and their
> private keys are not on anybody's phone, they're locked up somewhere in a
> deep underground vault, or the cyber security equivalent.
>

But how would Apple, in your scenario, authenticate the picture was really
taken from the camera of an iPhone?

Jason



Well OK, it's theoretically possible that anybody's secret key can get
> hacked, so even that isn't 100% secure, but then nothing is. However I
> think such a scheme could provide pretty good evidence that a picture was
> genuine.
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
> los
>
>



Re: Cryptography could help us figure out if a photograph is real or an AI fake

2023-11-07 Thread Jason Resch
On Tue, Nov 7, 2023 at 10:44 AM John Clark  wrote:

> On Tue, Nov 7, 2023 at 11:11 AM Jason Resch  wrote:
>
> *> I think such protocols are only useful for verifying whether the image
>> came from an already known and trusted source. I don't see that it could
>> verify whether some content is genuine or not if you didn't already
>> know/trust the entity it is purported to come from (and trust that they
>> would not provide you with false content).*
>>
>
> I agree, but I think most people, myself included, would trust that the
> entire GPS satellite system is unlikely to be part of some grand conspiracy
> of deception, nor is it likely that the Apple Corporation is stupid enough
> to do so either because if such deception was ever made public, and secrets
> that huge can never be kept for long, it would be the ruin of the trillion
> dollar company.  At any rate I'd certainly trust them more than I'd trust
> any politician. Or Fox News.
>


I don't know how feasible it would be for any device maker to prevent
someone from extracting a private key from a hardware device which is
already in the hands of the person who seeks to extract it.

There are methods to make it difficult, but I don't think it can be made
impossible. And once one of the keys is removed from the device which
contained it, any extra information, such as GPS coordinates, etc. could be
falsely generated and then signed by that key.

Jason


>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 3ep
>
>
>
>
>
>
>>
>> Jason
>>
>> On Tue, Nov 7, 2023 at 8:14 AM John Clark  wrote:
>>
>>> Now that AI art is so good it's becoming impossible to determine if a
>>> photograph is real or fake, but a new open-source internet protocol
>>> called "C2PA" may offer a solution. If camera and smartphone makers
>>> agree to do so their products would all have a feature (which I hope you
>>> would be allowed to turn off if you wish) that would make a cryptographic
>>> hash of the picture and, thanks to GPS satellites, also have information on
>>> the time and place the picture was taken, and on the type of camera and
>>> exposure settings. Any alteration to the picture could easily be
>>> determined. And if social media companies cooperated you could even figure
>>> out when it was first posted on them. You could find out all of this stuff
>>> with just one click, it would work something like this:
>>>
>>> What happens if real is actually fake? <https://truepic.com/revel/>
>>>
>>> Of course you could refuse to use C2PA, but if you did that would make
>>> somebody deeply suspicious that your photograph is real.
>>>
>>> Cryptography may offer a solution to the massive AI-labeling problem
>>> <https://www.technologyreview.com/2023/07/28/1076843/cryptography-ai-labeling-problem-c2pa-provenance/>
>>>
>>>
>>> 5tt
>>>
>>>



Re: Cryptography could help us figure out if a photograph is real or an AI fake

2023-11-07 Thread Jason Resch
I think such protocols are only useful for verifying whether the image came
from an already known and trusted source. I don't see that it could verify
whether some content is genuine or not if you didn't already know/trust the
entity it is purported to come from (and trust that they would not provide
you with false content).

Jason

On Tue, Nov 7, 2023 at 8:14 AM John Clark  wrote:

> Now that AI art is so good it's becoming impossible to determine if a
> photograph is real or fake, but a new open-source internet protocol
> called "C2PA" may offer a solution. If camera and smartphone makers agree
> to do so their products would all have a feature (which I hope you would be
> allowed to turn off if you wish) that would make a cryptographic hash of
> the picture and, thanks to GPS satellites, also have information on the
> time and place the picture was taken, and on the type of camera and
> exposure settings. Any alteration to the picture could easily be
> determined. And if social media companies cooperated you could even figure
> out when it was first posted on them. You could find out all of this stuff
> with just one click, it would work something like this:
>
> What happens if real is actually fake? <https://truepic.com/revel/>
>
> Of course you could refuse to use C2PA, but if you did that would make
> somebody deeply suspicious that your photograph is real.
>
> Cryptography may offer a solution to the massive AI-labeling problem
> <https://www.technologyreview.com/2023/07/28/1076843/cryptography-ai-labeling-problem-c2pa-provenance/>
>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 5tt
>



Re: Are Many Worlds & Pilot Wave THE SAME Theory?

2023-09-29 Thread Jason Resch
On Fri, Sep 29, 2023, 6:19 AM John Clark  wrote:

> My answer would be YES, except that Many worlds just needs Schrodinger's
> Equation, but Pilot Wave theory also needs a very complex guiding equation
> that does nothing but make the theory incompatible with special relativity.
> If Occam's razor alone wasn't enough to rule out Pilot Waves that should do
> it, this video goes in the more detail explaining why:
>
>   Are Many Worlds & Pilot Wave THE SAME Theory?
> <https://www.youtube.com/watch?v=BUHW1zlstVk>
>

Nice video, thanks for sharing.

I agree. Both accept the continued reality of the wave function.

Pilot-wave theory adds purely philosophical assumptions, namely, that "all
but one branch is not-really-real" and "everyone in those other branches is
a philosophical zombie."

This zombiehood claim is made despite the fact that the people in these
"not-really-real" branches still behave like the conscious people in the
real branch; they have full lives, they talk to one another, they write
books about consciousness, they develop a pilot-wave theory that people in
other branches are zombies, etc.

Jason



Re: Consciousness theory slammed as "pseudoscience"

2023-09-21 Thread Jason Resch
By its own definitions IIT is not falsifiable, for it proclaims that a
computer program that gave identical behavior in all situations to another
conscious system would not be conscious. But since it has identical
behavior, there is no objective way to prove this assertion of IIT (that one
system is conscious while the other is not).

This also implies the possibility of philosophical zombies (which IIT
proponents freely admit), which in turn implies consciousness is
epiphenomenal, with all the problems that philosophical zombies and
epiphenomenalism entail.

So is it pseudoscience? I don't know if I would call it that, but I think
it is almost certainly wrong as it is currently framed.

I do find some strengths in some of the ideas that have come out of it, in
particular how a system must be capable of affecting itself for it to be
aware of its consciousness. I also think it is right to put the focus on
information.

I think where it errs is in confusing a logical-informational state with an
instantaneous physical state. This leads to the mistaken belief that a
parallel computation is more conscious than a serial computation, even when
they compute the exact same function (IIT proponents don't consider
space-time symmetry).
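
To illustrate the sense of "exact same function" meant here, a minimal
sketch (plain Python, standard library only; the function f and its inputs
are arbitrary examples) of a serial and a parallel computation with
identical input/output behavior:

    # A serial and a parallel evaluation of the same function give identical
    # outputs, so no behavioral test can tell them apart.
    from concurrent.futures import ThreadPoolExecutor

    def f(x):
        # stand-in for whatever input/output behavior the system realizes
        return (x * x + 1) % 97

    inputs = list(range(20))

    serial_outputs = [f(x) for x in inputs]              # one evaluation at a time

    with ThreadPoolExecutor(max_workers=4) as pool:      # many evaluations at once
        parallel_outputs = list(pool.map(f, inputs))

    print(serial_outputs == parallel_outputs)            # True

Any theory that assigns the two different amounts of consciousness is making
a claim that no input/output experiment could ever check.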

I think that if IIT corrected these problems, it would be no more than
functionalism. I think of IIT as a kind of "functionalism in denial", as it
makes many similar claims to functionalists, placing emphasis on the causal
organization of a system, but at the last moment, it insists that a
computer implementing that same causal organization would not be conscious.

Jason

On Thu, Sep 21, 2023, 2:08 PM John Clark  wrote:

> Consciousness theory slammed as "pseudoscience"
> <https://www.nature.com/articles/d41586-023-02971-1?utm_source=Nature+Briefing_campaign=84936ca310-briefing-dy-20230921_medium=email_term=0_c9dfd39373-84936ca310-44221073>
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> jqq
>



Re: The human race almost didn't happen

2023-09-14 Thread Jason Resch
On Thu, Sep 14, 2023, 2:56 PM John Clark  wrote:

> In the September 1 issue of the journal Science, researchers report they
> have found, using genetic analysis, that the ancestors of the human
> race, as well as those of the Neanderthals and the Denisovans, suffered
> through a severe population decline that started 930,000 BP (Before
> Present) and lasted for 117,000 years until 813,000 BP.  This time period
> corresponds to a gap in the fossil record when there was almost no evidence
> of our ancestors  although there are many more fossils of them both before
> and after that gap. At its lowest point there were only about 1280 breeding
> individuals, every human, Neanderthal, and Denisovan who ever lived is a
> descendent of one or all of those 1280 individuals. It is not clear what
> caused the decline but whatever it was it doesn't seem to have been a
> global environmental event because other species unrelated to us don't seem
> to have suffered through a similar apocalypse.
>
> Genomic inference of a severe human bottleneck during the Early to Middle
> Pleistocene transition
> <https://www.science.org/doi/10.1126/science.abq7487>
>
>  If they're right then the human race almost didn't happen, life has
> existed on this planet for over 3 1/2 billion years but only in the last
> few thousand has a technology producing species shown up, and if things
> had been just slightly different it never would have. Perhaps this
> explains the Fermi paradox. Life is easy but intelligence is hard.
>

Most of the time seems to have been spent between single-celled and
multicellular life.

In comparison to the single-cell to multi-cell gap, the rise of many species
whose chief survival advantage is their high intelligence seems to have been
relatively short. We also note it occurs in many separate evolutionary lines
(cephalopods, cetaceans, corvids, primates).

It's true that if multicellular life is hard then intelligence is hard, but
it seems that once there's multicellular life, intelligence is easy.

Jason



> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
> h66
>



Re: Is Many Worlds Falsifiable?

2023-09-04 Thread Jason Resch
As Rob Garrett shows here, there's really nothing mysterious about
entanglement.

Entanglement is merely measurement. The mystery, if there is one, is why
measurements are consistent across time:

https://youtu.be/dEaecUuEqfc?si=psmNck41LbAW4SjV

Jason

On Mon, Sep 4, 2023, 7:48 AM 'scerir' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> Il 04/09/2023 12:29 +01 Bruce Kellett  ha scritto:
>
> No. The example was not particularly well thought out. My point is that
> geometrical motions can exceed light velocity, and distant galaxies recede
> at greater than light speed. Light speed limits only physical transmission,
> unless by tachyons. In fine, *understanding non-locality probably
> involves refining our understanding of space and time.*
>
> https://www.edge.org/response-detail/26790
> Anton Zeilinger. “It appears that an understanding is possible via the
> notion of information. Information seen as the possibility of obtaining
> knowledge. Then quantum entanglement describes a situation where
> information exists about possible correlations between possible future
> results of possible future measurements without any information existing
> for the individual measurements. The latter explains quantum randomness,
> the first quantum entanglement. And both have significant consequences for
> our customary notions of causality. It remains to be seen what the
> consequences are for our notions of space and time, or space-time for that
> matter. *Space-time itself cannot be above or beyond such considerations.
> I suggest we need a new deep analysis of space-time, a conceptual analysis
> maybe analogous to the one done by the Viennese physicist-philosopher Ernst
> Mach who kicked Newton’s absolute space and absolute time form their
> throne.* The hope is that in the end we will have new physics analogous
> to Einstein’s new physics in the two theories of relativity.”
>
>



Re: Is Many Worlds Falsifiable?

2023-09-01 Thread Jason Resch
I agree with John. What makes superdeterminism weird isn't the determinism
part. It's that the system is also rigged against us to produce the
violations of the Bell inequality.

I am not sure if you saw my recent example on extropy-chat with flipping
coins and always seeing heads 66% of the time no matter what we do, but
superdeterminism is basically saying that's just how it is: the universe
has preordained that humans flip coins such that they come up heads 66%
of the time.

Jason

On Fri, Sep 1, 2023, 2:47 PM Stathis Papaioannou  wrote:

>
>
> On Sat, 2 Sep 2023 at 04:20, John Clark  wrote:
>
>> On Fri, Sep 1, 2023 at 1:22 PM Stathis Papaioannou 
>> wrote:
>>
>>  >> according to superdeterminism the particular initial condition the
>>>> universe was in 13.8 billion years ago has determined if you think
>>>> superdeterminism is a reasonable theory or if you think it's complete
>>>> bullshit. As for me I was determined to believe it's bullshit.
>>>>
>>>
>>> *>I still struggle to see the difference between determinism and
>>> superdeterminism. They both say that there is no true randomness*
>>>
>>
>> Yes.
>>
>>
>>> * > which includes randomness in how the experimenters set up their
>>> experiment.*
>>>
>>
>> No. Knowing the laws of physics is not enough, to make predictions you
>> also need to know the initial conditions. Superdeterminism says more than a
>> given state of the universe is the mathematical product of the previous
>> state, superdeterminism assumes, for no particular reason, that out of the
>> infinite number of states the universe could've started out at, 13.8
>> billion years ago it was in the one and only one particular state that
>> would make experimenters 13.8 billion years later "choose" to set their
>> instruments in such a way that they always *INCORRECTLY* conclude that
>> things can *NOT* be both realistic and local. It would be absolutely
>> impossible to make a larger assumption than this, and that is why it is the
>> largest violation of Occam's Razor conceivable. There are an infinite
>> number of initial conditions the universe could've started out in and in
>> which things would be deterministic today, but one and only one initial
>> condition would produce the universe in which superdeterminism is true. And
>> if superdeterminism were true then there would be no point in performing
>> scientific experiments since there would be no reason for them to lead
>> to the truth, and yet airplanes fly and bridges don't collapse so they do
>> seem to lead to the truth, there is no way to explain that unless the
>> initial conditions were even further restrained such that we set our
>> instruments correctly on all experiments *EXCEPT* when the experimenters
>> try to test for realism or locality, then we "choose" to set them
>> incorrectly. That's why I don't understand how anyone can take this
>> seriously. That is why I think superdeterminism is bullshit.
>>
>
> Bell seemed to think that super determinism meant that the mind of the
> experimenters was determined along with everything else, which he described
> as a lack of “free will” (it seems he meant by this lack of randomness in
> their minds), which he thought was an assumption in the experiment:
>
> “There is a way to escape the inference of superluminal
> <https://en.wikipedia.org/wiki/superluminal> speeds and spooky action at
> a distance. But it involves absolute determinism
> <https://en.wikipedia.org/wiki/determinism> in the universe, the complete
> absence of free will <https://en.wikipedia.org/wiki/free_will>. Suppose
> the world is super-deterministic, with not just inanimate nature running on
> behind-the-scenes clockwork, but with our behavior, including our belief
> that we are free to choose to do one experiment rather than another,
> absolutely predetermined, including the ‘decision’ by the experimenter to
> carry out one set of measurements rather than another, the difficulty
> disappears. There is no need for a faster-than-light signal to tell particle
>  *A* what measurement has been carried out on particle *B*, because the
> universe, including particle *A*, already ‘knows’ what that measurement,
> and its outcome, will be.”
>
>

Re: Is Many Worlds Falsifiable?

2023-09-01 Thread Jason Resch
On Fri, Sep 1, 2023 at 8:52 AM John Clark  wrote:

>
>
> On Fri, Sep 1, 2023 at 9:38 AM Jason Resch  wrote:
>
>
>
>> >> 128 bits would probably be enough information to program a Turing
>>> Machine to calculate the infinite series 4(1-1/3 +1/5 -1/7 +...) and
>>> that would produce an infinite string of digits that never repeats and
>>> looks completely random, 31415926535
>>> 897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
>>> ., because that particular infinite series converges to the
>>> transcendental number *π*.
>>>
>>
>> *> It's not that it's generating apparent random results though,
>> superdeterminism requires results that are correlated to the way we choose
>> to make the measurements.*
>>
>
> But according to superdeterminism your "choices" of how to make the
> measurements were also completely determined, if you had "chosen" to make
> the measurements in a certain way you could have shown that
> superdeterminism produce results that were self-contradictory, but you have
> never "chosen" to do so and you never will.  By the way, I feel a little
> queasy defending superdeterminism because I think the idea is completely
> idiotic.
>

But did (or could) superdeterminism choose the digits of Pi?

Jason



>
>   John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> ifq
>
>
>
>>>> On Fri, Sep 1, 2023 at 7:26 AM John Clark  wrote:
>>>>
>>>>> On Thu, Aug 31, 2023 at 6:29 PM Bruce Kellett 
>>>>> wrote:
>>>>>
>>>>> *> OK. So spell out your non-realist, but local, many worlds account
>>>>>> of the violations of the Bell inequalities. It seems that you want it 
>>>>>> both
>>>>>> ways -- Bell's theorem says that MWI must be non-local, but you claim 
>>>>>> that
>>>>>> it is local? "Realism" has nothing to do with it.*
>>>>>
>>>>>
>>>>>
>>>>> "Realism" has* EVERYTHING* to do with it, and I spelled out exactly
>>>>> why in a post on May 4 2022 when somebody said they wanted to hear all the
>>>>> gory details and this is what I said:
>>>>> ==
>>>>>
>>>>> " If you want all the details this is going to be a long post, you
>>>>> asked for it. First I'm gonna have to show that any theory (except for
>>>>> superdeterminism which is idiotic) that is deterministic, local and
>>>>> realistic cannot possibly explain the violation of Bell's Inequality that
>>>>> we see in our experiments, and then show why *a theory like Many
>>>>> Worlds which is deterministic and local but NOT realistic can.*
>>>>>
>>>>> The hidden variable concept was Einstein's idea, he thought there was
>>>>> a local reason all events happened, even quantum mechanical events,
>>>>> but we just can't see what they are. It was a reasonable guess at the time
>>>>> but today experiments have shown that Einstein was wrong, to do that I'm
>>>>> gonna illustrate some of the details of Bell's inequality with an example.
>>>>>
>>>>> When a photon of undetermined polarization hits a polarizing filter
>>>>> there is a 50% chance it will make it through. For many years physicists
>>>>> like Einstein who disliked the idea that God played dice with the universe
>>>>> figured there must be a hidden variable inside the photon that told it 
>>>>> what
>>>>> to do. By "hidden variable" they meant something different about that
>>>>> particular photon that we just don't know about. They meant something
>>>>> equivalent to a look-up table inside the photon that for one reason or
>>>>> another we are unable to access but the photon can when it wants to know 
>>>>> if
>>>>> it should go through a filter or be stopped by one. We now understand that
>>>>> is impossible. In 1964 (but not published until 1967) John Bell showed 
>>>>> that
>>>>> correlations that work by hidden variables must be less than or equal to a
>>>>> certain value, this is called Bell's inequality. In experiment it was 
>>>>> found
>>>>> that some correlations are actually greater than that value. Quantum
>>>>&

Re: Is Many Worlds Falsifiable?

2023-09-01 Thread Jason Resch
On Fri, Sep 1, 2023, 9:16 AM John Clark  wrote:

> On Fri, Sep 1, 2023 at 8:41 AM Jason Resch  wrote:
>
> *> I think it may be possible actually, to use a mathematical argument to
>> disprove superdeterminism*
>>
>
> I'm not sure a mathematical proof that superdeterminism is not true is
> even necessary because a greater violation of Occam's Razor is quite
> literally impossible to imagine.
>
> *> it's not feasible for 128 measurements, to mathematically, contain
>> enough information and variation to also determine and the subsequent 2^128
>> outcomes.*
>
>
> 128 bits would probably be enough information to program a Turing Machine
> to calculate the infinite series 4(1-1/3 +1/5 -1/7 +...) and that would
> produce an infinite string of digits that never repeats and looks
> completely random, 31415926535
> 897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
> ., because that particular infinite series converges to the
> transcendental number *π*.
>

It's not just that it's generating apparently random results, though;
superdeterminism requires results that are correlated with the way we
choose to make the measurements.

So how can these correlations be predetermined to follow the outputs of
this algorithm, when the deterministic algorithm is deciding what
measurements to make? And the deterministic algorithm in question was
chosen (deterministically) from prior measurements.

It has the feeling to me of a compression algorithm that could make any
input smaller, yet still perfectly decompress and return the original
input. This is impossible because there are more large messages than
small ones, so the original input would be underdetermined.
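
As a minimal sketch of that counting argument (a hypothetical Python
illustration, assuming binary strings; the length n is made up for the
example):

# Pigeonhole illustration: for strings of length n there are 2**n possible
# inputs but only 2**n - 1 strings that are strictly shorter, so no lossless
# compressor can shrink every input without collisions.
n = 8
inputs = 2 ** n                                   # 256 distinct 8-bit strings
shorter_outputs = sum(2 ** k for k in range(n))   # 255 strings of length 0..7
print(inputs, shorter_outputs)                    # 256 vs 255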

With superdeterminism, every successive state of the universe maps
one-to-one onto the next. But this seems like it must break down whenever
we try to link the superdeterministic measurement results against other
functions that have many, or infinitely many, outputs from one input or
initial state.

Jason



>
>  John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
> isc
>
>
>> On Fri, Sep 1, 2023 at 7:26 AM John Clark  wrote:
>>
>>> On Thu, Aug 31, 2023 at 6:29 PM Bruce Kellett 
>>> wrote:
>>>
>>> *> OK. So spell out your non-realist, but local, many worlds account of
>>>> the violations of the Bell inequalities. It seems that you want it both
>>>> ways -- Bell's theorem says that MWI must be non-local, but you claim that
>>>> it is local? "Realism" has nothing to do with it.*
>>>
>>>
>>>
>>> "Realism" has* EVERYTHING* to do with it, and I spelled out exactly why
>>> in a post on May 4 2022 when somebody said they wanted to hear all the gory
>>> details and this is what I said:
>>> ==
>>>
>>> " If you want all the details this is going to be a long post, you asked
>>> for it. First I'm gonna have to show that any theory (except for
>>> superdeterminism which is idiotic) that is deterministic, local and
>>> realistic cannot possibly explain the violation of Bell's Inequality that
>>> we see in our experiments, and then show why *a theory like Many Worlds
>>> which is deterministic and local but NOT realistic can.*
>>>
>>> The hidden variable concept was Einstein's idea, he thought there was a
>>> local reason all events happened, even quantum mechanical events, but
>>> we just can't see what they are. It was a reasonable guess at the time but
>>> today experiments have shown that Einstein was wrong, to do that I'm gonna
>>> illustrate some of the details of Bell's inequality with an example.
>>>
>>> When a photon of undetermined polarization hits a polarizing filter
>>> there is a 50% chance it will make it through. For many years physicists
>>> like Einstein who disliked the idea that God played dice with the universe
>>> figured there must be a hidden variable inside the photon that told it what
>>> to do. By "hidden variable" they meant something different about that
>>> particular photon that we just don't know about. They meant something
>>> equivalent to a look-up table inside the photon that for one reason or
>>> another we are unable to access but the photon can when it wants to know if
>>> it should go through a filter or be stopped by one. We now understand that
>>> is impossible. In 1964 (but not published until 1967) John Bell showed that
>>> correlations that work by hidden variables must be less than or equal to a
>>> certain value, this is called

Re: Is Many Worlds Falsifiable?

2023-09-01 Thread Jason Resch
I think it may be possible actually, to use a mathematical argument to
disprove superdeterminism, in a manner similar to how Bell disproved
theories that are local, real, and counterfactually definite.

The method would show that there is a necessary underdetermination that
can arise when a small number of measurement results are gathered and
then used to feed back into the polarizer settings for a much larger
number of subsequent measurements. If the universe is completely
deterministic, as superdeterminism proposes, there should be a point at
which the correlations must fail, as there are not enough ways a single
fact (or a small number of facts) can determine a much larger,
potentially infinite, number of following facts.

As an example, take the first 128 measurements from a Bell-type
experiment, and use the measured values to form the 128 bits of an
encryption key. Use that key to initialize a cipher (which can be viewed
as a seed to a pseudorandom number generator) with a period of 2^128.
That is, this cipher (or pseudorandom number generator) will output a
deterministic sequence of bits on the order of 2^128 bits long. Use these
output bits to determine how to set the angle of the polarizing filter in
an iterated Bell/EPR test.

According to superdeterminism, every measurement was predetermined to
yield the result it did. However, in this case it is not mathematically
feasible for 128 measurements to contain enough information and variation
to also determine the subsequent 2^128 outcomes. The 2^128 outcomes are
mathematically underdetermined by the 128 prior measurements, and so the
system cannot be deterministic in the way superdeterminism proposes.
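
A minimal sketch of the setup described above (hypothetical Python; the
"cipher" here is just SHA-256 run in counter mode as a stand-in for a
real 128-bit-period pseudorandom generator, and the measured bits are
made up, since the actual Bell-test outcomes obviously cannot be
produced in code):

import hashlib
import itertools

# Stand-in for the first 128 measured bits from a Bell-type experiment.
measured_bits = [1, 0, 1, 1, 0, 1, 0, 0] * 16

# Pack the 128 bits into a 16-byte key.
key = bytes(int("".join(map(str, measured_bits[i:i + 8])), 2)
            for i in range(0, 128, 8))

def polarizer_angles(key):
    """Deterministic stream of polarizer angles derived from the key."""
    for counter in itertools.count():
        block = hashlib.sha256(key + counter.to_bytes(16, "big")).digest()
        for byte in block:
            yield (byte / 255.0) * 180.0   # an angle between 0 and 180 degrees

angles = polarizer_angles(key)
settings = [next(angles) for _ in range(10)]   # settings for the next ten trials
print(settings)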

Jason


On Fri, Sep 1, 2023 at 7:26 AM John Clark  wrote:

> On Thu, Aug 31, 2023 at 6:29 PM Bruce Kellett 
> wrote:
>
> *> OK. So spell out your non-realist, but local, many worlds account of
>> the violations of the Bell inequalities. It seems that you want it both
>> ways -- Bell's theorem says that MWI must be non-local, but you claim that
>> it is local? "Realism" has nothing to do with it.*
>
>
>
> "Realism" has* EVERYTHING* to do with it, and I spelled out exactly why
> in a post on May 4 2022 when somebody said they wanted to hear all the gory
> details and this is what I said:
> ==
>
> " If you want all the details this is going to be a long post, you asked
> for it. First I'm gonna have to show that any theory (except for
> superdeterminism which is idiotic) that is deterministic, local and
> realistic cannot possibly explain the violation of Bell's Inequality that
> we see in our experiments, and then show why *a theory like Many Worlds
> which is deterministic and local but NOT realistic can.*
>
> The hidden variable concept was Einstein's idea, he thought there was a
> local reason all events happened, even quantum mechanical events, but we
> just can't see what they are. It was a reasonable guess at the time but
> today experiments have shown that Einstein was wrong, to do that I'm gonna
> illustrate some of the details of Bell's inequality with an example.
>
> When a photon of undetermined polarization hits a polarizing filter there
> is a 50% chance it will make it through. For many years physicists like
> Einstein who disliked the idea that God played dice with the universe
> figured there must be a hidden variable inside the photon that told it what
> to do. By "hidden variable" they meant something different about that
> particular photon that we just don't know about. They meant something
> equivalent to a look-up table inside the photon that for one reason or
> another we are unable to access but the photon can when it wants to know if
> it should go through a filter or be stopped by one. We now understand that
> is impossible. In 1964 (but not published until 1967) John Bell showed that
> correlations that work by hidden variables must be less than or equal to a
> certain value, this is called Bell's inequality. In experiment it was found
> that some correlations are actually greater than that value. Quantum
> Mechanics can explain this, classical physics or even classical logic can
> not.
>
> Even if Quantum Mechanics is someday proven to be untrue Bell's argument
> is still valid, in fact his original paper had no Quantum Mechanics in it
> and can be derived with high school algebra; his point was that any
> successful theory about how the world works must explain why his
> inequality is violated, and today we know for a fact from experiments
> that it is indeed violated. Nature just refuses to be sensible and doesn't
> work the way you'd think it should.
>
> I have a black box, it has a red light and a blue light on it, it also has
> a rotary switch with 6 connectio

Re: A new theory of consciousness: conditionalism

2023-08-26 Thread Jason Resch
Thank you, John, for your thoughts. A few notes below:

On Sat, Aug 26, 2023 at 7:17 AM John Clark  wrote:

> On Fri, Aug 25, 2023 at 1:47 PM Jason Resch  wrote:
>
> *> At a high level, states of consciousness are states of knowledge,*
>>
>
> That is certainly true, but what about the reverse, does a high state of
> knowledge imply consciousness?  I'll never be able to prove it but I
> believe it does but of course for this idea to be practical there must be
> some way of demonstrating that the thing in question does indeed have a
> high state of knowledge, and the test for that is the Turing Test, and
> the fact that my fellow human beings have passed the Turing test is the
> only reason I believe that I am NOT the only conscious being in the
> universe.
>

Yes, I believe there's an identity between states of knowledge and states
of consciousness. That is almost implicit in the definition of
consciousness:
con- means "with"
-scious- means "knowledge"
-ness means "the state of being"
con-scious-ness -> the state of being with knowledge.

Then, the question becomes: what is a state of knowledge? How do we
implement or instantiate a knowledge state, physically or otherwise?

My intuition is that it requires a process of differentiation, such that
some truth becomes entangled with the system's existence.


>
> *> A conditional is a means by which a system can enter/reach a state of
>> knowledge (i.e. a state of consciousness) if and only if some fact is true.*
>>
>
> Then "conditional" is not a useful philosophical term because you could be
> conscious of and know a lot about Greek mythology. but none of it is true
> except for the fact that Greek mythology is about Greek mythology.
>

Yes. Here, the truth doesn't have to be some objective truth; it can be
the truth of what causes one's mind to reach a particular state. E.g.,
here it would be the truth of what particular sensory data came into the
scholar's eyes as he read a book of Greek mythology.



> >  *Consciousness is revealed as an immaterial, ephemeral relation, not
>> any particular physical thing we can point at or hold.*
>>
>
> I mostly agree with that but that doesn't imply there's anything mystical
> going on, information is also immaterial and you can't point to *ANY
> PARTICULAR* physical thing
>

I agree.

>  (although you can always point to *SOME *physical thing) and I believe
> it's a brute fact that consciousness is the way information feels when it
> is being processed intelligently.
>

I like this analogy, but I think it is incomplete. Can information (by
itself) feel? Can information (by itself) have meaning?

I see value in making a distinction between information and "the system to
be informed." I think the pair are necessary for there to be meaning, or
consciousness.


> However there is nothing ephemeral about information, as far as we can tell
> the laws of physics are unitary, that is information can't be destroyed
> and the probability of all possible outcomes must add up to 100%. For a
> while Stephen Hawking thought that Black Holes destroyed information but he
> later changed his mind, Kip Thorne still thinks it may do so but he is in
> the minority.
>

I agree information can't be destroyed. But note that what I called
ephemeral was the conditional relation, which (at least usually) seems to
arise and last for only a short time.



>
> *> All we need to do is link some action to a state of knowledge.*
>>
>
> At the most fundamental level that pretty much defines what a computer
> programmer does to make a living.
>

Yes.



> * > It shows the close relationship between consciousness and information,
>> where information is defined as "a difference that makes a difference",*
>>
>
> And the smallest difference that still makes a difference is the
> difference between one and zero, or on and off.
>

The bit is the simplest unit of information, but interestingly, there can
also be fractional bits. For example, if there's a 75% chance of some
event, such as two coin tosses not both being heads, and I tell you that
two coin tosses were not both heads, then I have only communicated
-log2(0.75) ~= 0.415 bits of information to you.
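
As a quick check of that arithmetic (a small Python sketch):

import math

p = 0.75                 # probability that two coin tosses are not both heads
bits = -math.log2(p)     # information gained on learning the event occurred
print(round(bits, 3))    # 0.415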



> > *It shows a close relationship between consciousness and
>> computationalism,*
>>
>
> I strongly agree with that,  it makes no difference if the thing doing
> that computation is carbon-based and wet and squishy, or silicon-based and
> dry and hard.
>

Absolutely  


>  >  It is also supportive of functionalism and it's multiple
>> realizability, as there are many possibile physical arrangements that lead
>> to conditionals.
>
>
> YES!
>
> *> It's clear there neural network

A new theory of consciousness: conditionalism

2023-08-25 Thread Jason Resch
I would like to propose a theory of consciousness which I think might have
some merit, but more importantly I would like to see what criticism others
might have for it.

I have chosen the name "conditionalism" for this theory, as it is based
loosely on the notion of conditional statements as they appear in
ordinary language, mathematics, and programming languages.

At a high level, states of consciousness are states of knowledge, and
knowledge is embodied by the existence of some relation to some truth.

A conditional is a means by which a system can enter/reach a state of
knowledge (i.e. a state of consciousness) if and only if some fact is true.
A simple example using a programming language:

if (x >= 5) {
   // knowledge state of x being greater than or equal to 5
}

I think this way of considering consciousness, as that which exists
between those two braces: { }, can explain a lot.

1. Consciousness is revealed as an immaterial, ephemeral relation, not any
particular physical thing we can point at or hold.

2. It provides for a straightforward way to bind complex states of
consciousness, through conjunction, for example:
If (a and b) {
// knowledge of the simultaneous truth of both a and b
}
This allows states of consciousness to be arbitrarily complex and varied.

3. It explains the causal efficacy of states of consciousness. All we need
to do is link some action to a state of knowledge. Consciousness is then
seen as antecedent to, and a prerequisite for, any intelligent behavior.
For example:
If (light == color.red) {
slowDown();
}

4. It shows the close relationship between consciousness and information,
where information is defined as "a difference that makes a difference", as
conditionals are all about what differences make which differences.

5. It shows a close relationship between consciousness and
computationalism, since computations are all about counterfactual and
conditional relations.

6. It is also supportive of functionalism and its multiple realizability,
as there are many possible physical arrangements that lead to
conditionals.

7. It's clear that neural network firing is all about conditionals and
about combining them: whether or not a neuron fires depends on which
other neurons have fired, binding many conditional relations into one
larger one (see the sketch after this list).

8. It seems no intelligent (reactive, deliberative, contemplative,
reflective, etc.) process can be made that does not contain at least some
conditionals, for without them there can be no responsiveness. This
explains the biological necessity to evolve conditionals and apply them in
the guidance of behavior. In other words, consciousness (states of
knowledge) would be strictly necessary for intelligence to evolve.
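
To make point 7 concrete, here is a minimal sketch (hypothetical Python,
not part of the theory itself) of how a single neuron's firing is a
conditional over which upstream neurons have fired:

def neuron_fires(upstream_fired, weights, threshold):
    """A toy threshold neuron: it enters the 'fired' state if and only if
    the weighted sum of upstream firings crosses the threshold."""
    total = sum(w for fired, w in zip(upstream_fired, weights) if fired)
    if total >= threshold:
        # knowledge state: enough of the right upstream neurons have fired
        return True
    return False

# Example with three upstream neurons and a threshold of 1.0:
print(neuron_fires([True, False, True], [0.6, 0.9, 0.5], 1.0))   # True  (0.6 + 0.5 >= 1.0)
print(neuron_fires([False, True, False], [0.6, 0.9, 0.5], 1.0))  # False (0.9 < 1.0)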


Jason



Re: Worms that have been dead for over 45,000 years have been brought back to life

2023-07-31 Thread Jason Resch
Hamsters and rats can be frozen and reanimated by microwaves:

https://youtu.be/2tdiKTSdE9Y

It was theorized that it would work with larger mammals but the technical
problem is heating the entire animal all at once.

Contrary to the common belief that microwaves heat from the inside out,
they heat from the outside in.

Jason

On Sun, Jul 30, 2023, 7:30 AM John Clark  wrote:

> On Sun, Jul 30, 2023 at 12:15 AM 'spudboy...@aol.com' via Everything List
>  wrote:
>
> *> means of survival, so this looks like evidence to me that you may be
>> correct? *
>>
>
> It's favorable evidence but it doesn't prove that human Cryonics will
> work, however it certainly proves that the old cliché that claims freezing
> and then thawing a cell always turns it into undifferentiated mush is not
> true.  Human Cryonics will be proven to work on the very day it becomes
> obsolete and is no longer needed, the day that Drexler style Nanotechnology
> becomes available .
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
> )^&
>
>
>
> Scientists have brought back to life Nematode worms that have been buried
>> 130 feet under the Siberian permafrost for between 45,839 and 47,769  years
>> according to Carbon-14 tests. Researchers at the Max Planck Institute in
>> Germany have now bred these worms for over 100 generations (worm
>> generations are about 10 days long) and they say it is a species of
>> Nematode that has never been seen before. They call it "Panagrolaimus
>> kolymaensis". The lead researcher says:
>>
>> *"Basically, you only have to bring the worms into amenable conditions,
>> on a culture (agar) plate with some bacteria, some humidity and room
>> temperature, they just start crawling around then. They also just start
>> reproducing. In this case this is even easier, as it is an all-female
>> (asexual) species. They don‘t need to find males and have sex, they just
>> start making eggs, which develop."*
>>
>> A novel nematode species from the Siberian permafrost shares adaptive
>> mechanisms for cryptobiotic survival with C. elegans dauer larva
>> <https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1010798>
>>



Re: The expansion of the universe could be a mirage, new theoretical study suggests

2023-07-07 Thread Jason Resch
On Thu, Jul 6, 2023, 5:05 PM 'spudboy...@aol.com' via Everything List <
everything-list@googlegroups.com> wrote:

> The expansion of the universe could be a mirage, new theoretical study
> suggests | Live Science
> <https://www.livescience.com/physics-mathematics/dark-energy/the-expansion-of-the-universe-could-be-a-mirage-new-theoretical-study-suggests>
>
> Which, if evidence is forthcoming, means what? Are we back to running back
> to the edge of spacetime with a sign that says, No pass? Do we hit the back
> of our own heads?
>


Reminds me of the tired light theory.

But a static universe has a lot more to explain than just redshift:

1. Where does matter come from?
2. How is it that the universe hasn't gravitationally collapsed already?

Further, his theory is that particle masses change over time. Where are all
the heavier old electrons?

Or if he means all particles get lighter, by what mechanism? How have
stars and chemistry remained stable over time if particles get lighter?
That would mean chemical bonds lose energy and atoms get bigger, but
we've had DNA-based life for billions of years, so the chemistry must
have been stable over that time.

Jason





Re: AI and Interest rates

2023-06-03 Thread Jason Resch
Interest rates have the function of marshalling the productive resources
of an economy towards the most economically productive ends. Anything
with an economic return less than prevailing interest rates isn't worth
taking out a loan to fund.

When there is superintelligent AI, the AI will have an understanding of
its available resources as well as models of which endeavors to
prioritize as having the best risk-adjusted rates of return, and it can
choose to prioritize accordingly, perhaps entirely freed from our present
constraints of money, borrowing, and interest rates.
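
A trivial sketch of that decision rule (hypothetical Python, made-up
numbers, ignoring risk premia and taxes):

def worth_borrowing_for(expected_return, prevailing_rate):
    """Fund a project with borrowed money only if its expected return
    exceeds the prevailing interest rate."""
    return expected_return > prevailing_rate

print(worth_borrowing_for(0.08, 0.05))   # True:  8% return vs. 5% rate
print(worth_borrowing_for(0.03, 0.05))   # False: 3% return vs. 5% rate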

Jason

On Sat, Jun 3, 2023, 6:53 PM John Clark  wrote:

> I have a theory about interest rates and I'd like to know what those who
> know more about economics than I do think about it.
>
> When it comes to economic forecasting the generally accepted beliefs that
> an economy's population has is all important, and it doesn't even matter if
> that belief is true. So on the day it becomes generally accepted that the
>  AI singularity is near and a very drastic increase in productivity is
> imminent I believe there will be a BIG increase in interest rates, because
> a dollar in your pocket right now will be more important to you than a
> million dollars will be in 20 years, even if you manage to survive the
> singularity which you very will might not. And if you don't survive then
> the value of a dollar to you will be precisely zero, so you might as well
> spend it today and have a little fun and not loan it out. So regardless of
> if you believe you will survive the singularity or not, for you to be
> willing to loan me a dollar today if you were a logical you would demand
> that I give you many many more dollars tomorrow as repayment. Put it
> another way, in a few years a dollar will enable you to buy far more stuff
> than it can today, so you'd want to save your money and not lend it out
> unless you were given a very big reason to do so, such as an astronomically
> high interest-rate.
>
> If I'm right about this then that would mean those who think they are
> being conservative and safe by investing in low interest government or
> corporate bonds will be disappointed because the value of all low interest
> investments that are supposed to be safe will crash. But that leads to
> another question that I don't have a clear answer to, even if I decide to
> save my money and not loan it out, how am I supposed to safely do that?
> I'm sure some will immediately say "gold" but I have no reason to believe
> that in a post singularity world that particular metal will be
> significantly more valuable than iron. Iron is much more common than gold
> but iron is also much more useful than gold.
>
> John K Clark See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 6bx
>



Re: AI and Interest rates

2023-06-03 Thread Jason Resch
On Sun, Jun 4, 2023, 12:48 AM Brent Meeker  wrote:

> "Buy land.  They aren't making any more of it."
> --- Mark Twain
>

But perhaps the utility and scarcity of land will diminish after the
development of superhuman AI or the singularity, for any of the following
reasons:

- The potential to create more space, land, and places in virtual reality

- The diminishment of importance of location given telepresence
technologies and online transactions

- The replacement of agriculture with food synthesis, lab grown meat,
underground hydroponics, etc., or the elimination of the necessity of food
for robotic or virtual bodies which may replace our existing ones.

- The replacement of solar energy as a significant or the cheapest source
of energy as new reactor designs are created.

Jason


>
> On 6/3/2023 8:52 AM, John Clark wrote:
>
> I have a theory about interest rates and I'd like to know what those who
> know more about economics than I do think about it.
>
> When it comes to economic forecasting the generally accepted beliefs that
> an economy's population has is all important, and it doesn't even matter if
> that belief is true. So on the day it becomes generally accepted that the
>  AI singularity is near and a very drastic increase in productivity is
> imminent I believe there will be a BIG increase in interest rates, because
> a dollar in your pocket right now will be more important to you than a
> million dollars will be in 20 years, even if you manage to survive the
> singularity which you very will might not. And if you don't survive then
> the value of a dollar to you will be precisely zero, so you might as well
> spend it today and have a little fun and not loan it out. So regardless of
> if you believe you will survive the singularity or not, for you to be
> willing to loan me a dollar today if you were a logical you would demand
> that I give you many many more dollars tomorrow as repayment. Put it
> another way, in a few years a dollar will enable you to buy far more stuff
> than it can today, so you'd want to save your money and not lend it out
> unless you were given a very big reason to do so, such as an astronomically
> high interest-rate.
>
> If I'm right about this then that would mean those who think they are
> being conservative and safe by investing in low interest government or
> corporate bonds will be disappointed because the value of all low interest
> investments that are supposed to be safe will crash. But that leads to
> another question that I don't have a clear answer to, even if I decide to
> save my money and not loan it out, how am I supposed to safely do that?
> I'm sure some will immediately say "gold" but I have no reason to believe
> that in a post singularity world that particular metal will be
> significantly more valuable than iron. Iron is much more common than gold
> but iron is also much more useful than gold.
>
> John K Clark See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 6bx



Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:16 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 2:27 PM Jason Resch 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>>>>> wrote:
>>>>>
>>>>>
>>>>>> And yes, I'm arguing that a true simulation (let's say for the sake
>>>>>> of a thought experiment we were able to replicate every neural connection
>>>>>> of a human being in code, including the connectomes, and 
>>>>>> neurotransmitters,
>>>>>> along with a simulated nerve that was connected to a button on the desk 
>>>>>> we
>>>>>> could press which would simulate the signal sent when a biological pain
>>>>>> receptor is triggered) would feel pain that is just as real as the pain 
>>>>>> you
>>>>>> and I feel as biological organisms.
>>>>>>
>>>>>
>>>>> This follows from the physicalist no-zombies-possible stance. But it
>>>>> still runs into the hard problem, basically. How does stuff give rise to
>>>>> experience.
>>>>>
>>>>>
>>>> I would say stuff doesn't give rise to conscious experience. Conscious
>>>> experience is the logically necessary and required state of knowledge that
>>>> is present in any consciousness-necessitating behaviors. If you design a
>>>> simple robot with a camera and robot arm that is able to reliably catch a
>>>> ball thrown in its general direction, then something in that system *must*
>>>> contain knowledge of the ball's relative position and trajectory. It simply
>>>> isn't logically possible to have a system that behaves in all situations as
>>>> if it knows where the ball is, without knowing where the ball is.
>>>> Consciousness is simply the state of being with knowledge.
>>>>
>>>> Con- "Latin for with"
>>>> -Scious- "Latin for knowledge"
>>>> -ness "English suffix meaning the state of being X"
>>>>
>>>> Consciousness -> The state of being with knowledge.
>>>>
>>>> There is an infinite variety of potential states and levels of
>>>> knowledge, and this contributes to much of the confusion, but boiled down
>>>> to the simplest essence of what is or isn't conscious, it is all about
>>>> knowledge states. Knowledge states require activity/reactivity to the
>>>> presence of information, and counterfactual behaviors (if/then, greater
>>>> than less than, discriminations and comparisons that lead to different
>>>> downstream consequences in a system's behavior). At least, this is my
>>>> theory of consciousness.
>>>>
>>>> Jason
>>>>
>>>
>>> This still runs into the valence problem though. Why does some
>>> "knowledge" correspond with a positive *feeling* and other knowledge
>>> with a negative feeling?
>>>
>>
>> That is a great question. Though I'm not sure it's fundamentally
>> insoluble within model where every conscious state is a particular state of
>> knowledge.
>>
>> I would propose that having positive and negative experiences, i.e. pain
>> or pleasure, requires knowledge states with a certain minium degree of
>> sophistication. For example, knowing:
>>
>> Pain being associated with knowledge states such as: "I don't like this,
>> this is bad, I'm in pain, I want to change my situation."
>>
>> Pleasure being associated with knowledge states such as: "This is good
>> for me, I could use more of this, I don't want this to end.'
>>
>> Such knowledge states require a degree of reflexive awareness, to have a
>> notion of a self where some outcomes may be either positive or negative to
>> that self, and perhaps some notion of time or a sufficient agency to be
>> able to change one's situation.
>>
>> Sone have argued that plants can't feel pain because there's little they
>> can do to change their situation (though I'm agnostic on this).
>>
>>   I'm not talking about the function

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:05 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:46 PM Jason Resch 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
>>>>> wrote:
>>>>>
>>>>>> As I see this thread, Terren and Stathis are both talking past each
>>>>>> other. Please either of you correct me if i am wrong, but in an effort to
>>>>>> clarify and perhaps resolve this situation:
>>>>>>
>>>>>> I believe Stathis is saying the functional substitution having the
>>>>>> same fine-grained causal organization *would* have the same 
>>>>>> phenomenology,
>>>>>> the same experience, and the same qualia as the brain with the same
>>>>>> fine-grained causal organization.
>>>>>>
>>>>>> Therefore, there is no disagreement between your positions with
>>>>>> regards to symbols groundings, mappings, etc.
>>>>>>
>>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>>> believe this is partly responsible for why you are both talking past each
>>>>>> other, because there are many levels involved in brains (and 
>>>>>> computational
>>>>>> systems). I believe you were discussing completely different levels in 
>>>>>> the
>>>>>> hierarchical organization.
>>>>>>
>>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>>> feelings, quale, etc. and there are low-level, be they neurons,
>>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>>>>>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>>>>>
>>>>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>>>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>>>>>> quale
>>>>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>>>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>>>>> answer/description for it can only be supplied in terms of a vast amount 
>>>>>> of
>>>>>> information concerning low level structures, be they patterns of neuron
>>>>>> firings, or patterns of bits being processed. When we consider things 
>>>>>> down
>>>>>> at this low level, however, we lose all context for what the meaning, 
>>>>>> idea,
>>>>>> and quale are or where or how they come in. We cannot see or find the 
>>>>>> idea
>>>>>> of GMK in any neuron, no more than we can see or find it in any neuron.
>>>>>>
>>>>>> Of course then it should seem deeply mysterious, if not impossible,
>>>>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>>>>> greater a leap from how we get "it" from a bunch of cells squirting ions
>>>>>> back and forth. Trying to understand a smartphone by looking at the flows
>>>>>> of electrons is a similar kind of problem, it would seem just as 
>>>>>> difficult
>>>>>> or impossible to explain and understand the high-level features and
>>>>>> complexity out of the low-level simplicity.
>>>>>>
>>>>>> This is why it's crucial to bear in mind and explicitly discuss the
>>>>>> level one is operation on when one discusses symbols, substrates, or 
>>>>>> quale.
>>>>>> In summary, I think a chief reason you have been talking past each other 
>>>>>> is
>>>>>> because you are each operating on different assumed levels.
>>>>>>
>>>>>> Please correct me if you believe I am mistaken and know I only offer
>>>>>> my perspective in the hope it might help the conversation.
>>>>>>
>>>>>
>>>>&g

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 9:43 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:
>
>>
>>
>> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>>>
>>>>
>>>>
>>>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, 25 May 2023 at 11:48, Jason Resch 
>>>>> wrote:
>>>>>
>>>>> >An RNG would be a bad design choice because it would be extremely
>>>>>> unreliable. However, as a thought experiment, it could work. If the 
>>>>>> visual
>>>>>> cortex were removed and replaced with an RNG which for five minutes
>>>>>> replicated the interactions with the remaining brain, the subject would
>>>>>> behave as if they had normal vision and report that they had normal 
>>>>>> vision,
>>>>>> then after five minutes behave as if they were blind and report that they
>>>>>> were blind. It is perhaps contrary to intuition that the subject would
>>>>>> really have visual experiences in that five minute period, but I don't
>>>>>> think there is any other plausible explanation.
>>>>>>
>>>>>
>>>>>> I think they would be a visual zombie in that five minute period,
>>>>>> though as described they would not be able to report any difference.
>>>>>>
>>>>>> I think if one's entire brain were replaced by an RNG, they would be
>>>>>> a total zombie who would fool us into thinking they were conscious and we
>>>>>> would not notice a difference. So by extension a brain partially replaced
>>>>>> by an RNG would be a partial zombie that fooled the other parts of the
>>>>>> brain into thinking nothing was amiss.
>>>>>>
>>>>>
>>>>> I think the concept of a partial zombie makes consciousness
>>>>> nonsensical.
>>>>>
>>>>
>>>> It borders on the nonsensical, but between the two bad alternatives I
>>>> find the idea of a RNG instantiating human consciousness somewhat less
>>>> sensical than the idea of partial zombies.
>>>>
>>>
>>> If consciousness persists no matter what the brain is replaced with as
>>> long as the output remains the same this is consistent with the idea that
>>> consciousness does not reside in a particular substance (even a magical
>>> substance) or in a particular process.
>>>
>>
>> Yes but this is a somewhat crude 1960s version of functionalism, which as
>> I described and as you recognized, is vulnerable to all kinds of attacks.
>> Modern functionalism is about more than high level inputs and outputs, and
>> includes causal organization and implementation details at some level (the
>> functional substitution level).
>>
>> Don't read too deeply into the mathematical definition of function as
>> simply inputs and outputs, think of it more in terms of what a mind does,
>> rather than what a mind is, this is the thinking that led to functionalism
>> and an acceptance of multiple realizability.
>>
>>
>>
>> This is a strange idea, but it is akin to the existence of platonic
>>> objects. The number three can be implemented by arranging three objects in
>>> a row but it does not depend those three objects unless it is being used
>>> for a particular purpose, such as three beads on an abacus.
>>>
>>
>> Bubble sort and merge sort both compute the same thing and both have the
>> same inputs and outputs, but they are different mathematical objects, with
>> different behaviors, steps, subroutines and runtime efficiency.
>>
>>
>>
>>>
>>>> How would I know that I am not a visual zombie now, or a visual zombie
>>>>> every Tuesday, Thursday and Saturday?
>>>>>
>>>>
>>>> Here, we have to be careful what we mean by "I". Our own brains have
>>>> various spheres of consciousness as demonstrated by the Wada Test: we can
>>>> shut down one hemisphere of the brain and lose partial awareness and
>>>> functionality such as the ability to form words and yet one remains
>>>> conscious. I think being a partial zombie would be like that, hav

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>>
>>> >An RNG would be a bad design choice because it would be extremely
>>>> unreliable. However, as a thought experiment, it could work. If the visual
>>>> cortex were removed and replaced with an RNG which for five minutes
>>>> replicated the interactions with the remaining brain, the subject would
>>>> behave as if they had normal vision and report that they had normal vision,
>>>> then after five minutes behave as if they were blind and report that they
>>>> were blind. It is perhaps contrary to intuition that the subject would
>>>> really have visual experiences in that five minute period, but I don't
>>>> think there is any other plausible explanation.
>>>>
>>>
>>>> I think they would be a visual zombie in that five minute period,
>>>> though as described they would not be able to report any difference.
>>>>
>>>> I think if one's entire brain were replaced by an RNG, they would be a
>>>> total zombie who would fool us into thinking they were conscious and we
>>>> would not notice a difference. So by extension a brain partially replaced
>>>> by an RNG would be a partial zombie that fooled the other parts of the
>>>> brain into thinking nothing was amiss.
>>>>
>>>
>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>
>>
>> It borders on the nonsensical, but between the two bad alternatives I
>> find the idea of an RNG instantiating human consciousness somewhat less
>> sensical than the idea of partial zombies.
>>
>
> If consciousness persists no matter what the brain is replaced with, as
> long as the output remains the same, this is consistent with the idea that
> consciousness does not reside in a particular substance (even a magical
> substance) or in a particular process.
>

Yes, but this is a somewhat crude 1960s version of functionalism, which, as I
described and as you recognized, is vulnerable to all kinds of attacks.
Modern functionalism is about more than high-level inputs and outputs, and
includes causal organization and implementation details at some level (the
functional substitution level).

Don't read too deeply into the mathematical definition of a function as
simply inputs and outputs; think of it more in terms of what a mind does
rather than what a mind is. This is the thinking that led to functionalism
and the acceptance of multiple realizability.



This is a strange idea, but it is akin to the existence of platonic
> objects. The number three can be implemented by arranging three objects in
> a row, but it does not depend on those three objects unless it is being used
> for a particular purpose, such as three beads on an abacus.
>

Bubble sort and merge sort both compute the same thing and both have the
same inputs and outputs, but they are different mathematical objects, with
different behaviors, steps, subroutines and runtime efficiency.
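
To make that concrete, here is a minimal sketch in Python (my own
textbook-style implementations, nothing specific to this thread): both
functions map identical inputs to identical outputs, yet counting comparisons
exposes how different their internal behavior is.

def bubble_sort(xs):
    # Repeatedly swap adjacent out-of-order elements; O(n^2) comparisons.
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons


def merge_sort(xs):
    # Recursively split and merge; O(n log n) comparisons.
    comparisons = 0

    def merge(a, b):
        nonlocal comparisons
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            comparisons += 1
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]

    def sort(ys):
        if len(ys) <= 1:
            return list(ys)
        mid = len(ys) // 2
        return merge(sort(ys[:mid]), sort(ys[mid:]))

    return sort(xs), comparisons


data = list(range(200, 0, -1))
bubble_result, bubble_steps = bubble_sort(data)
merge_result, merge_steps = merge_sort(data)

# Same inputs, same outputs...
assert bubble_result == merge_result == sorted(data)
# ...but very different processes, visible in the comparison counts printed here.
print(bubble_steps, merge_steps)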



>
>> How would I know that I am not a visual zombie now, or a visual zombie
>>> every Tuesday, Thursday and Saturday?
>>>
>>
>> Here, we have to be careful what we mean by "I". Our own brains have
>> various spheres of consciousness, as demonstrated by the Wada Test: we can
>> shut down one hemisphere of the brain and lose part of our awareness and
>> functionality, such as the ability to form words, and yet remain conscious.
>> I think being a partial zombie would be like that, having one's sphere of
>> awareness shrink.
>>
>
> But the subject's sphere of awareness would not shrink in the thought
> experiment,
>

Have you ever wondered what delineates the mind from its environment? Why
is it that you are not aware of my thoughts, but instead see me as an object
that only affects your senses, even though we could represent the whole
earth as one big functional system?

I don't have a good answer to this question, but it seems it might be a
factor here. The randomly generated outputs from the RNG would seem like
environmental noise or sensation coming from the outside, rather than the
recursively linked and connected loop of processing that would exist in a
genuinely functioning brain of two hemispheres.


since by assumption their behaviour stays the same, while if their sphere
> of awareness shrank they would notice that something was different and say so.
>

But here (almost by magic), the RNG output

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>
> >An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five minute period, but I don't
>> think there is any other plausible explanation.
>>
>
>> I think they would be a visual zombie in that five minute period, though
>> as described they would not be able to report any difference.
>>
>> I think if one's entire brain were replaced by an RNG, they would be a
>> total zombie who would fool us into thinking they were conscious and we
>> would not notice a difference. So by extension a brain partially replaced
>> by an RNG would be a partial zombie that fooled the other parts of the
>> brain into thinking nothing was amiss.
>>
>
> I think the concept of a partial zombie makes consciousness nonsensical.
>

It borders on the nonsensical, but between the two bad alternatives I find
the idea of an RNG instantiating human consciousness somewhat less sensical
than the idea of partial zombies.


How would I know that I am not a visual zombie now, or a visual zombie
> every Tuesday, Thursday and Saturday?
>

Here, we have to be careful what we mean by "I". Our own brains have
various spheres of consciousness, as demonstrated by the Wada Test: we can
shut down one hemisphere of the brain and lose part of our awareness and
functionality, such as the ability to form words, and yet remain conscious.
I think being a partial zombie would be like that, having one's sphere of
awareness shrink.


What is the advantage of having "real" visual experiences if they make no
> objective difference and no subjective difference either?
>

The advantage of real computations (which imply having real
awareness/experiences) is that they are more reliable than RNGs for
producing intelligent behavioral responses.
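
To put a rough number on that reliability gap, here is a toy sketch in Python
(my own illustration with made-up stand-in functions, not anything proposed in
this thread): a genuine computation reproduces the signals the rest of the
brain expects on every trial, while an RNG matches an n-bit interface only
with probability 2^-n.

import random

def real_cortex(stimulus_bits):
    # Stand-in for a genuine computation: a deterministic function of its input.
    return [b ^ 1 for b in stimulus_bits]

def rng_cortex(n_bits):
    # Stand-in for the RNG replacement: output has no causal link to the input.
    return [random.randint(0, 1) for _ in range(n_bits)]

def trial(n_bits=32):
    stimulus = [random.randint(0, 1) for _ in range(n_bits)]
    expected = real_cortex(stimulus)       # what the rest of the brain "expects"
    return rng_cortex(n_bits) == expected  # did the RNG happen to match it?

# The real computation matches its interface every time; the RNG matches a
# 32-bit interface with probability 2**-32 (about 2.3e-10), so in practice it
# essentially never does, even though a lucky run is logically possible.
trials = 100_000
hits = sum(trial() for _ in range(trials))
print(f"RNG matched the real cortex in {hits} of {trials} trials")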

Jason



Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 9:32 PM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 06:46, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:
>>>
>>>>
>>>>
>>>> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, 24 May 2023 at 15:37, Jason Resch 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, 24 May 2023 at 04:03, Jason Resch 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
>>>>>>>> stath...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> As I see this thread, Terren and Stathis are both talking past
>>>>>>>>>> each other. Please either of you correct me if I am wrong, but in an 
>>>>>>>>>> effort
>>>>>>>>>> to clarify and perhaps resolve this situation:
>>>>>>>>>>
>>>>>>>>>> I believe Stathis is saying the functional substitution having
>>>>>>>>>> the same fine-grained causal organization *would* have the same
>>>>>>>>>> phenomenology, the same experience, and the same qualia as the brain 
>>>>>>>>>> with
>>>>>>>>>> the same fine-grained causal organization.
>>>>>>>>>>
>>>>>>>>>> Therefore, there is no disagreement between your positions with
>>>>>>>>>> regards to symbols groundings, mappings, etc.
>>>>>>>>>>
>>>>>>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>>>>>>> believe this is partly responsible for why you are both talking past 
>>>>>>>>>> each
>>>>>>>>>> other, because there are many levels involved in brains (and 
>>>>>>>>>> computational
>>>>>>>>>> systems). I believe you were discussing completely different levels 
>>>>>>>>>> in the
>>>>>>>>>> hierarchical organization.
>>>>>>>>>>
>>>>>>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>>>>>>> feelings, quale, etc. and there are low-level, be they neurons,
>>>>>>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>>>>>>>>> human
>>>>>>>>>> brains, or circuits, logic gates, bits, and instructions as in 
>>>>>>>>>> computers.
>>>>>>>>>>
>>>>>>>>>> I think when Terren mentions a "symbol for the smell of
>>>>>>>>>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad 
>>>>>>>>>> of
>>>>>>>>>> levels. The quale or idea or memory of the smell of GMK is a very
>>>>>>>>>> high-level feature of a mind. When Terren asks for or discusses a 
>>>>>>>>>> symbol
>>>>>>>>>> for it, a complete answer/description for it can only be supplied in 
>>>>>>>>>> terms
>>>>>>>>>> of a vast amount of information concerning low level structures, be 
>>>>>>>>>> they
>>>>>>>>>> patterns of neuron firings, or patterns of bits being processed. 
>>>>>>>>>> When we
>>>>>>>>>> consider things down at this low level, however, we lose all context 
>>>>>>

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>>>
>>>>
>>>>
>>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, 24 May 2023 at 04:03, Jason Resch 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
>>>>>> stath...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> As I see this thread, Terren and Stathis are both talking past each
>>>>>>>> other. Please either of you correct me if I am wrong, but in an effort 
>>>>>>>> to
>>>>>>>> clarify and perhaps resolve this situation:
>>>>>>>>
>>>>>>>> I believe Stathis is saying the functional substitution having the
>>>>>>>> same fine-grained causal organization *would* have the same 
>>>>>>>> phenomenology,
>>>>>>>> the same experience, and the same qualia as the brain with the same
>>>>>>>> fine-grained causal organization.
>>>>>>>>
>>>>>>>> Therefore, there is no disagreement between your positions with
>>>>>>>> regards to symbols groundings, mappings, etc.
>>>>>>>>
>>>>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>>>>> believe this is partly responsible for why you are both talking past 
>>>>>>>> each
>>>>>>>> other, because there are many levels involved in brains (and 
>>>>>>>> computational
>>>>>>>> systems). I believe you were discussing completely different levels in 
>>>>>>>> the
>>>>>>>> hierarchical organization.
>>>>>>>>
>>>>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>>>>> feelings, quale, etc. and there are low-level, be they neurons,
>>>>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>>>>>>> human
>>>>>>>> brains, or circuits, logic gates, bits, and instructions as in 
>>>>>>>> computers.
>>>>>>>>
>>>>>>>> I think when Terren mentions a "symbol for the smell of
>>>>>>>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of
>>>>>>>> levels. The quale or idea or memory of the smell of GMK is a very
>>>>>>>> high-level feature of a mind. When Terren asks for or discusses a 
>>>>>>>> symbol
>>>>>>>> for it, a complete answer/description for it can only be supplied in 
>>>>>>>> terms
>>>>>>>> of a vast amount of information concerning low level structures, be 
>>>>>>>> they
>>>>>>>> patterns of neuron firings, or patterns of bits being processed. When 
>>>>>>>> we
>>>>>>>> consider things down at this low level, however, we lose all context 
>>>>>>>> for
>>>>>>>> what the meaning, idea, and quale are or where or how they come in. We
>>>>>>>> cannot see or find the idea of GMK in any neuron, no more than we can 
>>>>>>>> see
>>>>>>>> or find it in any bit.
>>>>>>>>
>>>>>>>> Of course then it should seem deeply mysterious, if not impossible,
>>>>>>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>>>>>>> greater a leap from how we get "it" from a bunch of cells squirting 
>>>>>>>> ions
>>>>>>>> back and forth. Trying to understand a smartphone by looking at the 
>>

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023 at 11:12 AM Brent Meeker  wrote:

>
>
>
> On 5/23/2023 10:37 PM, Jason Resch wrote:
>
>
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>>>>
>>>>> As I see this thread, Terren and Stathis are both talking past each
>>>>> other. Please either of you correct me if I am wrong, but in an effort to
>>>>> clarify and perhaps resolve this situation:
>>>>>
>>>>> I believe Stathis is saying the functional substitution having the
>>>>> same fine-grained causal organization *would* have the same phenomenology,
>>>>> the same experience, and the same qualia as the brain with the same
>>>>> fine-grained causal organization.
>>>>>
>>>>> Therefore, there is no disagreement between your positions with
>>>>> regards to symbols groundings, mappings, etc.
>>>>>
>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>> believe this is partly responsible for why you are both talking past each
>>>>> other, because there are many levels involved in brains (and computational
>>>>> systems). I believe you were discussing completely different levels in the
>>>>> hierarchical organization.
>>>>>
>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>> feelings, quale, etc. and there are low-level, be they neurons,
>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>>>>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>>>>
>>>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>>>>> quale
>>>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>>>> answer/description for it can only be supplied in terms of a vast amount 
>>>>> of
>>>>> information concerning low level structures, be they patterns of neuron
>>>>> firings, or patterns of bits being processed. When we consider things down
>>>>> at this low level, however, we lose all context for what the meaning, 
>>>>> idea,
>>>>> and quale are or where or how they come in. We cannot see or find the idea
>>>>> of GMK in any neuron, no more than we can see or find it in any bit.
>>>>>
>>>>> Of course then it should seem deeply mysterious, if not impossible,
>>>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>>>> greater a leap from how we get "it" from a bunch of cells squirting ions
>>>>> back and forth. Trying to understand a smartphone by looking at the flows
>>>>> of electrons is a similar kind of problem, it would seem just as difficult
>>>>> or impossible to explain and understand the high-level features and
>>>>> complexity out of the low-level simplicity.
>>>>>
>>>>> This is why it's crucial to bear in mind and explicitly discuss the
>>>>> level one is operating on when one discusses symbols, substrates, or 
>>>>> quale.
>>>>> In summary, I think a chief reason you have been talking past each other 
>>>>> is
>>>>> because you are each operating on different assumed levels.
>>>>>
>>>>> Please correct me if you believe I am mistaken and know I only offer
>>>>> my perspective in the hope it might help the conversation.
>>>>>
>>>>
>>>> I think you’ve captured my position. But in addition I think
>>>> replicating the fine-grained causal organisation is not necessary in order
>>>> to replicate higher level phenomena such as GMK. By extension of Chalmers’
>>>> substitution experiment,
>>>>
>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lo

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 5:35 AM John Clark  wrote:

>
> On Wed, May 24, 2023 at 1:37 AM Jason Resch  wrote:
>
> *> By substituting a recording of a computation for a computation, you
>> replace a conscious mind with a tape recording of the prior behavior of a
>> conscious mind. *
>
>
> But you'd still need a computation to find the particular tape recording
> that you need, and the larger your library of recordings the more complex
> the computation you'd need to do would be.
>
> *> This is what happens in the Blockhead thought experiment*
>
>
> And in that very silly thought experiment your library needs to contain
> every sentence that is syntactically and grammatically correct. And there
> are an astronomical number to an astronomical power of those. Even if every
> electron, proton, neutron, photon and neutrino in the observable universe
> could record 1000 million billion trillion sentences there would still be
> well over a googolplex number of sentences that remained unrecorded.
> Blockhead is just a slight variation on Searle's idiotic Chinese room.
>


It's very different.

Note that you don't need to realize or store every possible input for the
central point of Block's argument to work.

For example, let's say that AlphaZero is conscious for the purposes of this
argument. We record the response AlphaZero produces to each of the 361
possible opening moves on a Go board and store the results in a lookup
table. This table would be only a few kilobytes. Then we can ask: what has
happened to the consciousness of AlphaZero? Here we have a functionally
equivalent second move for every possible opening move, but we've done away
with all the complexity of the prior computation.
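
As a minimal sketch of what that replacement amounts to (purely illustrative
Python with made-up names; deep_policy_network is a stand-in, not any real
AlphaZero API), the whole structured computation collapses into a dictionary
lookup:

from typing import Dict, Tuple

Move = Tuple[int, int]  # (row, col) on a 19x19 Go board

def deep_policy_network(opening_move: Move) -> Move:
    # Placeholder for the costly, highly structured computation being replaced
    # (search plus neural-network evaluation in the real system).
    r, c = opening_move
    return ((r + 3) % 19, (c + 3) % 19)

# Precompute the table once: one stored reply for each of the 361 openings.
lookup_table: Dict[Move, Move] = {
    (r, c): deep_policy_network((r, c)) for r in range(19) for c in range(19)
}

def tabled_player(opening_move: Move) -> Move:
    # Functionally identical replies to every opening move, but all the
    # internal structure of the original computation is gone.
    return lookup_table[opening_move]

assert tabled_player((3, 3)) == deep_policy_network((3, 3))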

What the substitution level argument really asks is how far up in the
subroutines of a mind's program we can implement memoization (
https://en.m.wikipedia.org/wiki/Memoization ) before the result is some
kind of altered consciousness, or at least some diminished contribution to
the measure of a conscious experience (under duplicationist conceptions of
measure).
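
For what I mean by memoizing a subroutine, here is a generic sketch in Python
(standard-library memoization, not tied to any particular model of a mind):
the first call runs the full computation, and every later call with the same
arguments merely replays a stored result.

from functools import lru_cache

def expensive_subroutine(x: int) -> int:
    # Stand-in for some low-level step in a much larger computation.
    return sum(i * i for i in range(x))

@lru_cache(maxsize=None)
def memoized_subroutine(x: int) -> int:
    # Same inputs and outputs as the original, but after the first call for a
    # given x the internal steps never run again; only a cached value is
    # returned. The substitution-level question is how much of a mind's
    # processing can be replaced this way before its experience is altered.
    return expensive_subroutine(x)

print(memoized_subroutine(10_000))  # computes
print(memoized_subroutine(10_000))  # replays the cached result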


Jason

>


