On 8/21/2018 9:01 PM, Jason Resch wrote:


On Tue, Aug 21, 2018 at 10:50 PM Brent Meeker <meeke...@verizon.net> wrote:



    On 8/21/2018 7:38 PM, Jason Resch wrote:


    On Tue, Aug 21, 2018 at 7:43 PM Brent Meeker <meeke...@verizon.net> wrote:



        On 8/21/2018 3:37 PM, Jason Resch wrote:


        On Tue, Aug 21, 2018 at 5:00 PM Brent Meeker <meeke...@verizon.net> wrote:



            On 8/21/2018 2:40 PM, agrayson2...@gmail.com wrote:


                If I start a 200 qubit quantum computer at time =
                0, and 100 microseconds later it has produced a
                result that required going through 2^200 = 1.6 x
                10^60 states (more states than it is possible for
                200 things to go through in 100 microseconds even
                if they changed their state every Planck time of
                5.39121 x 10^-44 seconds), then physically
                speaking it **must** have been simultaneous.  I
                don't see any other way to explain this result.
                How can 200 things explore 10^60 states in 10^-4
                seconds, when a Planck time is 5.39 x 10^-44 seconds?
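
                A quick back-of-the-envelope check of those
                figures, as a rough Python sketch (the 100
                microsecond runtime is just the number assumed
                above):

                    PLANCK_TIME = 5.39121e-44   # seconds
                    RUNTIME = 100e-6            # 100 microseconds
                    N = 200                     # qubits / "things"

                    # ~1.6e60 basis states
                    states = 2 ** N
                    # Planck times in the runtime: ~1.9e39
                    ticks = RUNTIME / PLANCK_TIME
                    # 200 things ticking every Planck time: ~3.7e41
                    classical_steps = N * ticks

                    print(f"states:    {states:.2e}")
                    print(f"steps:     {classical_steps:.2e}")
                    print(f"shortfall: {states / classical_steps:.2e}")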


            It's no more impressive numerically than an electron
            wave function picking out one of 10^30 silver halide
            molecules on a photographic plate to interact with
            (which is also non-local, aka simultaneous).


        Well, consider a 1000 qubit quantum computer.  2^1000 is
        roughly a 1 followed by 301 zeros.

        What is "this".  It's the number possible phase relations
        between the 1000 qubits.  If we send a 1000 electrons toward
        our photographic plate through a 1000 holes the Schrodinger
        wave function approaching the photographic plate then also
        has 1e301 different phase relations.  The difference is only
        that we don't control them so as to cancel out "wrong answers".



    The reason I think the quantum computer example is important to
    consider is that when we control the qubits to produce a useful
    result, it becomes that much harder to deny the reality and
    significance of the intermediate states.

    Which is why I'm pointing out that, while important from our view
    of it as a computation, from a physical viewpoint it is nothing
    unusual.  If I poked 100 pinholes in a screen and shone my laser
    pointer on it, there would be the same number of "intermediate
    states" between the screen and a photo detector.


Okay.  But this example tends to ignore the intermediate steps of the computation, in a way that is easier to overlook.


    For instance, we can verify the result of a Shor calculation for
    the factorization of a large composite number.  We can't so easily
    verify that the statistics of the 1e301 phase relations are what
    they should be.
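
    As a minimal sketch of that asymmetry: checking a claimed
    factorization is a single multiplication, no matter how the
    factors were found (a toy check, not Shor's algorithm itself):

        def verify_factorization(n, factors):
            product = 1
            for f in factors:
                product *= f
            return product == n

        # e.g. a factoring run that claims 15 = 3 * 5
        print(verify_factorization(15, [3, 5]))   # True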

        This is not only over a googol^2 times the number of silver
        halide molecules in your plate, but more than a googol times
        the 10^80 atoms in the observable universe.

        What is it, in your mind, that is able to track and
        consistently compute over these 10^301 states, in this
        system composed of only 1000 atoms?


    Are you aware of anything other than the many-worlds view that can
    account for this?

    I don't see any way a many-worlds view can account for it.  All
    those qubits have to be entangled and interfere in order to arrive
    at an answer.  So they all have to be in the same world.  Your
    numerology is just counting interference relations in this world;
    they don't imply some events in other worlds.


Where are these interference relations existing?  We've already established that there are not enough atoms to account for all the states

That's because the states aren't things; they are entanglements, i.e. relations between things.  That's why the numbers are exponential in the number of things.  They are not things themselves, so it's specious to compare them to atoms.

in the whole observable universe (one world), nor are there enough Planck times to account for iterating over every possible state involved in the computation (in one world).  So where are all of these states existing and being processed?



            Also note that you can only read off 200 bits of
            information (cf. Holevo's theorem).


        True, but that is irrelevant to the number of intermediate
        states necessary for the computation that is performed to
        arrive at the final and correct answer.

        But you have to put in 2^200 complex numbers to initialize your
        qubits.  So you're putting in a lot more information than
        you're getting out.


    You just initialize each of the 200 qubits to be in a superposition.
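
    A small simulation sketch of that point (for a handful of qubits;
    nothing actually stores 2^200 amplitudes): one Hadamard per qubit
    gives the uniform superposition over all 2^n basis states, so the
    2^n amplitudes come from n local operations, not from entering
    2^n numbers by hand.

        import numpy as np

        # Hadamard gate: |0> -> (|0> + |1>)/sqrt(2)
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

        n = 4
        plus = H @ np.array([1.0, 0.0])   # one qubit in superposition
        state = np.array([1.0])
        for _ in range(n):
            state = np.kron(state, plus)  # joint state of n qubits

        print(len(state))                         # 2^n = 16 amplitudes
        print(np.allclose(state, 2 ** (-n / 2)))  # all equal, 1/sqrt(2^n)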

        Those "intermediate states" are just interference patterns in
        the computer, not some inter-dimensional information flow.


    What is interference but information flow between different
    parts of the wave function: other "branches" of the superposition
    making their presence known to us by causing different outcomes
    to manifest in our own branch?
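
    A minimal numerical illustration of that cancellation (just two
    Hadamards in a row, not any particular algorithm): the two "paths"
    through the superposition recombine, the amplitudes for |1> cancel,
    and |0> comes out with probability 1.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

        state = np.array([1.0, 0.0])   # start in |0>
        state = H @ state              # split into both "branches"
        state = H @ state              # recombine: branches interfere

        # the amplitude for |1> has cancelled out
        print(np.round(state, 10))     # [1. 0.]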

        Also, many quantum algorithms only give you an answer that is
        probably correct.  So you have to run it multiple times to
        have confidence in the result.


    I would say it depends on the algorithm and the precision of the
    measurement and construction of the computer.  If your algorithm
    computes the square of a randomly initialized set of qubits, then
    the only answer you should get (assuming perfect construction of
    the quantum computer) after measurement will be a perfect square.

    Right.  There are some quantum algorithms that give a probability 1
    answer.
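
    And for the probabilistic ones, a rough sketch of why repetition
    is enough: if a single run succeeds with probability p and a
    success is easy to recognize (e.g. the factors multiply back),
    the chance that k independent runs all fail is (1 - p)^k, which
    shrinks exponentially.

        # p = 0.4 is an arbitrary illustrative success rate
        def all_runs_fail(p, k):
            return (1 - p) ** k

        for k in (1, 5, 10, 20):
            print(k, all_runs_fail(0.4, k))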


        Quantum computers will certainly impact cryptography, where
        there's heavy reliance on the difficulty of factoring large
        numbers and of discrete logarithms.  They should be able to
        solve protein folding and similar problems that are out of
        reach of classical computers.  But they're not a magic bullet.
        Most problems will still be solved faster by conventional von
        Neumann computers or by specialized neural nets.  One reason is
        that even though a quantum algorithm is faster in the limit of
        large problem size, it may still be slower for the problem
        size of interest.  It's the same problem that shows up in
        classical algorithms; for example, the Coppersmith-Winograd
        algorithm for matrix multiplication runs in O(n^2.376) time
        compared to Strassen's O(n^2.807), but it is never used because
        it is only faster for matrices too large to be processed in
        existing computers.
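
        A purely illustrative sketch of that crossover effect (the 1e6
        constant factor is made up, not a measured value): with a big
        enough hidden constant, the algorithm with the better exponent
        only wins for astronomically large n.

            C = 1e6   # hypothetical constant on the fancier method

            def cost_strassen(n):
                return n ** 2.807

            def cost_cw(n):
                return C * n ** 2.376

            n = 2
            while cost_cw(n) >= cost_strassen(n):
                n *= 2
            print("crossover near n =", n)   # ~1e14 for this made-up C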


    So where do you stand concerning the reality of the immense
    number of intermediate states the qubits are in before being
    measured?

    It's just like tossing two rocks into a pond and being amazed at
    the immense number of points at which ripples interfere before
    they determine the wave that hits the sand bar.


Except there are more ripples than bits in the Hubble volume, and more state transitions than there have been Planck times in the age of the universe.

Not ripples; the analogy is to intersections of ripples.  The huge numbers are combinatorics.  They are abstract "states" only in the sense that the *relation* of two different atoms in a ripple is a state.
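
A quick sketch of that counting point: for n things, the pairwise relations grow like n^2 and the joint configurations like 2^n, while the number of things stays n.

    from math import comb

    n = 1000
    print(n)                  # things
    print(comb(n, 2))         # pairwise relations: 499500
    print(len(str(2 ** n)))   # digits in 2^n: 302, i.e. ~1e301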

Brent
