On Fri, Aug 24, 2012 at 5:04 AM, benjayk <benjamin.jaku...@googlemail.com> wrote:

>
>
> Jason Resch-2 wrote:
> >
> > On Thu, Aug 23, 2012 at 1:18 PM, benjayk
> > <benjamin.jaku...@googlemail.com> wrote:
> >
> >>
> >>
> >> Jason Resch-2 wrote:
> >> >
> >> >> Taking the universal dovetailer, it could really mean everything (or
> >> >> nothing), just like the sentence "You can interpret whatever you want
> >> >> into
> >> >> this sentence..." or like the stuff that monkeys type on typewriters.
> >> >>
> >> >>
> >> > A sentence (any string of information) can be interpreted in any
> >> > possible way, but a computation defines/creates its own meaning.  If
> >> > you see a particular step in an algorithm adds two numbers, it can
> >> > pretty clearly be interpreted as addition, for example.
> >> A computation can't define its own meaning, since it only manipulates
> >> symbols (that is the definition of a computer),
> >
> >
> > I think it is a rather poor definition of a computer.  Some have tried to
> > define the entire field of mathematics as nothing more than a game of
> > symbol manipulation (see
> > http://en.wikipedia.org/wiki/Formalism_(mathematics) ).  But if
> > mathematics
> > can be viewed as nothing but symbol manipulation, and everything can be
> > described in terms of mathematics, then what is not symbol manipulation?
> >
> That which it is describing. Very simple. :)
>
>
>
> Jason Resch-2 wrote:
> >
> >> and symbols need a meaning
> >> outside of them to make sense.
> >>
> >
> > The meaning of a symbol derives from the context of the machine which
> > processes it.
> I agree. The context in which the machine operates matters. Yet our
> definitions of a computer don't include an external context.
>
>
A computer can simultaneously emulate the perceiver and the object of
perception.


>
> Jason Resch-2 wrote:
> >
> >>
> >> Jason Resch-2 wrote:
> >> >
> >> >>
> >> >> Jason Resch-2 wrote:
> >> >> >
> >> >> >  The UD contains an entity who believes it writes a single program.
> >> >> No! The UD doesn't contain entities at all. It is just a computation.
> >> >> You can only interpret entities into it.
> >> >>
> >> >>
> >> > Why do I have to?  As Bruno often asks, does anyone have to watch your
> >> > brain through an MRI and interpret what it is doing for you to be
> >> > conscious?
> >> Because there ARE no entities in the UD per its definition. It only
> >> contains
> >> symbols that are manipulated in a particular way.
> >
> >
> > You forgot the processes, which are interpreting those symbols.
> No, that's simply not how we defined the UD. The UD is defined by
> manipulation of symbols, not interpretation of symbols (how could we even
> formalize that?).
>

It may not be explicitly defined, but it follows, just as human cognition
follows from hydrogen atoms, given a few billion years.  Entities evolve
and develop within the UD who have the ability to interpret things on their
own.


>
>
> Jason Resch-2 wrote:
> >
> >> The definitions of the UD
> >> or a universal turing machine or of computers in general don't contain a
> >> reference to entities.
> >>
> >>
> > The definition of this universe doesn't contain a reference to human
> > beings
> > either.
> Right, that's why you can't claim that all universes contain human beings.
>

But the set of all possible universes does contain human beings.
Similarly, the UD contains all processes, and according to
computationalism, would also contain all possible minds.
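For concreteness, the dovetailing itself is a simple algorithm. Here is a minimal sketch (hypothetical: the `step` function stands in for a real universal machine, and program i is identified only by its index):

```python
from itertools import count

def universal_dovetailer(step, max_phases=None):
    """Interleave the execution of programs 0, 1, 2, ...

    In phase n, program n is started and programs 0..n are each
    advanced one step, so every program eventually receives
    unboundedly many steps even though none is run to completion
    before the others begin."""
    states = []                       # states[i] = current state of program i
    phases = count() if max_phases is None else range(max_phases)
    for n in phases:
        states.append(None)           # introduce program n
        for i in range(n + 1):        # one step each for programs 0..n
            states[i] = step(i, states[i])
    return states

# Toy "machine": program i merely counts how many steps it has received.
final = universal_dovetailer(lambda i, s: (s or 0) + 1, max_phases=5)
# final == [5, 4, 3, 2, 1]
```

The real UD runs all programs of a universal machine this way; the claim in the text is that minds appear among the executed programs, not in this scheduling loop itself.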


>
>
> Jason Resch-2 wrote:
> >
> >> So you can only add that to its working in your own imagination.
> >>
> >>
> > I think I would still be able to experience meaning even if no one was
> > looking at me.
> Yes, because you are what is looking - there is no one looking at you in
> the first place, because anyone looking occurs within you.
>
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> >>
> >> >> Jason Resch-2 wrote:
> >> >> >
> >> >> >  The UD itself
> >> >> > isn't intelligent, but it contains intelligences.
> >> >> I am not even saying that the UD isn't intelligent. I am just saying
> >> >> that humans are intelligent in a way that the UD is not (and actually
> >> >> the opposite is true as well).
> >> >>
> >> >>
> >> > Okay, could you clarify in what ways we are more intelligent?
> >> >
> >> > For example, could you show a problem that a human can solve that a
> >> > computer with unlimited memory and time could not?
> >> Say you have a universal turing machine with the alphabet {0, 1}
> >> The problem is: Change one of the symbols of this turing machine to 2.
> >>
> >
> > Your example is defining a problem to not be solvable by a specific
> > entity,
> > not turing machines in general.
> But the claim of computer scientists is that all turing machines are
> interchangeable,


In a certain sense.  Not in the sense where they have to escape their own
level to accomplish something in a physical universe.
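benjayk's problem — change one of the machine's own symbols to 2 — can only be attacked from inside by editing a *model* of a machine. A minimal sketch (the transition table is an illustrative toy, not a specific machine from the thread):

```python
# A toy Turing-machine transition table over the alphabet {"0", "1"}:
# (state, read) -> (next_state, write, move)
machine = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q1", "0", "R"),
}

def relabel_in_model(table):
    """Return a copy of the table with every symbol "1" renamed to "2".

    This operates on a *model* of a machine (plain data), which is all
    a program can reach; the machine actually running this code has no
    comparable handle on its own alphabet."""
    renamed = lambda s: "2" if s == "1" else s
    return {
        (state, renamed(read)): (nstate, renamed(write), move)
        for (state, read), (nstate, write, move) in table.items()
    }

print(relabel_in_model(machine))
```

The rename succeeds only one level down, inside the data; whether that counts as "solving the problem" is exactly what is in dispute here.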


> because they can emulate each other perfectly. Clearly
> that's not true because perfect computational emulation doesn't help to
> solve the problem in question, and that is precisely my point!
>

You seem to agree that a computer can answer any verbal problem that any
person can.

So it follows that the right program could answer the question of what a
particular person will do in a given situation.  Do you agree?


>
>
>
> Jason Resch-2 wrote:
> >
> >> Given that it is a universal turing machine, it is supposed to be able
> >> to solve that problem. Yet because it doesn't have access to the right
> >> level, it cannot do it.
> >
> >> It is an example of direct self-manipulation, which turing machines are
> >> not capable of (with regards to their alphabet in this case).
> >>
> >
> > Neither can humans change fundamental properties of our physical
> > incarnation.  You can't decide to turn one of your neurons into a
> > magnetic monopole, for instance, but this is not the kind of problem I
> > was referring to.
> I don't claim that humans are all powerful. I am just saying that they can
> do things computers can't.
>
>
> Jason Resch-2 wrote:
> >
> > To avoid issues of level confusion, it is better to think of problems
> > with informational solutions, since information can readily cross levels.
> > That is, some question is asked and some answer is provided.  Can you
> > think of any question that is only solvable by human brains, but not
> > solvable by computers?
> OK, if you want to ignore levels, context and ambiguity then the answer is
> clearly no!
> Simply write a program that takes the question X and gives the appropriate
> answer Y.
> Since all combinations of strings exist the right solution exists for every
> question.
> Then you would still have to write the right program, though, and for that
> you still need a human or a more powerful program.
>
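The conceded "program that takes question X and gives answer Y" is just a lookup table. A toy sketch (the questions and answers are invented placeholders) shows where the real work hides — in whoever fills the table:

```python
# Every answer is precomputed by the programmer; the program itself only
# retrieves strings.  Writing the table is the part that still needs a
# human (or a more powerful program), as noted above.
answers = {
    "What is 1+1?": "2",
    "Are you a program?": "Yes",
}

def answer(question: str) -> str:
    return answers.get(question, "No stored answer.")

print(answer("What is 1+1?"))   # prints "2"
```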
> But this avoids my point that we can't pretend that levels, context and
> ambiguity don't exist, and this is why computational emulation does not
> mean that the emulation can substitute for the original.
>
>
> Jason Resch-2 wrote:
> >
> >> You could of course create a model of that turing machine within that
> >> turing machine and change its alphabet in the model, but since this was
> >> not the problem in question this is not the right solution.
> >>
> >> Or the problem "manipulate the code of yourself if you are a program,
> >> solve
> >> 1+1 if you are human (computer and human meaning what the average human
> >> considers computer and human)" towards a program written in a turing
> >> universal programming language without the ability of self-modification.
> >> The
> >> best it could do is manipulate a model of its own code (but this wasn't
> >> the
> >> problem).
> >> Yet we can simply solve the problem by answering 1+1=2 (since we are
> >> human
> >> and not computers by the opinion of the majority).
> >>
> >>
> > These are certainly creative examples, but they are games of language.  I
> > haven't seen any fundamental limitation that can't be trivially reflected
> > back and applied as an equivalent limitation of humans.
> You didn't state in which way my problem is invalid. That you consider it
> "just a game" doesn't change the objective conlusion at all.
>
> I actually fully agree with you that the *most* fundamental limitations of
> computers apply to humans as well (like for example being a particular
> thing with a particular structure). I am not one of those people who
> project a magical soul into humans that makes them more special than
> everything else.
> But that doesn't change the point that humans can do some things computers
> can't, which is very important and relevant.
>
> You might still believe that computers can do, for all intents and
> purposes,
> what humans can do, but I fail to see how similar examples of
> self-reference, self-manipulation, self-relativity don't occur all the time
> in high-level contexts.
>

Can you explain the above paragraph in another way?  I don't quite follow.


> And this is the reason that I fully expect computers to become much better
> than humans in terms of *relatively* low-level tasks (even on the level of
> reasoning about complex objectifiable topics), but not with regards to the
> most high level subjects (like consciousness, emotion, ambiguity,
> spirituality, axioms).
>

What do we have that computers don't, that lets us have consciousness,
emotions, ambiguity, spirituality, and axioms, but prohibits them from the
same?

Nick Bostrom has said: "Substrate is morally irrelevant. Whether somebody
is implemented on silicon or biological tissue, if it does not affect
functionality or consciousness, is of no moral significance.
Carbon-chauvinism is objectionable on the same grounds as racism."  You
seem to think something made of silicon cannot be conscious in the same way
as something made of carbon.  Do you attribute this to some special
property of carbon, to our evolutionary history, or to something else?


> By the way, for a similar reason I believe that humans are in *some ways*
> more limited than animals or plants, because they assume and know too much /
> are too concerned with relative notions (and thus can't go back to the
> "ignorance" of animals, which is intelligent in that it ignores relatively
> superficial issues like descriptions).
>
> So I am not saying humans>all, I am just saying that different kinds of
> intelligence on different levels (like computer-, animal-, plant-, human-,
> spirit-, environmental-, galaxy-, space-intelligence) can't be substituted,
> but actually amplify and complement each other. They each have certain
> limitations that others don't have.
>
>
I agree different implementations of intelligence have different
capabilities and roles, but I think computers are general enough to
replicate any intelligence (so long as infinities or true randomness are
not required).

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
