On Tue, Jul 1, 2008 at 2:02 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> 2008/6/30 Vladimir Nesov <[EMAIL PROTECTED]>:
>>
>> Well, yes, you implement some functionality, but why would you
>> contrast it with underlying levels (hardware, OS)?
>>
>> Like Java virtual
>> machine, your system is a platform, and it does some things not
>> handled by lower levels, or, in this case, by any superficially
>> analogous platforms.
>
> Because I want it done in silicon at some stage. It is also assumed to
> be the whole system, that is, with no other significant programs on it.
> Machines that run Lisp natively have been made; this makes the most
> sense as the whole computer, rather than as a component.

OK, you could've just said so from the start. :-)


>> If the internals are programmed by humans, why do you need an automatic
>> system to assess them? It would be useful if you needed to construct
>> and test some kind of combination/setting automatically, but not if
>> you just test manually-programmed systems. How does the assessment
>> platform help in improving/accelerating the research?
>>
>
> Because to be interesting the human-specified programs need to be
> autogenous, in Josh Storrs Hall's terminology, which means
> self-building: capable of altering the stuff they are made of, in this
> case the machine-code equivalent. So you need the human to assess the
> improvements the system makes, for whatever purpose the human wants
> the system to serve.
>

Altering the stuff they are made of is instrumental to achieving the
goal, and should be performed where necessary, but it doesn't happen,
for example, with individual brains. (I was planning to do the next
blog post on this theme, maybe tomorrow.) Do you mean to create a
population of altered initial designs and somehow select from them? (I
hope not; that is orthogonal to what modification is for in the first
place.) Otherwise, why do you still need automated testing? Could you
present a more detailed use case?


>>> Terran's artificial chemistry as a whole could not be said to have a
>>> goal. Or, to put it another way, applying the intentional stance to it
>>> probably wouldn't help you predict what it did next. Applying the
>>> intentional stance to what my system does should help you predict what
>>> it does.
>>
>> What is `intentional stance'? Intentional stance of what? What is it
>> good for?
>
> http://en.wikipedia.org/wiki/Intentional_stance
>
> It is folk psychology, good for predicting systems when you don't know
> how they were designed.

OK, that is useful, thanks. I only skimmed the Wikipedia article, but
as it integrates into my model of intelligence as optimization for
goals, I think it's wrong to say that any system that does anything at
all is without a goal, or, in these terms, without an intentional
stance. The applicability of the term seems to require a minimal level
of optimization power, so that rocks won't qualify, but evolution looks
powerful enough, especially over a long enough time: little nudges,
adding together, creating complex designs. The complexity and local
optimization power of these designs represent the goal, even if it's
hard to _visualize_. The optimization power is low, however, which
maps onto the distinction you make on this point.


>>> This means he needs to use a bunch more resources to get a single
>>> useful system. Also the system might not do what he wants, but I don't
>>> think he minds about that.
>>>
>>> I'm allowing humans to design everything, just allowing the very low
>>> level to vary. Is this clearer?
>>
>> What do you mean by varying low level, especially in human-designed systems?
>>
> The machine code the program is written in. Or, in a Java VM, the Java
> bytecode.
>

This still doesn't make the point clearer. You can't vary the
semantics of the low-level elements from which software is built, and
if you don't modify the semantics, any other modification is
superficial and irrelevant. If it's not quite 'software' that you are
running, and it is able to survive modification of the lower level,
then using terms like 'machine code' and 'software' is misleading. And
in any case, it's not clear what this modification of the low level
achieves. You can't extract work from obfuscation and tinkering; the
optimization comes from lawful and consistent pressure in the same
direction.
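To make concrete what "altering the stuff they are made of" could mean at the machine-code level, here is a minimal toy sketch (entirely hypothetical, not from this thread; instruction names and the `run` interpreter are invented for illustration): a tiny interpreter whose programs can overwrite their own instructions. Note the distinction at issue: the instruction *semantics* stay fixed in the interpreter, and only the program text varies.

```python
# Toy self-modifying "machine code": a program is a list of
# (opcode, argument) pairs, and SETI lets the program rewrite itself.
def run(program, steps=20):
    """Interpret a tiny fixed instruction set for at most `steps` cycles."""
    acc, pc = 0, 0
    for _ in range(steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        if op == "ADD":            # acc += arg
            acc += arg
        elif op == "SETI":         # autogenous step: overwrite instruction
            idx, instr = arg       # arg = (index, new instruction)
            program[idx] = instr
        elif op == "JMP":          # jump to absolute instruction index
            pc = arg
            continue
        elif op == "HALT":
            break
        pc += 1
    return acc

# A program that rewrites its own first instruction, then loops back
# so the rewritten instruction actually executes:
prog = [
    ("ADD", 1),
    ("SETI", (0, ("ADD", 10))),    # change instruction 0 from ADD 1 to ADD 10
    ("JMP", 0),
    ("HALT", None),
]
acc = run(prog)
```

The point of the sketch: even here, what `ADD`, `SETI`, and `JMP` *mean* is fixed by the interpreter; the program only varies which of them it is made of.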

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

