On Tue, Jul 1, 2008 at 1:31 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> 2008/6/30 Vladimir Nesov <[EMAIL PROTECTED]>:
>>
>> It is the wrong level of organization: computing hardware is the physics
>> of computation; it isn't meant to implement specific algorithms, so I
>> don't quite see what you are arguing.
>>
>
> I'm not implementing a specific algorithm; I am controlling how
> resources are allocated. Currently the architecture does whatever the
> kernel says, from memory allocation to IRQ allocation. Instead,
> my architecture would allow any program to bid credit for a resource.
> The one that bids the most wins and spends its credit. Certain
> resources, like output memory space (i.e. if the program is controlling
> the display or an arm or something), allow the program to specify a
> bank, and give the program income.
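>
> Here's a rough sketch of the auction I have in mind (Python, with toy
> names; the real mechanism would sit at the allocator level):
>
>     from dataclasses import dataclass
>     from typing import Optional
>
>     @dataclass(eq=False)
>     class Program:
>         name: str
>         bank: float  # credit: spendable, but not directly writable by the program
>
>     @dataclass(eq=False)
>     class Resource:
>         name: str
>         owner: Optional[Program] = None
>
>     def run_auction(resource, bids):
>         """bids maps Program -> credit offered for `resource`."""
>         affordable = {p: b for p, b in bids.items() if b <= p.bank}
>         if not affordable:
>             return None  # nobody can pay; the resource sits idle this round
>         winner = max(affordable, key=affordable.get)
>         winner.bank -= affordable[winner]  # the winning bid is sunk
>         resource.owner = winner            # held until the next round of bidding
>         return winner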
>
> A bank is a special variable that can't be edited by programs normally
> but can be spent. The bank of an outputting program will be given
> credit depending upon how well the system as a whole is performing. If
> it is doing well, the amount of credit it gets will be above average;
> if poorly, below. After a certain time the resources will need
> to be bid for again. So credit is continually coming into the system
> and continually being sunk.
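>
> Roughly, the income and re-bidding rules could look like this (again a
> toy sketch; `performance` stands in for the global reinforcement
> signal, normalised so that 1.0 means average):
>
>     def pay_income(output_programs, performance, base_rate=10.0):
>         """Credit each outputting program's bank in proportion to how
>         well the system as a whole is doing."""
>         for p in output_programs:
>             p.bank += base_rate * performance  # above-average pay when > 1.0
>
>     def lease_expired(resources):
>         """After a fixed period every resource goes back up for auction,
>         so incoming credit is continually sunk by re-bidding."""
>         for r in resources:
>             r.owner = None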
>
> The system will be seeded with programs that can perform at a
> rudimentary level. E.g. you will have programs that know how to deal
> with visual input, and they will bid for the video camera interrupt.
> They will then sell their services for credit (so that they can bid for
> the interrupt again) to a program that correlates visual and auditory
> responses, which in turn sells its services to a high-level planning
> module, and so on down to the arm that actually earns the credit.
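>
> So one round might look like this (hypothetical module names, using the
> toy classes from the sketch above):
>
>     def buy_service(buyer, seller, price):
>         """Credit transfer mediated by the system: programs can spend
>         from their bank but never write to it directly."""
>         if buyer.bank < price:
>             return False
>         buyer.bank -= price
>         seller.bank += price
>         return True
>
>     vision = Program("vision", bank=50.0)
>     correlator = Program("audio-visual-correlator", bank=80.0)
>     camera_irq = Resource("camera-interrupt")
>
>     run_auction(camera_irq, {vision: 20.0})  # vision wins the camera interrupt
>     buy_service(correlator, vision, 15.0)    # correlator buys vision's output
>     # ...the planner buys from the correlator, and the arm, at the
>     # output end, earns the fresh income for the whole chain.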
>
> All these modules are subject to change and re-evaluation; they merely
> suggest one possible way for the system to be used. It is supposed to
> be ultimately flexible. You could instead seed it with a
> self-replicating neural simulator whose neurons try to hook their
> inputs and outputs up to other neurons; neurons would die out if they
> couldn't find anything to do.

Well, yes, you implement some functionality, but why would you
contrast it with the underlying levels (hardware, OS)? Like the Java
virtual machine, your system is a platform, and it does some things not
handled by lower levels, or, in this case, by any superficially
analogous platform.


>>> How to do this? Form an economy based on
>>> reinforcement signals: those that get more reinforcement signals can
>>> outbid the others for control of system resources.
>>
>> Where do reinforcement signals come from? What does this specification
>> improve over natural evolution, which needed billions of years to get
>> here (that is, why do you expect any results in the foreseeable future)?
>
> Most of the internals are programmed by humans, and they can be
> arbitrarily complex. The feedback comes from a human, or from a
> utility function, although those are harder to define. The architecture
> simply doesn't restrict the degrees of freedom that the programs
> inside it can explore.
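>
> For instance, the reinforcement hook could be as simple as this (names
> hypothetical; either signal feeds the income rule sketched earlier):
>
>     def human_feedback():
>         """Ask the experimenter to rate performance (1.0 = average)."""
>         return float(input("rate system performance, e.g. 0.5-2.0: "))
>
>     def utility_feedback(world_state, utility):
>         """Or derive the signal from a hand-written utility function,
>         normalised so that 1.0 means average performance."""
>         return utility(world_state)
>
>     # pay_income(output_programs, performance=human_feedback())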

If the internals are programmed by humans, why do you need an automatic
system to assess them? It would be useful if you needed to construct
and test some kind of combination/setting automatically, but not if
you just test manually programmed systems. How does the assessment
platform help in improving/accelerating the research?


>>> This is obviously reminiscent of Tierra and a million and one other
>>> alife systems. The difference is that I want the whole system to
>>> exhibit intelligence. Any form of variation is allowed, from random
>>> mutation to importing programs from the outside. It should be able to
>>> change the whole system, from the OS level up, based on that variation.
>>
>> What is your meaning of `intelligence'? I now see it as merely the
>> efficiency of an optimization process that drives the environment
>> towards higher utility, according to whatever criterion (reinforcement,
>> in your case). In this view, how does "I'll do the same, but with
>> intelligence" differ from "I'll do the same, but better"?
>>
> Tierra's artificial chemistry as a whole could not be said to have a
> goal. Or, to put it another way, applying the intentional stance to it
> probably wouldn't help you predict what it will do next. Applying the
> intentional stance to what my system does should help you predict what
> it does next.

What is the `intentional stance'? The intentional stance of what? What is it good for?


> This means Tierra's designer needs to use a bunch more resources to get
> a single useful system. Also the system might not do what he wants, but
> I don't think he minds that.
>
> I'm allowing humans to design everything, just allowing the very low
> level to vary. Is this clearer?

What do you mean by varying the low level, especially in human-designed systems?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

