On 2/28/08, William Pearson <[EMAIL PROTECTED]> wrote:
> I'm going to try and elucidate my approach to building an intelligent
> system, in a roundabout fashion. This is the problem I am trying to
> solve.
>
> Imagine you are designing a computer system to solve an unknown
> problem, and you have these constraints
>
> A) Limited space to put general information about the world
> B) Communication with the system after it has been deployed. The less
> the better.
> C) We shall also assume limited processing ability, etc.
>
> The goal is to create a system that can solve the tasks as quickly as
> possible with the least interference from the outside.
>
> I'd like each of you to write down a brief sketch of your solution to this
> sort of problem. Is it different from your AGI designs, and if so, why?

Space/time-optimality is not my top concern.  I'm focused on building an AGI
that *works*, within reasonable space/time.  If you add these constraints,
you're making the AGI problem harder than it already is.  Ditto for the
amount of user interaction.  Why make it harder?

> System Sketch? -> It would have to be generally programmable, I would
> want to be able to send it arbitrary programs after it had been
> created, so I could send it a program to decrypt things or control
> things. It would also need to be able to generate its own programming
> and select between the different programs in order to minimise my need
> to program it. It is not different to my AGI design, unsurprisingly.
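
A rough Python sketch of the generate-and-select loop described above, with
made-up names (Program, generate_variant, run_and_score); this is purely
illustrative and not either poster's actual design:

import random

class Program:
    def __init__(self, code):
        self.code = code        # some executable representation
        self.reward = 0.0       # running estimate of usefulness on past tasks

def generate_variant(parent):
    """Produce a mutated copy of an existing program (stub)."""
    return Program(parent.code + "<mutation>")

def best(pool):
    """Select the program with the highest estimated reward."""
    return max(pool, key=lambda p: p.reward)

def run_and_score(program, task):
    """Execute the program on the task and return a reward (stub)."""
    return random.random()

pool = [Program("seed")]
for task in ["decrypt", "control"]:            # tasks sent in from outside
    pool.append(generate_variant(best(pool)))  # self-generated programming
    for p in pool:
        p.reward = run_and_score(p, task)      # internal selection pressure

In this sketch the only communication needed from outside is the stream of
tasks (and, optionally, hand-written programs dropped into the pool);
generation and selection happen internally.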


Generally programmable, yes.  But that's very broad.  Many systems have this
property.  Even a system with only a declarative KB can re-program itself by
modifying the KB.
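
For example, a toy forward-chaining KB in Python where the rules are
themselves data, so firing a rule can install further rules (purely
illustrative, not any particular KB formalism):

kb = {
    "facts": {"task(decrypt)"},
    "rules": [
        # (condition, consequence): if condition is a known fact, add consequence
        ("task(decrypt)", "need(key_search)"),
    ],
}

def step(kb):
    # One forward-chaining pass; a derived fact may install a new rule,
    # i.e. the KB re-programs itself by modifying its own rule set.
    for cond, cons in list(kb["rules"]):
        if cond in kb["facts"] and cons not in kb["facts"]:
            kb["facts"].add(cons)
            if cons == "need(key_search)":
                kb["rules"].append(("need(key_search)", "plan(brute_force)"))

step(kb)
step(kb)
# kb["facts"] now also contains "need(key_search)" and "plan(brute_force)"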

YKY
