On 28/02/2008, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
>
> On 2/28/08, William Pearson <[EMAIL PROTECTED]> wrote:
> > I'm going to try and elucidate my approach to building an intelligent
>  > system, in a round about fashion. This is the problem I am trying to
> > solve.
> >
> > Imagine you are designing a computer system to solve an unknown
> > problem, and you have these constraints
>  >
> > A) Limited space to put general information about the world
> > B) Communication with the system after it has been deployed. The less
> > the better.
> > C) We shall also assume limited processing ability etc
>  >
> > The goal is to create a system that can solve the tasks as quickly as
> > possible with the least interference from the outside.
> >
> > I'd like people to write a brief sketch of your solution to this sort
>  > of problem down. Is it different from your AGI designs, if so why?
>
>
> Space/time-optimality is not my top concern.  I'm focused on building an AGI
> that *works*, within reasonable space/time.  If you add these constraints,
> you're making the AGI problem harder than it already is.  Ditto for the
> amount of user interaction.  Why make it harder?

I'm not looking for optimality, just that better matters. I don't want
to have to hold my system's hand, teaching it laboriously, so the less
information I have to feed it the better. Why ignore the problem and
make the job of teaching it harder?

Also, we have limited space and time in the real world.

> > System Sketch? -> It would have to be generally programmable. I would
> > want to be able to send it arbitrary programs after it had been
> > created, so I could send it a program to decrypt things or control
> > things. It would also need to be able to generate its own programming
> > and select between the different programs in order to minimise my need
> > to program it. It is not different to my AGI design, unsurprisingly.
>
>
> Generally programmable, yes.  But that's very broad.  Many systems have this
> property.

Note I want something different from computational universality. E.g.
Von Neumann architectures are generally programmable; pure Harvard
architectures aren't, as their program memory can't be rewritten at run
time.

http://en.wikipedia.org/wiki/Harvard_architecture
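A minimal sketch of the kind of run-time reprogrammability I mean (my own illustration, not from the thread; the function names `load_program`, `run`, and `best` are invented for the example). The system accepts new program text as data after deployment and selects between its programs, which a pure Harvard machine with fixed program memory cannot do:

```python
def load_program(source):
    """Compile program text received after deployment into a callable."""
    namespace = {}
    exec(source, namespace)  # von Neumann style: code arrives as data
    return namespace["run"]

programs = {}

# Send the deployed system new programs as plain data.
programs["double"] = load_program("def run(x):\n    return 2 * x")
programs["square"] = load_program("def run(x):\n    return x * x")

def best(programs, inputs):
    """Select between programs by some observed score (here: total output)."""
    return max(programs, key=lambda name: sum(programs[name](i) for i in inputs))

print(programs["double"](21))     # 42
print(best(programs, [3, 4, 5]))  # square (9+16+25 beats 6+8+10)
```

The selection criterion here is a stand-in; the point is only that both the programs and the choice among them can change after the system has been shipped.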


> Even a system with only a declarative KB can re-program itself by modifying
> the KB.

So a program could get in and remove all the items from the KB? You
can have viruses etc inside the KB?

 Will Pearson

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/