2008/5/11 Russell Wallace <[EMAIL PROTECTED]>:
> On Sat, May 10, 2008 at 10:10 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>> It depends on the system you are designing on. I think you can easily
>> create as many types of sand box as you want in programming language E
>> (1) for example. If the principle of least authority (2) is embedded
>> in the system, then you shouldn't have any problems.
>
> Sure, I'm talking about much lower-level concepts though. For example,
> on a system with 8 gigabytes of memory, a candidate program has
> computed a 5 gigabyte string. For its next operation, it appends that
> string to itself, thereby crashing the VM due to running out of
> memory. How _exactly_ do you prevent this from happening (while
> meeting all the other requirements for an AI platform)? It's a
> trickier problem than it sounds like it ought to be.
>

I'm starting to mod qemu (not a straightforward process) to add
capabilities. The VM will have a set amount of memory, and if a
location outside that memory is referenced, it will raise a page fault
inside the VM rather than crashing it outright. The system inside the
VM can then deal with the fault however it wants to, something smarter
than "Oh no, I have made a bad memory reference, I must stop all my
work and lose everything!" Hopefully.
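
Very roughly, the behaviour I'm after looks like the sketch below.
This is not actual qemu code, and every name in it (guest_access,
raise_guest_page_fault, GUEST_RAM_SIZE) is made up for illustration;
it just shows an emulated memory access being checked against a fixed
budget and faulting the guest instead of killing the emulator.

    /* Sketch only, not real qemu code: every emulated load/store is
     * checked against the VM's fixed memory budget, and an out-of-range
     * access raises a fault *inside* the guest rather than aborting the
     * emulator.  All names here are invented for illustration. */

    #include <stdint.h>
    #include <stdio.h>

    #define GUEST_RAM_SIZE (16u * 1024u * 1024u)  /* fixed budget for this toy VM */

    static uint8_t guest_ram[GUEST_RAM_SIZE];

    /* In a real emulator this would set up the guest CPU to enter its own
     * page-fault handler; here it just records the event. */
    static void raise_guest_page_fault(uint64_t addr, int is_write)
    {
        fprintf(stderr, "guest page fault: %s at 0x%llx\n",
                is_write ? "write" : "read", (unsigned long long)addr);
    }

    /* Called for every emulated load/store.  Returns 0 on success, -1 if
     * the access faulted (the guest's handler then decides what to do). */
    static int guest_access(uint64_t addr, uint8_t *val, int is_write)
    {
        if (addr >= GUEST_RAM_SIZE) {
            raise_guest_page_fault(addr, is_write);
            return -1;
        }
        if (is_write)
            guest_ram[addr] = *val;
        else
            *val = guest_ram[addr];
        return 0;
    }

    int main(void)
    {
        uint8_t v = 42;
        guest_access(0x1000, &v, 1);              /* in range: plain write */
        guest_access(GUEST_RAM_SIZE + 8, &v, 0);  /* out of range: guest faults, VM survives */
        return 0;
    }

The important part is that the fault is delivered to whatever is
running inside the VM, which can free something, spill to its own
storage, or abandon that computation, while the emulator itself
carries on.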
In the grander scheme of things, the model that a computer has
unlimited virtual memory has to go as well. Otherwise you can end up
with important things paged out to the hard disk, lots of thrashing,
and ephemera sitting in main memory. You could still build high-level
abstractions on top, but unlimited virtual memory is not the one to
expose to the low-level programs.
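
Concretely (again just a sketch, with invented names like budget_alloc
and PROGRAM_BUDGET), a low-level program would see a fixed allowance
and an allocator that can say no, rather than an address space that
pretends to be endless:

    /* Sketch only: a trivial bump allocator over a fixed budget.  It
     * returns NULL when the budget is exhausted, so the 5 gigabyte
     * string example above would get a refusal it can react to, instead
     * of the OS quietly paging to disk and eventually falling over. */

    #include <stddef.h>
    #include <stdint.h>

    #define PROGRAM_BUDGET (4u * 1024u * 1024u)   /* this program's whole allowance */

    static uint8_t arena[PROGRAM_BUDGET];
    static size_t  used;

    void *budget_alloc(size_t n)
    {
        if (n > PROGRAM_BUDGET - used)
            return NULL;   /* caller must cope: shrink, spill, or give up */
        void *p = &arena[used];
        used += n;
        return p;
    }

    int main(void)
    {
        void *small = budget_alloc(1024);             /* fits */
        void *huge  = budget_alloc(PROGRAM_BUDGET);   /* refused: returns NULL */
        return (small != NULL && huge == NULL) ? 0 : 1;
    }

Whether the refusal is a NULL, a fault, or a signal is a design
choice; the point is that the limit is visible to the program instead
of being hidden behind the virtual memory abstraction.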
  Will Pearson
