Re: Safety and security

2004-03-26 Thread James Mastros
Larry Wall wrote:
Do bear in mind that Perl can execute bits of code as it's compiling,
so if a bit of code is untrustworthy, you shouldn't be compiling it
in the first place, unless you've prescanned it to reject C<use>,
C<BEGIN>, and other macro definitions, or (more usefully) have hooks
in the compiler to catch and validate those bits of code before
running them.  
In other words, the compiler must be sure to run immediate bits of code 
with the same restrictions as it would run the real code.

This isn't a parrot issue per se; it's a compiler issue, and I don't 
see how it requires additional mechanisms for parrot, unless possibly 
it's running one pbc (the compiler itself) with one set of 
restrictions/quotas, and another bytecode segment (pbc generated during 
the compile) with another set.

I think we were planning on that anyway (to allow libraries to be more 
trusted than the code that calls them, and callbacks to be less trusted).

	-=- James Mastros


Re: Safety and security

2004-03-26 Thread Dan Sugalski
At 2:57 PM +0100 3/26/04, James Mastros wrote:
Larry Wall wrote:
Do bear in mind that Perl can execute bits of code as it's compiling,
so if a bit of code is untrustworthy, you shouldn't be compiling it
in the first place, unless you've prescanned it to reject C<use>,
C<BEGIN>, and other macro definitions, or (more usefully) have hooks
in the compiler to catch and validate those bits of code before
running them.
In other words, the compiler must be sure to run immediate bits of 
code with the same restrictions as it would run the real code.

This isn't a parrot issue per se; it's a compiler issue, and I 
don't see how it requires additional mechanisms for parrot, unless 
possibly it's running one pbc (the compiler itself) with one set of 
restrictions/quotas, and another bytecode segment (pbc generated 
during the compile) with another set.

I think we were planning on that anyway (to allow libraries to be 
more trusted than the code that calls them, and callbacks to be less 
trusted).
Yup. Subroutines and methods are privilege boundaries, and code with 
extra rights may call into less privileged code safely. We need to 
work out the mechanism though.
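Dan's subroutine-boundary model can be sketched concretely. This is a hypothetical illustration in Python, not Parrot's actual mechanism, and every name in it is invented; the key property is that a call pushes the intersection of caller and callee rights, so a trusted library calling into a less-trusted callback never leaks privileges:

```python
class PrivilegeError(Exception):
    pass

class Interp:
    def __init__(self, privs):
        # The outermost code runs with the full initial privilege set.
        self.priv_stack = [frozenset(privs)]

    def check(self, priv):
        # A privileged operation consults only the innermost frame's rights.
        if priv not in self.priv_stack[-1]:
            raise PrivilegeError(priv)

    def call(self, sub, sub_privs):
        # The callee runs with the intersection of caller and callee
        # rights, so calling down into untrusted code never adds rights.
        self.priv_stack.append(self.priv_stack[-1] & frozenset(sub_privs))
        try:
            return sub(self)
        finally:
            self.priv_stack.pop()

def untrusted_callback(interp):
    interp.check("io_read")    # granted to this callback
    interp.check("io_write")   # not granted: raises PrivilegeError
```

The `finally` pop matters: rights are restored even if the less-privileged code throws.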
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-26 Thread Larry Wall
On Fri, Mar 26, 2004 at 09:26:45AM -0500, Dan Sugalski wrote:
: Yup. Subroutines and methods are privilege boundaries, and code with 
: extra rights may call into less privileged code safely. We need to 
: work out the mechanism though.

One thing you'll have to do in that case is disable the ability to peek
outward into your dynamic scope for various tidbits, such as $CALLER::_.
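Larry's $CALLER::_ example is worth making concrete. The toy lookup below is not Perl 6 semantics and all its names are invented; it just walks caller frames the way a dynamic-scope variable would, and shows why that walk has to stop at a privilege boundary:

```python
class Frame:
    def __init__(self, vars, boundary=False):
        self.vars = vars
        self.boundary = boundary  # True where a privilege boundary was crossed

def caller_lookup(stack, name):
    # Search outward through dynamic scope, innermost caller first.
    for frame in reversed(stack[:-1]):
        if name in frame.vars:
            return frame.vars[name]
        if frame.boundary:
            break  # never peek past a privilege boundary
    raise LookupError(name)

# Innermost frame last: privileged caller, sandbox entry, untrusted code.
stack = [
    Frame({"$_": "trusted secret"}),
    Frame({}, boundary=True),
    Frame({}),
]
```

Without the `boundary` stop, the untrusted frame could read "trusted secret" out of its caller.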

Larry


Re: Safety and security

2004-03-25 Thread Joe Schaefer
[EMAIL PROTECTED] (Dan Sugalski) writes:

 At 5:48 PM -0500 3/23/04, Joe Schaefer wrote:

[...]

 IMO, the advantage would be that parrot apps will have a better idea
 of what security model is appropriate.
 
 Well... maybe.
 
 Parrot apps don't get a whole lot of say here--this is more on the
 order of OS level security. Not that it makes a huge difference, of course.

To be specific, I was thinking about embedded parrot apps like mod_perl, 
where it might be nice to enforce a security policy on a per-vhost
(virtual server) basis.  That isn't something all parrot apps would 
benefit from, of course.

-- 
Joe Schaefer


Re: Safety and security

2004-03-25 Thread James Mastros
[EMAIL PROTECTED] wrote:
It can be safe.  Normally, PCC works by certifying the code during 
compilation, and attaching the machine-checkable certificate with the 
resulting compiled code (be that bytecode, machine code or whatever).  
During runtime, a certificate checker then validates the certificate 
against the provided compiled code, to assure that what the certificate 
says is true.
Oh.  In that case, the fact that it's proof carrying is just a 
particular case of signed code.  I think that's a solved problem in 
parrot, at least from a design-of-bytecode perspective.  It may have 
become unsolved recently, though.

I thought proof-carrying code contained a proof, not a certificate. 
(The difference is that a proof is verifiably true -- that is, its 
givens match reality, and each step is valid.  OTOH, a certificate is 
something that we have to use judgment to decide if we want to trust or 
not.)

The main requirement is that Parrot permits some sort of 'hooks', so that

1. during compilation, a certificate of proof can be generated and 
attached with the bytecode, and

2. before evaluation of the code, a certificate checker has to 
validate the certificate against the code, and also that

3. Parrot's bytecode format must allow such a certificate to be 
stored with the bytecode.
I think we're done with step 3, but not 1 and 2.

If you are directly eval'ing an arbitrary string, then yes, you have to 
generate the proof when you compile that string to PBC.  But you can 
also provide a program/subroutine/etc as PBC with a certificate already 
attached.
Note that in the common case, there are no eval STRINGs (at runtime), 
and thus all you have to do is prove that you don't eval STRING, which 
should be a much easier proposition.
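The "prove you don't eval STRING" idea really does reduce to a linear scan of the bytecode. A toy sketch, with invented opcode names, of what such a checker might look like:

```python
# Ops that would invoke the compiler at runtime (names invented).
FORBIDDEN = {"eval_str", "compile_str"}

def proves_no_dynamic_eval(bytecode):
    """True iff no instruction can hand a string to the compiler at runtime."""
    return all(op not in FORBIDDEN for op, *_ in bytecode)

static_prog  = [("push", 1), ("push", 2), ("add",), ("print",)]
dynamic_prog = [("push", "system 'rm -rf /'"), ("eval_str",)]
```

For code that passes this check, the much harder question of what an eval'd string might do never arises.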

Back to reality. I understand that many of Parrot's features would be
difficult to prove, but I'm not sure it's fundamentally any more
difficult than most OO languages.
AFAIK (although I don't know that much :), the Java VM has been proved 
secure to a large extent.
I suspect most code that wants to be provable will attempt to prove that 
it does not use those features, rather than prove that it uses them safely.

(As pointed out in a deleted bit of the grandparent post, this may 
consist of proving that it has a bit set in the header that says that it 
shouldn't be allowed to eval string, which is easy to prove, since it's 
a verifiable given.)

	-=- James Mastros


Re: Safety and security

2004-03-25 Thread Dan Sugalski
At 1:06 PM -0500 3/24/04, Joe Schaefer wrote:
[EMAIL PROTECTED] (Dan Sugalski) writes:

 At 5:48 PM -0500 3/23/04, Joe Schaefer wrote:
[...]

 IMO, the advantage would be that parrot apps will have a better idea
 of what security model is appropriate.
 Well... maybe.

 Parrot apps don't get a whole lot of say here--this is more on the
 order of OS level security. Not that it makes a huge difference, of course.
To be specific, I was thinking about embedded parrot apps like mod_perl,
where it might be nice to enforce a security policy on a per-vhost
(virtual server) basis.  That isn't something all parrot apps would
benefit from, of course.
Ah, *that* is a different matter altogether.

I'm planning an alternate mechanism for that, though it may be a bit 
much--rather than restricting the dangerous things we make sure all 
the dangerous things can be delegated to the embedder. So file 
manipulation, mass memory allocation/deallocation, and real low-level 
signal handling, for example, all get punted to the embedder, who can 
then do whatever they want.

This means that when we go read some data from a file we call, say, 
Parrot_read, which for the base parrot'll be just read, while for an 
embedded parrot it may call some Apache thunking layer or something 
instead.
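The delegation scheme Dan describes might look roughly like this sketch. Parrot_read is the name from the post; the host-ops table and the Apache-flavoured override are invented for illustration:

```python
import io

class HostOps:
    """Default host operations, used by a standalone interpreter."""
    def read(self, stream, n):
        return stream.read(n)

class ApacheHostOps(HostOps):
    """Hypothetical embedder override that audits every read."""
    def __init__(self):
        self.log = []
    def read(self, stream, n):
        data = stream.read(n)
        self.log.append(("read", len(data)))  # the embedder sees all I/O
        return data

def Parrot_read(host, stream, n):
    # All interpreter-internal I/O funnels through the host table, so an
    # embedder can impose policy without patching the interpreter.
    return host.read(stream, n)
```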
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-25 Thread Larry Wall
Do bear in mind that Perl can execute bits of code as it's compiling,
so if a bit of code is untrustworthy, you shouldn't be compiling it
in the first place, unless you've prescanned it to reject C<use>,
C<BEGIN>, and other macro definitions, or (more usefully) have hooks
in the compiler to catch and validate those bits of code before
running them.  Doesn't do you much good to disallow

eval 'system "rm -rf /"';

at run time if you don't also catch

BEGIN { system "rm -rf /"; }

at compile time...

(Sorry if I'm just pointing out the obvious.)

Larry


Re: Safety and security

2004-03-25 Thread Rafael Garcia-Suarez
Larry Wall wrote in perl.perl6.internals :
 Do bear in mind that Perl can execute bits of code as it's compiling,
 so if a bit of code is untrustworthy, you shouldn't be compiling it
 in the first place, unless you've prescanned it to reject C<use>,
 C<BEGIN>, and other macro definitions, or (more usefully) have hooks
 in the compiler to catch and validate those bits of code before
 running them.  Doesn't do you much good to disallow
 
 eval 'system "rm -rf /"';
 
 at run time if you don't also catch
 
 BEGIN { system "rm -rf /"; }
 
 at compile time...

That's mostly what Perl 5's Safe is doing. Hence my previous comment.

The major flaw with this approach is that it's probably not going to
prevent
eval 'while(1){}'
or
eval '$x = "take this!" x 1_000_000'
or my personal favourite, the always funny 
eval 'CORE::dump()'
unless you set up a very restrictive set of allowed ops.

(in each case, you abuse system resources: CPU, memory or ability to
send a signal. I don't know how to put restrictions on all of these
in the general case...)
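These examples all abuse resources rather than any single forbidden op, which is why the enforcement has to live in the runloop rather than in an op mask. A toy quota-checking loop (invented opcodes and limits) that catches both the infinite loop and the allocation bomb:

```python
class QuotaExceeded(Exception):
    pass

def run(ops, max_ops=10_000, max_bytes=1 << 20):
    executed = allocated = 0
    pc = 0
    while pc < len(ops):
        executed += 1
        if executed > max_ops:              # CPU quota: ops executed
            raise QuotaExceeded("CPU quota")
        op = ops[pc]
        if op[0] == "alloc":                # e.g. '"x" x 1_000_000'
            allocated += op[1]
            if allocated > max_bytes:       # memory quota: bytes claimed
                raise QuotaExceeded("memory quota")
        elif op[0] == "jump":               # e.g. 'while(1){}'
            pc = op[1]
            continue
        pc += 1

infinite_loop = [("jump", 0)]
memory_bomb   = [("alloc", 1_000_000), ("alloc", 1_000_000)]
```

Neither program contains anything an op-level whitelist would flag; only the accounting in the loop stops them.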


Re: Safety and security

2004-03-25 Thread Jarkko Hietaniemi
Rafael Garcia-Suarez wrote:

 prevent
 eval 'while(1){}'
 or
 eval '$x = "take this!" x 1_000_000'

Or hog both (for a small while):

eval 'while(push @a,0){}'

 or my personal favourite, the always funny 
 eval 'CORE::dump()'
 unless you set up a very restrictive set of allowed ops

 (in each case, you abuse system resources: CPU, memory or ability to
 send a signal. I don't know how to put restrictions on all of these
 in the general case...)



Re: Safety and security

2004-03-25 Thread Dan Sugalski
At 11:35 PM +0200 3/25/04, Jarkko Hietaniemi wrote:
Rafael Garcia-Suarez wrote:

 prevent
 eval 'while(1){}'
 or
 eval '$x = "take this!" x 1_000_000'
Or hog both (for a small while):

eval 'while(push @a,0){}'
Which, if the interpreter's running with quotas, will be caught when 
it either exceeds the allowable memory limits or CPU limits.

Yay, quotas! :)

  or my personal favourite, the always funny
 eval 'CORE::dump()'
 unless you set up a very restrictive set of allowed ops
 (in each case, you abuse system resources: CPU, memory or ability to
 send a signal. I don't know how to put restrictions on all of these
 in the general case...)


--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-24 Thread Joe Schaefer
[EMAIL PROTECTED] (Dan Sugalski) writes:

[...]

 #s 3&4 deal with security. This... this is a dodgier issue. Security's
 easy to get wrong and hard to get right. (Though quotas are
 straightforward enough. Mostly) And once the framework's in place,
 there's the issue of performance--how do we get good performance in
 the common (insecure) case without sacrificing security in the secure case?

You might wish to consider a modular design here, similar to linux 2.6's 
security modules (LSM)

  http://www.nsa.gov/selinux/papers/module/x47.html

IMO, the advantage would be that parrot apps will have a better idea 
of what security model is appropriate. So if the modular security hooks
can be made cheap enough, the more vexing security/performance tradeoffs 
can be left up to the parrot apps.
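An LSM-style hook layer for an interpreter might look something like the following sketch. Everything here is invented; the point is only that the default policy is a cheap allow-all, and an embedder (a vhost, say) can install something stricter:

```python
class Denied(Exception):
    pass

class AllowAll:
    """Default policy: the common, insecure case stays nearly free."""
    def check(self, hook, **ctx):
        return True

class VhostPolicy:
    """Hypothetical per-virtual-host policy for an embedded interpreter."""
    def __init__(self, allowed_roots):
        self.allowed_roots = tuple(allowed_roots)
    def check(self, hook, **ctx):
        if hook == "file_open":
            return ctx["path"].startswith(self.allowed_roots)
        return True  # other hooks unrestricted for this vhost

_policy = AllowAll()

def install_policy(policy):
    global _policy
    _policy = policy

def security_check(hook, **ctx):
    # The interpreter calls this at each named checkpoint.
    if not _policy.check(hook, **ctx):
        raise Denied(hook)
```

The interpreter only pays for the checks the installed module actually implements, which is how LSM keeps the hooks cheap.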

No clue how to achieve this though- just a thought from a member of the
peanut gallery.
-- 
Joe Schaefer


Re: Safety and security

2004-03-24 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:

 At any rate, perl 5's Safe module is a good example of the Wrong Way
 to do security, and as such we're going to take it as a cautionary
 tale rather than a template.

Ok. What about Ponie?

leo


Re: Safety and security

2004-03-24 Thread Dan Sugalski
At 2:50 PM +0100 3/24/04, Leopold Toetsch wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:

 At any rate, perl 5's Safe module is a good example of the Wrong Way
 to do security, and as such we're going to take it as a cautionary
 tale rather than a template.
Ok. What about Ponie?
What about it? Safe's one of those modules that's guaranteed to not 
work under Ponie, as are a number of the B modules. That's OK.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-24 Thread Rafael Garcia-Suarez
Dan Sugalski wrote in perl.perl6.internals :
 At 2:50 PM +0100 3/24/04, Leopold Toetsch wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:

  At any rate, perl 5's Safe module is a good example of the Wrong Way
  to do security, and as such we're going to take it as a cautionary
  tale rather than a template.

Ok. What about Ponie?
 
 What about it? Safe's one of those modules that's guaranteed to not 
 work under Ponie, as are a number of the B modules. That's OK.

Why?

OK, I understand that Ponie will compile Perl 5 source to parrot ops,
and that Safe's interface uses perl ops. However it's a pure
compile-time module -- it hooks into the optree construction routines --
so it may be possible to have an equivalent of it under Ponie.

(not saying that this would be necessarily a good idea, though)

-- 
rgs


Re: Safety and security

2004-03-24 Thread Dan Sugalski
At 2:50 PM + 3/24/04, Rafael Garcia-Suarez wrote:
Dan Sugalski wrote in perl.perl6.internals :
 At 2:50 PM +0100 3/24/04, Leopold Toetsch wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:

  At any rate, perl 5's Safe module is a good example of the Wrong Way
  to do security, and as such we're going to take it as a cautionary
  tale rather than a template.
Ok. What about Ponie?
 What about it? Safe's one of those modules that's guaranteed to not
 work under Ponie, as are a number of the B modules. That's OK.
Why?

OK, I understand that Ponie will compile Perl 5 source to parrot ops,
and that Safe's interface uses perl ops. However it's a pure
compile-time module -- it hooks into the optree construction routines --
so it may be possible to have an equivalent of it under Ponie.
It may be possible, but I'd not count on it. And given how busted it 
is, I think I'd actually prefer it not work.

Anything that twiddles deep in the internals of the interpreter is 
going to fail, and there's not a whole lot we can do about that--our 
internals look very different, and there's a lot that just can't be 
emulated.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-24 Thread Dan Sugalski
At 5:48 PM -0500 3/23/04, Joe Schaefer wrote:
[EMAIL PROTECTED] (Dan Sugalski) writes:

[...]

 #s 3&4 deal with security. This... this is a dodgier issue. Security's
 easy to get wrong and hard to get right. (Though quotas are
 straightforward enough. Mostly) And once the framework's in place,
 there's the issue of performance--how do we get good performance in
 the common (insecure) case without sacrificing security in the secure case?
You might wish to consider a modular design here, similar to linux 2.6's
security modules (LSM)
  http://www.nsa.gov/selinux/papers/module/x47.html

IMO, the advantage would be that parrot apps will have a better idea
of what security model is appropriate.
Well... maybe.

Parrot apps don't get a whole lot of say here--this is more on the 
order of OS level security. Not that it makes a huge difference, of 
course.

I'm not familiar with the new linux system, and I'm not *going* to 
get familiar enough with it to make any sensible decisions, so I 
think I'd prefer to stick with a system I'm comfortable with and that 
I know's got a solid background. (So at least any problems are a 
matter of implementation rather than design -- those, at least, are 
fixable)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-24 Thread Dan Sugalski
At 12:36 PM +1100 3/24/04, [EMAIL PROTECTED] wrote:
On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:

At any rate, perl 5's Safe module is a good example of the Wrong 
Way to do security, and as such we're going to take it as a 
cautionary tale rather than a template. For security I want to go 
with an explicit privilege model with privilege checking in 
parrot's internals, rather than counting on op functions to Do The 
Right Thing. That means that IO restrictions are imposed by the IO 
code, not the IO ops, and suchlike stuff. Generally speaking, we're 
going to emulate the VMS quota and privilege system, as it's 
reasonably good as these things go.

If we're going to tackle this, though, we need to pull in some 
folks who're actually competent at it before we do more than 
handwave about the design.
This is a question without a simple answer, but does Parrot provide 
an infrastructure so that it would be possible to have 
proof-carrying[1] Parrot bytecode?
In the general sense, no. The presence of eval and the dynamic nature 
of the languages we're looking at pretty much shoots down most of the 
provable bytecode work. Unfortunately.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-24 Thread Steve Fink
On Mar-24, Dan Sugalski wrote:
 At 12:36 PM +1100 3/24/04, [EMAIL PROTECTED] wrote:
 On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:
 
 This is a question without a simple answer, but does Parrot provide 
 an infrastructure so that it would be possible to have 
 proof-carrying[1] Parrot bytecode?
 
 In the general sense, no. The presence of eval and the dynamic nature 
 of the languages we're looking at pretty much shoots down most of the 
 provable bytecode work. Unfortunately.

? I'm not sure if I understand why. (Though I should warn that I did
not read the referenced paper; my concept of PCC comes from reading a
single CMU paper on it a couple of years ago.) My understanding of PCC
is that it freely allows any arbitrarily complex code to be run, as
long as you provide a machine-interpretable (and valid) proof of its
safety along with it. Clearly, eval'ing arbitrary strings cannot be
proved to be safe, so no such proof can be provided (or if it is, it
will be discovered to be invalid.) But that just means that you have to
avoid unprovable constructs in your PCC-boxed code.

Eval'ing a specific string *might* be provably safe, which means that
we should have a way for an external (untrusted) compiler to not only
produce bytecode, but also proofs of the safety of that bytecode. We'd
also need, of course, the trusted PCC-equipped bytecode loader to
verify the proof before executing the bytecode. (And we'd need that
to load in and prove the initial bytecode anyway.)

This would largely eliminate one of the main advantages of PCC, namely
that the expensive construction of a proof need not be paid at
runtime, only the relatively cheap proof verification. But if it is
only used for small, easily proven eval's, then it could still make
sense. The fun bit would be allowing the eval'ed code's proof to
reference aspects of the main program's proof. But perhaps the PCC
people have that worked out already?
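The cheap-verification/expensive-construction split Steve describes can be shown with a toy certificate format (entirely invented): the compiler ships every branch with its claimed target, and the loader re-checks each claim in one linear pass rather than discovering the control flow itself:

```python
def verify(bytecode, certificate):
    """Check the certificate's claims against the actual bytecode."""
    branches = {i for i, (op, *_) in enumerate(bytecode) if op == "jump"}
    if set(certificate) != branches:
        return False                    # a branch the proof doesn't cover
    for i, target in certificate.items():
        if bytecode[i] != ("jump", target):
            return False                # proof disagrees with the code
        if not 0 <= target < len(bytecode):
            return False                # branch escapes the segment
    return True

prog = [("push", 0), ("jump", 3), ("halt",), ("jump", 0)]
good_cert = {1: 3, 3: 0}
bad_cert  = {1: 3, 3: 99}
```

Building `good_cert` is the compiler's (potentially expensive) job; `verify` is a single pass, which is what makes paying it at eval time tolerable for small strings.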

Let me pause a second to tighten the bungee cord attached to my
desk -- all this handwaving, and I'm starting to lift off a little.

The next step into crazy land could be allowing the proofs to express
detailed properties of strings, such that they could prove that a
particular string could not possibly compile down to unsafe bytecode.
This would only be useful for very restricted languages, of course,
and I'd rather floss my brain with diamond-encrusted piano wire than
attempt to implement such a thing, but I think it still serves as a
proof of concept that Parrot and PCC aren't totally at odds.

Back to reality. I understand that many of Parrot's features would be
difficult to prove, but I'm not sure it's fundamentally any more
difficult than most OO languages. (I assume PCC allows you to punt on
proofs to some degree by inserting explicit checks for unprovable
properties, since then the guarded code can make use of those
properties to prove its own safety.)


Re: Safety and security

2004-03-24 Thread ozone
On 25/03/2004, at 2:39 PM, Steve Fink wrote:

On Mar-24, Dan Sugalski wrote:
At 12:36 PM +1100 3/24/04, [EMAIL PROTECTED] wrote:
On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:

This is a question without a simple answer, but does Parrot provide
an infrastructure so that it would be possible to have
proof-carrying[1] Parrot bytecode?
In the general sense, no. The presence of eval and the dynamic nature
of the languages we're looking at pretty much shoots down most of the
provable bytecode work. Unfortunately.
? I'm not sure if I understand why. (Though I should warn that I did
not read the referenced paper; my concept of PCC comes from reading a
single CMU paper on it a couple of years ago.) My understanding of PCC
is that it freely allows any arbitrarily complex code to be run, as
long as you provide a machine-interpretable (and valid) proof of its
safety along with it.

Clearly, eval'ing arbitrary strings cannot be proved to be safe,
It can be safe.  Normally, PCC works by certifying the code during 
compilation, and attaching the machine-checkable certificate with the 
resulting compiled code (be that bytecode, machine code or whatever).  
During runtime, a certificate checker then validates the certificate 
against the provided compiled code, to assure that what the certificate 
says is true.

If you eval an arbitrary string, the compile/evaluate stages are more 
closely linked: you effectively run the code (and thus check the 
certificate) immediately after compilation.

The main requirement is that Parrot permits some sort of 'hooks', so 
that

1. during compilation, a certificate of proof can be generated and 
attached with the bytecode, and

2. before evaluation of the code, a certificate checker has to 
validate the certificate against the code, and also that

3. Parrot's bytecode format must allow such a certificate to be 
stored with the bytecode.
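Those three hooks might be sketched as a minimal API. The certified property here (highest register used) and all of the names are invented; the shape is what matters: certify at compile time, carry the certificate in the container, re-check it before the code runs:

```python
# Each op in this toy bytecode is a (name, register) pair.

def certify(bytecode):
    # Hook 1, at compile time: record a fact that is expensive to
    # establish in general but cheap to re-check.
    return {"max_reg": max((r for op, r in bytecode), default=0)}

def package(bytecode, cert):
    # Hook 3: the bytecode container carries its certificate.
    return {"code": bytecode, "cert": cert}

def load(pbc, num_regs=32):
    # Hook 2, before evaluation: validate the certificate against the code.
    claimed = pbc["cert"]["max_reg"]
    actual = max((r for op, r in pbc["code"]), default=0)
    if actual != claimed or claimed >= num_regs:
        raise ValueError("certificate check failed")
    return pbc["code"]
```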

Eval'ing a specific string *might* be provably safe, which means that
we should have a way for an external (untrusted) compiler to not only
produce bytecode, but also proofs of the safety of that bytecode. We'd
also need, of course, the trusted PCC-equipped bytecode loader to
verify the proof before executing the bytecode. (And we'd need that
to load in and prove the initial bytecode anyway.)
This would largely eliminate one of the main advantages of PCC, namely
that the expensive construction of a proof need not be paid at
runtime, only the relatively cheap proof verification.
If you are directly eval'ing an arbitrary string, then yes, you have to 
generate the proof when you compile that string to PBC.  But you can 
also provide a program/subroutine/etc as PBC with a certificate already 
attached.

Back to reality. I understand that many of Parrot's features would be
difficult to prove, but I'm not sure it's fundamentally any more
difficult than most OO languages.
AFAIK (although I don't know that much :), the Java VM has been proved 
secure to a large extent.

--
% Andre Pang : trust.in.love.to.save


Safety and security

2004-03-23 Thread Dan Sugalski
Okay, we'll try this again... (darned cranky mail clients)

We've two big issues to deal with here--safety and security. While 
related they aren't the same and there are different things that need 
doing. As far as I can see it, we need four things:

1) An oploop that checks branch destinations for validity

2) Opcodes that check their parameters for basic sanity--valid 
register numbers (0-31) and basically correct (ie non-NULL) register 
contents

3) An oploop that checks basic quotas, mainly run time

4) Opcodes that check to see if you can actually do the thing you've requested

#s 1&2 are safety issues. #2, specifically, can be dealt with by the 
opcode preprocessor, generating op bodies that do validity checking. 
#1 needs a bounds-checking runloop, which we mostly have already. I'm 
comfortable getting this done now, and this is what the framework 
that's going in should be able to handle OK.
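Points 1 and 2 can be sketched as a runloop that validates branch destinations and ops that validate their register numbers. The 0-31 register file is from the post; the opcodes are invented:

```python
class SafetyError(Exception):
    pass

NUM_REGS = 32

def safe_run(ops):
    regs = [0] * NUM_REGS
    pc = 0
    while 0 <= pc < len(ops):
        op, *args = ops[pc]
        if op == "set":
            r, v = args
            if not 0 <= r < NUM_REGS:       # point 2: parameter sanity
                raise SafetyError(f"bad register {r}")
            regs[r] = v
        elif op == "branch":
            (target,) = args
            if not 0 <= target < len(ops):  # point 1: valid destination
                raise SafetyError(f"bad branch target {target}")
            pc = target
            continue
        elif op == "halt":
            return regs
        pc += 1
    return regs
```

In a generated-op scheme, the register check would be emitted into each op body by the opcode preprocessor; it lives in `safe_run` here only to keep the sketch short.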

#s 3&4 deal with security. This... this is a dodgier issue. 
Security's easy to get wrong and hard to get right. (Though quotas 
are straightforward enough. Mostly) And once the framework's in 
place, there's the issue of performance--how do we get good 
performance in the common (insecure) case without sacrificing 
security in the secure case?

At any rate, perl 5's Safe module is a good example of the Wrong Way 
to do security, and as such we're going to take it as a cautionary 
tale rather than a template. For security I want to go with an 
explicit privilege model with privilege checking in parrot's 
internals, rather than counting on op functions to Do The Right 
Thing. That means that IO restrictions are imposed by the IO code, 
not the IO ops, and suchlike stuff. Generally speaking, we're going 
to emulate the VMS quota and privilege system, as it's reasonably 
good as these things go.
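The "restrictions imposed by the IO code, not the IO ops" rule can be illustrated with a sketch (all names invented): the privilege check lives in the one internal routine every path funnels through, so an op body, an extension, or embedding code all hit the same guard:

```python
class PrivilegeError(Exception):
    pass

class Interp:
    def __init__(self, privs):
        self.privs = set(privs)

    def io_internal_open(self, path, mode):
        # The single choke point: privilege is checked here, not in
        # whichever op or API function requested the open.
        need = "io_write" if "w" in mode else "io_read"
        if need not in self.privs:
            raise PrivilegeError(need)
        return (path, mode)   # stand-in for a real filehandle

def op_open(interp, path, mode):
    # The opcode body does no checking of its own; it cannot forget to,
    # and neither can any other caller of io_internal_open.
    return interp.io_internal_open(path, mode)
```

Checking in the ops instead would leave every alternative entry point (extensions, embedder calls, future ops) as a potential bypass, which is the Safe-module mistake in miniature.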

If we're going to tackle this, though, we need to pull in some folks 
who're actually competent at it before we do more than handwave about 
the design.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Safety and security

2004-03-23 Thread Jarkko Hietaniemi
  At any rate, perl 5's Safe module is a good example of the Wrong Way 
 to do security, and as such we're going to take it as a cautionary 
 tale rather than a template. For security I want to go with an 
 explicit privilege model with privilege checking in parrot's 
 internals, rather than counting on op functions to Do The Right 
 Thing. That means that IO restrictions are imposed by the IO code, 
 not the IO ops, and suchlike stuff. Generally speaking, we're going 
 to emulate the VMS quota and privilege system, as it's reasonably 
 good as these things go.

For people who are wondering what has Dan got in his pipe today:
http://www.sans.org/rr/papers/22/604.pdf
And here a bit about quotas:
http://h71000.www7.hp.com/DOC/72final/5841/5841pro_028.html#58_quotasprivilegesandprotecti
(I swear I didn't make up the URL, HP did)

 If we're going to tackle this, though, we need to pull in some folks 
 who're actually competent at it before we do more than handwave about 
 the design.


Re: Safety and security

2004-03-23 Thread ozone
On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:

At any rate, perl 5's Safe module is a good example of the Wrong Way 
to do security, and as such we're going to take it as a cautionary 
tale rather than a template. For security I want to go with an 
explicit privilege model with privilege checking in parrot's 
internals, rather than counting on op functions to Do The Right Thing. 
That means that IO restrictions are imposed by the IO code, not the IO 
ops, and suchlike stuff. Generally speaking, we're going to emulate 
the VMS quota and privilege system, as it's reasonably good as these 
things go.

If we're going to tackle this, though, we need to pull in some folks 
who're actually competent at it before we do more than handwave about 
the design.
This is a question without a simple answer, but does Parrot provide an 
infrastructure so that it would be possible to have proof-carrying[1] 
Parrot bytecode?  I'm of course not advocating that we should look into 
proof-carrying code immediately, but I think it's important to realise 
that PCC exists, and that Parrot should be forward-compatible with it, 
if people want to put PCC concepts into Parrot at a later stage.

1. http://www.cs.princeton.edu/sip/projects/pcc/ -- Google around for 
plenty of other links!

--
% Andre Pang : trust.in.love.to.save