Hi Seth,

Thank you so much for taking the time to review the manual and paper.
Also thanks for addressing the numerous typos and spelling errors
(English is not my native language, as you might have guessed :-)

> The only high-level thing that worries me is that aa_change_hat() and
> aa_revert_hat() may not be very useful under RAppArmor. Consider
> removing them -- or the documentation in the JSS article -- as it
> represents significant additional complexity that won't be useful to
> most readers. (And may even prove dangerous if misunderstood.)

I agree that aa_change_hat() is a bit risky. It is in there because I
found it incredibly useful myself when getting started with AppArmor,
for doing some testing and debugging by easily switching security
contexts. But you are right: I need to use stronger wording in the
documentation and paper to emphasize that this is not a watertight
solution. It will prevent most 'general purpose' malicious activity,
but if the malicious code is actually aware of being sandboxed by
AppArmor, it can probably find its way out.

> - You've got e.g. aa_find_mountpoint() and aa_getcon() documented
> before the immensely more useful eval.secure(). If this guide is
> simply an alphabetical listing of functions, then I suppose it's the
> way it ought to be, but I have trouble seeing what use
> aa_find_mountpoint() would be to the typical R user. (aa_getcon() is
> at least potentially a useful debugging aide...)

The RAppArmor-manual.pdf is auto-generated from the inline comments; I
have no control over the layout or ordering. However, people rarely
read this document from A to Z, as the man pages are also directly
accessible from within R. The user can type e.g. ?eval.secure in the
console and see the man page for that function. The PDF is more of a
reference document for the archives that lists and documents each and
every function in the package in alphabetical order.


> - The eval.secure() documentation mentions that the current R session
> should not be influenced by the execution, but the RLIMIT_NPROC limits
> all processes owned by that user account. Once the "child" process is
> done, the limit is _still_ in place. I'm not sure if this is Good or
> Bad; it just Is, and I believe it should be documented both in the
> eval.secure() function and the more specific rlimit_nproc() function.

I am not quite sure this is correct. I have done some experimenting,
and it seems that, at least on Ubuntu 12.04, the NPROC limit is
enforced on a per-process basis. As I understand it, the total process
count at any given time is of course identical for all processes owned
by a user, but different processes can enforce different limits. So if
a user has 150 processes running, and one of them has rlimit_nproc set
to 100 while another has rlimit_nproc set to 200, the former won't be
able to fork, but the latter will.

You can verify this with the package by running the commands below.
The eval.secure function sets the limit to 100 in the child process
before executing the code; afterwards, when the child is done and has
been killed, the parent process still has the same nproc limit as
before:

> rlimit_nproc()
RLIMIT_NPROC:
Current limits: soft=39048; hard=39048

> out <- eval.secure(1+2, RLIMIT_NPROC=100)
RLIMIT_NPROC:
Previous limits: soft=39048; hard=39048
Current limits: soft=100; hard=100

> rlimit_nproc()
RLIMIT_NPROC:
Current limits: soft=39048; hard=39048
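The same per-process behavior can be checked outside of R as well.
Below is a minimal stand-alone sketch in Python (not part of
RAppArmor) that lowers RLIMIT_NPROC in a forked child and confirms
that the parent's limit is untouched afterwards:

```python
import os
import resource

# Read the parent's current NPROC limit.
soft_before, hard_before = resource.getrlimit(resource.RLIMIT_NPROC)

pid = os.fork()
if pid == 0:
    # Child: lower its own limit. Never try to raise it above the
    # current values, which an unprivileged process may not do.
    new = 100 if soft_before == resource.RLIM_INFINITY else min(100, soft_before)
    resource.setrlimit(resource.RLIMIT_NPROC, (new, new))
    os._exit(0)

os.waitpid(pid, 0)
soft_after, hard_after = resource.getrlimit(resource.RLIMIT_NPROC)
print("parent limit unchanged:", soft_after, hard_after)
```

On Linux this prints the same soft/hard pair before and after the
child exits, matching what the eval.secure transcript above shows.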


> - The rlimit_memlock() description should probably point out that
> "locked" memory is really intended for use for keys or passwords or
> other secrets that should not be saved to disk via swap; it isn't
> intended as a general "optimization" mechanism. (Does R expose
> mlock(2) or similar interfaces for user programs? Or would it only be
> useful from C- or C++-based modules? (Or FORTRAN??))
>
> - rlimit_nice() would sure be nice to have an example; the "20 -
> limit" is a bit awkward -- and I'm curious what it does for the
> current 'nice' value if the limits are set such that the current value
> is illegal. (From what I can read of the kernel code, it leaves the
> current 'nice' value alone; it looks like this rlimit only influences
> the nice(2) and setpriority(2) calls.

Hmm, you are right; I hadn't tested this properly yet. The getpriority
and setpriority wrappers are working as expected:

> getpriority()
[1] 0
> setpriority(10)
Setting priority...
[1] 10
> setpriority(100)
Setting priority...
[1] 19
> setpriority(10)
Setting priority...
Error in setpriority(10) : Failed to set priority to: 10.
Error: 13

But rlimit_nice doesn't seem to be doing anything at all at this
point. I'll have to look into this.
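Your reading of the kernel code matches what I see in a quick
stand-alone experiment (Python, not RAppArmor): RLIMIT_NICE only caps
future nice(2)/setpriority(2) calls, where a limit of L allows
lowering nice down to 20 - L, and it leaves the current value alone:

```python
import os
import resource

# RLIMIT_NICE does not change the current nice value; it only bounds
# how far future nice(2)/setpriority(2) calls may lower it.
soft, hard = resource.getrlimit(resource.RLIMIT_NICE)
print("RLIMIT_NICE:", soft, hard, "-> nice floor:", 20 - soft)

before = os.nice(0)      # an increment of 0 just reads the value
after = os.nice(3)       # raising niceness is always permitted
print("nice:", before, "->", after)

try:
    os.nice(-1)          # lowering usually requires privilege
except PermissionError:
    print("lowering niceness denied, as RLIMIT_NICE/privileges dictate")
```

This is consistent with the setpriority() transcript above: raising
the nice value always works, lowering it again fails without
privilege.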


> - The setuid() interface is very sparse; the documentation mentions a
> way to get the current id, but 'getuid()' isn't documented here.
> You'll probably also want wrappers for seteuid(2), setreuid(2),
> setresuid(2); setegid(2), setregid(2), setresgid(2); setgroups(2),
> getgroups(2); and the Linux-specific setfsuid(2) and getfsuid(2).
> (Maybe including all these is asking for more trouble than it is
> worth?)
>
> - eval.secure() probably _also_ needs some way to manage the
> supplementary groups; some security problems have come about because
> child processes had supplementary groups across an exec() that they
> should not have had. This could be a huge annoyance, but some
> mechanism to at least clear all existing supplementary groups might be
> most useful. (Of course, someone building an R application server will
> need to think this through carefully, same as anything else.)

The setuid function is indeed mostly there to be called by
eval.secure. Honestly, I have to study this topic some more to really
understand the differences between all these functions. I had sort of
assumed that calling setuid() would be equivalent to running the
process as if it had been started by that user, but maybe that is
incorrect.
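As I understand it so far, the conventional order for irreversibly
dropping privileges is: clear supplementary groups first, then the
group id, then the user id, since after setuid() the process can no
longer change its groups. A hypothetical sketch in Python (the
drop_privileges helper and the uid/gid 1000 are illustrative, not
RAppArmor code):

```python
import os

def drop_privileges(uid, gid):
    """Conventional order for irreversibly dropping privileges.

    Hypothetical sketch, not the RAppArmor implementation. Must be
    called as root; after setuid() the ids can no longer be changed.
    """
    os.setgroups([])   # 1. clear supplementary groups first
    os.setgid(gid)     # 2. then drop the primary group
    os.setuid(uid)     # 3. drop the user id last

if os.getuid() == 0:
    drop_privileges(1000, 1000)   # 1000: an illustrative ordinary uid
    print("now:", os.getuid(), os.getgid(), os.getgroups())
else:
    print("not root, skipping; current uid:", os.getuid())
```

If eval.secure only calls setuid(), the supplementary groups of the
parent would indeed leak into the child, which seems to be exactly the
problem you describe.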


> Do note that @HOME does not expand to /home/jeroen/ -- it expands
> to_all_ user's home directories (yes, even /root/). I don't know if
> this detail is important enough to document here, but it is an
> important detail. If you want to ensure your users cannot share data
> or code via /home/foo/R/ then you should also include the 'owner' rule
> prefix.

That is a good point. I am implicitly assuming that the standard
privileges are taken care of by the file access modes, and that users
haven't made their private files publicly readable. But I should
mention this in the paper.
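For my own notes, a sketch of what the stricter rule might look like
in the profile (the /R/ path is just illustrative):

```
# @{HOME} expands to every user's home directory, so this would let
# users share files via their home directories (given permissive
# file modes):
#   @{HOME}/R/** rw,
# The 'owner' prefix restricts matches to files the task's fsuid owns:
owner @{HOME}/R/** rw,
```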


> I've made _some_ changes for \proglang{Java}, etc. but I tired of this
> and thought it'd be better to just mention that some standardization
> would be ideal. Even making new commands \Java for \proglang{Java}, \R
> for \proglang{R}, etc., would go a long way to getting them all
> correct. (But sometimes R is a language specification and sometimes it
> is a specific executable tool. Does it matter which is which?)

Yes, I can be a bit sloppy; I should go over the document one more
time to make sure all the little layout details are OK :-)

Thanks again for your extensive feedback! I'm going to update some
things, and hopefully submit the paper somewhere in August. I'll keep
you (and others) posted about the developments.

-- 
AppArmor mailing list
AppArmor@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/apparmor
