Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
On Mon, Apr 16, 2007 at 11:00:01PM +0100, Alan Cox wrote:
> > don't actually have to care --- if loading an invalid profile can bring down
> > the system, then that's no worse than an arbitrary module that crashes the
> > machine. Not sure if there will ever be user loadable profiles; at least at
> > that point we had to care.
>
> CAP_SYS_RAWIO is needed to do arbitrary patching/loading in the capability
> model so if you are using lesser capabilities it is a (minor) capability
> rise but not a big problem, just ugly and wanting a fix
>
> > > > +	/*
> > > > +	 * Replacement needs to allocate a new aa_task_context for each
> > > > +	 * task confined by old_profile. To do this the profile locks
> > > > +	 * are only held when the actual switch is done per task. While
> > > > +	 * looping to allocate a new aa_task_context the old_task list
> > > > +	 * may get shorter if tasks exit/change their profile but will
> > > > +	 * not get longer as new task will not use old_profile detecting
> > > > +	 * that is stale.
> > > > +	 */
> > > > +	do {
> > > > +		new_cxt = aa_alloc_task_context(GFP_KERNEL | __GFP_NOFAIL);
> > >
> > > NOFAIL is usually a bad sign. It should be only used if there is no
> > > alternative.
> >
> > At this point there is no secure alternative to allocating a task context ---
> > except killing the task, maybe.
>
> Can you count the number needed, preallocate them and then when you know
> for sure either succeed or fail the operation as a whole?

No; to be accurate, the count would have to be made with the profile lock
held, which would then need to be released so as not to use GFP_ATOMIC for
the allocations. An iterative approach could be taken, where we do
something like:

repeat:
	lock profile
	count
	if preallocated < count
		unlock profile
		if (!allocate(count - preallocated))
			fail
		goto repeat
	do replacement
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
> don't actually have to care --- if loading an invalid profile can bring down
> the system, then that's no worse than an arbitrary module that crashes the
> machine. Not sure if there will ever be user loadable profiles; at least at
> that point we had to care.

CAP_SYS_RAWIO is needed to do arbitrary patching/loading in the capability
model so if you are using lesser capabilities it is a (minor) capability
rise but not a big problem, just ugly and wanting a fix

> > > +	/*
> > > +	 * Replacement needs to allocate a new aa_task_context for each
> > > +	 * task confined by old_profile. To do this the profile locks
> > > +	 * are only held when the actual switch is done per task. While
> > > +	 * looping to allocate a new aa_task_context the old_task list
> > > +	 * may get shorter if tasks exit/change their profile but will
> > > +	 * not get longer as new task will not use old_profile detecting
> > > +	 * that is stale.
> > > +	 */
> > > +	do {
> > > +		new_cxt = aa_alloc_task_context(GFP_KERNEL | __GFP_NOFAIL);
> >
> > NOFAIL is usually a bad sign. It should be only used if there is no
> > alternative.
>
> At this point there is no secure alternative to allocating a task context ---
> except killing the task, maybe.

Can you count the number needed, preallocate them and then when you know
for sure either succeed or fail the operation as a whole?

-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
On Mon, Apr 16, 2007 at 08:27:08AM +0200, Andi Kleen wrote:
> > It's nice to check for consistency though, so we're adding that. Profile
> > loading is a trusted operation, at least so far, and so security wise we
> > don't actually have to care --- if loading an invalid profile can bring down
> > the system, then that's no worse than an arbitrary module that crashes the
> > machine. Not sure if there will ever be user loadable profiles; at least at
> > that point we had to care.
>
> A security system that allows to crash the kernel is a little weird
> though. It would be better to check. Not that a recursion check
> is particularly expensive.

Indeed. It will be fixed in the next rev.

thanks
john
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
Hi!

> > > +
> > > +	/* get optional subprofiles */
> > > +	if (aa_is_nameX(e, AA_LIST, "hats")) {
> > > +		while (!aa_is_nameX(e, AA_LISTEND, NULL)) {
> > > +			struct aa_profile *subprofile;
> > > +			subprofile = aa_unpack_profile(e);
> >
> > Is there any check that would guard the recursion from stack
> > overflow on malicious input?
>
> It's nice to check for consistency though, so we're adding that. Profile
> loading is a trusted operation, at least so far, and so security wise we
> don't actually have to care --- if loading an invalid profile can bring down
> the system, then that's no worse than an arbitrary module that crashes the
> machine. Not sure if there will ever be user loadable profiles; at least at
> that point we had to care.

It is not a _security_ problem, but face it, mount("/very weird filename")
is not expected to crash the kernel, either. It is a
quality-of-implementation problem.

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
> It's nice to check for consistency though, so we're adding that. Profile
> loading is a trusted operation, at least so far, and so security wise we
> don't actually have to care --- if loading an invalid profile can bring down
> the system, then that's no worse than an arbitrary module that crashes the
> machine. Not sure if there will ever be user loadable profiles; at least at
> that point we had to care.

A security system that allows the kernel to be crashed is a little weird
though. It would be better to check. Not that a recursion check
is particularly expensive.

-Andi
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
On Thursday 12 April 2007 15:46, Andi Kleen wrote:
> > +struct aa_dfa {
> > +	struct table_header *tables[YYTD_ID_NXT];
> > +};
>
> If that is passed in from user space you would need special compat
> code for 64-bit kernels that support 32-bit userland.
> Better to avoid pointers.

This struct is not passed between the kernel and user space; no problem
here. In fact, the profile format used is fully machine independent.

> > +
> > +	/* get optional subprofiles */
> > +	if (aa_is_nameX(e, AA_LIST, "hats")) {
> > +		while (!aa_is_nameX(e, AA_LISTEND, NULL)) {
> > +			struct aa_profile *subprofile;
> > +			subprofile = aa_unpack_profile(e);
>
> Is there any check that would guard the recursion from stack
> overflow on malicious input?

It's nice to check for consistency though, so we're adding that. Profile
loading is a trusted operation, at least so far, and so security wise we
don't actually have to care --- if loading an invalid profile can bring down
the system, then that's no worse than an arbitrary module that crashes the
machine. Not sure if there will ever be user loadable profiles; at least at
that point we had to care.

> > +	/*
> > +	 * Replacement needs to allocate a new aa_task_context for each
> > +	 * task confined by old_profile. To do this the profile locks
> > +	 * are only held when the actual switch is done per task. While
> > +	 * looping to allocate a new aa_task_context the old_task list
> > +	 * may get shorter if tasks exit/change their profile but will
> > +	 * not get longer as new task will not use old_profile detecting
> > +	 * that is stale.
> > +	 */
> > +	do {
> > +		new_cxt = aa_alloc_task_context(GFP_KERNEL | __GFP_NOFAIL);
>
> NOFAIL is usually a bad sign. It should be only used if there is no
> alternative.

At this point there is no secure alternative to allocating a task context ---
except killing the task, maybe.
Thanks,
Andreas
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
[EMAIL PROTECTED] writes:

[didn't review code fully, just some stuff I noticed]

> +
> +struct aa_dfa {
> +	struct table_header *tables[YYTD_ID_NXT];
> +};

If that is passed in from user space you would need special compat
code for 64-bit kernels that support 32-bit userland.
Better to avoid pointers.

> +
> +	/* get optional subprofiles */
> +	if (aa_is_nameX(e, AA_LIST, "hats")) {
> +		while (!aa_is_nameX(e, AA_LISTEND, NULL)) {
> +			struct aa_profile *subprofile;
> +			subprofile = aa_unpack_profile(e);

Is there any check that would guard the recursion from stack
overflow on malicious input?

> +	/*
> +	 * Replacement needs to allocate a new aa_task_context for each
> +	 * task confined by old_profile. To do this the profile locks
> +	 * are only held when the actual switch is done per task. While
> +	 * looping to allocate a new aa_task_context the old_task list
> +	 * may get shorter if tasks exit/change their profile but will
> +	 * not get longer as new task will not use old_profile detecting
> +	 * that is stale.
> +	 */
> +	do {
> +		new_cxt = aa_alloc_task_context(GFP_KERNEL | __GFP_NOFAIL);

NOFAIL is usually a bad sign. It should be only used if there is no
alternative.

-Andi
Re: [AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
> +	th.td_id = ntohs(*(u16 *) (blob));
> +	th.td_flags = ntohs(*(u16 *) (blob + 2));
> +	th.td_lolen = ntohl(*(u32 *) (blob + 8));

Use the cpu_to_* and *_to_cpu functions here so the intended direction and
endianness are clear.

> +
> +static inline int aa_inbounds(struct aa_ext *e, size_t size)
> +{
> +	return (e->pos + size <= e->end);
> +}

Can e->pos + size ever overflow? If so, this check isn't safe.

> + * aa_unpack_profile - unpack a serialized profile
> + * @e: serialized data extent information
> + * @error: error code returned if unpacking fails
> + */
> +static struct aa_profile *aa_unpack_profile(struct aa_ext *e)
> +{
> +	struct aa_profile *profile = NULL;
> +
> +	/* get optional subprofiles */
> +	if (aa_is_nameX(e, AA_LIST, "hats")) {
> +		while (!aa_is_nameX(e, AA_LISTEND, NULL)) {
> +			struct aa_profile *subprofile;
> +			subprofile = aa_unpack_profile(e);
> +			if (IS_ERR(subprofile)) {

What bounds recursion here on invalid input?
[AppArmor 39/41] AppArmor: Profile loading and manipulation, pathname matching
Pathname matching, transition table loading, profile loading and manipulation.

Signed-off-by: John Johansen <[EMAIL PROTECTED]>
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>

---
 security/apparmor/match.c            | 232 ++++++++
 security/apparmor/match.h            |  83 +++
 security/apparmor/module_interface.c | 641 +++++++++++++++
 3 files changed, 956 insertions(+)

--- /dev/null
+++ b/security/apparmor/match.c
@@ -0,0 +1,232 @@
+/*
+ *	Copyright (C) 2007 Novell/SUSE
+ *
+ *	This program is free software; you can redistribute it and/or
+ *	modify it under the terms of the GNU General Public License as
+ *	published by the Free Software Foundation, version 2 of the
+ *	License.
+ *
+ *	Regular expression transition table matching
+ */
+
+#include
+#include
+#include
+#include "match.h"
+
+static struct table_header *unpack_table(void *blob, size_t bsize)
+{
+	struct table_header *table = NULL;
+	struct table_header th;
+	size_t tsize;
+
+	if (bsize < sizeof(struct table_header))
+		goto out;
+
+	th.td_id = ntohs(*(u16 *) (blob));
+	th.td_flags = ntohs(*(u16 *) (blob + 2));
+	th.td_lolen = ntohl(*(u32 *) (blob + 8));
+	blob += sizeof(struct table_header);
+
+	if (!(th.td_flags == YYTD_DATA16 || th.td_flags == YYTD_DATA32 ||
+	      th.td_flags == YYTD_DATA8))
+		goto out;
+
+	tsize = table_size(th.td_lolen, th.td_flags);
+	if (bsize < tsize)
+		goto out;
+
+	table = kmalloc(tsize, GFP_KERNEL);
+	if (table) {
+		*table = th;
+		if (th.td_flags == YYTD_DATA8)
+			UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
+				     u8, ntohb);
+		else if (th.td_flags == YYTD_DATA16)
+			UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
+				     u16, ntohs);
+		else
+			UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
+				     u32, ntohl);
+	}
+
+out:
+	return table;
+}
+
+int unpack_dfa(struct aa_dfa *dfa, void *blob, size_t size)
+{
+	int hsize, i;
+	int error = -ENOMEM;
+
+	/* get dfa table set header */
+	if (size < sizeof(struct table_set_header))
+		goto fail;
+
+	if (ntohl(*(u32 *)blob) != YYTH_MAGIC)
+		goto fail;
+
+	hsize = ntohl(*(u32 *)(blob + 4));
+	if (size < hsize)
+		goto fail;
+
+	blob += hsize;
+	size -= hsize;
+
+	error = -EPROTO;
+	while (size > 0) {
+		struct table_header *table;
+		table = unpack_table(blob, size);
+		if (!table)
+			goto fail;
+
+		switch (table->td_id) {
+		case YYTD_ID_ACCEPT:
+		case YYTD_ID_BASE:
+			dfa->tables[table->td_id - 1] = table;
+			if (table->td_flags != YYTD_DATA32)
+				goto fail;
+			break;
+		case YYTD_ID_DEF:
+		case YYTD_ID_NXT:
+		case YYTD_ID_CHK:
+			dfa->tables[table->td_id - 1] = table;
+			if (table->td_flags != YYTD_DATA16)
+				goto fail;
+			break;
+		case YYTD_ID_EC:
+			dfa->tables[table->td_id - 1] = table;
+			if (table->td_flags != YYTD_DATA8)
+				goto fail;
+			break;
+		default:
+			kfree(table);
+			goto fail;
+		}
+
+		blob += table_size(table->td_lolen, table->td_flags);
+		size -= table_size(table->td_lolen, table->td_flags);
+	}
+
+	return 0;
+
+fail:
+	for (i = 0; i < ARRAY_SIZE(dfa->tables); i++) {
+		if (dfa->tables[i]) {
+			kfree(dfa->tables[i]);
+			dfa->tables[i] = NULL;
+		}
+	}
+	return error;
+}
+
+/**
+ * verify_dfa - verify that all the transitions and states in the dfa
+ *              tables are in bounds.
+ * @dfa: dfa to test
+ *
+ * assumes dfa has gone through the verification done by unpacking
+ */
+int verify_dfa(struct aa_dfa *dfa)
+{
+	size_t i, state_count, trans_count;
+	int error = -EPROTO;
+
+	/* check that required tables exist */
+	if (!(dfa->tables[YYTD_ID_ACCEPT - 1] &&
+	      dfa->tables[YYTD_ID_DEF - 1] &&
+	      dfa->tables[YYTD_ID_BASE - 1] &&
+	      dfa->tables[YYTD_ID_NXT - 1] &&
+	      dfa->tables[YYTD_ID_CHK - 1]))
+		goto out;
+
+	/* accept.size == default.size == base.size */
+	state_count = dfa->tables[YYTD_ID_BASE - 1]->td_lolen;
+	if