Re: [PATCH v4 0/1] Safe LSM (un)loading, and immutable hooks

2018-04-07 Thread Peter Dolding
On Sat, Apr 7, 2018 at 2:31 AM, Casey Schaufler  wrote:
> On 4/5/2018 9:12 PM, Peter Dolding wrote:
>> On Fri, Apr 6, 2018 at 11:31 AM, Sargun Dhillon  wrote:
>>>
>>> On Thu, Apr 5, 2018 at 9:29 AM, Casey Schaufler 
>>> wrote:
>>>> On 4/5/2018 3:31 AM, Peter Dolding wrote:
>>>>> On Thu, Apr 5, 2018 at 7:55 PM, Igor Stoppa 
>>>>> wrote:
>>>>>> On 01/04/18 08:41, Sargun Dhillon wrote:
>>>>>>> The biggest security benefit of this patchset is the introduction of
>>>>>>> read-only hooks, even if some security modules have mutable hooks.
>>>>>>> Currently, if you have any LSMs with mutable hooks it will render all
>>>>>>> heads, and
>>>>>>> list nodes mutable. These are a prime place to attack, because being
>>>>>>> able to
>>>>>>> manipulate those hooks is a way to bypass all LSMs easily, and to
>>>>>>> create a
>>>>>>> persistent, covert channel to intercept nearly all calls.
>>>>>>>
>>>>>>>
>>>>>>> If LSMs have a model to be unloaded, or are compiled as modules, they
>>>>>>> should mark
>>>>>>> themselves mutable at compile time, and use the LSM_HOOK_INIT_MUTABLE
>>>>>>> macro
>>>>>>> instead of the LSM_HOOK_INIT macro, so their hooks are on the mutable
>>>>>>> chain.
>>>>>> I'd rather consider these types of hooks:
>>>>>>
>>>>>> A) hooks that are either const or marked as RO after init
>>>>>>
>>>>>> B) hooks that are writable for a short time, long enough to load
>>>>>> additional, non built-in modules, but then get locked down
>>>>>> I provided an example some time ago [1]
>>>>>>
>>>>>> C) hooks that are unloadable (and therefore always attackable?)
>>>>>>
>>>>>> Maybe type-A could be dropped and used only as type-B, if it's
>>>>>> acceptable that type-A hooks are vulnerable before lock-down of type-B
>>>>>> hooks.
>>>>>>
>>>>>> I have some doubts about the usefulness of type-C, though.
>>>>>> The benefit I see that it brings is that it avoids having to reboot
>>>>>> when
>>>>>> a mutable LSM is changed, at the price of leaving it attackable.
>>>>>>
>>>>>> Do you have any specific case in mind where this trade-off would be
>>>>>> acceptable?
>>>>>>
>>>>> A useful case for loadable/unloadable LSMs is development and automated QA.
>>>>>
>>>>> So you have built a new program and you want to test it against a
>>>>> list of different LSM configurations without having to reboot the
>>>>> system: run the testsuite with the LSM off, then enable LSM1, run the
>>>>> testsuite again, disable LSM1, enable LSM2, run the testsuite, disable
>>>>> LSM2... basically repeating the process.
>>>>>
>>>>> I would say normal production machines being able to swap LSM like
>>>>> this does not have much use.
>>>>>
>>>>> Sometimes, for productivity, it makes sense to be able to breach
>>>>> security.  If you need to test with the LSM disabled to know whether a
>>>>> defect you are seeing is LSM-configuration related, that instance is
>>>>> already in the non-secure camp anyhow.
>>>>>
>>>>> There is a shade of grey between something being a security hazard and
>>>>> something being a useful feature.
>>>> If the only value of a feature is development I strongly
>>>> advocate against it. The number of times I've seen things
>>>> completely messed up because it makes development easier
>>>> is astonishing. If you have to enable something dangerous
>>>> just for testing you have to wonder about the testing.
>>>>
>> Casey Schaufler, we have had different points of view before.
>
> That's OK. I'm not always right.
>
>> I will
>> point out some serious issues here.  If you look at a PPA
>
> Sorry, my acronym processor was seriously damaged in 1992.
> What's "PPA" in this context?
>

Personal Package Archives, the Ubuntu term; sorry.


>>  and many other
>> locations you will find no LSM configuration

Re: [GIT PULL] Kernel lockdown for secure boot

2018-04-05 Thread Peter Dolding
>
> There's no inherent difference, in terms of the trust chain, between
> compromising it to use the machine as a toaster or to run a botnet - the
> trust chain is compromised either way.  But you're much more likely to
> notice if your desktop starts producing bread products than if it hides
> some malware and keeps on booting, and the second one is much more

> That is to say, as a result of the way malware has been written, our way
> of thinking about it is often that it's a way to build a boot loader for
> a malicious kernel, so that's how we wind up talking about it.  Are we
> concerned with malware stealing your data?  Yes, but Secure Boot is only
> indirectly about that.  It's primarily about denying the malware easy
> mechanisms to build a persistence mechanism.  The uid-0 != ring-0 aspect
> is useful independent of Secure Boot, but Secure Boot without it falls
> way short of accomplishing its goal.
>
> --
I am sorry, but the issue here is that this is really stretching Secure
Boot to breaking point.

Yes, for a person who wants a secure system, having the boot parts verified
by some means and using lockdown is an advantage.

The problem comes in with the idea that UEFI Secure Boot and lockdown are linked.

If I am running Windows and Linux on the same machine, Secure Boot needs to
be on so that Windows runs happily.

Remember, it is my machine.  If I wish to compromise security on my machine
because it makes sense to me, I should be allowed to.

A proper lockdown would prevent you from messing with ACPI tables; it is a
very creative hack to have the kernel load a DSDT and have it, from ring
zero, turn bits of the kernel off.

The reality here is that we need to be able to operate without lockdown,
because of how badly broken some hardware is, just to configure the system.

Yes, the option to disable Secure Boot at the push of a button is required
because of how badly broken this stuff is.  Of course that does not address
the issue that if I am working on a remote or embedded system, where I
don't have a push button to turn it off, this is still a problem.

Effective lockdown also has to protect the Linux kernel boot parameters,
the initramfs and other bits from being modified.  This leads us to a
problem: on a machine with broken hardware where we cannot turn Secure Boot
off, we still need to be able to perform all these alterations.

We do not live in a world of perfect computer hardware, so at this stage a
proper, unattackable Secure Boot cannot be done.

We would be better off putting effort into improving the UEFI means of
adding your own KEK, so that only boot loaders and kernels from the vendors
the user has approved will in fact work.  There could also be a
configuration KEK that gets disabled after all the required operating
systems are installed.  The Microsoft non-OS KEK would make sense as that
rule-breaking configuration KEK, but current deployments of UEFI don't have
an off switch for it.


One KEK for everyone who is not Microsoft to boot with is highly insecure.


UEFI Secure Boot currently falls way short in the validation department
because too much is validated under one KEK.

UEFI also falls short by failing to provide a way to protect boot
parameters that can alter OS behaviour and make a secure kernel insecure;
this includes kernels with these lockdown patches.

Really, you need to compare UEFI Secure Boot against a boot loader and
/boot kept on read-only media.  Everywhere that you can change something
under UEFI Secure Boot without it being signed, but cannot with the boot
loader and /boot on read-only media, is a defect in the UEFI Secure Boot
design and implementation.

If boot parameters were properly secured there would be no need for
lockdown to query whether UEFI is in Secure Boot mode or not.

Also, lockdown being on with the kernel and boot loader not running secured
would still provide an extra item an attacker has to get past.

So, pretty much: drop the EFI interrogation patches and work with UEFI to
fix this properly.  Hacking around these UEFI defects means we will end up
stuck with them and the system will still not be properly secured.


Peter Dolding


Re: [PATCH v4 0/1] Safe LSM (un)loading, and immutable hooks

2018-04-05 Thread Peter Dolding
On Fri, Apr 6, 2018 at 11:31 AM, Sargun Dhillon  wrote:
>
>
> On Thu, Apr 5, 2018 at 9:29 AM, Casey Schaufler 
> wrote:
>>
>> On 4/5/2018 3:31 AM, Peter Dolding wrote:
>> > On Thu, Apr 5, 2018 at 7:55 PM, Igor Stoppa 
>> > wrote:
>> >> On 01/04/18 08:41, Sargun Dhillon wrote:
>> >>> The biggest security benefit of this patchset is the introduction of
>> >>> read-only hooks, even if some security modules have mutable hooks.
>> >>> Currently, if you have any LSMs with mutable hooks it will render all
>> >>> heads, and
>> >>> list nodes mutable. These are a prime place to attack, because being
>> >>> able to
>> >>> manipulate those hooks is a way to bypass all LSMs easily, and to
>> >>> create a
>> >>> persistent, covert channel to intercept nearly all calls.
>> >>>
>> >>>
>> >>> If LSMs have a model to be unloaded, or are compiled as modules, they
>> >>> should mark
>> >>> themselves mutable at compile time, and use the LSM_HOOK_INIT_MUTABLE
>> >>> macro
>> >>> instead of the LSM_HOOK_INIT macro, so their hooks are on the mutable
>> >>> chain.
>> >>
>> >> I'd rather consider these types of hooks:
>> >>
>> >> A) hooks that are either const or marked as RO after init
>> >>
>> >> B) hooks that are writable for a short time, long enough to load
>> >> additional, non built-in modules, but then get locked down
>> >> I provided an example some time ago [1]
>> >>
>> >> C) hooks that are unloadable (and therefore always attackable?)
>> >>
>> >> Maybe type-A could be dropped and used only as type-B, if it's
>> >> acceptable that type-A hooks are vulnerable before lock-down of type-B
>> >> hooks.
>> >>
>> >> I have some doubts about the usefulness of type-C, though.
>> >> The benefit I see that it brings is that it avoids having to reboot
>> >> when
>> >> a mutable LSM is changed, at the price of leaving it attackable.
>> >>
>> >> Do you have any specific case in mind where this trade-off would be
>> >> acceptable?
>> >>
>> > A useful case for loadable/unloadable LSMs is development and automated QA.
>> >
>> > So you have built a new program and you want to test it against a
>> > list of different LSM configurations without having to reboot the
>> > system: run the testsuite with the LSM off, then enable LSM1, run the
>> > testsuite again, disable LSM1, enable LSM2, run the testsuite, disable
>> > LSM2... basically repeating the process.
>> >
>> > I would say normal production machines being able to swap LSM like
>> > this does not have much use.
>> >
>> > Sometimes, for productivity, it makes sense to be able to breach
>> > security.  If you need to test with the LSM disabled to know whether a
>> > defect you are seeing is LSM-configuration related, that instance is
>> > already in the non-secure camp anyhow.
>> >
>> > There is a shade of grey between something being a security hazard and
>> > something being a useful feature.
>>
>> If the only value of a feature is development I strongly
>> advocate against it. The number of times I've seen things
>> completely messed up because it makes development easier
>> is astonishing. If you have to enable something dangerous
>> just for testing you have to wonder about the testing.
>>
Casey Schaufler, we have had different points of view before.  I will
point out some serious issues here.  If you look at a PPA and many other
locations you will find no LSM configuration files.

The majority of QA servers around the place run with the LSM off.  There is
a practical, annoying reason: there is no point running an application with
new code with the LSM on; at first you run with the LSM off to make sure
the program works.  If the program works, and you have the resources, you
then transfer to another machine or reboot to test with the LSM, and this
creates a broken workflow.  When a customer gets untested LSM configuration
files and they don't work, what does support straight up recommend?
Turning the LSM off.

In reality, enabling LSM module loading and unloading on the fly on QA
servers will not change their security one bit, because most of them are
running without an LSM at all.  Making it simple to implement LSM
configuration testing on QA servers will reduce the number of times end
users are told to turn the LSM off on their machines, and that will affect
overall security.

Re: [PATCH v4 0/1] Safe LSM (un)loading, and immutable hooks

2018-04-05 Thread Peter Dolding
On Thu, Apr 5, 2018 at 9:34 PM, Igor Stoppa  wrote:
> On 05/04/18 13:31, Peter Dolding wrote:
>> On Thu, Apr 5, 2018 at 7:55 PM, Igor Stoppa  wrote:
>> There is a shade of grey between something being a security hazard and
>> something being a useful feature.
>
> Maybe the problem I see is only in the naming: if what right now is
> addressed as "mutable" were to be called in some other way that does not
> imply that it's impossible to lock it down, then I think there wouldn't
> be much of a problem anymore.
>
> How about s/mutable/protectable/g ?
>
> Then it could be a boot time parameter to decide if the "extra" hooks
> should be protected or stay writable, for example for performing more
> extensive testing.
>
Since this is a shades-of-grey area, I would say two Kconfig options and
one boot-time parameter.

For some kernels, the ability to change LSMs on the fly should be fully
disabled in the build process.

The ability to change LSMs at runtime has limited valid use cases, so even
if the kernel is built with that ability enabled, it should still be off by
default until enabled with a boot-time parameter.  That way those who need
the feature can turn it on, and those who don't need it cannot turn it on
just by installing a kernel.  For systems that cannot use Linux kernel
command-line options to turn things on and off, a kernel configuration
option can change the default to on, with a warning in the kernel
configurator.  So: two Kconfig entries and one kernel boot parameter.  Of
course, when a feature like this is enabled there should be a kernel
message, so that it is recorded in the logs and it is checkable whether a
feature that can lower security is turned on.
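
To make that concrete, here is a minimal sketch, not taken from the
patchset, of how the boot-parameter half could look; the Kconfig symbol and
the parameter name are made up for illustration:

        /*
         * Sketch only: CONFIG_SECURITY_RUNTIME_LSM_DEFAULT_ON and the
         * lsm_runtime_load parameter are hypothetical names.  The default
         * comes from Kconfig, the command line can only switch the feature
         * on, and doing so leaves a note in the logs for audit tools.
         */
        static bool lsm_runtime_load __ro_after_init =
                IS_ENABLED(CONFIG_SECURITY_RUNTIME_LSM_DEFAULT_ON);

        static int __init lsm_runtime_load_setup(char *str)
        {
                lsm_runtime_load = true;
                pr_notice("security: runtime LSM (un)loading enabled\n");
                return 1;
        }
        __setup("lsm_runtime_load", lsm_runtime_load_setup);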

For the ability to change LSMs on the fly I see the development use cases;
I cannot think of a single production-system use case for anyone who is not
a developer.  I might be thinking far too narrowly, but unless someone else
can find another use case, I would suggest gating a feature like this in
the way I describe above.

Name the feature whatever you think is suitable.

Peter Dolding


Re: [PATCH v4 0/1] Safe LSM (un)loading, and immutable hooks

2018-04-05 Thread Peter Dolding
On Thu, Apr 5, 2018 at 7:55 PM, Igor Stoppa  wrote:
> On 01/04/18 08:41, Sargun Dhillon wrote:
>> The biggest security benefit of this patchset is the introduction of
>> read-only hooks, even if some security modules have mutable hooks.
>> Currently, if you have any LSMs with mutable hooks it will render all heads, 
>> and
>> list nodes mutable. These are a prime place to attack, because being able to
>> manipulate those hooks is a way to bypass all LSMs easily, and to create a
>> persistent, covert channel to intercept nearly all calls.
>>
>>
>> If LSMs have a model to be unloaded, or are compiled as modules, they should
>> mark
>> themselves mutable at compile time, and use the LSM_HOOK_INIT_MUTABLE macro
>> instead of the LSM_HOOK_INIT macro, so their hooks are on the mutable
>> chain.
>
>
> I'd rather consider these types of hooks:
>
> A) hooks that are either const or marked as RO after init
>
> B) hooks that are writable for a short time, long enough to load
> additional, non built-in modules, but then get locked down
> I provided an example some time ago [1]
>
> C) hooks that are unloadable (and therefore always attackable?)
>
> Maybe type-A could be dropped and used only as type-B, if it's
> acceptable that type-A hooks are vulnerable before lock-down of type-B
> hooks.
>
> I have some doubts about the usefulness of type-C, though.
> The benefit I see that it brings is that it avoids having to reboot when
> a mutable LSM is changed, at the price of leaving it attackable.
>
> Do you have any specific case in mind where this trade-off would be
> acceptable?
>

A useful case for loadable/unloadable LSMs is development and automated QA.

So you have built a new program and you want to test it against a list of
different LSM configurations without having to reboot the system: run the
testsuite with the LSM off, then enable LSM1, run the testsuite again,
disable LSM1, enable LSM2, run the testsuite, disable LSM2... basically
repeating the process.

I would say that being able to swap LSMs like this does not have much use
on normal production machines.

Sometimes, for productivity, it makes sense to be able to breach security.
If you need to test with the LSM disabled to know whether a defect you are
seeing is LSM-configuration related, that instance is already in the
non-secure camp anyhow.

There is a shade of grey between something being a security hazard and
something being a useful feature.

If developers are not testing that LSM configurations are clean, because
they don't really have enough machines to boot up individual instances with
each LSM, and this results in broken LSM configurations being shipped and
end users turning the LSM off completely, then the small security risk of
providing unloadable LSM hooks, when specifically requested, is really not
that high.

With all the different LSM options, how application/distribution makers can
effectively validate all the different LSM configurations is a question
that does need to be answered.  In answering that question, allowing this
form of compromised security as an option might be quite a valid move.
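
For reference, a rough sketch of the declaration difference being
discussed, assuming the naming from the patchset's cover letter
(LSM_HOOK_INIT_MUTABLE is the proposed macro, not mainline; the hook
functions are invented example names):

        /* A built-in LSM keeps its hook table read-only after init. */
        static struct security_hook_list mylsm_hooks[] __lsm_ro_after_init = {
                LSM_HOOK_INIT(file_permission, mylsm_file_permission),
                LSM_HOOK_INIT(task_alloc, mylsm_task_alloc),
        };

        /*
         * A loadable/unloadable LSM would instead put its hooks on the
         * mutable chain via the macro proposed by this patchset.
         */
        static struct security_hook_list myqa_hooks[] = {
                LSM_HOOK_INIT_MUTABLE(file_permission, myqa_file_permission),
        };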

Peter Dolding


Re: [GIT PULL] Kernel lockdown for secure boot

2018-04-04 Thread Peter Dolding
On Thu, Apr 5, 2018 at 2:26 AM, Matthew Garrett  wrote:
> On Tue, Apr 3, 2018 at 11:56 PM Peter Dolding  wrote:
>> On Wed, Apr 4, 2018 at 11:13 AM, Matthew Garrett  wrote:
>
>> > There are four cases:
>> >
>> > Verified Boot off, lockdown off: Status quo in distro and mainline
> kernels
>> > Verified Boot off, lockdown on: Perception of security improvement
> that's
>> > trivially circumvented (and so bad)
>> > Verified Boot on, lockdown off: Perception of security improvement
> that's
>> > trivially circumvented (and so bad), status quo in mainline kernels
>> > Verified Boot on, lockdown on: Security improvement, status quo in
> distro
>> > kernels
>> >
>> > Of these four options, only two make sense. The most common
> implementation
>> > of Verified Boot on x86 platforms is UEFI Secure Boot,
>
>> Stop right there.  Verified boot does not have to be UEFI Secure Boot.
>> You could be using U-Boot verified boot or Google vboot:
>> https://www.coreboot.org/git-docs/Intel/vboot.html
>> Neither of these provides a flag to the kernel to say it has been
>> performed.
>
> They can be modified to set the appropriate bit in the bootparams - the
> reason we can't do that in the UEFI case is that Linux can be built as a
> UEFI binary that the firmware execute directly, and so the firmware has no
> way to set that flag.
>
With some embedded hardware boot loaders you have exactly the same
problem: you cannot set bootparams and instead have to hard-code everything
in the kernel image.  This is why there is an option to embed the initramfs
image inside the kernel image; some of these loaders will only load one
file.

So without UEFI you run into the exact same problem.  Lockdown on or off
therefore needs to be a kernel build option setting the default.  This
could be three options: always on, always off, and "automatic, based on the
status of the boot verification system".

https://linux.die.net/man/8/efibootmgr

I also have a problem here with non-broken UEFI implementations:
-@ | --append-binary-args quite simply sets the command line passed into
the UEFI binary loaded by the firmware along with the Linux kernel, and
that becomes bootparams.  Yes, using --append-binary-args can be a pain; it
is used to tell the Linux kernel where to find the / drive.  So turning
lockdown off via bootparams is downright possible with working UEFI.  There
is a lot of EFI out there that does not work properly.

>> Now, Verified Boot on, lockdown off.  Insanely, this can be required for
>> diagnostics on some embedded platforms, because EFI Secure Boot does not
>> have an off switch.  These are platforms that don't boot if they don't
>> have a PK and KEK set installed.  Yes, on some of these you JTAG the PK
>> and KEK set in.
>
>> The fact that this Verified Boot on, lockdown off case causes trouble
>> points to a clear problem.  The user owns the hardware; they should have
>> the right to defeat Secure Boot if they wish to.
>
> Which is why Shim allows you to disable validation if you prove physical
> user presence.

A good idea, until you have a motherboard where the PS/2 ports have failed
and that does not support a USB keyboard, so you have no keyboard until
after the kernel has booted and no way to prove physical presence.  Or you
are working on something embedded that has no physical-presence interface
in the boot stages; these embedded devices can also be UEFI with Secure
Boot.  Not everything running UEFI has a keyboard, screen or anything else
you can prove physical user presence with; sometimes you have to depend
purely on the signing key.

If I am a person who has made my own PK and put my own KEK in the UEFI
system, I should have the right to sign a kernel with lockdown off by
default.  I may need this for diagnostics on hardware without a user
interface, and I may need it because the hardware is broken and I set the
PK and KEK by direct firmware flash access, possibly by JTAG, or possibly
before a critical port on the motherboard died.

Of course I am not saying that Microsoft and others cannot have rules
saying that if you use their KEK you cannot do this.  But if the machine is
my hardware and I have set my own PK and KEK, I do know what I am doing and
I should be allowed to compromise security if I wish; it is my hardware.  I
should not have to resort to custom hacks to do it.  Of course I am not
saying that the setting in the Linux kernel configuration system cannot
carry a big warning that you should not do this unless you have no other
valid option, and I am not saying that the kernel should not log/report
when it sees what appears to be a questionable configuration, for example a
dmesg line like "SECURITY ISSUE: UEFI Secure Boot enabled but kernel built
with lockdown disabled; system at risk of compromise" that audit tools
could check for.
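
A sketch of the kind of warning I mean; the two helpers are stand-ins for
whatever the lockdown patches actually expose, the point is only that the
combination gets logged where audit tools can see it:

        /*
         * Illustrative only: secure_boot_enabled() and lockdown_is_enabled()
         * are hypothetical helpers, not existing kernel API.
         */
        static int __init warn_secureboot_without_lockdown(void)
        {
                if (secure_boot_enabled() && !lockdown_is_enabled())
                        pr_warn("SECURITY ISSUE: UEFI Secure Boot enabled but kernel built with lockdown disabled; system at risk of compromise\n");
                return 0;
        }
        late_initcall(warn_secureboot_without_lockdown);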

Re: [GIT PULL] Kernel lockdown for secure boot

2018-04-04 Thread Peter Dolding
> If you don't have secure boot then an attacker with root can modify your
> bootloader or kernel, and on next boot lockdown can be silently disabled.

Stop being narrow-minded: you don't need Secure Boot to protect the
bootloader or kernel; the classic method is to boot only from read-only
media.

Another is network boot over HTTPS from coreboot firmware.  This checks the
certificate of the HTTPS server against a selected CA before downloading
anything, and as long as the firmware is set read-only in hardware the
attacker has absolutely nothing to work on.

In fact, network boot from an HTTPS server is more secure than UEFI Secure
Boot, because of the highly limited set of parties who can alter or provide
the approved boot loader/kernel image.

Having root user rights does not override physical security.  The fact that
there are other ways of doing bootloader and kernel security, other than
UEFI Secure Boot, that are in many cases more secure than UEFI Secure Boot
because they use more limited keys, is the absolute reason why lockdown is
required without UEFI Secure Boot.

It would make sense to extend kexec to support UEFI Secure Boot
verification, and also to give kexec a framework for supporting other
security options, like HTTPS server storage of all kernel images.  Please
note that kexec support for UEFI Secure Boot verification should also
support booting images that are not UEFI Secure Boot signed but are
verified by some other method, and having its own PK/KEK set for kexec;
this would be for when the Linux kernel is placed in firmware and used
instead of EFI firmware.

Please note there are many UEFI firmwares that, with Secure Boot off, allow
setting up secure HTTPS booting, where you are not in fact validating the
boot loader or kernel but validating the source you get them from.

There are three different ways to achieve a protected boot process:
1) Validate the boot files (this is what UEFI Secure Boot and many other
methods do).
2) Validate the source of the boot files.  Yes, this can mean applying a
key check to an image (if the image is not signed, don't boot from it)
where the image contains the boot loader and kernel, without bothering to
validate the boot loader and kernel parts individually; the same applies to
HTTPS.
3) Make the boot files read-only.

All three achieve the same level of security.  If you are using any of the
three, the lockdown option may provide some benefit.

Yes, HTTPS network boot effectively does 2 and 3, leaving a very limited
threat against the boot process.

Remember, there is more than one way to skin a cat, just as there is more
than one way to make a secure system.  Currently we are being too narrow
about the methods for doing protected booting.


Peter Dolding.


Re: [GIT PULL] Kernel lockdown for secure boot

2018-04-03 Thread Peter Dolding
.

On Wed, Apr 4, 2018 at 11:13 AM, Matthew Garrett  wrote:

> There are four cases:
>
> Verified Boot off, lockdown off: Status quo in distro and mainline kernels
> Verified Boot off, lockdown on: Perception of security improvement that's
> trivially circumvented (and so bad)
> Verified Boot on, lockdown off: Perception of security improvement that's
> trivially circumvented (and so bad), status quo in mainline kernels
> Verified Boot on, lockdown on: Security improvement, status quo in distro
> kernels
>
> Of these four options, only two make sense. The most common implementation
> of Verified Boot on x86 platforms is UEFI Secure Boot,

Stop right there.  Verified boot does not have to be UEFI Secure Boot.
You could be using U-Boot verified boot or Google vboot:
https://www.coreboot.org/git-docs/Intel/vboot.html
Neither of these provides a flag to the kernel to say it has been
performed.

So verified boot looking off to the kernel, yet lockdown needing to be on,
is one very valid combination, and it must be supported because the Linux
kernel does not always know when it is in a verified-boot environment.
When the Linux kernel thinks verified boot is off, it may not be trivial to
circumvent.

Now, Verified Boot on, lockdown off.  Insanely, this can be required for
diagnostics on some embedded platforms, because EFI Secure Boot does not
have an off switch.  These are platforms that don't boot if they don't have
a PK and KEK set installed.  Yes, on some of these you JTAG the PK and KEK
set in.

The fact that this Verified Boot on, lockdown off case causes trouble
points to a clear problem.  The user owns the hardware; they should have
the right to defeat Secure Boot if they wish to.

In fact, the issue that you cannot install a KEK per installed operating
system shows a problem as well.

So all OSes use the same KEK for their installers, and then all
non-Microsoft OSes in a lot of cases use the same KEK for booting.  Any of
these bootloaders/kernels with a defect will end up with security exactly
like Verified Boot on, lockdown off.  Remember, attackers will pass around
copies of whatever they need to breach a system, so if they find a
defective solution somewhere they will ship it everywhere.  The attackers
that Secure Boot attempts to prevent are criminals anyhow; what is a little
copyright violation to them?  So while the current UEFI design is security
theatre there should not be any special effort to support it.

If UEFI were not security theatre, there would be a clean way for people
installing and setting up their systems to list which operating system KEKs
should be accepted, allowing the attack surface to be minimised and the
damage from any flawed implementation to be limited as well.  That way end
users could opt in or out of operating systems based on security.  If the
user has opted out of all operating systems doing Verified Boot on,
lockdown off, those are not a threat.  Also, any OS with a defective kernel
or bootloader whose KEK the system has not allowed would not be a threat
either.

Really, I see no reason to bend over backwards in the Linux kernel for UEFI
Secure Boot.  You list all four cases; they need to exist for different use
cases of the Linux kernel.  The fact that UEFI Secure Boot as currently
implemented on x86 does not handle the fact that all four use cases need to
exist is really an issue with UEFI Secure Boot, to be fixed by those
designing UEFI for the future.

Allowing the kernel to be configured in the four different ways does not
mean a party like Microsoft has to sign off on everything the Linux kernel
can do.  It's not as if Android/IoT vendors have to bow to Microsoft.

The Linux kernel should not show favouritism.  This does mean that all four
modes should be in the kernel configuration options.

Matthew Garrett, your mistake is saying that only two are valid when all
four are valid in different use cases.  Circumventing security is sometimes
required, and accepting that case is hard for some people.  Of course, when
a party needs to circumvent security, the fact that doing so currently
hands out the keys to the whole world of UEFI systems is a very big
security design flaw in UEFI.

Why should the Linux kernel contain code to work around the defective
design of UEFI, and limit what users can do, whether they are using UEFI or
not?

Peter Dolding


Re: [kernel-hardening] Re: [PATCH v7 2/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-06-03 Thread Peter Dolding
On Sun, Jun 4, 2017 at 8:22 AM, Matt Brown  wrote:
> On 06/03/2017 06:00 PM, Alan Cox wrote:
>>>
>>> TIOCSLCKTRMIOS
>>
>>
>> That one I'm more dubious about
>>
>>> TIOCSLTC
>>> TIOCSSOFTCAR
>>
>>
>> tty_io.c also has a few and n_tty has a couple we'd want.
>>
>>>
>>> would it be overkill to have a sysctl kernel.ttyioctlwhitelist.X where X
>>> is one of the ioctls above?
>>
>>
>> Why would anyone want to change the entries on that list
>>
>
> Did you see Serge's proposed solution? I want us to not be talking past
> each other. Serge proposed the following:
>
> | By default, nothing changes - you can use those on your own tty, need
> | CAP_SYS_ADMIN against init_user_ns otherwise.
> |
> | Introduce a new CAP_TTY_PRIVILEGED.
> |
> | When may_push_chars is removed from the whitelist, you lose the
> | ability to use TIOCSTI on a tty - even your own - if you do not have
> | CAP_TTY_PRIVILEGED against the tty's user_ns.
>
> The question is how do you add/remove something from this whitelist? I
> assume by add/remove we don't mean that you have to recompile your
> kernel to change the whitelist!
>
> you earlier said you wanted the check to look like this:
>
> | if (!whitelisted(ioctl) && different_namespace && magic_flag)
>
> I want to know which namespace you are talking about here. Did you mean
> user_namespace? (the namespace I added tracking for in the tty_struct)

There are many ways to attempt to cure this problem.  Some of them are
just wrong.

Pushing stuff up to CAP_SYS_ADMIN is pretty much always wrong.

Using a whitelist solution does have a downside, but to use the
applications that use TIOCSTI safely I have not had to push any application
up to CAP_SYS_ADMIN.

Another question: because of the way the exploit works, a broken TIOCSTI
push-back could be something someone runs with CAP_SYS_ADMIN anyway.

What I don't know yet is this: if, whenever an application used TIOCSTI to
push chars back into the input, the input were flushed on tty disconnect or
application termination, would that break any applications?

So it may be possible to allow applications to use TIOCSTI freely and just
make sure that anything an application has pushed back into the input
buffer cannot get to anything else.

The thing to remember is that most of the time, when applications are
controlling other applications, they are not pushing data backwards into
the input.

The question I have is: what are the valid use cases of TIOCSTI?  Given
that grsecurity got away with pushing this up to CAP_SYS_ADMIN, there may
not be many.

If there is no valid usage of TIOCSTI across applications, there is no
reason why TIOCSTI cannot be set up to automatically trigger input flushes
to prevent TIOCSTI-inserted data from getting anywhere.

This could be like X11 and its huge number of features, where a large
number were found that no one ever used; they were just created that way
because it was thought they would be useful.

My problem here is that TIOCSTI might not need a flag at all.  The TIOCSTI
functionality may be in need of limitations, particularly if TIOCSTI
push-back into input that crosses from one application to the next has no
genuine application usage.

So far no one has stated that the exploited TIOCSTI functionality exists in
any genuine application as expected behaviour.  I cannot find an example
where pushing back into the input, then going to the background or
dying/exiting, and having that pushed-back input processed, is done by any
genuine application as expected behaviour.  If there are no genuine users,
that is something that could be limited, closing the door without having to
modify existing applications that don't expect to do that.

It's really easy to get focused on a quick fix to a problem without asking
whether the behaviour is even required.

Peter Dolding


Re: [RFC 0/3] WhiteEgret LSM module

2017-06-03 Thread Peter Dolding
On Thu, Jun 1, 2017 at 1:35 AM, Serge E. Hallyn  wrote:
> Quoting Casey Schaufler (ca...@schaufler-ca.com):
>>
>>
>> On 5/31/2017 3:59 AM, Peter Dolding wrote:
>> > ...
>> >
>> > Like you see here in Australian government policy there is another
>> > thing called whitelisted.
>> > https://www.asd.gov.au/publications/protect/top_4_mitigations_linux.htm
>> > Matthew Garrett you might want to call IMA whitelisting Australian
>> > government for one does not agree.  IMA is signed.   The difference
>> > between signed and white-listed is you might have signed a lot more
>> > than what a particular system is white-listed to allowed used.
>> >
>> To be clear, I'm all for a security module to support this policy.
>> As the explicit requirement is for a whitelist, as opposed to allowing
>> for a properly configured system*, you can't use any of the existing
>> technologies to meet it. This kind of thing** is why we have a LSM
>> infrastructure.
>>
>> Unfortunately, the implementation proposed has very serious issues.
>> You can't do access control from userspace. You can't count on
>> identifying programs strictly by pathname. It's much more complicated
>> than it needs to be for the task.
>>
>> Suggestion:
>>
>> Create an security module that looks for the attribute
>>
>>   security.WHITELISTED
>
> Bonus, you can have EVM verify the validity of these xattrs, and
> IMA verify the interity of the file itself.

Complete fail.  You have to think of a whitelist as a list you give to a
security guard at a gate.

Having it shotgunned all over the file system, so that you have to search
down what is approved, is not acceptable.

I should be clearer: to tick the box you need a whitelist file, where you
can open up one file and see everything that is on the approved list.  The
same goes for a blacklist.  Think of it like a list of invited guests given
to a security guard at a door: you can check who is invited by looking at
that list.  An attribute is like saying "if the person has X id, let them
in"; going to the guard at the door to see who has been let in is not going
to help you.

Of course, just because the guard at the door is letting people on the list
in does not mean they are not checking IDs as well.  This is not an
either/or issue; this is adding a feature.

So a whitelist file and attributes function very differently in production
usage.  You don't want to have to scan a complete filesystem all the time
looking for stray set attributes.
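
To be concrete about what I mean by opening one file and seeing everything,
a hypothetical whitelist file could be as simple as one approved binary per
line, path plus checksum (format and entries invented for illustration):

        # /etc/security/whitelist.conf (hypothetical)
        /usr/sbin/sshd      sha256=<hash of the approved sshd build>
        /usr/bin/rsync      sha256=<hash of the approved rsync build>
        /opt/app/server     sha256=<hash of the approved in-house build>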

Whitelisting and blacklisting fit into IMA, not an LSM, really, because
you need to be able to use other LSMs at the same time as
white/blacklists.

EVM and attributes are so easy to use that implementing whitelist/blacklist
files has not been done, including a means to sign whitelist files to
prevent modification when required.

So what both of you are suggesting is not the right item to tick the box to
claim Linux has whitelist support.

Linux has hacks to implement whitelist support, not proper whitelist
support that functions in the right way.

A whitelist that functions in the right way: look in one location, know
what is set.

Also, IMA support for containers pretty much requires supporting
whitelist/blacklist files, because setting everything in attributes can
become very impractical.

So this is something that is missing.

Peter Dolding


Re: [RFC 0/3] WhiteEgret LSM module

2017-06-03 Thread Peter Dolding
On Thu, Jun 1, 2017 at 1:36 AM, Mehmet Kayaalp
 wrote:
>
>> On May 31, 2017, at 6:59 AM, Peter Dolding  wrote:
>>
>> Number one, we need to split the idea of signed and whitelisted.  IMA is
>> signed; it should not be confused with whitelisted.  You will find
>> policies stating whitelist and signed as two different things.
>
> IMA-appraisal can do both. If the securtiy.ima extended attribute
> of the file is a hash and not a signature, then it is whitelisting.

At this point you straight up fail.  This is no longer classed as a
whitelist; it is now an extended attribute checksum.

IMA with proper whitelist support, where the whitelist is a file, would
allow IMA to have the hashes and so on stored in that file, removing the
need for the filesystem on which IMA is being used to have extended
attributes.

Second, the point of this is being able to open up one file and see what is
approved.


>
>> Like you see here in Australian government policy there is another
>> thing called whitelisted.
>> https://www.asd.gov.au/publications/protect/top_4_mitigations_linux.htm
>> Matthew Garrett, you might want to call IMA whitelisting; the Australian
>> government for one does not agree.  IMA is signed.  The difference
>> between signed and whitelisted is that you might have signed a lot more
>> than what a particular system is whitelisted to allow to be used.
>
> I doubt the Australian government is an authority on Linux features.
> IMA-appraisal can be set to "fix" mode with a boot parameter. If the
> policy covers what you want to whitelist (e.g. files opened by user x),
> and then when those files are accessed, the kernel writes out the hash.
> Then, you can switch to "enforce" mode to allow only files with hashes.

Question: does this feature support booting the system in different modes,
giving different sets of accessible files?

The feature says whitelist, but in fact what ticks the box is lists.  So
booting into standard mode gives one set of applications, booting into
repair mode gives another set, and so on, with this being achieved by
choosing different whitelist files at boot.
>
> Also, you can achieve the same thing by signing all whitelisted
> files and add the certificate to .ima keyring and throwing away the
> signing key.

This here is signing, nothing to do with whitelisting.  This is using
signing to hack around not having a proper whitelist feature, so it can
never tick the box.
>
>> The feature need to include in it name whitelisting or just like the
>> Australian Department of Defence other parties will mark Linux has not
>> having this feature.
>
> I guess we need to advertise IMA-appraisal better.

They have looked at that and it fails, because IMA is currently lacking
the feature.

I do see that whitelist and blacklist file support added to IMA, so that
IMA does not need extended-attribute file systems, would be a good thing
for those who want all the settings in one file.

The UUID of the file system could be included in the path to the file in
the whitelist.
>
>> A whitelist is program name/path and checksum(s).  If the file holds any
>> more than that, it is no longer a whitelist but security policy
>> enforcement or signing.  Whitelists and blacklists are meant to be simple
>> things.  This is also why IMA fails: it is signing, too complex to be a
>> basic whitelist.
>
> When you work out all the little details, you arrive at IMA-appraisal.
> You have to consider how the scheme is bootstrapped and how it
> is protected against the root. IMA-appraisal either relies on a boot
> parameter and write-once policy, or the trusted keyrings.
>
Here you have gone wrong.

You are presuming a whitelist has to be protected against root.  A signed
whitelist does have to be protected against root; an unsigned whitelist
being alterable by those with privilege is in fact expected.

You don't have firewall rules always protected against root, right?
Unsigned whitelists are in the same camp.

Another mistake: when they are looking for a whitelist feature, they are
also looking for a blacklist feature.

IMA features need to apply just as much to containers as they do to the
complete system.  This is where things get tricky.  Putting entries in
filesystem xattrs for every service you have risks running out of
filesystem xattr space.

From my point of view, the missing system-wide whitelist and blacklist file
support is a defect of IMA, which at this stage is not designed to function
without filesystem xattrs; as soon as you remove the means to use xattrs
from the IMA design, you are forced to implement at least a whitelist file.

Also, I see it as a weakness in IMA that it cannot be done on a
per-container basis, and this is also most likely due to over-dependence on
the file system.

I am not saying that IMA xattr usage has to be removed, but it should not
be the only option IMA has.

Peter Dolding


Re: [kernel-hardening] Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-31 Thread Peter Dolding
On Wed, May 31, 2017 at 7:52 AM, Alan Cox  wrote:
>> > So tty stuff should under a tty capabilities.
>>
>> (last reply on this)
>>
>> Currently capabilities.7 says
>>
>>   * employ  the  TIOCSTI ioctl(2) to insert characters into the 
>> input queue of a
>> terminal other than the caller's controlling terminal;
>>
>> for CAP_SYS_ADMIN.
>>
>> So you can create a new CAP_SYS_TIOCSSTI if you like, and offer a patch where
>> *both* CAP_SYS_ADMIN and CAP_SYS_ADMIN suffice.  Again, see CAP_SYSLOG for a
>> prior example.
>
> Even then it wouldn't be useful because the attacker can use every other
> interface in the tty layer, many of which you can't magic away behind a
> capability bit. And the applications would need changing to use the
> feature - at which point any theoretical broken apps can instead be fixed
> to use a pty/tty pair and actually fix the real problem.
>
Alan is right.   CAP_SYS_ADMIN allows crossing the tty barrier.

Wrapping broken applications in a pty/tty pair, as the lxc application
does, would be defeated if those applications moved up to CAP_SYS_ADMIN,
because you would have granted them the high right of crossing pty/tty
containment.

Splitting a CAP_SYS_TIOCSTI out by itself, without the feature remaining in
CAP_SYS_ADMIN, means broken applications can be allowed to run in something
like an lxc container, where they cannot go anywhere with the exploit
because the pty/tty they are picking is not going to get them very far at
all.

Pushing TIOCSTI up to CAP_SYS_ADMIN to address this problem is wrong.  The
question is also how many applications use the CAP_SYS_ADMIN feature to
push chars into other pty/ttys on the system.  Pushing across the pty/tty
barrier may not be a suitable feature to have generically in CAP_SYS_ADMIN
in the first place.

http://www.halfdog.net/Security/2012/TtyPushbackPrivilegeEscalation/

This here is an example of TIOCSTI push-back, as permitted by
CAP_SYS_ADMIN, being bad.  I don't know of a genuine program using
push-back in the exploiting way, where the pushed-back input is expected to
be processed after the program has terminated.

Really, we need to work out how much breakage would in fact be caused by
broadly restricting both push-back and writes across the tty barrier.  This
is not like CAP_SYSLOG; these are features that can be used to exploit a
system badly.  It is possible that the exploiting form of TIOCSTI push-back
is used by nothing genuine in userspace in any properly functional case.
If that is the case, unconstrained TIOCSTI push-back would only be making
application crashes worse.

The reason I want TIOCSTI push-back moved to its own capability first is to
find out whether anything is in fact using it as part of genuine usage, and
to allow anyone caught out to work around it.  I am sorry, this is most
likely me using X11 logic: break it and see if anyone yells.  If no one
complains, remove the feature completely, and this closes this form of
exploit for good.

Peter Dolding


Re: [RFC 0/3] WhiteEgret LSM module

2017-05-31 Thread Peter Dolding
t what major end
consumers of this are asking for.

Now, I am only referring to how the Australian government will title the
Linux kernel features and the requirements they are looking for.  I would
not be surprised if other governments are the same in their titling of
Linux features.

I see the idea of this patch as kind of on the right path, but the
implementation is very lacking.  Maybe system-wide whitelist features
should be linked to IMA, with a user-space callable program; of course that
program would not override signed/unsigned approval, it would only check
against whatever the current whitelist is.

A whitelist is program name/path and checksum(s).  If the file holds any
more than that, it is no longer a whitelist but security policy enforcement
or signing.  Whitelists and blacklists are meant to be simple things.  This
is also why IMA fails: it is signing, too complex to be a basic whitelist.

Whitelists are expected system-wide and per user/service, so the ability to
connect a whitelist to a namespace could possibly be used to do the
per-user/service part.

The reason for userspace is an old Linux system where government policy
calls for some new checksum that the old Linux kernel does not have.  Of
course this issue could possibly be handled another way, by allowing the
Linux kernel to use assigned userspace programs for checksumming.
Remember, what we make today will be old at some point in the future, and
running a 10+ year old system is nothing new to governments.

Yes, the inverted policy, a blacklist, was not in this module; but since it
uses a userspace application, it would not be hard for the userspace
program to be set to approve everything bar what it has on a blacklist.

So the design needs to include the option to use both a whitelist and a
blacklist, with these being simple filenames and paths with checksums.  We
need something in the Linux kernel documentation covering whitelists and
blacklists, with them being kept simple.

Peter Dolding.


Re: [kernel-hardening] Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-29 Thread Peter Dolding
On Sat, May 20, 2017 at 12:33 AM, Serge E. Hallyn  wrote:
> On Fri, May 19, 2017 at 12:48:17PM +1000, Peter Dolding wrote:
>> Using cap_sys_admin as fix is like removing car windsheld because
>> vision is being blocked by a rock hitting it.
>
> Nonsense.  If the application has cap_sys_admin then it is less contained and
> more trusted anyway.  If I went to the trouble to run an application in a
> private user namespace (where it can have cap_sys_admin, but not targeted
> at my tty) then it should be more contained.  That's the point of targeted
> capabilities.

The thing that is missed every time is how much is in cap_sys_admin.

So you are saying a user namespace has to be set up to contain the defect.

Really, no application should have cap_sys_admin.

The theory of capabilities is that security should be broken down into
logical blocks.

So tty stuff should be under a tty capability.

This one here should not be shoved into cap_sys_admin, because can you show
a single case of a generally used application performing this action, in
the way the exploit does, as normal behaviour?

The exploits are using behaviours that have no place in general use.

It's really simple to shove everything into cap_sys_admin instead of
saying: hey, let's look at the exploits, how they work, and whether this
should be more or less blanket banned.  The behaviour in question is being
able to push chars into the input stream and have them processed after the
application has terminated or after the application has switched to the
background.  That is not pushing data into another tty; pushing data into a
different tty is already restricted to cap_sys_admin.
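
For context, the existing restriction is roughly the following (trimmed
from drivers/tty/tty_io.c and quoted from memory, so treat it as
approximate): your own controlling tty is allowed unconditionally, any
other tty needs CAP_SYS_ADMIN.

        static int tiocsti(struct tty_struct *tty, char __user *p)
        {
                char ch, mbz = 0;
                struct tty_ldisc *ld;

                /* own controlling tty: allowed; other tty: CAP_SYS_ADMIN */
                if ((current->signal->tty != tty) && !capable(CAP_SYS_ADMIN))
                        return -EPERM;
                if (get_user(ch, p))
                        return -EFAULT;
                ld = tty_ldisc_ref_wait(tty);
                if (!ld)
                        return -EIO;
                ld->ops->receive_buf(tty, &ch, &mbz, 1);
                tty_ldisc_deref(ld);
                return 0;
        }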

Personally, from my point of view, when an application terminates or
switches to the background, whatever it pushed back into the input buffer
should be junked, with maybe a special capability to deal with the rare
case of applications that expect this behaviour.

Also, please remember that one of the applications using this behaviour of
pushing stuff back into the input buffer is csh; in other words, a general
user shell.  It will not be the only application in general usage that,
after the change, would have to be given cap_sys_admin because it uses
TIOCSTI in a way the patch blocks when the program does not have
cap_sys_admin.  So now you have more applications running with
cap_sys_admin, and so more security problems.

Peter Dolding.


Re: [kernel-hardening] Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-18 Thread Peter Dolding
ungetc().  So this is something that is quite well used.  I see
ungetc-style push-back as kind of a bad idea to implement at kernel level,
going into a tty shared between processes.
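
To be clear about the distinction: the user-space push-back people are used
to is ungetc(), which only affects the calling process's own stdio buffer,
never a queue shared between processes; TIOCSTI pushes into the kernel's
tty input queue, which is shared.  A tiny sketch:

        #include <stdio.h>

        /* ungetc() pushes one byte back onto THIS process's stdio stream;
         * no other process reading the same tty ever sees it. */
        int peek_byte(FILE *f)
        {
                int c = fgetc(f);

                if (c != EOF)
                        ungetc(c, f);   /* put it back for the next read */
                return c;
        }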

> If there is some better solution that the kernel can provide to
> mitigate processes misusing ttys, then by all means, we can add that
> too, but has nothing to do with refusing this change. This solves a
> specific problem that in many cases is the only path to privilege
> escalation (as Daniel mentioned). Refusing this change is nonsense.
> "Your car shouldn't have seat belts because maybe something will stab
> you through the windshield" isn't a reasonable argument to make.

Using cap_sys_admin as the fix is like removing the car windshield because
your vision is being blocked by the rock that hit it.

Kees, the problem with accepting a security fix that is wrong is that the
proper change never gets worked on.

I am not saying there is not a real problem here.  The fix is just not to
push this to CAP_SYS_ADMIN.  Given that TIOCSTI crossing process and tty
boundaries has been used in security breaks, and that the applications
using it as either administrator or normal user are limited, this ability
does not belong in CAP_SYS_ADMIN either.

The same is true of obsolete function calls that have already been shoved
into CAP_SYS_ADMIN: there comes a point where no valid userspace
application is using a function.  At that point the only thing using that
function is exploits, so really CAP_SYS_ADMIN should not have access to
those obsolete functions.  There is a real need for a CAP_SYS_OBSOLETE.  If
a program has to have CAP_SYS_OBSOLETE set on it, this means it is using
functionality that is known to be busted in some way.

This is the biggest problem here.  There is no real agreement on how we
exterminate/restrict flawed functions so as to progressively reduce the
number of applications and users who can access them, allowing them to be
fully disabled for 99.99 percent of people using systems.

That is what people are not getting: CAP_SYS_ADMIN has too many users to be
counted, and too many to mitigate as well.

Yes, we must not break userspace, but we also must mitigate against
userspace issues.  We do need clear rules on how user-space is allowed to
be broken in order to fix these security problems, and of course they
should be made part of the Linux kernel documentation.  Altering the binary
is for sure out, because the person operating the system may not have that
right.  Placing a flag on a binary so that it works I would see as
acceptable, as long as that flag does not grant a stack of other privileges
that the application never had before.

Peter Dolding


Re: [kernel-hardening] Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-16 Thread Peter Dolding
On Wed, May 17, 2017 at 1:48 AM, Serge E. Hallyn  wrote:
> Quoting Kees Cook (keesc...@chromium.org):
>> On Tue, May 16, 2017 at 5:22 AM, Matt Brown  wrote:
>> > On 05/16/2017 05:01 AM, Peter Dolding wrote:
>> >>>
>> >>>
>> >>> I could see a case being make for CAP_SYS_TTY_CONFIG. However I still
>> >>> choose to do with CAP_SYS_ADMIN because it is already in use in the
>> >>> TIOCSTI ioctl.
>> >>>
>> >> Matt Brown don't give me existing behaviour.CAP_SYS_ADMIN is
>> >> overload.   The documentation tells you that you are not to expand it
>> >> and you openly admit you have.
>> >>
>> >
>> > This is not true that I'm openly going against what the documentation
>> > instructs. The part of the email chain where I show this got removed
>> > somehow. Again I will refer to the capabilities man page that you
>> > quoted.
>> >
>> > From http://man7.org/linux/man-pages/man7/capabilities.7.html
>> >
>> > "Don't choose CAP_SYS_ADMIN if you can possibly avoid it!
>> > ...
>> > The only new features that should be associated with CAP_SYS_ADMIN are
>> > ones that closely match existing uses in that silo."
>> >
>> > My feature affects the TIOCSTI ioctl. The TIOCSTI ioctl already falls
>> > under CAP_SYS_ADMIN, therefore I actually *am* following the
>> > documentation.
>>
>> CAP_SYS_ADMIN is the right choice here, I agree with Matt: it is
>> already in use for TIOCSTI. We can't trivially add new capabilities
>> flags (see the various giant threads debating this, the most recently
>> that I remember from the kernel lock-down series related to Secure
>> Boot).
>
> Consideer that if we use CAP_SYS_TTY_CONFIG now, then any applications
> which are currently being given CAP_SYS_ADMIN would need to be updated
> with a second capability.  Not acceptable.  Even when we split up
> CAP_SYSLOG, we took care to avoid that (by having the original capability
> also suffice, so either capability worked).
>
There is another option: create a security bit.

It could be called SECBIT_CONTAINER.  It would turn off functionality like
TIOCSTI that can be used to break out.

In this case, in the mainlined code, the TIOCSTI usage currently allowed
under CAP_SYS_ADMIN is a container breaker: it is designed to allow
reaching across users and ttys.  A SECBIT is an inverted capability, so
when it is enabled it disables something, and once enabled it cannot be
turned off again.  The way lxc addressed the user-level TIOCSTI breakout
does not work against the CAP_SYS_ADMIN one.  A security bit that disabled
TIOCSTI completely would prevent all the escape paths via TIOCSTI.
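
A sketch of what I mean, modelled on how the existing securebits are set;
SECBIT_CONTAINER and its locked partner do not exist, they are the
proposal:

        /*
         * Hypothetical: SECBIT_CONTAINER is not in linux/securebits.h.  The
         * mechanics copy the existing bits (SECBIT_NOROOT etc.): set the bit
         * plus its _LOCKED partner before dropping privilege and it can
         * never be cleared again for that process tree.
         */
        #include <sys/prctl.h>
        #include <linux/securebits.h>

        #define SECBIT_CONTAINER        (1UL << 8)      /* hypothetical */
        #define SECBIT_CONTAINER_LOCKED (1UL << 9)      /* hypothetical */

        static int lock_out_tty_escapes(void)
        {
                long bits = prctl(PR_GET_SECUREBITS, 0, 0, 0, 0);

                if (bits < 0)
                        return -1;
                return prctl(PR_SET_SECUREBITS,
                             (unsigned long)bits | SECBIT_CONTAINER |
                             SECBIT_CONTAINER_LOCKED, 0, 0, 0);
        }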

There would also be room for a SECBIT_NO_OBSOLETE, which is quite simple:
make using obsolete functions fatal to the application.  With CAP_SYSLOG,
for example, under SECBIT_NO_OBSOLETE programs still using the old
capability would find the feature removed.  So over time we can
systematically remove the multiple entry paths we have now, as userspace
updates and stops requiring the second path.

There is more than one way to skin this cat.  There is no need to add more
to CAP_SYS_ADMIN, and it is particularly bad when you consider that obeying
the Linux rule of user-space compatibility would mean applying
CAP_SYS_ADMIN to existing applications under Matt's patch.

Peter

.


Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-16 Thread Peter Dolding
On Wed, May 17, 2017 at 12:28 AM, Kees Cook  wrote:
> On Tue, May 16, 2017 at 5:22 AM, Matt Brown  wrote:
>> On 05/16/2017 05:01 AM, Peter Dolding wrote:
>>>>
>>>>
>>>> I could see a case being make for CAP_SYS_TTY_CONFIG. However I still
>>>> choose to do with CAP_SYS_ADMIN because it is already in use in the
>>>> TIOCSTI ioctl.
>>>>
>>> Matt Brown don't give me existing behaviour.CAP_SYS_ADMIN is
>>> overload.   The documentation tells you that you are not to expand it
>>> and you openly admit you have.
>>>
>>
>> This is not true that I'm openly going against what the documentation
>> instructs. The part of the email chain where I show this got removed
>> somehow. Again I will refer to the capabilities man page that you
>> quoted.
>>
>> From http://man7.org/linux/man-pages/man7/capabilities.7.html
>>
>> "Don't choose CAP_SYS_ADMIN if you can possibly avoid it!
>> ...
>> The only new features that should be associated with CAP_SYS_ADMIN are
>> ones that closely match existing uses in that silo."
>>
>> My feature affects the TIOCSTI ioctl. The TIOCSTI ioctl already falls
>> under CAP_SYS_ADMIN, therefore I actually *am* following the
>> documentation.
>
> CAP_SYS_ADMIN is the right choice here, I agree with Matt: it is
> already in use for TIOCSTI. We can't trivially add new capabilities
> flags (see the various giant threads debating this, the most recently
> that I remember from the kernel lock-down series related to Secure
> Boot).

We cannot just keep on expanding CAP_SYS_ADMIN either.
>
>>> I fact this usage of TIOCSTI I personally think should require two
>>> capabilities flags set.   CAP_SYS_ADMIN section left as it is at this
>>> stage.   With TIOSCTI stuck behind another capability.
>>>
>>> If you had added a new capability flag you could set file capabilities
>>> on any of the old applications depending on the now secured behaviour.
>
> If we're adjusting applications, they should be made to avoid TIOSCTI
> completely. This looks to me a lot like the symlink restrictions: yes,
> userspace should be fixed to the do the right thing, but why not
> provide support to userspace to avoid the problem entirely?
>
Kees, I like the idea, but you have forgotten the all-important rule, the
Linus rule: existing applications must still have a method that works.
So modifying application binaries is not a way out of the problem.

Please note that making CAP_SYS_ADMIN the only way to use TIOCSTI also
means setting CAP_SYS_ADMIN on all the existing applications, to obey the
Linus rule of not breaking userspace.  That is why the patch is strictly
a no: it means elevating the privilege of existing applications and
possibly opening up more security flaws.

In reality, any patch like the one we are talking about, given the Linus
rule and the security risk obeying it would open up, should just be
rejected.  There is another kind of approach, which I will cover with
Serge.

Peter Dolding.


Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-16 Thread Peter Dolding
>
> I could see a case being make for CAP_SYS_TTY_CONFIG. However I still
> choose to do with CAP_SYS_ADMIN because it is already in use in the
> TIOCSTI ioctl.
>
Matt Brown, don't give me existing behaviour.  CAP_SYS_ADMIN is
overloaded.  The documentation tells you not to expand it, and you openly
admit you have.

Does anything about TIOCSTI functionality say it really should be under
CAP_SYS_ADMIN?

If functionality is going to cause security problems for containers,
maybe it should not be in CAP_SYS_ADMIN but in its own capability that
can be enabled on a file-by-file basis.

>
> You might be right that CAP_SYS_ADMIN is overloaded, but my patch
> barely adds anything to it since TIOCSTI already falls under its
> control. It seems extreme to say this patch ought to be rejected just
> because it contains CAP_SYS_ADMIN. If we want to fix the state of Linux
> capabilities, then I suggest that should be a separate patchset to
> reorganize them into a more modular set of controls.
>
We have ended up with CAP_SYS_ADMIN as a mess through death by a thousand
cuts.  Every person who extended CAP_SYS_ADMIN into its current mess said
the same thing: "my patch barely adds anything".  Multiply that by a few
thousand and you end up with what we have today.  At some point "no more"
has to be said.

There is no point attempting to tidy it up if rules are not put in place
so it does not turn into a mess again.

This is not something suitable to be done as one large patchset.  It is
better done by the same kind of method that created it: every time
someone wants to alter something associated with CAP_SYS_ADMIN, it gets
assessed, and the patch has to be one that partly corrects the existing
mess.  Do this enough times and we will no longer have a mess in
CAP_SYS_ADMIN.

https://www.freedesktop.org/software/systemd/man/systemd.exec.html
Please note the CapabilityBoundingSet=

Your current patch adds no extra control for someone running a service
under systemd (or anything like it) to say "I don't want these processes
to have the means to do this", even though they are running with
CAP_SYS_ADMIN to perform other tasks.
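
For reference, this is roughly the mechanism CapabilityBoundingSet= uses
underneath, sketched here with the existing prctl() interface (the service
path is a placeholder).  The point stands: the bounding set can only drop
CAP_SYS_ADMIN wholesale, there is no knob for just the TIOCSTI part of it.

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/capability.h>

int main(void)
{
    /* Dropping from the bounding set needs CAP_SETPCAP in the caller. */
    if (prctl(PR_CAPBSET_DROP, CAP_SYS_ADMIN, 0, 0, 0) < 0) {
        perror("PR_CAPBSET_DROP");
        return 1;
    }
    /* /path/to/service is a placeholder for the supervised program;
     * it can never reacquire CAP_SYS_ADMIN, even via file capabilities. */
    execl("/path/to/service", "service", (char *)NULL);
    perror("execl");
    return 1;
}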

--employ the TIOCSTI ioctl(2) to insert characters into the input
queue of a terminal other than the caller's controlling terminal;--
This, currently under CAP_SYS_ADMIN, is vastly more powerful than the use
you are attempting to take away with your patch: it can send input into
other terminals.  It is a vastly more powerful version of TIOCSTI.
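
For readers who have not met the ioctl, a minimal sketch of what it does
(illustrative only): each call pushes one byte into a terminal's input
queue as if it had been typed there.  Unprivileged callers are limited to
their own controlling terminal; with CAP_SYS_ADMIN the same call works on
any terminal the caller can open, which is the cross-terminal power being
discussed.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

int main(void)
{
    /* Ends up in the input queue of the terminal on fd 0, exactly as if
     * the user sitting at that terminal had typed it. */
    const char *cmd = "echo injected\n";

    for (size_t i = 0; i < strlen(cmd); i++) {
        if (ioctl(0, TIOCSTI, &cmd[i]) < 0) {
            perror("TIOCSTI");
            return 1;
        }
    }
    return 0;
}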

In fact I personally think this usage of TIOCSTI should require two
capability flags set: the CAP_SYS_ADMIN check left as it is at this
stage, with TIOCSTI additionally stuck behind another capability.

If you had added a new capability flag, you could set file capabilities
on any of the old applications that depend on the now-secured behaviour.

https://github.com/lxc/lxc/commit/e986ea3dfa4a2957f71ae9bfaed406dd6e16

Also, the general-user TIOCSTI issue can be handled a different way, as
the LXC fix shows.  They use a pty to isolate, meaning that in their
fixed setup unprivileged TIOCSTI was not harmful, but CAP_SYS_ADMIN
TIOCSTI still could be.  Your patch does not address this problem because
you shoved everything under CAP_SYS_ADMIN.  A different capability for
TIOCSTI, separate from CAP_SYS_ADMIN, would allow the CAP_SYS_ADMIN
TIOCSTI functionality to be disabled on its own (a rough sketch of the
pty approach follows).
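
A rough sketch, under the assumption of a setup like the LXC one, of what
that pty isolation amounts to: the untrusted side gets a freshly allocated
pty slave as its controlling terminal and the privileged side proxies the
bytes, so any TIOCSTI lands in the new pty's queue rather than in the real
terminal.  Error handling and the proxy loop are trimmed.

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>

int main(void)
{
    /* Allocate a fresh pty pair; the slave becomes the untrusted side's
     * controlling terminal, the master stays with the trusted side. */
    int master = posix_openpt(O_RDWR | O_NOCTTY);

    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }
    printf("give %s to the untrusted side, proxy bytes via fd %d\n",
           ptsname(master), master);
    /* ... fork(), setsid(), open the slave as the child's controlling
     * tty, then copy data between 'master' and the real terminal.  Any
     * TIOCSTI the child issues only reaches this pty, not the admin's
     * terminal. */
    return 0;
}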

This is what you see more often than not when you dig into patches that
add more to CAP_SYS_ADMIN.  Adding to CAP_SYS_ADMIN does not fix the
problem in most cases.  Breaking CAP_SYS_ADMIN functionality up should be
the goal, not expanding it.

Basically you have done something the documentation explicitly tells
developers not to do.  If you start looking at the problem, what you are
doing is not helping.  If people end up granting CAP_SYS_ADMIN to access
TIOCSTI, that gives the program a more powerful version of TIOCSTI to do
more harm with, so reduced containment: totally opposed to what you are
meant to be doing with capabilities.


Re: [PATCH v6 0/2] security: tty: make TIOCSTI ioctl require CAP_SYS_ADMIN

2017-05-15 Thread Peter Dolding
On Tue, May 16, 2017 at 6:57 AM, Alan Cox  wrote:
> O> I'm not implying that my patch is supposed to provide safety for
>> "hundreds of other" issues. I'm looking to provide a way to lock down a
>> single TTY ioctl that has caused real security issues to arise. For
>
> In other words you are not actually fixing anything.
>
>> this reason, it's completely incorrect to say that this feature is
>> snake oil. My patch does exactly what it claims to do. No more no less.
>>
>> > In addition your change to allow it to be used by root in the guest
>> > completely invalidates any protection you have because I can push
>> >
>> > "rm -rf /\n"
>> >
>> > as root in my namespace and exit
>> >
>> > The tty buffers are not flushed across the context change so the shell
>> > you return to gets the input and oh dear
>>
>> This is precisely what my patch prevents! With my protection enabled, a
>> container will only be able to use the TIOCSTI ioctl on a tty if that
>> container has CAP_SYS_ADMIN in the user namespace in which the tty was
>> created.
>
> Which is not necessarily the namespace of the process that next issues a
> read().
>
> This is snake oil. There is a correct and proper process for this use
> case. That proper process is to create a new pty/tty pair. There are two
> cases
>
> - processes that do it right in which case the attacker is screwed and we
>   don't need to mess up TIOCSTI handling for no reason.
>
> - processes that do it wrong. If they do it wrong then they'll also use
>   all the other holes and attacks available via the same path which are
>   completely unfixable without making your system basically unusable.
>
>
> So can we please apply the minimum of common sense reasoning to this and
> drop the patch.
>
> Alan
You missed something important.

From: http://man7.org/linux/man-pages/man7/capabilities.7.html
Don't choose CAP_SYS_ADMIN if you can possibly avoid it!
A vast  proportion of existing capability checks are associated with this
capability (see the partial list above).  It can plausibly be
called "the new root", since on the one hand, it confers a wide
range of powers, and on the other hand, its broad scope means that
 this is the capability that is required by many privileged
programs.  Don't make the problem worse.  The only new features
that should be associated with CAP_SYS_ADMIN are ones that closely
match existing uses in that silo.

This is not only an improper fix; the attempted fix is a breach of the
documentation.  CAP_SYS_ADMIN is so far overloaded that it does not need
any more thrown in its direction.

This is one of the grsecurity patches' mistakes.  GRKERNSEC_HARDEN_TTY is
from 18 Feb 2016, and this documentation was in place at the time they
wrote it.  Yes, GRKERNSEC_HARDEN_TTY does exactly the same thing.  The
grsecurity guys made the same error, and the grsecurity patches are
filled with it.

The result of a TIOCSTI patch done this way is that you have to use
CAP_SYS_ADMIN to use TIOCSTI, opening up (exactly as grsecurity did)
every exploit CAP_SYS_ADMIN enables, which is quite a few.

Now, I don't know whether CAP_SYS_TTY_CONFIG, which is an existing
capability, might be what TIOCSTI should sit under.

The reality here is that CAP_SYS_ADMIN has become to Linux kernel
security what the big kernel lock was to multi-threading.

In an ideal world CAP_SYS_ADMIN would not be used directly in most cases.
Instead CAP_SYS_ADMIN would have a stack of sub-capability groups under
it.

Grsecurity's excuse for doing it wrong is here:
https://forums.grsecurity.net/viewtopic.php?f=7&t=2522

Yes, most capabilities open up the possibility of exploiting the system.
They are not in fact designed to prevent this.  They are designed to
limit the damage in case of malfunction, so that a program or user has
only limited methods of damaging the system.  If a program malfunctioning
with only limited capabilities attempts an action those capabilities
don't allow, no damage happens.  CAP_SYS_ADMIN, for sure, is not limited.

But since the grsecurity developers took the point of view that these are
false boundaries, they proceeded to stack item after item under
CAP_SYS_ADMIN, because the boundary made no sense to them.  Some mainline
Linux kernel developers are guilty of the same sin of overloading
CAP_SYS_ADMIN.

From my point of view, any new patch containing a direct use of
CAP_SYS_ADMIN should be bounced just for that.  If features need to be
added to CAP_SYS_ADMIN now, they should go into another capability that
is enabled whenever CAP_SYS_ADMIN is; hopefully, if we do this over time,
we will be able to clean CAP_SYS_ADMIN up into sanity.


Peter


Re: More LSM vs. Containers (having nothing at all to do with the AppArmor Security Goal)

2007-11-18 Thread Peter Dolding
On Nov 18, 2007 5:22 AM, Casey Schaufler <[EMAIL PROTECTED]> wrote:
>
>
> --- Peter Dolding <[EMAIL PROTECTED]> wrote:
>
> > On Nov 17, 2007 11:08 AM, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> > > Peter Dolding wrote:
> > > >>> What is left unspecified here is 'how' a child 'with its own profile'
> > is
> > > >>> confined here. Are it is confined to just its own profile, it may that
> > > >>> the "complicit process" communication may need to be wider specified 
> > > >>> to
> > > >>> include this.
> > > >>>
> > > > Sorry have to bring this up.  cgroups why not?
> > > Because I can't find any documentation for cgroups? :)
> > >
> > > >   Assign application to
> > > > a cgroup that contains there filesystem access permissions.   Done
> > > > right this could even be stacked.  Only give less access to
> > > > application unless LSM particularly overrides.
> > > >
> > > This comes no where close to AppArmor's functionality:
> > >
> > > * Can't do learning mode
> > > * Can't do wildcards
> > > * Sucks up huge loads of memory to do that much FS mounting (imagine
> > >   thousands of bind mounts)
> > > * I'm not sure, but I don't think you can do file granularity, only
> > >   directories
> > >
> > Ok sorry to say so far almost percent wrong.  Please note netlabels
> > falls into a control group.  All function of Apparmor is doable bar
> > exactly learning mode.   For learning mode that would have to be a
> > hook back to a LSM I would expect.
> >
> > Done right should suck up no more memory than current apparmor.  But
> > it will required all LSM's doing file access to come to common
> > agreement how to do it.  Not just hooks into the kernel system any
> > more.
>
> The ability to provide alternative access control schemes is the
> purpose of LSM. The fact that we insane security people can't come
> to the agreement you require is why we have LSM. You can not have
> what you are asking for. Please suggest an alternative design.

Part of the building the alternative design requires aggreeing to
build sections common.
Like the netlabels.  We need this for other parts like filesystems.
>
> > At the container entrance point there needs file granularity applied
> > for complete and correct container isolation to be done.
> > >
> > > > There are reasons why I keep on bring containers up it changes the
> > > > model.  Yes I know coming to a common agreement in these sections will
> > > > not be simple.   But at some point it has to be done.
> > > >
> > > Containers and access controls provide related but different functions.
> > > Stop trying to force containers to be an access control system, it does
> > > not fit well at all.
> > >
> > > Rather, we need to ensure that LSM and containers play well together.
> > > What you proposed in the past was to have an LSM module per container,
> > > but I find that absurdly complex: if you want that, then use a real VMM
> > > like Xen or something. Containers are mostly used for massive virtual
> > > domain hosting, and what you want there is as much sharing as possible
> > > while maintaining isolation. so why would you corrupt that with separate
> > > LSM modules per container?
> >
> > Please stop being foolish.  Xen and the like don't share memory.   You
> > are basically saying blow out memory usage just because someone wants
> > to use a different LSM.
>
> Yup. No one ever said security was cheap. Most real, serious security
> solutions implemented today rely on separate physical machines for
> isolation. Some progressive installations are using virtualization,
> and the lunatic fringe uses the sort of systems well served by LSM.
> Let's face it, people who really care are willing to pay a premium.

Bigger problem: containers are processor neutral, while Xen and a lot of
the other solutions are not.  There are advantages for people who don't
need the full-blown option.  There need to be two levels: VMs for the
heavy cases, and containers for where the security is needed but not to
the point of needing two different kernels.  Restricting what can be in a
container for some poor reason that nobody has even attempted to work
around is not valid.  In theory, using containers you should be able to
run every Linux distro on earth under one kernel as long as it supports
that seri

Re: More LSM vs. Containers (having nothing at all to do with the AppArmor Security Goal)

2007-11-16 Thread Peter Dolding
On Nov 17, 2007 11:08 AM, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> Peter Dolding wrote:
> >>> What is left unspecified here is 'how' a child 'with its own profile' is
> >>> confined here. Are it is confined to just its own profile, it may that
> >>> the "complicit process" communication may need to be wider specified to
> >>> include this.
> >>>
> > Sorry have to bring this up.  cgroups why not?
> Because I can't find any documentation for cgroups? :)
>
> >   Assign application to
> > a cgroup that contains there filesystem access permissions.   Done
> > right this could even be stacked.  Only give less access to
> > application unless LSM particularly overrides.
> >
> This comes no where close to AppArmor's functionality:
>
> * Can't do learning mode
> * Can't do wildcards
> * Sucks up huge loads of memory to do that much FS mounting (imagine
>   thousands of bind mounts)
> * I'm not sure, but I don't think you can do file granularity, only
>   directories
>
OK, sorry to say, so far that is almost one hundred percent wrong.
Please note that netlabels falls into a control-group style of feature.
All AppArmor functionality is doable except learning mode; learning mode
would have to be a hook back to an LSM, I would expect.

Done right it should use no more memory than current AppArmor.  But it
will require all LSMs doing file access to come to a common agreement on
how to do it, not just hooks into the kernel system any more.

At the container entry point, file-level granularity needs to be applied
for complete and correct container isolation.
>
> > There are reasons why I keep on bring containers up it changes the
> > model.  Yes I know coming to a common agreement in these sections will
> > not be simple.   But at some point it has to be done.
> >
> Containers and access controls provide related but different functions.
> Stop trying to force containers to be an access control system, it does
> not fit well at all.
>
> Rather, we need to ensure that LSM and containers play well together.
> What you proposed in the past was to have an LSM module per container,
> but I find that absurdly complex: if you want that, then use a real VMM
> like Xen or something. Containers are mostly used for massive virtual
> domain hosting, and what you want there is as much sharing as possible
> while maintaining isolation. so why would you corrupt that with separate
> LSM modules per container?

Please stop being foolish.  Xen and the like don't share memory.  You are
basically saying to blow out memory usage just because someone wants to
use a different LSM.

Besides, file access control is part of running containers isolated in
the first place, and it needs to be LSM neutral.

This is the problem: the current model just will not work.  Some features
are needed in the Linux kernel all the time and have to become LSM
neutral because of containers.

The next big one after the filesystem will most likely be common security
controls for devices.  These are just features needed to complete
containers.  Basically, to do containers, LSMs have to be cut up, or
container functionality will depend completely on whichever LSM is in
use.

Peter Dolding


Re: [Apparmor-dev] Re: AppArmor Security Goal

2007-11-15 Thread Peter Dolding
> > What is left unspecified here is 'how' a child 'with its own profile' is
> > confined here. Are it is confined to just its own profile, it may that
> > the "complicit process" communication may need to be wider specified to
> > include this.

Sorry, I have to bring this up: cgroups, why not?  Assign an application
to a cgroup that contains its filesystem access permissions.  Done right
this could even be stacked, only ever giving an application less access
unless an LSM specifically overrides.

Containers allow overriding / in chroot style.  This needs file- or
label-based protection no matter the security framework, so we don't have
the chroot problem of applications breaking out.

AppArmor's file access control features, combined with SELinux's, in a
cgroup would be good.

The same is required for device control.

There are reasons why I keep bringing containers up: they change the
model.  Yes, I know coming to a common agreement on these sections will
not be simple.  But at some point it has to be done.


Re: Defense in depth: LSM *modules*, not a static interface

2007-11-06 Thread Peter Dolding
On Nov 7, 2007 2:11 PM, Tetsuo Handa <[EMAIL PROTECTED]> wrote:
> Hello.
>
> Casey Schaufler wrote:
> > Fine grained capabilities are a bonus, and there are lots of
> > people who think that it would be really nifty if there were a
> > separate capability for each "if" in the kernel. I personally
> > don't see need for more than about 20. That is a matter of taste.
> > DG/UX ended up with 330 and I say that's too many.
>
> TOMOYO Linux has own (non-POSIX) capability that can support 65536 
> capabilities
> if there *were* a separate capability for each "if" in the kernel.
> http://svn.sourceforge.jp/cgi-bin/viewcvs.cgi/trunk/2.1.x/tomoyo-lsm/patches/tomoyo-capability.diff?root=tomoyo&view=markup
>
> The reason I don't use POSIX capability is that the maximum types are limited 
> to
> bitwidth of a variable (i.e. currently 32, or are we going to extend it to 
> 64).
> This leads to abuse of CAP_SYS_ADMIN capability.
> In other words, it makes fine-grained privilege division impossible.
>
> Since security_capable() cannot receive fine-grained values,
> TOMOYO can't do fine-grained privilege division.
>
I have seen the same problem, Tetsuo Handa.

Capabilities alone do not solve it.  Capabilities make up part of the
engine.

As you can see, currently it allows controls by block.  If something has
no network access at all, does it need filtering rules?  No, it does not.
The same goes for file access: some applications never need to read from
or write to filesystems, so why are they granted that?

These broad, area-covering controls can be provided to applications
without very much complexity.  Applications can use them internally to
harden their own security: make some sections of a program have read-only
file access, other sections read-write, other sections no file access at
all, and the same with network access.  This is a layer that is
overlooked and lacking power.
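
For contrast, here is a minimal sketch of what an application can already
do with the existing coarse-grained bits, using libcap (link with -lcap):
irreversibly dropping capabilities it will never need.  The finer
per-section file and network splits argued for above do not exist as
capabilities today; the capability names below are only illustrative.

#include <stdio.h>
#include <sys/capability.h>

/* Drop capabilities this part of the program will never need. */
static int drop_unneeded(void)
{
    cap_value_t drop[] = { CAP_NET_RAW, CAP_SYS_ADMIN };
    cap_t caps = cap_get_proc();

    if (!caps)
        return -1;
    /* Clear them from both the effective and permitted sets so the
     * calling thread can never get them back. */
    if (cap_set_flag(caps, CAP_EFFECTIVE, 2, drop, CAP_CLEAR) ||
        cap_set_flag(caps, CAP_PERMITTED, 2, drop, CAP_CLEAR) ||
        cap_set_proc(caps)) {
        cap_free(caps);
        return -1;
    }
    return cap_free(caps);
}

int main(void)
{
    if (drop_unneeded())
        perror("dropping capabilities");
    return 0;
}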

Capabilities do big blocks of security.  The bottom point of capabilities
should be a static application that loads into RAM and runs but cannot
report anything or allocate any memory, i.e. basically contained,
harmless and useless.

The LSM takes control of permission allocation, not enforcement, in my
model.  Enforcement is done by sections like capabilities, netlabels and
some filesystem part that is currently missing; other parts might be
missing too and really need to be bashed out.  The capabilities could
even tell an application whether those features are applied to it, so the
application can respond more correctly: "user cannot access directory
because it is blocked by LSM/application security settings", not just a
failed access.

Note that capabilities can provide a nice central point for a quick
overview of what an application can and cannot do.  If this application
has no network access and is locked that way, there is no need to process
netlabels.  The same goes for the filesystem.

330 is not too many if they exist for valid reasons; 20 appears to be too
few.  Most of the capabilities have been designed with the idea of
breaking up root powers.  That does not provide enough for applications'
own internal security.

It's like this: currently you have an under-1024 port access switch and a
raw network access switch.  There is no mirror switch for ports over
1024, so there is no way to turn all networking off for an application.
Likewise, an application allowed to use ports under 1024 may not have the
right to magically open up a back door on higher, user-like ports, yet
there is no switch for that either.

On the filesystem: read, write, execute and change-stat.  On memory:
allowed to allocate memory, allowed to memory-map.  Device access
limitation flags.  The list quickly gets to at least 10 more that are
needed.

Basically there are quite a few capabilities still missing that are
needed for applications' own security.  No permissions issued through
capabilities should equal an application that is a paperweight.  There
are also missing engine parts; netlabels is only one part.

Basically, capability flags are the hub, with sections like netlabels and
other security processing engines forking off it.  Sections like
netlabels only need settings if the capabilities allow anything in the
first place.  This allows special engines per section, while not having
to allocate the memory when you don't need it.

Peter Dolding


Re: Defense in depth: LSM *modules*, not a static interface

2007-11-06 Thread Peter Dolding
Let's work through, on paper, what Crispin Cowan said makes a good
stacker: AppArmor and modules like it becoming purely restrictive.  This
will explain where stacking ends up dead meat.

Most people don't notice that the default system, POSIX capabilities, is
already there.  So just by changing AppArmor you have now double-layered
the code, and made it slower.

Stacking risks creating a longer and longer path to permit or refuse a
request, since modules cannot take advantage of system-provided parts.
Without dealing with that, stacking is a speed problem as well as a
security one (see the sketch below).
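
A simplified, self-contained illustration of that longer path (not actual
kernel code): every stacked module contributes a node, and every
permission check has to walk the whole chain before it can allow
anything.

#include <stdio.h>
#include <string.h>

/* One node per stacked module; 0 = allow, negative = deny. */
struct hook {
    int (*file_open)(const char *path);
    struct hook *next;
};

static int module_a(const char *path)   /* denies anything under /etc */
{
    return strncmp(path, "/etc/", 5) == 0 ? -13 : 0;
}

static int module_b(const char *path)   /* allows everything */
{
    (void)path;
    return 0;
}

static struct hook node_b = { module_b, NULL };
static struct hook node_a = { module_a, &node_b };
static struct hook *hook_chain = &node_a;

static int security_file_open(const char *path)
{
    /* The first denial wins, but an allow is only reached after every
     * stacked module in the chain has been asked. */
    for (struct hook *h = hook_chain; h; h = h->next) {
        int rc = h->file_open(path);
        if (rc)
            return rc;
    }
    return 0;
}

int main(void)
{
    printf("/etc/shadow -> %d\n", security_file_open("/etc/shadow"));
    printf("/tmp/file   -> %d\n", security_file_open("/tmp/file"));
    return 0;
}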

I know it's a strange question, but would there be an advantage in adding
a set of restrictive options to the POSIX.1e capabilities section?  Could
this reduce the overall amount of code making up LSMs?  It might cure the
depth problem of stacking most of the time: less travelling through the
LSMs, fewer problems with speed, and fewer hooks needed by LSMs into the
kernel.

I agree with Crispin Cowan that at the moment some modules are not
stackable, but there is a large price to pay for making them stackable
too.  For modules that are both permissive and restrictive I have never
come up with a solution that is good in Crispin Cowan's eyes.  That is
the reason I was splitting the security into zones: it was the only way I
could come up with to make sure they could not fight, giving each its own
limited domain of control with limited permissions on offer.

Depth of stacking is important from the admin's point of view, something
coders can simply forget.  Let's say I have an application not working
due to security on a Linux system I have never used before; yes, this
happens when you replace admins.  Stacking also risks giving me more
configs to look through to find the one that is blocking.

Yes, I agree with layering security.  But there comes a point where
complexity removes the gain of layering.

"AppArmor profile denies all network traffic to a specific
application"  Ok why should AppArmor be required to do this.  Would it
not be better as as part of Capabilities that is always there and is
application controllable.  It would be a security advantage if data
processing threads that don't do network access inside a application
don't have it.  Basically this feature could be done in mirror.  Allow
Network access Capabilities flag.  Not set application cannot access
network at all.  All LSM's would be able to use that to cut of network
access to applications.  As a standard feature of kernel if a new
network stack or some other alteration is done LSM hooks would not
need altering.  Lot of LSM hooks would disappear.  Need for LSM to
monitor and run different code to kernel in a lot of places would also
disappear.

Expand capabilities to the point that applications cannot do anything
without permissions.  Both models are doable: restrictive can be done in
a permissive model effectively if the starting point of the permissive
model is that you cannot do anything without permissions being granted.
The big difference is that the permissive model is the kernel default.
Some LSMs are designed in conflict with the main model of the OS, and you
really only want one model from a speed point of view.

This is the main problem with LSMs: most are forcing themselves against
the grain of the OS's design.  Needing lots of hooks in different places
should have been a big hint.  You end up with two models where you should
have only one; a permissive, controlled scheme can do MAC just as
effectively as hooking everywhere in the kernel.  Done correctly it is a
lot safer, since applications would be able to drop their permissions as
needed.  An LSM rootkit gets massive amounts of power at the moment, not
by breaking anything, but because everything is piped in its direction.
What is the point of layers when there is a layer that can do whatever it
sees fit?

Basically, look at the main kernel design and work with it, don't fight
it.  This will reduce the code everyone needs.

The only effective way I can come up with of layering without having one
layer grant something another has taken away or changed is to double up
the POSIX capability and other security-applying systems: one copy
containing the final output and one listing what is still allowed to be
changed.  This is going to have a speed cost; thank god only the final
output needs to enter kernel processing.  Once something is removed from
being settable, the next LSM along cannot change it and should get an
error.  This still requires building a completely common core security
engine, so that an LSM does not implement a feature outside it and we
don't have grant-and-forbid fights any more.

The only difference from my container model is depth of travel; this one
will be slower than the container model I was putting up before.

Note that an engine to do security can be basically security-module
neutral.  At least we don't need to debate whether the engine should be
permissive or restrictive: the engine already exists, so permissive it
is.  Fighti

Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-11-03 Thread Peter Dolding
On 11/1/07, David Newall <[EMAIL PROTECTED]> wrote:
> Jan Engelhardt wrote:
> > On Nov 1 2007 12:51, Peter Dolding wrote:
> >
> >> This is above me doing code.   No matter how many fixes I do to the
> >> core that will not fix dysfunction in the LSM section.  Strict
> >> policies on fixing the main security model will be required.
> >>
> >
> > If there is no one wanting to fix the existing code, then the
> > perceived problem is not a problem.
>
> What an absurd claim.
>
I agree.  If they can provide a reason, a correct reason why it is not
being fixed, then the perceived problem does not exist.  Until then it is
the common human flaw of tunnel vision: people normally don't look at the
big picture.

A common fake reason is that Linus does not approve.  The history of
patches completely disagrees with that: the parts Linus has blocked have
been out of alignment with the built-in kernel security model, yet other
parts that were in alignment with the model have got in during the same
time.  A perfect example of dysfunction making up a lie because things
will not go into the kernel exactly how they want.

LSM is nothing more than a testing and model zone, a place where
important features should not stay.  It was put there because model
designs could not get along.  Did that mean LSM was where the features
were intended to stay?  No; the goal should simply be to get as many good
features into the mainline as possible while staying aligned with the
main kernel's model.  I don't know where the wrong idea that the mainline
did not have a security model came from either.  Something does not have
to be an LSM to be a security model.

Xen, KVM and lguest are not a suitable workaround to the problem.  It is
more a case of LSM developers trying to say it is not needed so they
don't have to work with each other.  I am not always using x86 machines,
so at times none of those solutions fit.

Containers in the Linux kernel get to be processor neutral in features,
so it will not matter what the processor chip is: they will work.  So the
correct solution to running many LSMs somehow has to be done with
containers.

Note that calling me a know-it-all is not an answer either.  Either they
can put forward a good explanation for their failing, or they need their
asses kicked.  Heck, if I am wrong I need the ass-kicking, and I am
perfectly prepared to accept it.  The problem is I am not a person who
accepts invalid answers, which is what they have been giving me so far.

My main background is system administration, not coding; please note that
system administrators are the final clients.  If you want someone with a
system administration background to take up the leadership of LSM and
bash it into a system-administrator-friendly shape, I am more than
prepared to do so.  I can bet that a system administrator in charge,
looking at it from a flexibility and security point of view, is going to
put noses really badly out of joint.  The flexibility bit is currently
missing.  It is not always possible to reboot a server just because the
security framework is not up to the job or a client wants you to use a
tighter model.

So yes, people trying to lie to me is something I have very little
tolerance for.  Paperwork like a PhD doesn't scare me off.  I have had to
repair networks destroyed by people with PhDs and masters in computer
programming because they ran a BIOS-destroying virus from an outside
source.  So let's just say my trust has to be earned, and using incorrect
facts does not earn trust from me.

There is a bigger one than just containers: it's called Linux on the
desktop.  Somehow security models will have to tolerate being controlled
from a central server, preferably one model so any number of Linux
distros can be used in a network, just like different versions of Windows
can now.  So somehow we have to get to one master model, even if the
other models are just feature tweaks on it.  Application-controlled
security allows PAM and LDAP into play.

SELinux jammed in does not really suit what is needed.  The world of
Linux is changing; the LSMs need to get their butts into gear and catch
up.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-31 Thread Peter Dolding
On 11/1/07, Casey Schaufler <[EMAIL PROTECTED]> wrote:
>
> --- Peter Dolding <[EMAIL PROTECTED]> wrote:
>
>
> > Improvements to the single security framework are getting over looked.
>
> Please post proposed patches.
>
> >   I would have personally though selinux would have done Posix file
> > capabilities as a general service to all.
>
> Posix capabilities predate SELinux. SELinux is not interested in
> Posix capabilities.
>
> > But no IBM had to do it.
>
> Err, no. It was done by Andrew Morgan back in the dark ages.
> Why on earth do you think IBM did it?


POSIX file capabilities are the option to replace the SUID bit with
something safer, handing out segments of root power instead of the
complete box and dice like SUID, and potentially differing on a
user-by-user basis.
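
As a concrete sketch of the mechanism (the binary path and the choice of
cap_net_bind_service are illustrative, not from this mail): instead of
chmod u+s, a single capability is attached to the executable with libcap,
which is what the setcap tool does.

#include <stdio.h>
#include <sys/capability.h>

int main(void)
{
    /* Equivalent to: setcap cap_net_bind_service=ep /usr/local/bin/webserver
     * Needs CAP_SETFCAP and a filesystem that supports xattrs.
     * Build with -lcap. */
    const char *binary = "/usr/local/bin/webserver";
    cap_t caps = cap_from_text("cap_net_bind_service=ep");

    if (!caps) {
        perror("cap_from_text");
        return 1;
    }
    if (cap_set_file(binary, caps) < 0) {
        perror("cap_set_file");
        cap_free(caps);
        return 1;
    }
    cap_free(caps);
    return 0;
}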

POSIX capabilities are what POSIX file capabilities are based on.  Yes, I
know the words appear close; the word "file" is important.  Please read
the website: http://www.ibm.com/developerworks/linux/library/l-posixcap.html

IBM coders worked on it and got it into the mainline really recently, to
provide at least some way to avoid the faults of SUID; of course it could
still be made better.  I would have thought that, being an important
problem, the other LSM guys would have done it first.  So the door to
adding new features to the kernel is open past any question.  Of course
the features have to be for everyone's good.

Andrew Morgan's POSIX capabilities are something far older; they have
been there for ages, pre-SELinux, and the correct fix to SUID for
everyone has always been there by extending Andrew Morgan's work.  So I
will ask again: why did IBM have to do POSIX file capabilities instead of
SELinux?  SELinux has had 7+ years to do it.

Thank you for proving my point beyond question, Casey Schaufler.  You
don't have a single clue about the alterations happening to the main
security model, so there is every chance you will overlap with it.

Please get your tech right.  How many other holes are sitting open
because you patch them at the LSM level and don't look down into the
default security system to see whether it should be fixed there?

>
> OK, you have all the answers. Show us some code or STFU.

That is no explanation for why the default security framework is being
neglected.  I don't have all the answers.  It does not take a person that
high up to see that LSM is a screwup, leading to people being out of
touch with the main security model and to its neglect.  It should not
require outside parties to fix things in the main security model; the
only way that can be happening is if LSM is dysfunctional.  A fault of 7+
years minimum is not what you can call someone fixing a new fault.  Now,
how are we going to fix the mess of LSMs to work correctly for the good
of Linux?

One way is to appoint one hard-minded maintainer, which is above my
rights to do.

This is above me doing code.  No matter how many fixes I do to the core,
that will not fix the dysfunction in the LSM section.  Strict policies on
fixing the main security model will be required.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-31 Thread Peter Dolding
The clear and important thing is that there is already a single security
framework.

The single security framework is the security that exists when no LSM is
loaded.  It turns out that the more I look, the more of my model already
exists; it is just not being used effectively.  There is a capabilities
framework that at worst needs expanding to handle more detailed controls,
like "thread X can only read files x, y and z and write to one file".

Improvements to the single security framework are getting overlooked.  I
would personally have thought SELinux would have done POSIX file
capabilities as a general service to all, but no, IBM had to do it.  This
shows a problem: critical upgrades to the single security framework are
not being made by LSM producers.  That means one thing: LSM producers are
putting their framework before the good of everyone.  This just cannot
keep going; it is a path to more and more forks and more confusion.

Is it Linus getting in the way?  I think not, because upgrades are making
it in, slowly, but they are making it.  Is it LSM makers trying to alter
the single security framework towards their own model?  I am almost
perfectly sure that is the main problem.  The next problem is LSM-vs-LSM
one-upmanship, i.e. "mine is better than yours" or "mine can do all yours
can, with a 12-times-more-complex config file", without taking a complete
look to see whether both sides have advantages.

Part of the problem is that if you upgrade the single security framework
enough, SELinux, AppArmor and most of the LSMs will become sideshows of
very little importance to security, more a backup to the main security.
Focus may move back to the old Unix locations: PAM for creating users and
assigning rights, and application-controlled security.

The key things when putting features into the existing single security
framework are flexibility and application control.  Application-
controlled security can always beat SELinux, AppArmor and every other LSM
I have ever seen.  The most advanced design of security just happens to
be the one you cannot remove; it also happens to be the most neglected.
The weaknesses that exist in the single security framework are lack of
advancement and repair.

What I class as features are fixes to small parts, e.g. "SUID too
powerful, fine-grained controls required", or disk access control methods
for filesystems without using the permission system (AppArmor and related
path-based back-end engines).  The latter also partly allows applications
to protect their own internal users from each other without needing to
create system-wide users; over time, internal application users should be
as well protected from each other as system-wide users.  This enhancement
goes far past the common-day scope of AppArmor.  That is the advantage of
taking it out of LSM, or at least looking at doing so: you may see where
it can be made a thousand times more useful as a feature than as an LSM,
and provide far more system protection, even in ways an LSM never could.
Yes, an altered AppArmor could be really sellable as a core feature.

There are a lot of parts in LSMs that can be broken down into
single-feature enhancements.  The major difference is how these features
are controlled: applications must be able to directly lower access on a
thread-by-thread basis, never raise it.  These features are also always
available for all users on the system to control, even if they cannot
exercise them due to lack of rights.

Explain to me how it is not bitrot to leave the key security framework
without features and then divide those features up between different
incompatible parts.  That is the basic definition of bitrot, because you
are making a bigger and bigger mess.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-31 Thread Peter Dolding
On 10/31/07, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> Peter Dolding wrote:
> > Lets end the bitrot.  Start having bits go into the main OS security
> > features where they should be.
> >
> Linus categorically rejected this idea, several times, very clearly.
>
> He did so because the security community cannot agree on a
> one-true-standard for what that OS security feature set should be. From
> looking at this thread and many others, he is correct; there is no
> consensus on what the feature set should be.
>
> So you can wish for the "main OS security features" all you want, but it
> is not going to happen without a miraculous degree of consensus abruptly
> arising.
>
Not really.  Every time Linus has rejected something, it has been when
people tried to thump complete models like SELinux into the kernel as one
huge mother of a block, without grounds.

MultiAdmin is different from most other models.  Please look at POSIX
file capabilities: the grounds for and the security advantages of that
work were proven while not fighting with other LSMs' means of protection.
There are grounds for having many admins in Linux and for tracking admin
alterations; it falls into the category of both a feature request and a
security feature.  Of course not all of MultiAdmin's features will be
suitable as main OS security features, and yes, MultiAdmin will have to
be happy to break its code up to get its key feature to as many users as
possible.  This differs from the all-powerful UID 0, which I guess
everyone here can agree is not exactly good.  Not being able to trace who
made alterations as UID 0 is not good either; where do the other
frameworks deal with that?

The path into the kernel is still open.  Complete models, of course, are
never going to make it; 100 percent consensus is not always required.
The same reasoning applies as for POSIX file capabilities providing a
segmented SUID feature.  Applying the same capabilities on a user-by-user
basis has equal advantages over simply having UID 0 or not UID 0.

The next important question: why not look for segments on which to put
forward consensus?  This is something that is clearly not being looked
for.

100 percent consensus has never been true for every feature in Linux.

>On the contrary, security, done well, is a tight fitting suit. It must
>be tight, or it allows too much slack and attackers can exploit that.

I love that quote.  There is a difference between tight-fitting and
covering everything needed; a tight-fitting suit without pockets is going
to be a pain.

Main OS security features can always be made tighter by the LSM, since
they are overridable.

This can solve the stacking problem to a point; of course it is not a
perfect solution.

Chain-passing through LSMs is not a solution and never will be.
Applications on systems may require many different security models to
protect them.

Needing hooks everywhere, with unlimited control provided at a single
interface, does not look like a tight security model to me.  It makes LSM
look like the ideal rootkit location.

Bundling LSM hooks into security interfaces segments and reduces the
threat, since each interface has rules and limitations.

Of course my ideas have not been fully and correctly documented; I am not
foolish, my skills are not perfect.  The reason behind my ideas is to get
past the limitations of LSM.

The differences between LSMs get smaller the closer you get to the LSM
interface.

Label-based vs path-based is the biggest divide, and including the
modules' config systems makes merging hard.  The catch is that
label-based and path-based both have their places, i.e. filesystem
limitations (path based) and speed (label based), so I have no issue with
both sitting side by side in the kernel.  I really have to ask why
SELinux does not support path-based rules for the odd filesystems that
don't support labels, and the reverse for AppArmor.  Is it that the
developers have been building empires and do not see the need for the
other's features, so failing completely to build the most powerful
security framework?

Yes, LSM is only a testing ground, and a home for features that not
everyone wants, i.e. not everyone wants the SELinux or AppArmor models.
For things like POSIX file capabilities it is just a testing ground for
features before they move into the kernel full time.

LSM has two uses, not one.

"One true security model": I am not talking about that with MultiAdmin.
Its main goal is a security feature; it really does not make up a
complete model in its own right, just different admins with different
capabilities.  Now, the final form of MultiAdmin, who knows?  If we had
file access controls with the same level of control as POSIX file
capabilities, there is a chance MultiAdmin's core features could be done
through PAM.  Lack of core features is forcing things into the LSM level
that may not need to be there.  Having users whose permissions are more
limited with respect to the filesystem would be useful.  There are small
fragments of LSMs that have uses outside the LSM framework, which is also
what you are failing to offer.

Th

Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-30 Thread Peter Dolding
On 10/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> On Wed, 31 Oct 2007, Peter Dolding wrote:
>
> > MultiAdmin loaded before Selinux breaks Selinux since Multi Admin rules are
> > applied over using Selinux rules.  This is just the way it is stacking LSM's
> > is Just not healthy you always risk on LSM breaking another.  Part of the
> > reason why I have suggested a complete redesign of LSM.  To get away from
> > this problem of stacking.
>
> since the method of stacking hasn't been determined yet, you can't say
> this.

I can, because that is the current-day problem.  With many LSMs loaded
they stack as a complete mess, with problems: they fight with each other.
Lack of a definition for stacking equals big problems, and the fact that
you have not created a standard for stacking does not stop the problem
from existing.  Nice lack of planning when LSM started, or maybe it is
intentional.  When you need stacking, isn't it about time you start
moving things into the OS?

There is also a way around the problem without allowing LSMs to stack.
The big advantage is backwards compatibility: because you are not playing
with the LSM standard to do it, no LSM module should need large
alterations, at worst mirror extensions to handle the new OS feature.
POSIX file capabilities show the solution: first done as an LSM, which
risked conflict, then moved in as an operating system extension,
bypassing the conflict.  Fragments of LSMs should move exactly that way
if they expect to be overlapped by other models.

A lot of stacking problems can be avoided if segments are complete
standard extensions.
>
> it would be possible for MultiAdmin to grant additional access, that
> SELinux then denies for it's own reasons.
>
> if the SELinux policy is written so that it ignores capabilities, and
> instead just looks at uid0 then that policy is broken in a stacked
> environment, but it's the polciy that's broken, not the stacking.

That is not how the current day always works.  MultiAdmin grants, and
that can be the end of the chain: SELinux does not get asked whether it
refuses it or not, so no matter what was set in the SELinux policy, it
may never get used.  Adding more layers is also bad for performance;
walking through modules for rights is a really slow process.  Done as a
POSIX feature extension instead, SELinux or another LSM stays at the top
of the allocation: no flaw, no bypass.

> yes, there will be interactions that don't make sense, but just becouse
> something can be used wrong doesn't mean that there aren't other cases
> where it can be used properly.
>
We are talking security here: if it is not order-safe, it is not good.
MultiAdmin done as a POSIX feature extension is order-safe; MultiAdmin
done as an LSM is not.

System admins are human too.  Getting orders backwards does happen, so it
should be avoided where possible.

This completely avoids the need for adding another layer of stacking,
since it is built inside the current-day framework.  Does doing this risk
the end of LSMs as we know them?  Yes it does, because LSM is not being
used as intended.  LSM is just an add-on to the standard OS security:
either a testing ground for new features to secure the OS, which get
built into the OS in time, or a location for security modules.

Some things should just be done in the standard OS security, with nothing
to do with LSM.

It is a little hard for some, I guess, to hear that LSMs are not
all-important and not all security features should be done in them; some
should be done in the main OS security features.

The biggest current-day problem with LSM is that people have forgotten it
is only a testing ground, or a zone for features that people will only
want some of the time.

MultiAdmin is a feature that can enhance the means to audit the OS, i.e.
who did what, and enhance security handouts, and it can be really handy
with almost any LSM on the system.  The description of what it is sounds
very much like every other standard feature.

Let's end the bitrot.  Start having bits go into the main OS security
features, where they should be.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-30 Thread Peter Dolding

Jan Engelhardt wrote:

> I disagree.
>
> Traditionally, Linux has given a process all capabilities when the
> UID changed to 0 (either by setuid(2) or executing a SUID binary).
> This has been relieved over the years, and right now with LSMs in the
> field, it is possible to 'deactivate' this special case for UID 0.
>
> SELinux does not have this special case for UID 0. Neither does it
> seem to use capabilities (quick grep through the source). So
> basically, all users are the same, and no one has capabilities by
> default. Does SELinux thus break with other LSMs?
>
> Now assume a SELinux system where all users have all capabilities
> (and the policy that is in place restricts the use of these
> capabilities then) -- should not be that unlikely. Does that break
> with other LSMs?

MultiAdmin loaded before SELinux breaks SELinux, since MultiAdmin rules
are applied instead of the SELinux rules.  This is just the way it is:
stacking LSMs is just not healthy, you always risk one LSM breaking
another.  That is part of the reason why I have suggested a complete
redesign of LSM, to get away from this stacking problem.


I see MultiAdmin purely in the class of POSIX file capabilities (a
fine-grained replacement for SUID).  This is a standard-feature fix, not
part of LSM.  Note that it cannot replace all SUID bits, because some
application internals need to be changed to support POSIX file
capabilities, in particular not checking whether they are running as
UID 0.  Traditional UID 0 is already optional for applications without
LSMs.


POSIX file capabilities apply only to applications; MultiAdmin would be
the user-side mirror of POSIX file capabilities.


A MultiAdmin patch on the user side may allow more SUID bits to be killed
off from the start, increasing overall system security.


Of course MultiAdmin might end up as two halves: one a standard feature
that hands out capabilities to users, which LSMs can overrule, and one a
user-by-user directory access control LSM, which is less likely to cause
problems.


I really don't see the need for an LSM stacking order.  Some features
just should not be LSMs, in my eyes; MultiAdmin is one of them.


The traditional way has already been expanded for applications without
LSMs.  So my call still stands at the "oh heck, headache" rating, because
it is in the wrong place, particularly when you think people will want to
use it stacked with other LSMs.  Stacking should be avoided where
possible.  This means at least some of MultiAdmin's features just have to
be done in the core kernel, as a normal kernel module, to avoid stacking
on and breaking the LSM.


Note that POSIX file capabilities were developed as an LSM module at
first too; the point came where that was going to cause more trouble,
with other LSMs granting things in conflict.  MultiAdmin and POSIX file
capabilities have a lot in common: both were developed in the wrong
place, both needed to be elsewhere, and even their function is similar,
breaking down root powers and handing them out more effectively.  So in
my eyes it is a pure POSIX extension, not an LSM.


Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-29 Thread Peter Dolding
On 10/30/07, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> Ah! So the proposal really is to have an LSM maintainer for each
> "family" of models, acting as a resource and arbiter for modules in a class.

I see it a little bit differently: one LSM maintainer for the whole lot
of modules, who kicks the asses of those who are not prepared to share.
>
> I like that idea, and have no objection to it. However, it does have
> resource problems, in that the pool of LSM maintainers is not that
> large. There is also the likely objection that this degree of scale is
> not needed until at least there are multiple families of models in the
> upstream kernel, and possibly until there are multiple instances of a
> single family in the upstream kernel.
>
> It also begs the question of what constitutes a family.
>
> * AppArmor, SELinux, and TOMOYO are all ambient capability systems
>   o AppArmor and TOMOYO are pathname based
>   o SELinux is label based

Here, as always, all three should see where they can share code and get
the best performance.  This might mean AppArmor and TOMOYO using SELinux
labels because that causes less overhead, or SELinux providing an
optional path-based mode using the other engine.  Both are providing the
same feature in different ways.  The question does have to be asked: is
there bench-testable justification for needing two filesystem filters?

> * SELinux and SMACK are label-based
>   o I don't know if SMACK is an ambient capability system

Both of these are sharing back and forth between each other, so they are
being nice with each other.  The overall LSM maintainer only really
needs, at worst, to say which sections are not merged/shared and to
document why, with benchmarks, if they are not going to be, i.e. a tested
reason.

> * Rob Meijer implicitly advocated for an object capability LSM
>   o would it be pathname or label based? You could do either or
> both ...
Both is a valid answer.  Sections done path-based should be shared with
the other path-based code, and label-based with the other label-based
code.

> * The LSPP work from RH, Tresys, and TCS is MLS based
>   o this is a subset of both label-based and ambient capability
> based
OK, section by section, where would it be best for that code base to
share?

> * I have no clue what family to put MultiADM or Dazuko into
MultiADMIN falls under "oh my god, headache".  It is more a POSIX
standard feature altered, i.e. one root user turned into many.  As an LSM
this really risks breaking the other models; it is more of an all-in or
all-out thing.  Really it needs to be lowered out of LSM into a standard
optional Linux feature, so it cannot breach the security of other LSM
modules.  LSM modules will also need to be made able to recognise
MultiADMIN root users.  This is part of what I was talking about: some
parts need to be lower-down modules, not at full-blown LSM level.  This
is the rare one where the complete model needs to be moved down.  There
are bits in almost all LSMs that need to be looked at for being made
full-time features of Linux, like quotas and POSIX file capabilities.

Dazuko is the rare user-mode-controlled interface.  Still, the same rule
applies: share code where possible.  Anti-virus integration and other
protection systems are commonly overlooked by LSMs.  The same question
applies here: should this be an LSM, or an optional kernel feature
independent of LSM that an LSM can block from happening?

> * Getting very formal, I could imagine a Clarke-Wilson module
> * Getting very informal, I could imagine a module that is a
>   collection of cute intrusion prevention hacks, such as the Open
>   wall Linux symlink and hardlink restrictions, and my own RaceGuard
>   work
>   o Oh wait, I published
> <http://citeseer.ist.psu.edu/cowan01raceguard.html>
> RaceGuard. Does that make it formal? :-)
>
You will hate me, but I don't call that formal enough.  It lacks the
critical bit: documentation written in terms any system admin can
understand, so they know what they are being given.

Next question: should RaceGuard be an LSM at all, or should it be a
standard feature that an LSM can overrule?

A lot of things are being pushed as LSMs when they should be pushed as
expanded default features outside LSM.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-29 Thread Peter Dolding
On 10/29/07, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> I *really* dislike this idea. It seems to set up the situation that the
> only acceptable modules are those that follow some "formal" model. Problems:
>
> * What qualifies as a formal model? This becomes an arbitrary litmus
>   test, depending on whether the model was originally published in a
>   sufficiently snooty forum.

Agreed, slowing development and experimentation with an idea is BAD.  I would
rather say the function and reach of the model, and how it is meant to
operate, must be documented clearly, so the next coder along is not taking
wild guesses.  It must be so clearly documented that almost any system
admin could understand how much or how little protection it is providing.

> * What if someone invents a new model that has not been "formalized"
>   yet? Should Linux be forced to wait until the idea has been
>   through the academic mill before we allow someone to try
>   implementing a module for the idea?

Under my approach, experimental modules stay out of tree until they are
documented.
> * The proposal only allows a single implementation of each formal
>   model. In theory, theory is just like practice, but in practice it
>   is not. SMACK and SELinux follow substantially similar formal
>   models (not exactly the same) so should we exclude one and keep
>   the other? No, of course not, because in practice they are very
>   different.

Drop a stick on both of them.   Since they both operate in substantially
similar ways, they should be directly prepared to share code with each
other.   If one is not prepared to work with the other openly and
fairly, out of the tree it goes.

It could quite simply end up that the only difference between SMACK
and SELinux is the way they read configuration files.   Long term, the
most user-friendly and most security-solid modules win.  So long term one
wins; short term, any number of models.  We of course wait for that to
happen.  Note it could be an agreement between coders: since they were
forced to work as one, if there is merit it will come out.  There is no
room for empire builders.   We need security builders.  Part of that
is getting your code examined by the largest number of people possible.

Same with AppArmor vs SELinux: both can provide file access filtering,
so why two sets of code to do it?

LSM needs a really strong maintainer prepared to box a few ears.
Otherwise all we will end up with is unusable modules: SELinux a pain in
the butt to configure, so not really usable; maybe flaws in SMACK, so you
are forced to use SELinux; AppArmor too weak.

Basically the current LSM model is a stack of trash.  LSMs are fast
turning into the x86 vs x86-64 bitrot all over again, except this time 100
times worse, and it is basically bitrot caused by not sharing.
Stackable modules, maybe.  More important is shared source code and
common standards between LSMs, used wherever possible.  That should also
make it simpler for new models to be added and experimented with, since a
new model may not need to redo all its hooking all over again.  Reduced
security risk, since more tested code is used in new models.

Next, and more important: extend security down into applications, if
applications need it.   File access filtering would be a great feature
at the thread level.

We have enough LSMs to be hard about this.  It is a privilege to be in the
main kernel tree, not a right.  Part of having a module in the Linux kernel
tree is a promise to do what is right for the security of everyone using the
kernel, even if it means the end of your LSM's existence.  Part of
doing what is right is sharing code.   All LSM developers should be
looking at their code and asking: should this be like POSIX file
capabilities and available to everyone, or should this be an LSM-only
feature because it is useless to everyone else?  From what I am seeing,
LSM maintainers don't seem to think it is part of the requirement to help
other LSMs be better, even if theirs ends up losing.  Only if you are
building an empire does it matter who wins or loses in the long term of
LSMs.   The only thing that truly matters is that we get the best
long-term result.


Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-28 Thread Peter Dolding
On 10/29/07, Crispin Cowan <[EMAIL PROTECTED]> wrote:
> To reject an LSM for providing "bad" security, IMHO you should have to
> show how it is possible to subvert the self-stated goals of that LSM.
> Complaints that the LSM fails to meet some goal outside of its stated
> purpose is irrelevant. Conjecture that it probably can be violated
> because of $contrivance is just so much FUD.

LSM is providing bad security on two fronts.

Number 1: LSMs are duplicating effort to create features.  AppArmor and
SELinux both provide file access filtering for applications, yet
they double up the code to do it, so they cannot be used with each
other without doubling up hooks.  Then other LSMs are creating their
own sections of code to do the same thing.   Simple rule: more code,
more risk of bugs, since it will be less audited.  Duplicating defence
features is really bad for security.  Somehow code sharing has to be
built into LSM construction.

Number 2, the critical bit: LSMs stop at the edges of applications.   Without
overlapping my hooks with existing LSMs I cannot create
application-level protections, and overlapping hooks will cause speed
loss.

An LSM is set up to protect the application.  But inside the application
there will be sections that need the access rights and others that
don't.  Right now, an exploit in any section of an application under an LSM
gets the LSM-assigned rights.   At the application level this could be done
a few ways: an ELF extension created by the compiler, and API calls
lowering the access of the current thread or of a thread about to start.
If you exploit a section of code without access to disk, network and so
on, and without the right to call any function that does, what have you
exploited?   Minor data damage, compared to everything the application has
access to being stolen, which is the case with LSM at the moment.
Basically LSMs prevent taking control of the complete system but don't
help stop people's private data from being stolen.  Both are bad things to
happen to a person.
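
As a hedged illustration of that idea, and nothing more (none of these
names are a real kernel or libc interface; the enforcement here is a
userspace wrapper standing in for what the kernel would do), a one-way
per-thread rights drop could look like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define RIGHT_FILE_READ  0x1
#define RIGHT_FILE_WRITE 0x2
#define RIGHT_NETWORK    0x4

/* Per-thread rights mask; starts with everything allowed. */
static __thread unsigned int thread_rights =
	RIGHT_FILE_READ | RIGHT_FILE_WRITE | RIGHT_NETWORK;

/* One-way drop: a thread can only lose rights, never regain them. */
static void drop_thread_rights(unsigned int keep)
{
	thread_rights &= keep;
}

/* Wrapper that refuses to open files once the needed right is gone. */
static int guarded_open(const char *path, int flags)
{
	unsigned int needed = (flags & O_ACCMODE) == O_RDONLY
			      ? RIGHT_FILE_READ
			      : RIGHT_FILE_READ | RIGHT_FILE_WRITE;

	if ((thread_rights & needed) != needed)
		return -1;   /* denied by the thread's own earlier drop */
	return open(path, flags);
}

int main(void)
{
	/* About to handle untrusted input: give this thread nothing. */
	drop_thread_rights(0);

	if (guarded_open("/etc/passwd", O_RDONLY) < 0)
		printf("open denied: this thread dropped its file rights\n");
	return 0;
}

With real kernel enforcement the check in guarded_open() would sit behind
the existing file hooks rather than in library code, so a compromised
thread could not simply bypass the wrapper.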

LSM design is there to help security development, not to get in its way or
to cause bitrot.  Currently the LSM design is causing major-risk bitrot in
duplicated code.

Reading and processing configuration files should be independent of
the protection methods, and hopefully designed so it can run in user
mode to test whether a new profile for an application is safe to add
before adding it to the OS.   Typo prevention on both sides.  The current
method of just sticking everything into one huge blob is preventing
code sharing and risking more security holes.
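
As a sketch of what "test the profile in user mode first" could mean,
assuming a made-up one-rule-per-line profile format ("allow <path> <r|rw>")
invented purely for this example, a standalone checker could reject a
broken profile before anything touches the kernel:

#include <stdio.h>
#include <string.h>

/* Accept "allow <absolute path> <r|rw>"; reject anything else. */
static int validate_line(const char *line, int lineno)
{
	char verb[16], path[256], mode[8];

	if (line[0] == '\n' || line[0] == '#')
		return 0;                       /* blank line or comment */
	if (sscanf(line, "%15s %255s %7s", verb, path, mode) != 3 ||
	    strcmp(verb, "allow") != 0 || path[0] != '/' ||
	    (strcmp(mode, "r") != 0 && strcmp(mode, "rw") != 0)) {
		fprintf(stderr, "line %d: rejected: %s", lineno, line);
		return -1;
	}
	return 0;
}

int main(int argc, char **argv)
{
	char line[512];
	int lineno = 0, bad = 0;
	FILE *f = argc > 1 ? fopen(argv[1], "r") : NULL;

	if (!f) {
		fprintf(stderr, "usage: profile-check <profile>\n");
		return 2;
	}
	while (fgets(line, sizeof(line), f)) {
		lineno++;
		if (validate_line(line, lineno) < 0)
			bad = 1;
	}
	fclose(f);
	return bad;                             /* non-zero: do not load */
}

The enforcement side then never needs to contain a parser at all; it only
ever sees entries that already passed this kind of check.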

The current LSM design is bad on so many levels.  I am surprised that
it takes a non-PhD system admin to see it.  Somehow I think it is an
empire thing.   If everything were just simple building blocks, a person
could write a new LSM in hours with pretty good security, compared to
today's long, trying effort.  The leads of AppArmor, SELinux and so on are
not prepared to give up their little empires for the greater good.

I personally hate stacking as an idea.  I prefer two layers only:
config reading and enforcement.   Of course that does not stop
applications being assigned to different config-reading systems.
The depth of two layers should stay fixed even if you have many different
models in use.

All LSMs seem to want to force system admins to pick their LSM over
another, instead of being able to pick the LSM for the task at hand.   The
same goes for "poor security is better than no security": it is true, and
it is nothing strange to find SELinux-based systems with their security
disabled because the admin cannot configure it.   But the reverse is
also true: skilled admins stuck with a system like AppArmor cannot harden
the system as far as they could with SELinux.  Both ways it is causing
security holes; poor security when you could have good security is bad
too.  Part of the problem, I can only guess, is that LSM maintainers are
not at the front lines, because they don't seem to know what is really
needed.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-25 Thread Peter Dolding
OK, let's get to a good point.

Let's define a key bit: what is a good software security lock?

My definition is that it is available to be used everywhere it is needed,
whenever it is needed, without flaw.

This is where most LSMs fall in a heap.  Because you have to have the
LSM loaded to have its security features, and it cannot always be mixed
with other LSMs, it fails the "whenever it is needed" test.

On top of this, most LSM features don't provide any form of direct
control for non-admin users or applications to lower their own access
rights, so it also fails the "everywhere it is needed" test.

So the LSM design itself is flawed in my eyes.   These flaws make
it hard for LSMs to share technical advances with each other.   LSMs are
very much like putting a lock through the front wheel of a bike: the
thief removes the front wheel and walks off with the rest of the
bike.  The critical data is in the user accounts.

The big question with most LSMs: how do they handle security inside an
application on a thread-by-thread basis?  They don't, the reason being
that it gets too complex without knowing the internals of the application.

We are talking security here, and the design of LSMs is not offering the
option of maximum security.

Maximum security has to get down to a single thread inside an application,
with all the security blocking features LSMs offer.  The reason: a flaw in
that thread could be made completely harmless even when the other
threads in the application have complete system rights.

The idea of maximum security is to keep application flaws as minor as
they could possibly be, i.e. hopefully no risk at all because the flaw
happened in a section of code with no rights.

This is virtually impossible for any form of profile-created security
(LSM profile-based security) to ever do.  What is needed is
application-controlled security with profile-based security as the
fallback.  I know this means ripping your LSM apart and designing in
application controls, allowing features to be shared between LSMs and
even to be there when the LSM that feature came from is not being
used.
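
A rough sketch of the fallback rule I mean, with every name invented
purely for illustration: the profile supplied by the LSM sets the ceiling,
and an application that opts in can only narrow it, never widen it.

#include <stdio.h>

#define R_FILE_READ  0x1
#define R_FILE_WRITE 0x2
#define R_NETWORK    0x4

/* Profile rights are the ceiling; application controls can only narrow them. */
static unsigned int effective_rights(unsigned int profile,
				     unsigned int app_requested,
				     int app_opted_in)
{
	/* No application controls: fall back to the profile alone. */
	if (!app_opted_in)
		return profile;
	/* Application controls remove rights, never add them. */
	return profile & app_requested;
}

int main(void)
{
	unsigned int profile = R_FILE_READ | R_FILE_WRITE | R_NETWORK;

	printf("legacy app: %#x\n", effective_rights(profile, 0, 0));
	printf("aware app:  %#x\n",
	       effective_rights(profile, R_FILE_READ, 1));
	return 0;
}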

The first goal should not be getting an LSM statically linked into the
kernel, or anything else, bar getting the security system to the point
that maximum security is on the table if people want it.

I will say this again: in my eyes LSMs should be thrown out of the
kernel completely, because they are only offering fake maximum security.
SELinux and the other LSMs at their maximum are not even close to what
should be offered.

Basically, Linux is a sitting duck for third-party data thieves stealing
personal information from the user's home directory, and it is not like
application developers are being given the tools to prevent that.  Cost
and loss do not start only when an application's normal profile of access
is breached; they start way before that.

Peter Dolding


Re: Linux Security *Module* Framework (Was: LSM conversion to static interface)

2007-10-24 Thread Peter Dolding
I have different deal breakers.

If an LSM provides something simple/commonly required, it should be made
like POSIX file capabilities: provided for all to use.   Sorry to say, I see
the file protection in AppArmor as something everyone should be able
to use at will, like POSIX file capabilities.  All enforcement features
should be common.

I see an LSM as a director or commander: its reason for existence is to
read security configs, hand out permissions, and respond to
problems.   Any enforcement should be there by default in the kernel.
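
As a hedged sketch of that split (all types and functions here are invented
for illustration; nothing is a real kernel interface): the director,
whatever model it speaks, only produces generic permission entries, and one
common enforcement layer consumes them without knowing which model
produced them.

#include <stdio.h>
#include <string.h>

struct perm_entry {
	const char *subject;   /* e.g. a program or role name */
	const char *object;    /* e.g. a path */
	int allow_write;
};

/* Common enforcement: knows nothing about MAC, RBAC, path vs label, etc. */
static int enforce(const struct perm_entry *table, int n,
		   const char *subject, const char *object, int want_write)
{
	for (int i = 0; i < n; i++) {
		if (strcmp(table[i].subject, subject) == 0 &&
		    strcmp(table[i].object, object) == 0)
			return want_write && !table[i].allow_write ? -1 : 0;
	}
	return -1;   /* no entry: deny */
}

int main(void)
{
	/* The "director" (whichever config reader) produced this table. */
	struct perm_entry table[] = {
		{ "backup", "/home/alice/notes.txt", 0 },
	};

	printf("read:  %d\n", enforce(table, 1, "backup",
				      "/home/alice/notes.txt", 0));
	printf("write: %d\n", enforce(table, 1, "backup",
				      "/home/alice/notes.txt", 1));
	return 0;
}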

So an LSM could be role-based, MAC, or any other model.  The current
problem is that enforcement and guidance are mixed up in one block, so
evolution is not happening.

The enforcing bits of LSMs should be simple, no-brainer add-ons to
the Linux kernel.  The problem is that at the moment they are mixed up
with MAC.   The security model to use has to be picked to suit the job:
role-based can be better than MAC, and MAC can be better than role-based.
It all depends on what you are defending.

The common things all need protecting: suid, file access, network access...
The bits you need to defend don't change whether or not you are running
an LSM.  So why are these bits bottled up inside LSMs, at times forcing
people to choose the wrong security model for their task just to get
protection?

There can never be one LSM to do every job.   But, and it is a big but, all
the common bits to protect every job could be in the kernel.  The only
thing missing is the director.

This is exactly the same problem the virtual server solutions had when
they wanted to get into the kernel.   At least the virtual server
solutions were not as pig-headed about it as some of the LSM guys,
where it is all in or not in at all.   Getting little bits into the kernel
is better than nothing.

Really, this will sound bad, but if I had my way I would kick all LSMs out
of the main kernel tree until they learn to work with each other and
share bits.  We don't need 10 copies of "protect files from access", or
10 copies of "limit which .so an application can interface with", and so
on.

It worked with the virtual servers to get them to sit down and start
talking.  What we really need to work on is system-wide security, not
bothering so much about the little box of LSM.

Yes, I am not nice to LSMs.   I see them as bitrot.  They are going to
cause containers problems in their current form as containers evolve.
They are not improving the baseline security level.  Yes, SELinux
saying "make me the default to improve security" says that in SELinux there
are parts that should be chopped out and made default.  But since it
contains a security model, it cannot all be made default, because it
just will not fit everywhere.

Basically an LSM should make it simpler to run security tight.  The big
almighty "but": it should not alter achievable security.  If it is
altering achievable security, the main kernel is missing features and
someone needs to slice and dice that LSM.

Peter Dolding